Jan 27 14:29:01 crc systemd[1]: Starting Kubernetes Kubelet...
Jan 27 14:29:01 crc restorecon[4693]: Relabeled /var/lib/kubelet/config.json from system_u:object_r:unlabeled_t:s0 to system_u:object_r:container_var_lib_t:s0
Jan 27 14:29:01 crc restorecon[4693]: /var/lib/kubelet/device-plugins not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 27 14:29:01 crc restorecon[4693]: /var/lib/kubelet/device-plugins/kubelet.sock not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 27 14:29:01 crc restorecon[4693]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/volumes/kubernetes.io~configmap/nginx-conf/..2025_02_23_05_40_35.4114275528/nginx.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 27 14:29:01 crc restorecon[4693]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 27 14:29:01 crc restorecon[4693]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/22e96971 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 27 14:29:01 crc restorecon[4693]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/21c98286 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 27 14:29:01 crc restorecon[4693]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/0f1869e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 27 14:29:01 crc restorecon[4693]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Jan 27 14:29:01 crc restorecon[4693]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/46889d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 27 14:29:01 crc restorecon[4693]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/5b6a5969 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Jan 27 14:29:01 crc restorecon[4693]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/6c7921f5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Jan 27 14:29:01 crc restorecon[4693]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4804f443 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 27 14:29:01 crc restorecon[4693]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/2a46b283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 27 14:29:01 crc restorecon[4693]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/a6b5573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 27 14:29:01 crc restorecon[4693]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4f88ee5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 27 14:29:01 crc restorecon[4693]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/5a4eee4b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Jan 27 14:29:01 crc restorecon[4693]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/cd87c521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Jan 27 14:29:01 crc restorecon[4693]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 27 14:29:01 crc restorecon[4693]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 27 14:29:01 crc restorecon[4693]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 27 14:29:01 crc restorecon[4693]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 27 14:29:01 crc restorecon[4693]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 27 14:29:01 crc restorecon[4693]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 27 14:29:01 crc restorecon[4693]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/38602af4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 27 14:29:01 crc restorecon[4693]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/1483b002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 27 14:29:01 crc restorecon[4693]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/0346718b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 27 14:29:01 crc restorecon[4693]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/d3ed4ada not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 27 14:29:01 crc restorecon[4693]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/3bb473a5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 27 14:29:01 crc restorecon[4693]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/8cd075a9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 27 14:29:01 crc restorecon[4693]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/00ab4760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 27 14:29:01 crc restorecon[4693]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/54a21c09 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 27 14:29:01 crc restorecon[4693]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Jan 27 14:29:01 crc restorecon[4693]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/70478888 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Jan 27 14:29:01 crc restorecon[4693]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/43802770 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Jan 27 14:29:01 crc restorecon[4693]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/955a0edc not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Jan 27 14:29:01 crc restorecon[4693]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/bca2d009 not reset as customized by admin to system_u:object_r:container_file_t:s0:c140,c1009
Jan 27 14:29:01 crc restorecon[4693]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/b295f9bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Jan 27 14:29:01 crc restorecon[4693]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 27 14:29:01 crc restorecon[4693]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 27 14:29:01 crc restorecon[4693]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 27 14:29:01 crc restorecon[4693]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 27 14:29:01 crc restorecon[4693]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 27 14:29:01 crc restorecon[4693]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 27 14:29:01 crc restorecon[4693]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 27 14:29:01 crc restorecon[4693]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 27 14:29:01 crc restorecon[4693]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 27 14:29:01 crc restorecon[4693]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 27 14:29:01 crc restorecon[4693]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 27 14:29:01 crc restorecon[4693]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/bc46ea27 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 27 14:29:01 crc restorecon[4693]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5731fc1b not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 27 14:29:01 crc restorecon[4693]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5e1b2a3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 27 14:29:01 crc restorecon[4693]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/943f0936 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 27 14:29:01 crc restorecon[4693]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/3f764ee4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 27 14:29:01 crc restorecon[4693]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/8695e3f9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 27 14:29:01 crc restorecon[4693]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/aed7aa86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 27 14:29:01 crc restorecon[4693]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/c64d7448 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 27 14:29:01 crc restorecon[4693]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/0ba16bd2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 27 14:29:01 crc restorecon[4693]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/207a939f not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 27 14:29:01 crc restorecon[4693]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/54aa8cdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 27 14:29:01 crc restorecon[4693]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/1f5fa595 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 27 14:29:01 crc restorecon[4693]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/bf9c8153 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 27 14:29:01 crc restorecon[4693]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/47fba4ea not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 27 14:29:01 crc restorecon[4693]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/7ae55ce9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 27 14:29:01 crc restorecon[4693]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7906a268 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 27 14:29:01 crc restorecon[4693]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/ce43fa69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 27 14:29:01 crc restorecon[4693]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7fc7ea3a not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 27 14:29:01 crc restorecon[4693]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/d8c38b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 27 14:29:01 crc restorecon[4693]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/9ef015fb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 27 14:29:01 crc restorecon[4693]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/b9db6a41 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 27 14:29:01 crc restorecon[4693]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Jan 27 14:29:01 crc restorecon[4693]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/b1733d79 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Jan 27 14:29:01 crc restorecon[4693]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/afccd338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Jan 27 14:29:01 crc restorecon[4693]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/9df0a185 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Jan 27 14:29:01 crc restorecon[4693]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/18938cf8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Jan 27 14:29:01 crc restorecon[4693]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/7ab4eb23 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Jan 27 14:29:01 crc restorecon[4693]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/56930be6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Jan 27 14:29:01 crc restorecon[4693]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 27 14:29:01 crc restorecon[4693]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_35.630010865 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 27 14:29:01 crc restorecon[4693]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 27 14:29:01 crc restorecon[4693]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 27 14:29:01 crc restorecon[4693]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 27 14:29:01 crc restorecon[4693]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 27 14:29:01 crc restorecon[4693]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 27 14:29:01 crc restorecon[4693]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 27 14:29:01 crc restorecon[4693]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 27 14:29:01 crc restorecon[4693]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/0d8e3722 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 27 14:29:01 crc restorecon[4693]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/d22b2e76 not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Jan 27 14:29:01 crc restorecon[4693]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/e036759f not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 27 14:29:01 crc restorecon[4693]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/2734c483 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 27 14:29:01 crc restorecon[4693]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/57878fe7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 27 14:29:01 crc restorecon[4693]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/3f3c2e58 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 27 14:29:01 crc restorecon[4693]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/375bec3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Jan 27 14:29:01 crc restorecon[4693]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/7bc41e08 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 27 14:29:01 crc restorecon[4693]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 27 14:29:01 crc restorecon[4693]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/48c7a72d not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 27 14:29:01 crc restorecon[4693]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/4b66701f not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 27 14:29:01 crc restorecon[4693]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/a5a1c202 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 27 14:29:01 crc restorecon[4693]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 27 14:29:01 crc restorecon[4693]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 27 14:29:01 crc restorecon[4693]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 27 14:29:01 crc restorecon[4693]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 27 14:29:01 crc restorecon[4693]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 27 14:29:01 crc restorecon[4693]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 27 14:29:01 crc restorecon[4693]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 27 14:29:01 crc restorecon[4693]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 27 14:29:01 crc restorecon[4693]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_40.1388695756 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 27 14:29:01 crc restorecon[4693]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 27 14:29:01 crc restorecon[4693]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 27 14:29:01 crc restorecon[4693]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/26f3df5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 27 14:29:01 crc restorecon[4693]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/6d8fb21d not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 27 14:29:01 crc restorecon[4693]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/50e94777 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 27 14:29:01 crc restorecon[4693]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208473b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 27 14:29:01 crc restorecon[4693]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/ec9e08ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 27 14:29:01 crc restorecon[4693]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3b787c39 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 27 14:29:01 crc restorecon[4693]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208eaed5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 27 14:29:01 crc restorecon[4693]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/93aa3a2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 27 14:29:01 crc restorecon[4693]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3c697968 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 27 14:29:01 crc restorecon[4693]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 27 14:29:01 crc restorecon[4693]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/ba950ec9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 27 14:29:01 crc restorecon[4693]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/cb5cdb37 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 27 14:29:01 crc restorecon[4693]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/f2df9827 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 27 14:29:01 crc restorecon[4693]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 27 14:29:01 crc restorecon[4693]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 27 14:29:01 crc restorecon[4693]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 27 14:29:01 crc restorecon[4693]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 27 14:29:01 crc restorecon[4693]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 27 14:29:01 crc restorecon[4693]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 27 14:29:01 crc restorecon[4693]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 27 14:29:01 crc restorecon[4693]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 27 14:29:01 crc restorecon[4693]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 27 14:29:01 crc restorecon[4693]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 27 14:29:01 crc restorecon[4693]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 27 14:29:01 crc restorecon[4693]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/fedaa673 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 27 14:29:01 crc restorecon[4693]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/9ca2df95 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 27 14:29:01 crc restorecon[4693]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/b2d7460e not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 27 14:29:01 crc restorecon[4693]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2207853c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 27 14:29:01 crc restorecon[4693]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/241c1c29 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 27 14:29:01 crc restorecon[4693]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2d910eaf not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 27 14:29:01 crc restorecon[4693]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 27 14:29:01 crc restorecon[4693]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 27 14:29:01 crc restorecon[4693]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 27 14:29:01 crc restorecon[4693]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 27 14:29:01 crc restorecon[4693]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 27 14:29:01 crc restorecon[4693]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 27 14:29:01 crc restorecon[4693]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 27 14:29:01 crc restorecon[4693]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 27 14:29:01 crc restorecon[4693]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 27 14:29:01 crc restorecon[4693]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 27 14:29:01 crc restorecon[4693]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 27 14:29:01 crc restorecon[4693]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 27 14:29:01 crc restorecon[4693]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 27 14:29:01 crc restorecon[4693]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 27 14:29:01 crc restorecon[4693]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 27 14:29:01 crc restorecon[4693]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 27 14:29:01 crc restorecon[4693]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/c6c0f2e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Jan 27 14:29:01 crc restorecon[4693]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/399edc97 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Jan 27 14:29:01 crc restorecon[4693]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8049f7cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Jan 27 14:29:01 crc restorecon[4693]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/0cec5484 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Jan 27 14:29:01 crc restorecon[4693]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/312446d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c406,c828
Jan 27 14:29:01 crc restorecon[4693]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8e56a35d not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 27 14:29:01 crc restorecon[4693]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 27 14:29:01 crc restorecon[4693]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 27 14:29:01 crc restorecon[4693]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 27 14:29:01 crc restorecon[4693]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 27 14:29:01 crc restorecon[4693]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 27 14:29:01 crc restorecon[4693]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 27 14:29:01 crc restorecon[4693]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/2d30ddb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909
Jan 27 14:29:01 crc restorecon[4693]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/eca8053d not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909
Jan 27 14:29:01 crc restorecon[4693]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/c3a25c9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c168,c522
Jan 27 14:29:01 crc restorecon[4693]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/b9609c22 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 27 14:29:01 crc restorecon[4693]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Jan 27 14:29:01 crc restorecon[4693]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/e8b0eca9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418
Jan 27 14:29:01 crc restorecon[4693]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/b36a9c3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711
Jan 27 14:29:01 crc restorecon[4693]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/38af7b07 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Jan 27 14:29:01 crc restorecon[4693]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/ae821620 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418
Jan 27 14:29:01 crc restorecon[4693]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/baa23338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711
Jan 27 14:29:01 crc restorecon[4693]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/2c534809 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Jan 27 14:29:01 crc restorecon[4693]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Jan 27 14:29:01 crc restorecon[4693]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Jan 27 14:29:01 crc restorecon[4693]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Jan 27 14:29:01 crc restorecon[4693]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Jan 27 14:29:01 crc restorecon[4693]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Jan 27 14:29:01 crc restorecon[4693]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Jan 27 14:29:01 crc restorecon[4693]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/59b29eae not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381
Jan 27 14:29:01 crc restorecon[4693]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/c91a8e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381
Jan 27 14:29:01 crc restorecon[4693]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/4d87494a not reset as customized by admin to system_u:object_r:container_file_t:s0:c442,c857
Jan 27 14:29:01 crc restorecon[4693]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/1e33ca63 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Jan 27 14:29:01 crc restorecon[4693]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 27 14:29:01 crc restorecon[4693]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/8dea7be2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 27 14:29:01 crc restorecon[4693]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d0b04a99 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 27 14:29:01 crc restorecon[4693]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d84f01e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 27 14:29:01 crc restorecon[4693]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/4109059b not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 27 14:29:01 crc restorecon[4693]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/a7258a3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 27 14:29:01 crc restorecon[4693]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/05bdf2b6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 27 14:29:01 crc restorecon[4693]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 27 14:29:01 crc restorecon[4693]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/f3261b51 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 27 14:29:01 crc restorecon[4693]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/315d045e not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 27 14:29:01 crc restorecon[4693]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/5fdcf278 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 27 14:29:01 crc restorecon[4693]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/d053f757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 27 14:29:01 crc restorecon[4693]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/c2850dc7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 27 14:29:01 crc restorecon[4693]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:29:01 crc restorecon[4693]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:29:01 crc restorecon[4693]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:29:01 crc restorecon[4693]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:29:01 crc restorecon[4693]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:29:01 crc restorecon[4693]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:29:01 crc restorecon[4693]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fcfb0b2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:29:01 crc restorecon[4693]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c7ac9b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:29:01 crc restorecon[4693]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fa0c0d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:29:01 crc restorecon[4693]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c609b6ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:29:01 crc restorecon[4693]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/2be6c296 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:29:01 crc restorecon[4693]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/89a32653 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:29:01 crc restorecon[4693]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/4eb9afeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:29:01 crc restorecon[4693]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/13af6efa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:29:01 crc restorecon[4693]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 27 14:29:01 crc restorecon[4693]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/b03f9724 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 27 14:29:01 crc restorecon[4693]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/e3d105cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 27 14:29:01 crc restorecon[4693]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/3aed4d83 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 27 14:29:01 crc restorecon[4693]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 27 14:29:01 crc restorecon[4693]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 27 14:29:01 crc restorecon[4693]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 27 14:29:01 crc restorecon[4693]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 27 14:29:01 crc restorecon[4693]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 27 14:29:01 crc restorecon[4693]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 27 14:29:01 crc restorecon[4693]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/0765fa6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 27 14:29:01 crc restorecon[4693]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/2cefc627 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 27 14:29:01 crc restorecon[4693]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/3dcc6345 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 27 14:29:01 crc restorecon[4693]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/365af391 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 27 14:29:01 crc restorecon[4693]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 27 14:29:01 crc restorecon[4693]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 27 14:29:01 crc restorecon[4693]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 27 14:29:01 crc restorecon[4693]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 27 14:29:01 crc restorecon[4693]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 27 14:29:01 crc restorecon[4693]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 27 14:29:01 crc restorecon[4693]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 27 14:29:01 crc restorecon[4693]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 27 14:29:01 crc restorecon[4693]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b1130c0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 27 14:29:01 crc restorecon[4693]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/236a5913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 27 14:29:01 crc restorecon[4693]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b9432e26 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 27 14:29:01 crc restorecon[4693]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/5ddb0e3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 27 14:29:01 crc restorecon[4693]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/986dc4fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 27 14:29:01 crc restorecon[4693]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/8a23ff9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 27 14:29:01 crc restorecon[4693]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/9728ae68 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 27 14:29:01 crc restorecon[4693]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/665f31d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 27 14:29:01 crc restorecon[4693]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 27 14:29:01 crc restorecon[4693]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 27 14:29:01 crc restorecon[4693]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 27 14:29:01 crc restorecon[4693]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 27 14:29:01 crc restorecon[4693]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 27 14:29:01 crc restorecon[4693]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 27 14:29:01 crc restorecon[4693]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 27 14:29:01 crc restorecon[4693]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 27 14:29:01 crc restorecon[4693]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 27 14:29:01 crc restorecon[4693]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 27 14:29:01 crc restorecon[4693]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 27 14:29:01 crc restorecon[4693]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 27 14:29:01 crc restorecon[4693]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 27 14:29:01 crc restorecon[4693]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 27 14:29:01 crc restorecon[4693]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 27 14:29:01 crc restorecon[4693]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 27 14:29:01 crc restorecon[4693]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/136c9b42 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 27 14:29:01 crc restorecon[4693]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/98a1575b not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 27 14:29:01 crc restorecon[4693]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/cac69136 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 27 14:29:01 crc restorecon[4693]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/5deb77a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 27 14:29:01 crc restorecon[4693]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/2ae53400 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 27 14:29:01 crc restorecon[4693]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 27 14:29:01 crc restorecon[4693]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 27 14:29:01 crc restorecon[4693]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 27 14:29:01 crc restorecon[4693]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 27 14:29:01 crc restorecon[4693]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 27 14:29:01 crc restorecon[4693]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 27 14:29:01 crc restorecon[4693]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/e46f2326 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 27 14:29:01 crc restorecon[4693]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/dc688d3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 27 14:29:01 crc restorecon[4693]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/3497c3cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 27 14:29:01 crc restorecon[4693]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/177eb008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 27 14:29:01 crc restorecon[4693]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 27 14:29:01 crc restorecon[4693]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 27 14:29:01 crc restorecon[4693]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 27 14:29:01 crc restorecon[4693]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 27 14:29:01 crc restorecon[4693]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 27 14:29:01 crc restorecon[4693]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 27 14:29:01 crc restorecon[4693]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/af5a2afa not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 27 14:29:01 crc restorecon[4693]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/d780cb1f not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 27 14:29:01 crc restorecon[4693]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/49b0f374 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 27 14:29:01 crc restorecon[4693]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/26fbb125 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 27 14:29:01 crc restorecon[4693]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 27 14:29:01 crc restorecon[4693]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 27 14:29:01 crc restorecon[4693]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 27 14:29:01 crc restorecon[4693]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 27 14:29:01 crc restorecon[4693]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 27 14:29:01 crc restorecon[4693]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 27 14:29:01 crc restorecon[4693]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/cf14125a not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 27 14:29:01 crc restorecon[4693]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/b7f86972 not reset as customized by
admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 27 14:29:01 crc restorecon[4693]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/e51d739c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 27 14:29:01 crc restorecon[4693]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/88ba6a69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 27 14:29:01 crc restorecon[4693]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/669a9acf not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 27 14:29:01 crc restorecon[4693]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/5cd51231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 27 14:29:01 crc restorecon[4693]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/75349ec7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 27 14:29:01 crc restorecon[4693]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/15c26839 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 27 14:29:01 crc restorecon[4693]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/45023dcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 27 14:29:01 crc restorecon[4693]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/2bb66a50 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 27 14:29:01 crc restorecon[4693]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/64d03bdd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 27 14:29:01 crc restorecon[4693]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/ab8e7ca0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 27 14:29:01 crc restorecon[4693]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/bb9be25f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 27 14:29:01 crc restorecon[4693]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:29:01 crc restorecon[4693]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:29:01 crc restorecon[4693]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:29:01 crc restorecon[4693]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:29:01 crc restorecon[4693]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:29:01 crc restorecon[4693]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:29:01 crc restorecon[4693]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/9a0b61d3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:29:01 crc restorecon[4693]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/d471b9d2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:29:01 crc restorecon[4693]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/8cb76b8e not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:29:01 crc restorecon[4693]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 27 14:29:01 crc restorecon[4693]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/11a00840 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 27 14:29:01 crc restorecon[4693]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/ec355a92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 27 14:29:01 crc restorecon[4693]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/992f735e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 27 14:29:01 crc restorecon[4693]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 27 14:29:01 crc restorecon[4693]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 27 14:29:01 crc restorecon[4693]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 27 14:29:01 crc restorecon[4693]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 27 14:29:01 crc restorecon[4693]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 27 14:29:01 crc restorecon[4693]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 27 14:29:01 crc restorecon[4693]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d59cdbbc not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 27 14:29:01 crc restorecon[4693]: 
/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/72133ff0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 27 14:29:01 crc restorecon[4693]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/c56c834c not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 27 14:29:01 crc restorecon[4693]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d13724c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 27 14:29:01 crc restorecon[4693]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/0a498258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 27 14:29:01 crc restorecon[4693]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 27 14:29:01 crc restorecon[4693]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa471982 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 27 14:29:01 crc restorecon[4693]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fc900d92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 27 14:29:01 crc restorecon[4693]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa7d68da not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 27 14:29:01 crc restorecon[4693]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 27 14:29:01 crc restorecon[4693]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/4bacf9b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 27 14:29:01 crc restorecon[4693]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/424021b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 27 14:29:01 crc restorecon[4693]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/fc2e31a3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 27 14:29:01 crc restorecon[4693]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/f51eefac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 27 14:29:01 crc restorecon[4693]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/c8997f2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 27 14:29:01 crc restorecon[4693]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/7481f599 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 27 14:29:01 crc restorecon[4693]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 27 14:29:01 crc restorecon[4693]: 
/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 27 14:29:01 crc restorecon[4693]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 27 14:29:01 crc restorecon[4693]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 27 14:29:01 crc restorecon[4693]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 27 14:29:01 crc restorecon[4693]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 27 14:29:01 crc restorecon[4693]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/fdafea19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 27 14:29:01 crc restorecon[4693]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/d0e1c571 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 27 14:29:01 crc restorecon[4693]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/ee398915 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 27 14:29:01 crc restorecon[4693]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/682bb6b8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 27 14:29:01 crc restorecon[4693]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 27 14:29:01 crc restorecon[4693]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a3e67855 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 27 14:29:01 crc restorecon[4693]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a989f289 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 27 14:29:01 crc restorecon[4693]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/915431bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 27 14:29:01 crc restorecon[4693]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/7796fdab not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 27 14:29:01 crc restorecon[4693]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/dcdb5f19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 27 14:29:01 crc restorecon[4693]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/a3aaa88c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 27 14:29:01 crc restorecon[4693]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/5508e3e6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 27 14:29:01 crc restorecon[4693]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/160585de not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 27 14:29:01 crc restorecon[4693]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/e99f8da3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 27 14:29:01 crc restorecon[4693]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/8bc85570 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 27 14:29:01 crc restorecon[4693]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/a5861c91 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 27 14:29:01 crc restorecon[4693]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/84db1135 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 27 14:29:01 crc restorecon[4693]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/9e1a6043 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 27 14:29:01 crc restorecon[4693]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/c1aba1c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 27 14:29:01 crc restorecon[4693]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/d55ccd6d not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 27 14:29:01 crc restorecon[4693]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/971cc9f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 27 14:29:01 crc restorecon[4693]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/8f2e3dcf not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 27 14:29:01 crc restorecon[4693]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/ceb35e9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 27 14:29:01 crc restorecon[4693]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/1c192745 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 27 14:29:01 crc restorecon[4693]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/5209e501 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 27 14:29:01 crc restorecon[4693]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/f83de4df not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 27 14:29:01 crc restorecon[4693]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/e7b978ac not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 27 14:29:01 crc restorecon[4693]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/c64304a1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 27 14:29:01 crc restorecon[4693]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/5384386b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 27 14:29:01 crc restorecon[4693]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Jan 27 14:29:01 crc restorecon[4693]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/cce3e3ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Jan 27 14:29:01 crc restorecon[4693]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/8fb75465 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Jan 27 14:29:01 crc restorecon[4693]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/740f573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Jan 27 14:29:01 crc restorecon[4693]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/32fd1134 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Jan 27 14:29:01 crc restorecon[4693]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Jan 27 14:29:01 crc restorecon[4693]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/0a861bd3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Jan 27 14:29:01 crc restorecon[4693]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/80363026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Jan 27 14:29:01 crc restorecon[4693]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/bfa952a8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Jan 27 14:29:01 crc restorecon[4693]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 27 14:29:01 crc restorecon[4693]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 27 14:29:01 crc restorecon[4693]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 27 14:29:01 crc restorecon[4693]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 27 14:29:01 crc restorecon[4693]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 27 14:29:01 crc restorecon[4693]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c129,c158 Jan 27 14:29:01 crc restorecon[4693]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..2025_02_23_05_33_31.333075221 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 27 14:29:01 crc restorecon[4693]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 27 14:29:01 crc restorecon[4693]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 27 14:29:02 crc restorecon[4693]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/793bf43d not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Jan 27 14:29:02 crc restorecon[4693]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/7db1bb6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Jan 27 14:29:02 crc restorecon[4693]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/4f6a0368 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 27 14:29:02 crc restorecon[4693]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/c12c7d86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Jan 27 14:29:02 crc restorecon[4693]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/36c4a773 not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Jan 27 14:29:02 crc restorecon[4693]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/4c1e98ae not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Jan 27 14:29:02 crc restorecon[4693]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/a4c8115c not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 27 14:29:02 crc restorecon[4693]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 27 14:29:02 crc restorecon[4693]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/setup/7db1802e not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 27 14:29:02 crc restorecon[4693]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver/a008a7ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 27 14:29:02 crc restorecon[4693]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-syncer/2c836bac not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 27 14:29:02 crc restorecon[4693]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-regeneration-controller/0ce62299 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 27 14:29:02 crc restorecon[4693]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-insecure-readyz/945d2457 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c97,c980 Jan 27 14:29:02 crc restorecon[4693]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-check-endpoints/7d5c1dd8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 27 14:29:02 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:02 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:02 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:02 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:02 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:02 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:02 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:02 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:02 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:02 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:02 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:02 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:02 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:02 crc restorecon[4693]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:02 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:02 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:02 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:02 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:02 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:02 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:02 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:02 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:02 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:02 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:02 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:02 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:02 crc restorecon[4693]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:02 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:02 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:02 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:02 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:02 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:02 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:02 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:02 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:02 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/index.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:02 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:02 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:02 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:02 crc restorecon[4693]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:02 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:02 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:02 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:02 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:02 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:02 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:02 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:02 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:02 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:02 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:02 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:02 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:02 crc restorecon[4693]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:02 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:02 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:02 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:02 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:02 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:02 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:02 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:02 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:02 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:02 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:02 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:02 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:02 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:02 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:02 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:02 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:02 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:02 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:02 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:02 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:02 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:02 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:02 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:02 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:02 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:02 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:02 crc restorecon[4693]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:29:02 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:29:02 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:29:02 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:29:02 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:29:02 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:29:02 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:29:02 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:29:02 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:29:02 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:29:02 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:29:02 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:29:02 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:29:02 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:29:02 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:29:02 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:29:02 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:29:02 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:29:02 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:29:02 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:29:02 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:29:02 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:29:02 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:29:02 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:29:02 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:29:02 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:29:02 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:29:02 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:29:02 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:29:02 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:29:02 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:29:02 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:29:02 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:29:02 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:29:02 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:29:02 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:29:02 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:29:02 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:29:02 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:29:02 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:29:02 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:29:02 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:29:02 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:29:02 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:29:02 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:29:02 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:29:02 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:29:02 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:29:02 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:29:02 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:29:02 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:29:02 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:29:02 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:29:02 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:29:02 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:29:02 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:29:02 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:29:02 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:29:02 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:29:02 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:29:02 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/bundle-v1.15.0.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:29:02 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/channel.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:29:02 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/package.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:29:02 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:29:02 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:29:02 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:29:02 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:29:02 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:29:02 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:29:02 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:29:02 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:29:02 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:29:02 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:29:02 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:29:02 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:29:02 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:29:02 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:29:02 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:29:02 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:29:02 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:29:02 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:29:02 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:29:02 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:29:02 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:29:02 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:29:02 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:29:02 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:29:02 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:29:02 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:29:02 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:29:02 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:29:02 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:29:02 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:29:02 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:29:02 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:29:02 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:29:02 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:29:02 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:29:02 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:29:02 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:29:02 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:29:02 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:29:02 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:29:02 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:29:02 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:29:02 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:29:02 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:29:02 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:29:02 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:29:02 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:29:02 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:29:02 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:29:02 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:29:02 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:29:02 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:29:02 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:29:02 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:29:02 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:29:02 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:29:02 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:29:02 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:29:02 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:29:02 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:29:02 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:29:02 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:29:02 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:29:02 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:29:02 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:29:02 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:29:02 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:29:02 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:29:02 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:29:02 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:29:02 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:29:02 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:29:02 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:29:02 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:29:02 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:29:02 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:29:02 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:29:02 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:29:02 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:29:02 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:29:02 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:29:02 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/bc8d0691 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:29:02 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/6b76097a not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:29:02 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/34d1af30 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:29:02 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/312ba61c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:29:02 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/645d5dd1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:29:02 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/16e825f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:29:02 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/4cf51fc9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:29:02 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/2a23d348 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:29:02 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/075dbd49 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:29:02 crc restorecon[4693]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Jan 27 14:29:02 crc restorecon[4693]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Jan 27 14:29:02 crc restorecon[4693]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Jan 27 14:29:02 crc restorecon[4693]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Jan 27 14:29:02 crc restorecon[4693]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Jan 27 14:29:02 crc restorecon[4693]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Jan 27 14:29:02 crc restorecon[4693]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Jan 27 14:29:02 crc restorecon[4693]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Jan 27 14:29:02 crc restorecon[4693]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Jan 27 14:29:02 crc restorecon[4693]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Jan 27 14:29:02 crc restorecon[4693]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/dd585ddd not reset as customized by admin to system_u:object_r:container_file_t:s0:c377,c642
Jan 27 14:29:02 crc restorecon[4693]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/17ebd0ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c343
Jan 27 14:29:02 crc restorecon[4693]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/005579f4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Jan 27 14:29:02 crc restorecon[4693]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 27 14:29:02 crc restorecon[4693]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 27 14:29:02 crc restorecon[4693]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 27 14:29:02 crc restorecon[4693]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_23_11.1287037894 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/bf5f3b9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263
Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/af276eb7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701
Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/ea28e322 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/692e6683 not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263
Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/871746a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701
Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/4eb2e958 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/ca9b62da not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/0edd6fce not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/containers/controller-manager/89b4555f not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/655fcd71 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841
Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/0d43c002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022
Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/e68efd17 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/9acf9b65 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841
Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/5ae3ff11 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022
Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/1e59206a not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/27af16d1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c304,c1017
Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/7918e729 not reset as customized by admin to system_u:object_r:container_file_t:s0:c853,c893
Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/5d976d0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c585,c981
Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/d7f55cbb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/f0812073 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/1a56cbeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/7fdd437e not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/cdfb5652 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/fix-audit-permissions/fb93119e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver/f1e8fc0e not reset as customized by admin to
system_u:object_r:container_file_t:s0:c336,c787 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver-check-endpoints/218511f3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server/serving-certs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/ca8af7b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/72cc8a75 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/6e8a3760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4c3455c0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 27 14:29:03 crc restorecon[4693]: 
/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/2278acb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4b453e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/3ec09bda not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2/cacerts.bin not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java/cacerts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl/ca-bundle.trust.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/email-ca-bundle.pem not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/objsign-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2ae6433e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fde84897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75680d2e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/openshift-service-serving-signer_1740288168.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/facfc4fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f5a969c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CFCA_EV_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9ef4a08a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ingress-operator_1740288202.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2f332aed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/248c8271.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:29:03 crc restorecon[4693]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d10a21f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ACCVRAIZ1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a94d09e5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c9a4d3b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40193066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd8c0d63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b936d1c6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CA_Disig_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4fd49c6c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM_SERVIDORES_SEGUROS.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b81b93f0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f9a69fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:29:03 crc restorecon[4693]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b30d5fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ANF_Secure_Server_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b433981b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93851c9e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9282e51c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7dd1bc4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Actalis_Authentication_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/930ac5d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f47b495.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e113c810.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5931b5bc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Commercial.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:29:03 crc restorecon[4693]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2b349938.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e48193cf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/302904dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a716d4ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Networking.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93bc0acc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/86212b19.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b727005e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbc54cab.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f51bb24c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c28a8a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:29:03 crc restorecon[4693]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9c8dfbd4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ccc52f49.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cb1c3204.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ce5e74ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd08c599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6d41d539.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb5fa911.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e35234b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:29:03 crc restorecon[4693]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8cb5ee0f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a7c655d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f8fc53da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/de6d66f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d41b5e2a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/41a3f684.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1df5a75f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_2011.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e36a6752.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b872f2b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9576d26b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/228f89db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:29:03 crc restorecon[4693]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_ECC_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb717492.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d21b73c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b1b94ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/595e996b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_RSA_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b46e03d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/128f4b91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_3_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81f2d2b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Autoridad_de_Certificacion_Firmaprofesional_CIF_A62634068.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3bde41ac.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d16a5865.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:29:03 crc restorecon[4693]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_EC-384_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0179095f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ffa7f1eb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9482e63a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4dae3dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e359ba6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7e067d03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/95aff9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7746a63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Baltimore_CyberTrust_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/653b494a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:29:03 crc restorecon[4693]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3ad48a91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_2_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/54657681.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/82223c44.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8de2f56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d9dafe4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d96b65e2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee64a828.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40547a79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5a3f0ff8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a780d93.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:29:03 crc restorecon[4693]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/34d996fb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/eed8c118.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/89c02a45.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b1159c4c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d6325660.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4c339cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8312c4c1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_E1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8508e720.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5fdd185d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:29:03 crc restorecon[4693]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48bec511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/69105f4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b9bc432.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/32888f65.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b03dec0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/219d9499.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5acf816d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbf06781.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc99f41e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AAA_Certificate_Services.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/985c1f52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8794b4e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_BR_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7c037b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ef954a4e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_EV_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2add47b6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/90c5a3c8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0f3e76e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/53a1b57a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_EV_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5ad8a5d6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/68dd7389.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d04f354.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d6437c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/062cdee6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bd43e1dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7f3d5d1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c491639e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3513523f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/399e7759.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/feffd413.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d18e9066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/607986c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c90bc37d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1b0f7e5c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e08bfd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dd8e9d41.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed39abd0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a3418fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bc3f2570.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_High_Assurance_EV_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/244b5494.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81b9768f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4be590e0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_ECC_P384_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9846683b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/252252d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e8e7201.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_RSA4096_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d52c538d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c44cc0c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Trusted_Root_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75d1b2ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a2c66da8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ecccd8db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust.net_Certification_Authority__2048_.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/aee5f10d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e7271e8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0e59380.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4c3982f2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b99d060.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf64f35b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0a775a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/002c0b4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cc450945.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_EC1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/106f3e4d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b3fb433b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4042bcee.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/02265526.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/455f1b52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0d69c7e1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9f727ac7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5e98733a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0cd152c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc4d6a89.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6187b673.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/FIRMAPROFESIONAL_CA_ROOT-A_WEB.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ba8887ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/068570d1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f081611a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48a195d8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GDCA_TrustAUTH_R5_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f6fa695.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab59055e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b92fd57f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GLOBALTRUST_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fa5da96b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ec40989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7719f463.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1001acf7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f013ecaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/626dceaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c559d742.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1d3472b9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9479c8c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a81e292b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4bfab552.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e071171e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/57bcb2da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_ECC_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab5346f4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5046c355.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_RSA_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/865fbdf9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da0cfd1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/85cde254.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_ECC_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbb3f32b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureSign_RootCA11.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5860aaa6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/31188b5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HiPKI_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c7f1359b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f15c80c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hongkong_Post_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/09789157.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/18856ac4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e09d511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Commercial_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cf701eeb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d06393bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Public_Sector_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/10531352.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Izenpe.com.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureTrust_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0ed035a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsec_e-Szigno_Root_CA_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8160b96c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8651083.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2c63f966.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_ECC_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d89cda1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/01419da9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_RSA_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7a5b843.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_RSA_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf53fb88.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9591a472.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3afde786.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Gold_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NAVER_Global_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3fb36b73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d39b0a2c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a89d74c2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd58d51e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7db1890.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NetLock_Arany__Class_Gold__F__tan__s__tv__ny.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/988a38cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/60afe812.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f39fc864.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5443e9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GB_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e73d606e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dfc0fe80.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b66938e9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e1eab7c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GC_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/773e07ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c899c73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d59297b8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ddcda989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_1_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/749e9e03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/52b525c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7e8dc79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a819ef2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/08063a00.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b483515.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/064e0aa9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1f58a078.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6f7454b3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7fa05551.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76faf6c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9339512a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f387163d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee37c333.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e18bfb83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e442e424.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fe8a2cd8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/23f4c490.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5cd81ad7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0c70a8d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7892ad52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SZAFIR_ROOT_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4f316efb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_RSA_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/06dc52d5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/583d0756.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0bf05006.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/88950faa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9046744a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c860d51.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_RSA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6fa5da56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/33ee480d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Secure_Global_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/63a2c897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_ECC_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bdacca6f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ff34af3f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbff3a01.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_ECC_RootCA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_C1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/406c9bb1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_C3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Services_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Silver_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/99e1b953.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/14bc7599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TUBITAK_Kamu_SM_SSL_Kok_Sertifikasi_-_Surum_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a3adc42.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f459871d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_ECC_Root_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_RSA_Root_2023.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TeliaSonera_Root_CA_v1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telia_Root_CA_v2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f103249.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f058632f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-certificates.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9bf03295.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/98aaf404.0 not reset as customized by admin to
system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1cef98f5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/073bfcc5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2923b3f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f249de83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/edcbddb5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P256_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b5697b0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ae85e5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b74d2bd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 
14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P384_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d887a5bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9aef356c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TunTrust_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd64f3fc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e13665f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Extended_Validation_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f5dc4f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da7377f6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Global_G2_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c01eb047.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/304d27c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed858448.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:29:03 crc restorecon[4693]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f30dd6ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/04f60c28.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_ECC_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fc5a8f99.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/35105088.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee532fd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/XRamp_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/706f604c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76579174.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d86cdd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:29:03 crc restorecon[4693]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/882de061.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f618aec.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a9d40e02.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e-Szigno_Root_CA_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e868b802.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/83e9984f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ePKI_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca6e4ad9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d6523ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4b718d9b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/869fbf79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/containers/registry/f8d22bdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:29:03 crc 
restorecon[4693]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/6e8bbfac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/54dd7996 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/a4f1bb05 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/207129da not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/c1df39e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/15b8f1cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 27 14:29:03 crc restorecon[4693]: 
/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/77bd6913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/2382c1b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/704ce128 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/70d16fe0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/bfb95535 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/57a8e8e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/1b9d3e5e not reset as customized by admin to system_u:object_r:container_file_t:s0:c107,c917 Jan 27 14:29:03 crc restorecon[4693]: 
/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/fddb173c not reset as customized by admin to system_u:object_r:container_file_t:s0:c202,c983 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/95d3c6c4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/bfb5fff5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/2aef40aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/c0391cad not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/1119e69d not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/660608b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/8220bd53 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/85f99d5c not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/4b0225f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/9c2a3394 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/e820b243 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/1ca52ea0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/e6988e45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 27 14:29:03 crc restorecon[4693]: 
/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/6655f00b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/98bc3986 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/08e3458a not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/2a191cb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/6c4eeefb not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/f61a549c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/24891863 not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/fbdfd89c not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/9b63b3bc not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c37,c572 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/8acde6d6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/node-driver-registrar/59ecbba3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/csi-provisioner/685d4be3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 27 14:29:03 crc restorecon[4693]: 
/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/containers/route-controller-manager/feaea55e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 
14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg/catalog.json not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/63709497 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/d966b7fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/f5773757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/81c9edb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/57bf57ee not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/86f5e6aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/0aabe31d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/d2af85c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/09d157d9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller not reset 
as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 
14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:03 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:04 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:04 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:04 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:04 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:04 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:04 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:04 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:04 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:04 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:04 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:04 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:04 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:04 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:04 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:04 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:04 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:04 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:04 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:04 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:04 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:04 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:04 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:04 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:04 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:04 crc restorecon[4693]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:04 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:04 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:04 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:04 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:04 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:04 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:04 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:04 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:04 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:04 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:04 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:04 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:04 crc restorecon[4693]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:04 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:04 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:04 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:04 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:04 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:04 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:04 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:04 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:04 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:04 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:04 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:04 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:04 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:04 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:04 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:04 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:04 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:04 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:04 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:04 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:04 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:04 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:04 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:04 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:04 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:04 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:04 crc restorecon[4693]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:04 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:04 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:04 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:04 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:04 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:04 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:04 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:04 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:04 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:04 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:04 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:04 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:04 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:04 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:04 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:04 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:04 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:04 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:04 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:04 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:04 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:04 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:04 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:04 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:04 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:04 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:04 crc restorecon[4693]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:04 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:04 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:04 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:04 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:04 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:04 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:04 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:04 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:04 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:04 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:04 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:04 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:04 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus/catalog.json not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:04 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:04 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:04 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:04 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:04 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:04 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:04 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:04 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:04 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:04 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:04 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:04 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:04 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:04 crc restorecon[4693]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:04 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:04 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:04 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:04 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:04 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:04 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:04 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:04 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:04 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:04 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:04 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:04 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:04 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:04 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:04 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:04 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:04 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:04 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:04 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:04 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:04 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:04 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:04 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:04 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:04 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:04 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:04 crc restorecon[4693]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:04 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:04 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:04 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:04 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:04 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:04 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:04 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:04 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:04 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:04 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:04 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:04 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:04 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:04 crc 
restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:04 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:04 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:04 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:04 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:04 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:04 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:04 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:04 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:04 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:04 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:04 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:04 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:04 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:04 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:04 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:04 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:04 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:04 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:04 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:04 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:04 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:04 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:04 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:04 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:04 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:04 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:04 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:04 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:04 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:04 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:04 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:04 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:04 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:04 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:04 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:04 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:04 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:04 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:04 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:04 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:04 crc restorecon[4693]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:04 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:04 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:04 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:04 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:04 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:04 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:04 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:04 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:04 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:04 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c0fe7256 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:04 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c30319e4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:04 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/e6b1dd45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:04 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/2bb643f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:04 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/920de426 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:04 crc restorecon[4693]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/70fa1e87 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:04 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/a1c12a2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:04 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/9442e6c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:04 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/5b45ec72 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:04 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:04 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:04 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:04 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:04 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:04 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:04 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:04 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:04 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:04 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:04 crc restorecon[4693]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:04 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:04 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:04 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:04 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:04 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:04 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:04 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:04 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:04 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:04 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:04 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:04 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:04 crc restorecon[4693]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:04 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:04 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:04 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:04 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:04 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:04 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:04 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:04 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:04 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:04 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:04 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:04 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:04 crc restorecon[4693]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:04 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:04 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:04 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:04 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:04 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:04 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:04 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:04 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:04 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:04 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:04 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:04 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:04 crc restorecon[4693]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:04 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:04 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:04 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:04 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:04 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:04 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:04 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:04 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:04 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:04 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:04 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:04 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:04 crc restorecon[4693]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:04 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:04 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:04 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:04 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:04 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:04 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:04 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:04 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:04 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:04 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:04 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:04 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:04 crc restorecon[4693]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:04 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:04 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:04 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:04 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:04 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:04 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:04 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:04 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:04 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:04 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:04 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:04 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:04 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:04 crc restorecon[4693]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:04 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:04 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/3c9f3a59 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:04 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/1091c11b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:04 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/9a6821c6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:04 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/ec0c35e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:04 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/517f37e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:04 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/6214fe78 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:04 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/ba189c8b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:04 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/351e4f31 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:04 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/c0f219ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:29:04 crc restorecon[4693]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 27 14:29:04 crc restorecon[4693]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/8069f607 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 27 14:29:04 crc restorecon[4693]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/559c3d82 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Jan 27 14:29:04 crc restorecon[4693]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/605ad488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 27 14:29:04 crc restorecon[4693]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/148df488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 27 14:29:04 crc restorecon[4693]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/3bf6dcb4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c133,c223 Jan 27 14:29:04 crc restorecon[4693]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/022a2feb not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 27 14:29:04 crc restorecon[4693]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/938c3924 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 27 14:29:04 crc restorecon[4693]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/729fe23e not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Jan 27 14:29:04 crc restorecon[4693]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/1fd5cbd4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 27 14:29:04 crc restorecon[4693]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/a96697e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 27 14:29:04 crc restorecon[4693]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/e155ddca not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Jan 27 14:29:04 crc restorecon[4693]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/10dd0e0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 27 14:29:04 crc restorecon[4693]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 27 14:29:04 crc restorecon[4693]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 27 14:29:04 crc restorecon[4693]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 27 14:29:04 crc restorecon[4693]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 27 14:29:04 crc restorecon[4693]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 27 14:29:04 crc restorecon[4693]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 27 14:29:04 crc restorecon[4693]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 27 14:29:04 crc restorecon[4693]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 27 14:29:04 crc restorecon[4693]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 27 14:29:04 crc restorecon[4693]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 27 14:29:04 crc restorecon[4693]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 27 14:29:04 crc restorecon[4693]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 27 14:29:04 crc restorecon[4693]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 27 14:29:04 crc restorecon[4693]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 27 14:29:04 crc restorecon[4693]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 27 14:29:04 crc restorecon[4693]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 27 14:29:04 crc restorecon[4693]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 27 14:29:04 crc restorecon[4693]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 27 14:29:04 crc restorecon[4693]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 27 14:29:04 crc restorecon[4693]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 27 14:29:04 crc restorecon[4693]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/etc-hosts not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c682,c947 Jan 27 14:29:04 crc restorecon[4693]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/6f2c8392 not reset as customized by admin to system_u:object_r:container_file_t:s0:c267,c588 Jan 27 14:29:04 crc restorecon[4693]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/bd241ad9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 27 14:29:04 crc restorecon[4693]: /var/lib/kubelet/plugins not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 27 14:29:04 crc restorecon[4693]: /var/lib/kubelet/plugins/csi-hostpath not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 27 14:29:04 crc restorecon[4693]: /var/lib/kubelet/plugins/csi-hostpath/csi.sock not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 27 14:29:04 crc restorecon[4693]: /var/lib/kubelet/plugins/kubernetes.io not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 27 14:29:04 crc restorecon[4693]: /var/lib/kubelet/plugins/kubernetes.io/csi not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 27 14:29:04 crc restorecon[4693]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 27 14:29:04 crc restorecon[4693]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983 not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 27 14:29:04 crc restorecon[4693]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 27 14:29:04 crc restorecon[4693]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/vol_data.json not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 27 14:29:04 crc restorecon[4693]: /var/lib/kubelet/plugins_registry not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 27 14:29:04 crc restorecon[4693]: Relabeled /var/usrlocal/bin/kubenswrapper from system_u:object_r:bin_t:s0 to system_u:object_r:kubelet_exec_t:s0 Jan 27 14:29:04 crc kubenswrapper[4698]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 27 14:29:04 crc kubenswrapper[4698]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version. Jan 27 14:29:04 crc kubenswrapper[4698]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 27 14:29:04 crc kubenswrapper[4698]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
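[Editor's note] Everything from the start of the unit up to the kubenswrapper relabel above is restorecon reporting paths it deliberately left alone: they already carry the customizable container_file_t type (with per-pod MCS category pairs such as s0:c7,c13), so their contexts are "not reset as customized by admin". To make a dump like this digestible, here is a minimal sketch — Python standard library only; the log file name is hypothetical, and only the message format is taken from the entries above — that tallies the skipped entries by SELinux context and by kubelet subtree:

```python
# Sketch: summarize the restorecon "not reset as customized by admin"
# entries from a saved copy of this journal excerpt. "kubelet-start.log"
# is a hypothetical file name; only the message format comes from the log.
import re
from collections import Counter

# Matches e.g. "restorecon[4693]: /var/lib/kubelet/... not reset as
# customized by admin to system_u:object_r:container_file_t:s0:c7,c13".
# "\s+" before the context keeps the match working even when an entry
# is wrapped across lines in the saved excerpt.
ENTRY = re.compile(
    r"restorecon\[\d+\]: (?P<path>/\S+) not reset as customized by admin "
    r"to\s+(?P<context>\S+)"
)

def summarize(log_text: str) -> None:
    by_context = Counter()
    by_subtree = Counter()
    for m in ENTRY.finditer(log_text):
        by_context[m.group("context")] += 1
        # Bucket by the first path component under /var/lib/kubelet/.
        parts = m.group("path").split("/")
        if parts[:4] == ["", "var", "lib", "kubelet"] and len(parts) > 4:
            by_subtree["/".join(parts[:5])] += 1
    print("entries per SELinux context:")
    for ctx, n in by_context.most_common(10):
        print(f"  {n:6d}  {ctx}")
    print("entries per kubelet subtree:")
    for subtree, n in by_subtree.most_common(10):
        print(f"  {n:6d}  {subtree}")

if __name__ == "__main__":
    with open("kubelet-start.log", encoding="utf-8") as f:  # hypothetical
        summarize(f.read())
```

Run against this excerpt, the tallies would concentrate almost entirely under /var/lib/kubelet/pods, with the catalog-content volumes of the two registry pods (5225d0e4-… and 1d611f23-…) dominating the s0:c7,c13 bucket.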
Jan 27 14:29:04 crc kubenswrapper[4698]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 27 14:29:04 crc kubenswrapper[4698]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 27 14:29:04 crc kubenswrapper[4698]: I0127 14:29:04.765693 4698 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 27 14:29:04 crc kubenswrapper[4698]: W0127 14:29:04.777312 4698 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Jan 27 14:29:04 crc kubenswrapper[4698]: W0127 14:29:04.777352 4698 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Jan 27 14:29:04 crc kubenswrapper[4698]: W0127 14:29:04.777357 4698 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Jan 27 14:29:04 crc kubenswrapper[4698]: W0127 14:29:04.777361 4698 feature_gate.go:330] unrecognized feature gate: PlatformOperators Jan 27 14:29:04 crc kubenswrapper[4698]: W0127 14:29:04.777365 4698 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Jan 27 14:29:04 crc kubenswrapper[4698]: W0127 14:29:04.777370 4698 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Jan 27 14:29:04 crc kubenswrapper[4698]: W0127 14:29:04.777376 4698 feature_gate.go:330] unrecognized feature gate: SignatureStores Jan 27 14:29:04 crc kubenswrapper[4698]: W0127 14:29:04.777381 4698 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Jan 27 14:29:04 crc kubenswrapper[4698]: W0127 14:29:04.777385 4698 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Jan 27 14:29:04 crc kubenswrapper[4698]: W0127 14:29:04.777389 4698 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Jan 27 14:29:04 crc kubenswrapper[4698]: W0127 14:29:04.777395 4698 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Jan 27 14:29:04 crc kubenswrapper[4698]: W0127 14:29:04.777401 4698 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Jan 27 14:29:04 crc kubenswrapper[4698]: W0127 14:29:04.777406 4698 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Jan 27 14:29:04 crc kubenswrapper[4698]: W0127 14:29:04.777410 4698 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Jan 27 14:29:04 crc kubenswrapper[4698]: W0127 14:29:04.777415 4698 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Jan 27 14:29:04 crc kubenswrapper[4698]: W0127 14:29:04.777420 4698 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Jan 27 14:29:04 crc kubenswrapper[4698]: W0127 14:29:04.777425 4698 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Jan 27 14:29:04 crc kubenswrapper[4698]: W0127 14:29:04.777430 4698 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Jan 27 14:29:04 crc kubenswrapper[4698]: W0127 14:29:04.777436 4698 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
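[Editor's note] The deprecation warnings above each point at the config file passed via the kubelet's --config flag. As an illustration — the field names are taken from upstream KubeletConfiguration documentation, not from this log, so treat them as assumptions to verify against your kubelet version — the six flags named here map roughly as follows:

```python
# Illustration only: where each deprecated flag from the warnings above is
# believed to live in the KubeletConfiguration file (the file passed via
# --config). Field names are assumptions from upstream kubelet docs.
DEPRECATED_FLAG_TO_CONFIG_FIELD = {
    "--container-runtime-endpoint": "containerRuntimeEndpoint",
    "--volume-plugin-dir": "volumePluginDir",
    "--register-with-taints": "registerWithTaints",
    "--system-reserved": "systemReserved",
    # Per the warning text, this one is replaced by eviction settings
    # rather than by a same-named field:
    "--minimum-container-ttl-duration": "evictionHard / evictionSoft",
    # Per the warning text, this one is simply going away; image garbage
    # collection learns the sandbox image from the CRI runtime instead:
    "--pod-infra-container-image": None,
}

for flag, field in DEPRECATED_FLAG_TO_CONFIG_FIELD.items():
    print(f"{flag:35s} -> {field or 'no config-file equivalent'}")
```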
Jan 27 14:29:04 crc kubenswrapper[4698]: W0127 14:29:04.777442 4698 feature_gate.go:330] unrecognized feature gate: OVNObservability Jan 27 14:29:04 crc kubenswrapper[4698]: W0127 14:29:04.777446 4698 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Jan 27 14:29:04 crc kubenswrapper[4698]: W0127 14:29:04.777451 4698 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Jan 27 14:29:04 crc kubenswrapper[4698]: W0127 14:29:04.777455 4698 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Jan 27 14:29:04 crc kubenswrapper[4698]: W0127 14:29:04.777459 4698 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Jan 27 14:29:04 crc kubenswrapper[4698]: W0127 14:29:04.777463 4698 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Jan 27 14:29:04 crc kubenswrapper[4698]: W0127 14:29:04.777468 4698 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Jan 27 14:29:04 crc kubenswrapper[4698]: W0127 14:29:04.777473 4698 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Jan 27 14:29:04 crc kubenswrapper[4698]: W0127 14:29:04.777479 4698 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Jan 27 14:29:04 crc kubenswrapper[4698]: W0127 14:29:04.777483 4698 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Jan 27 14:29:04 crc kubenswrapper[4698]: W0127 14:29:04.777493 4698 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Jan 27 14:29:04 crc kubenswrapper[4698]: W0127 14:29:04.777497 4698 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Jan 27 14:29:04 crc kubenswrapper[4698]: W0127 14:29:04.777501 4698 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Jan 27 14:29:04 crc kubenswrapper[4698]: W0127 14:29:04.777505 4698 feature_gate.go:330] unrecognized feature gate: InsightsConfig Jan 27 14:29:04 crc kubenswrapper[4698]: W0127 14:29:04.777510 4698 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Jan 27 14:29:04 crc kubenswrapper[4698]: W0127 14:29:04.777514 4698 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Jan 27 14:29:04 crc kubenswrapper[4698]: W0127 14:29:04.777518 4698 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Jan 27 14:29:04 crc kubenswrapper[4698]: W0127 14:29:04.777523 4698 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Jan 27 14:29:04 crc kubenswrapper[4698]: W0127 14:29:04.777527 4698 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Jan 27 14:29:04 crc kubenswrapper[4698]: W0127 14:29:04.777533 4698 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
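
[Editor's note] The long runs of feature_gate.go:330 warnings above and below come from the upstream Kubernetes feature-gate parser rejecting names it has never registered: gates such as PlatformOperators, RouteAdvertisements, or GatewayAPI match OpenShift-level feature gates that are passed wholesale to a component that only knows upstream kubelet gates, so they are warned about and ignored. Known gates get different treatment: ones that have gone GA log feature_gate.go:353, and deprecated ones log feature_gate.go:351. The same list is re-logged each time the gate set is applied, which is why the flood repeats several times in this boot. A simplified, self-contained sketch of that three-way classification (the registry map is hypothetical; this is not the actual k8s.io/component-base/featuregate implementation):

    package main

    import "fmt"

    // gateState mimics the three cases visible in the log: unknown names,
    // known gates that have gone GA, and known-but-deprecated gates.
    type gateState struct {
    	known      bool
    	ga         bool
    	deprecated bool
    }

    func main() {
    	// Hypothetical registry standing in for the kubelet's registered gates.
    	registry := map[string]gateState{
    		"CloudDualStackNodeIPs": {known: true, ga: true},
    		"KMSv1":                 {known: true, deprecated: true},
    		"NodeSwap":              {known: true},
    	}

    	// A few gate settings as they might arrive from the command line.
    	requested := map[string]bool{
    		"CloudDualStackNodeIPs": true,
    		"KMSv1":                 true,
    		"RouteAdvertisements":   true, // OpenShift-level gate, unknown here
    	}

    	for name, enabled := range requested {
    		st := registry[name]
    		switch {
    		case !st.known:
    			fmt.Printf("W feature_gate.go:330] unrecognized feature gate: %s\n", name)
    		case st.ga && enabled:
    			fmt.Printf("W feature_gate.go:353] Setting GA feature gate %s=true. It will be removed in a future release.\n", name)
    		case st.deprecated:
    			fmt.Printf("W feature_gate.go:351] Setting deprecated feature gate %s=%t. It will be removed in a future release.\n", name, enabled)
    		}
    	}
    }
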
Jan 27 14:29:04 crc kubenswrapper[4698]: W0127 14:29:04.777538 4698 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Jan 27 14:29:04 crc kubenswrapper[4698]: W0127 14:29:04.777543 4698 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Jan 27 14:29:04 crc kubenswrapper[4698]: W0127 14:29:04.777548 4698 feature_gate.go:330] unrecognized feature gate: Example
Jan 27 14:29:04 crc kubenswrapper[4698]: W0127 14:29:04.777552 4698 feature_gate.go:330] unrecognized feature gate: NewOLM
Jan 27 14:29:04 crc kubenswrapper[4698]: W0127 14:29:04.777556 4698 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Jan 27 14:29:04 crc kubenswrapper[4698]: W0127 14:29:04.777560 4698 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Jan 27 14:29:04 crc kubenswrapper[4698]: W0127 14:29:04.777564 4698 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Jan 27 14:29:04 crc kubenswrapper[4698]: W0127 14:29:04.777568 4698 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Jan 27 14:29:04 crc kubenswrapper[4698]: W0127 14:29:04.777572 4698 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Jan 27 14:29:04 crc kubenswrapper[4698]: W0127 14:29:04.777576 4698 feature_gate.go:330] unrecognized feature gate: PinnedImages
Jan 27 14:29:04 crc kubenswrapper[4698]: W0127 14:29:04.777581 4698 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Jan 27 14:29:04 crc kubenswrapper[4698]: W0127 14:29:04.777587 4698 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Jan 27 14:29:04 crc kubenswrapper[4698]: W0127 14:29:04.777592 4698 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Jan 27 14:29:04 crc kubenswrapper[4698]: W0127 14:29:04.777597 4698 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Jan 27 14:29:04 crc kubenswrapper[4698]: W0127 14:29:04.777601 4698 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Jan 27 14:29:04 crc kubenswrapper[4698]: W0127 14:29:04.777604 4698 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Jan 27 14:29:04 crc kubenswrapper[4698]: W0127 14:29:04.777608 4698 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Jan 27 14:29:04 crc kubenswrapper[4698]: W0127 14:29:04.777612 4698 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Jan 27 14:29:04 crc kubenswrapper[4698]: W0127 14:29:04.777616 4698 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Jan 27 14:29:04 crc kubenswrapper[4698]: W0127 14:29:04.777620 4698 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Jan 27 14:29:04 crc kubenswrapper[4698]: W0127 14:29:04.777624 4698 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Jan 27 14:29:04 crc kubenswrapper[4698]: W0127 14:29:04.777628 4698 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Jan 27 14:29:04 crc kubenswrapper[4698]: W0127 14:29:04.777632 4698 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Jan 27 14:29:04 crc kubenswrapper[4698]: W0127 14:29:04.777661 4698 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Jan 27 14:29:04 crc kubenswrapper[4698]: W0127 14:29:04.777668 4698 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Jan 27 14:29:04 crc kubenswrapper[4698]: W0127 14:29:04.777672 4698 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Jan 27 14:29:04 crc kubenswrapper[4698]: W0127 14:29:04.777676 4698 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Jan 27 14:29:04 crc kubenswrapper[4698]: W0127 14:29:04.777681 4698 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Jan 27 14:29:04 crc kubenswrapper[4698]: W0127 14:29:04.777685 4698 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Jan 27 14:29:04 crc kubenswrapper[4698]: W0127 14:29:04.777689 4698 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Jan 27 14:29:04 crc kubenswrapper[4698]: W0127 14:29:04.777694 4698 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Jan 27 14:29:04 crc kubenswrapper[4698]: W0127 14:29:04.777698 4698 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Jan 27 14:29:04 crc kubenswrapper[4698]: I0127 14:29:04.779290 4698 flags.go:64] FLAG: --address="0.0.0.0"
Jan 27 14:29:04 crc kubenswrapper[4698]: I0127 14:29:04.779349 4698 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]"
Jan 27 14:29:04 crc kubenswrapper[4698]: I0127 14:29:04.779364 4698 flags.go:64] FLAG: --anonymous-auth="true"
Jan 27 14:29:04 crc kubenswrapper[4698]: I0127 14:29:04.779372 4698 flags.go:64] FLAG: --application-metrics-count-limit="100"
Jan 27 14:29:04 crc kubenswrapper[4698]: I0127 14:29:04.779381 4698 flags.go:64] FLAG: --authentication-token-webhook="false"
Jan 27 14:29:04 crc kubenswrapper[4698]: I0127 14:29:04.779387 4698 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s"
Jan 27 14:29:04 crc kubenswrapper[4698]: I0127 14:29:04.779396 4698 flags.go:64] FLAG: --authorization-mode="AlwaysAllow"
Jan 27 14:29:04 crc kubenswrapper[4698]: I0127 14:29:04.779404 4698 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s"
Jan 27 14:29:04 crc kubenswrapper[4698]: I0127 14:29:04.779410 4698 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s"
Jan 27 14:29:04 crc kubenswrapper[4698]: I0127 14:29:04.779415 4698 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id"
Jan 27 14:29:04 crc kubenswrapper[4698]: I0127 14:29:04.779422 4698 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig"
Jan 27 14:29:04 crc kubenswrapper[4698]: I0127 14:29:04.779438 4698 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki"
Jan 27 14:29:04 crc kubenswrapper[4698]: I0127 14:29:04.779444 4698 flags.go:64] FLAG: --cgroup-driver="cgroupfs"
Jan 27 14:29:04 crc kubenswrapper[4698]: I0127 14:29:04.779450 4698 flags.go:64] FLAG: --cgroup-root=""
Jan 27 14:29:04 crc kubenswrapper[4698]: I0127 14:29:04.779456 4698 flags.go:64] FLAG: --cgroups-per-qos="true"
Jan 27 14:29:04 crc kubenswrapper[4698]: I0127 14:29:04.779461 4698 flags.go:64] FLAG: --client-ca-file=""
Jan 27 14:29:04 crc kubenswrapper[4698]: I0127 14:29:04.779466 4698 flags.go:64] FLAG: --cloud-config=""
Jan 27 14:29:04 crc kubenswrapper[4698]: I0127 14:29:04.779472 4698 flags.go:64] FLAG: --cloud-provider=""
Jan 27 14:29:04 crc kubenswrapper[4698]: I0127 14:29:04.779477 4698 flags.go:64] FLAG: --cluster-dns="[]"
Jan 27 14:29:04 crc kubenswrapper[4698]: I0127 14:29:04.779485 4698 flags.go:64] FLAG: --cluster-domain=""
Jan 27 14:29:04 crc kubenswrapper[4698]: I0127 14:29:04.779490 4698 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf"
Jan 27 14:29:04 crc kubenswrapper[4698]: I0127 14:29:04.779496 4698 flags.go:64] FLAG: --config-dir=""
Jan 27 14:29:04 crc kubenswrapper[4698]: I0127 14:29:04.779501 4698 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json"
Jan 27 14:29:04 crc kubenswrapper[4698]: I0127 14:29:04.779507 4698 flags.go:64] FLAG: --container-log-max-files="5"
Jan 27 14:29:04 crc kubenswrapper[4698]: I0127 14:29:04.779527 4698 flags.go:64] FLAG: --container-log-max-size="10Mi"
Jan 27 14:29:04 crc kubenswrapper[4698]: I0127 14:29:04.779533 4698 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock"
Jan 27 14:29:04 crc kubenswrapper[4698]: I0127 14:29:04.779539 4698 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock"
Jan 27 14:29:04 crc kubenswrapper[4698]: I0127 14:29:04.779544 4698 flags.go:64] FLAG: --containerd-namespace="k8s.io"
Jan 27 14:29:04 crc kubenswrapper[4698]: I0127 14:29:04.779551 4698 flags.go:64] FLAG: --contention-profiling="false"
Jan 27 14:29:04 crc kubenswrapper[4698]: I0127 14:29:04.779557 4698 flags.go:64] FLAG: --cpu-cfs-quota="true"
Jan 27 14:29:04 crc kubenswrapper[4698]: I0127 14:29:04.779562 4698 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms"
Jan 27 14:29:04 crc kubenswrapper[4698]: I0127 14:29:04.779568 4698 flags.go:64] FLAG: --cpu-manager-policy="none"
Jan 27 14:29:04 crc kubenswrapper[4698]: I0127 14:29:04.779574 4698 flags.go:64] FLAG: --cpu-manager-policy-options=""
Jan 27 14:29:04 crc kubenswrapper[4698]: I0127 14:29:04.779586 4698 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s"
Jan 27 14:29:04 crc kubenswrapper[4698]: I0127 14:29:04.779592 4698 flags.go:64] FLAG: --enable-controller-attach-detach="true"
Jan 27 14:29:04 crc kubenswrapper[4698]: I0127 14:29:04.779597 4698 flags.go:64] FLAG: --enable-debugging-handlers="true"
Jan 27 14:29:04 crc kubenswrapper[4698]: I0127 14:29:04.779602 4698 flags.go:64] FLAG: --enable-load-reader="false"
Jan 27 14:29:04 crc kubenswrapper[4698]: I0127 14:29:04.779608 4698 flags.go:64] FLAG: --enable-server="true"
Jan 27 14:29:04 crc kubenswrapper[4698]: I0127 14:29:04.779613 4698 flags.go:64] FLAG: --enforce-node-allocatable="[pods]"
Jan 27 14:29:04 crc kubenswrapper[4698]: I0127 14:29:04.779621 4698 flags.go:64] FLAG: --event-burst="100"
Jan 27 14:29:04 crc kubenswrapper[4698]: I0127 14:29:04.779627 4698 flags.go:64] FLAG: --event-qps="50"
Jan 27 14:29:04 crc kubenswrapper[4698]: I0127 14:29:04.779632 4698 flags.go:64] FLAG: --event-storage-age-limit="default=0"
Jan 27 14:29:04 crc kubenswrapper[4698]: I0127 14:29:04.779663 4698 flags.go:64] FLAG: --event-storage-event-limit="default=0"
Jan 27 14:29:04 crc kubenswrapper[4698]: I0127 14:29:04.779668 4698 flags.go:64] FLAG: --eviction-hard=""
Jan 27 14:29:04 crc kubenswrapper[4698]: I0127 14:29:04.779675 4698 flags.go:64] FLAG: --eviction-max-pod-grace-period="0"
Jan 27 14:29:04 crc kubenswrapper[4698]: I0127 14:29:04.779681 4698 flags.go:64] FLAG: --eviction-minimum-reclaim=""
Jan 27 14:29:04 crc kubenswrapper[4698]: I0127 14:29:04.779687 4698 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s"
Jan 27 14:29:04 crc kubenswrapper[4698]: I0127 14:29:04.779694 4698 flags.go:64] FLAG: --eviction-soft=""
Jan 27 14:29:04 crc kubenswrapper[4698]: I0127 14:29:04.779699 4698 flags.go:64] FLAG: --eviction-soft-grace-period=""
Jan 27 14:29:04 crc kubenswrapper[4698]: I0127 14:29:04.779704 4698 flags.go:64] FLAG: --exit-on-lock-contention="false"
Jan 27 14:29:04 crc kubenswrapper[4698]: I0127 14:29:04.779710 4698 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false"
Jan 27 14:29:04 crc kubenswrapper[4698]: I0127 14:29:04.779715 4698 flags.go:64] FLAG: --experimental-mounter-path=""
Jan 27 14:29:04 crc kubenswrapper[4698]: I0127 14:29:04.779720 4698 flags.go:64] FLAG: --fail-cgroupv1="false"
Jan 27 14:29:04 crc kubenswrapper[4698]: I0127 14:29:04.779725 4698 flags.go:64] FLAG: --fail-swap-on="true"
Jan 27 14:29:04 crc kubenswrapper[4698]: I0127 14:29:04.779731 4698 flags.go:64] FLAG: --feature-gates=""
Jan 27 14:29:04 crc kubenswrapper[4698]: I0127 14:29:04.779738 4698 flags.go:64] FLAG: --file-check-frequency="20s"
Jan 27 14:29:04 crc kubenswrapper[4698]: I0127 14:29:04.779743 4698 flags.go:64] FLAG: --global-housekeeping-interval="1m0s"
Jan 27 14:29:04 crc kubenswrapper[4698]: I0127 14:29:04.779748 4698 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge"
Jan 27 14:29:04 crc kubenswrapper[4698]: I0127 14:29:04.779755 4698 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1"
Jan 27 14:29:04 crc kubenswrapper[4698]: I0127 14:29:04.779761 4698 flags.go:64] FLAG: --healthz-port="10248"
Jan 27 14:29:04 crc kubenswrapper[4698]: I0127 14:29:04.779766 4698 flags.go:64] FLAG: --help="false"
Jan 27 14:29:04 crc kubenswrapper[4698]: I0127 14:29:04.779772 4698 flags.go:64] FLAG: --hostname-override=""
Jan 27 14:29:04 crc kubenswrapper[4698]: I0127 14:29:04.779777 4698 flags.go:64] FLAG: --housekeeping-interval="10s"
Jan 27 14:29:04 crc kubenswrapper[4698]: I0127 14:29:04.779783 4698 flags.go:64] FLAG: --http-check-frequency="20s"
Jan 27 14:29:04 crc kubenswrapper[4698]: I0127 14:29:04.779789 4698 flags.go:64] FLAG: --image-credential-provider-bin-dir=""
Jan 27 14:29:04 crc kubenswrapper[4698]: I0127 14:29:04.779795 4698 flags.go:64] FLAG: --image-credential-provider-config=""
Jan 27 14:29:04 crc kubenswrapper[4698]: I0127 14:29:04.779800 4698 flags.go:64] FLAG: --image-gc-high-threshold="85"
Jan 27 14:29:04 crc kubenswrapper[4698]: I0127 14:29:04.779805 4698 flags.go:64] FLAG: --image-gc-low-threshold="80"
Jan 27 14:29:04 crc kubenswrapper[4698]: I0127 14:29:04.779810 4698 flags.go:64] FLAG: --image-service-endpoint=""
Jan 27 14:29:04 crc kubenswrapper[4698]: I0127 14:29:04.779815 4698 flags.go:64] FLAG: --kernel-memcg-notification="false"
Jan 27 14:29:04 crc kubenswrapper[4698]: I0127 14:29:04.779820 4698 flags.go:64] FLAG: --kube-api-burst="100"
Jan 27 14:29:04 crc kubenswrapper[4698]: I0127 14:29:04.779825 4698 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf"
Jan 27 14:29:04 crc kubenswrapper[4698]: I0127 14:29:04.779831 4698 flags.go:64] FLAG: --kube-api-qps="50"
Jan 27 14:29:04 crc kubenswrapper[4698]: I0127 14:29:04.779836 4698 flags.go:64] FLAG: --kube-reserved=""
Jan 27 14:29:04 crc kubenswrapper[4698]: I0127 14:29:04.779841 4698 flags.go:64] FLAG: --kube-reserved-cgroup=""
Jan 27 14:29:04 crc kubenswrapper[4698]: I0127 14:29:04.779846 4698 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig"
Jan 27 14:29:04 crc kubenswrapper[4698]: I0127 14:29:04.779852 4698 flags.go:64] FLAG: --kubelet-cgroups=""
Jan 27 14:29:04 crc kubenswrapper[4698]: I0127 14:29:04.779857 4698 flags.go:64] FLAG: --local-storage-capacity-isolation="true"
Jan 27 14:29:04 crc kubenswrapper[4698]: I0127 14:29:04.779862 4698 flags.go:64] FLAG: --lock-file=""
Jan 27 14:29:04 crc kubenswrapper[4698]: I0127 14:29:04.779867 4698 flags.go:64] FLAG: --log-cadvisor-usage="false"
Jan 27 14:29:04 crc kubenswrapper[4698]: I0127 14:29:04.779872 4698 flags.go:64] FLAG: --log-flush-frequency="5s"
Jan 27 14:29:04 crc kubenswrapper[4698]: I0127 14:29:04.779876 4698 flags.go:64] FLAG: --log-json-info-buffer-size="0"
Jan 27 14:29:04 crc kubenswrapper[4698]: I0127 14:29:04.779894 4698 flags.go:64] FLAG: --log-json-split-stream="false"
Jan 27 14:29:04 crc kubenswrapper[4698]: I0127 14:29:04.779899 4698 flags.go:64] FLAG: --log-text-info-buffer-size="0"
Jan 27 14:29:04 crc kubenswrapper[4698]: I0127 14:29:04.779903 4698 flags.go:64] FLAG: --log-text-split-stream="false"
Jan 27 14:29:04 crc kubenswrapper[4698]: I0127 14:29:04.779907 4698 flags.go:64] FLAG: --logging-format="text"
Jan 27 14:29:04 crc kubenswrapper[4698]: I0127 14:29:04.779911 4698 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id"
Jan 27 14:29:04 crc kubenswrapper[4698]: I0127 14:29:04.779916 4698 flags.go:64] FLAG: --make-iptables-util-chains="true"
Jan 27 14:29:04 crc kubenswrapper[4698]: I0127 14:29:04.779921 4698 flags.go:64] FLAG: --manifest-url=""
Jan 27 14:29:04 crc kubenswrapper[4698]: I0127 14:29:04.779925 4698 flags.go:64] FLAG: --manifest-url-header=""
Jan 27 14:29:04 crc kubenswrapper[4698]: I0127 14:29:04.779932 4698 flags.go:64] FLAG: --max-housekeeping-interval="15s"
Jan 27 14:29:04 crc kubenswrapper[4698]: I0127 14:29:04.779937 4698 flags.go:64] FLAG: --max-open-files="1000000"
Jan 27 14:29:04 crc kubenswrapper[4698]: I0127 14:29:04.779943 4698 flags.go:64] FLAG: --max-pods="110"
Jan 27 14:29:04 crc kubenswrapper[4698]: I0127 14:29:04.779947 4698 flags.go:64] FLAG: --maximum-dead-containers="-1"
Jan 27 14:29:04 crc kubenswrapper[4698]: I0127 14:29:04.779951 4698 flags.go:64] FLAG: --maximum-dead-containers-per-container="1"
Jan 27 14:29:04 crc kubenswrapper[4698]: I0127 14:29:04.779956 4698 flags.go:64] FLAG: --memory-manager-policy="None"
Jan 27 14:29:04 crc kubenswrapper[4698]: I0127 14:29:04.779960 4698 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s"
Jan 27 14:29:04 crc kubenswrapper[4698]: I0127 14:29:04.779965 4698 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s"
Jan 27 14:29:04 crc kubenswrapper[4698]: I0127 14:29:04.779969 4698 flags.go:64] FLAG: --node-ip="192.168.126.11"
Jan 27 14:29:04 crc kubenswrapper[4698]: I0127 14:29:04.779974 4698 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos"
Jan 27 14:29:04 crc kubenswrapper[4698]: I0127 14:29:04.779989 4698 flags.go:64] FLAG: --node-status-max-images="50"
Jan 27 14:29:04 crc kubenswrapper[4698]: I0127 14:29:04.779993 4698 flags.go:64] FLAG: --node-status-update-frequency="10s"
Jan 27 14:29:04 crc kubenswrapper[4698]: I0127 14:29:04.779998 4698 flags.go:64] FLAG: --oom-score-adj="-999"
Jan 27 14:29:04 crc kubenswrapper[4698]: I0127 14:29:04.780002 4698 flags.go:64] FLAG: --pod-cidr=""
Jan 27 14:29:04 crc kubenswrapper[4698]: I0127 14:29:04.780006 4698 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:33549946e22a9ffa738fd94b1345f90921bc8f92fa6137784cb33c77ad806f9d"
Jan 27 14:29:04 crc kubenswrapper[4698]: I0127 14:29:04.780015 4698 flags.go:64] FLAG: --pod-manifest-path=""
Jan 27 14:29:04 crc kubenswrapper[4698]: I0127 14:29:04.780020 4698 flags.go:64] FLAG: --pod-max-pids="-1"
Jan 27 14:29:04 crc kubenswrapper[4698]: I0127 14:29:04.780024 4698 flags.go:64] FLAG: --pods-per-core="0"
Jan 27 14:29:04 crc kubenswrapper[4698]: I0127 14:29:04.780028 4698 flags.go:64] FLAG: --port="10250"
Jan 27 14:29:04 crc kubenswrapper[4698]: I0127 14:29:04.780032 4698 flags.go:64] FLAG: --protect-kernel-defaults="false"
Jan 27 14:29:04 crc kubenswrapper[4698]: I0127 14:29:04.780036 4698 flags.go:64] FLAG: --provider-id=""
Jan 27 14:29:04 crc kubenswrapper[4698]: I0127 14:29:04.780041 4698 flags.go:64] FLAG: --qos-reserved=""
Jan 27 14:29:04 crc kubenswrapper[4698]: I0127 14:29:04.780045 4698 flags.go:64] FLAG: --read-only-port="10255"
Jan 27 14:29:04 crc kubenswrapper[4698]: I0127 14:29:04.780049 4698 flags.go:64] FLAG: --register-node="true"
Jan 27 14:29:04 crc kubenswrapper[4698]: I0127 14:29:04.780053 4698 flags.go:64] FLAG: --register-schedulable="true"
Jan 27 14:29:04 crc kubenswrapper[4698]: I0127 14:29:04.780057 4698 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule"
Jan 27 14:29:04 crc kubenswrapper[4698]: I0127 14:29:04.780066 4698 flags.go:64] FLAG: --registry-burst="10"
Jan 27 14:29:04 crc kubenswrapper[4698]: I0127 14:29:04.780070 4698 flags.go:64] FLAG: --registry-qps="5"
Jan 27 14:29:04 crc kubenswrapper[4698]: I0127 14:29:04.780074 4698 flags.go:64] FLAG: --reserved-cpus=""
Jan 27 14:29:04 crc kubenswrapper[4698]: I0127 14:29:04.780079 4698 flags.go:64] FLAG: --reserved-memory=""
Jan 27 14:29:04 crc kubenswrapper[4698]: I0127 14:29:04.780084 4698 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf"
Jan 27 14:29:04 crc kubenswrapper[4698]: I0127 14:29:04.780089 4698 flags.go:64] FLAG: --root-dir="/var/lib/kubelet"
Jan 27 14:29:04 crc kubenswrapper[4698]: I0127 14:29:04.780093 4698 flags.go:64] FLAG: --rotate-certificates="false"
Jan 27 14:29:04 crc kubenswrapper[4698]: I0127 14:29:04.780127 4698 flags.go:64] FLAG: --rotate-server-certificates="false"
Jan 27 14:29:04 crc kubenswrapper[4698]: I0127 14:29:04.780133 4698 flags.go:64] FLAG: --runonce="false"
Jan 27 14:29:04 crc kubenswrapper[4698]: I0127 14:29:04.780138 4698 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service"
Jan 27 14:29:04 crc kubenswrapper[4698]: I0127 14:29:04.780144 4698 flags.go:64] FLAG: --runtime-request-timeout="2m0s"
Jan 27 14:29:04 crc kubenswrapper[4698]: I0127 14:29:04.780149 4698 flags.go:64] FLAG: --seccomp-default="false"
Jan 27 14:29:04 crc kubenswrapper[4698]: I0127 14:29:04.780153 4698 flags.go:64] FLAG: --serialize-image-pulls="true"
Jan 27 14:29:04 crc kubenswrapper[4698]: I0127 14:29:04.780159 4698 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s"
Jan 27 14:29:04 crc kubenswrapper[4698]: I0127 14:29:04.780163 4698 flags.go:64] FLAG: --storage-driver-db="cadvisor"
Jan 27 14:29:04 crc kubenswrapper[4698]: I0127 14:29:04.780168 4698 flags.go:64] FLAG: --storage-driver-host="localhost:8086"
Jan 27 14:29:04 crc kubenswrapper[4698]: I0127 14:29:04.780172 4698 flags.go:64] FLAG: --storage-driver-password="root"
Jan 27 14:29:04 crc kubenswrapper[4698]: I0127 14:29:04.780177 4698 flags.go:64] FLAG: --storage-driver-secure="false"
Jan 27 14:29:04 crc kubenswrapper[4698]: I0127 14:29:04.780182 4698 flags.go:64] FLAG: --storage-driver-table="stats"
Jan 27 14:29:04 crc kubenswrapper[4698]: I0127 14:29:04.780186 4698 flags.go:64] FLAG: --storage-driver-user="root"
Jan 27 14:29:04 crc kubenswrapper[4698]: I0127 14:29:04.780190 4698 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s"
Jan 27 14:29:04 crc kubenswrapper[4698]: I0127 14:29:04.780195 4698 flags.go:64] FLAG: --sync-frequency="1m0s"
Jan 27 14:29:04 crc kubenswrapper[4698]: I0127 14:29:04.780199 4698 flags.go:64] FLAG: --system-cgroups=""
Jan 27 14:29:04 crc kubenswrapper[4698]: I0127 14:29:04.780203 4698 flags.go:64] FLAG: --system-reserved="cpu=200m,ephemeral-storage=350Mi,memory=350Mi"
Jan 27 14:29:04 crc kubenswrapper[4698]: I0127 14:29:04.780210 4698 flags.go:64] FLAG: --system-reserved-cgroup=""
Jan 27 14:29:04 crc kubenswrapper[4698]: I0127 14:29:04.780214 4698 flags.go:64] FLAG: --tls-cert-file=""
Jan 27 14:29:04 crc kubenswrapper[4698]: I0127 14:29:04.780218 4698 flags.go:64] FLAG: --tls-cipher-suites="[]"
Jan 27 14:29:04 crc kubenswrapper[4698]: I0127 14:29:04.780225 4698 flags.go:64] FLAG: --tls-min-version=""
Jan 27 14:29:04 crc kubenswrapper[4698]: I0127 14:29:04.780229 4698 flags.go:64] FLAG: --tls-private-key-file=""
Jan 27 14:29:04 crc kubenswrapper[4698]: I0127 14:29:04.780233 4698 flags.go:64] FLAG: --topology-manager-policy="none"
Jan 27 14:29:04 crc kubenswrapper[4698]: I0127 14:29:04.780237 4698 flags.go:64] FLAG: --topology-manager-policy-options=""
Jan 27 14:29:04 crc kubenswrapper[4698]: I0127 14:29:04.780241 4698 flags.go:64] FLAG: --topology-manager-scope="container"
Jan 27 14:29:04 crc kubenswrapper[4698]: I0127 14:29:04.780246 4698 flags.go:64] FLAG: --v="2"
Jan 27 14:29:04 crc kubenswrapper[4698]: I0127 14:29:04.780253 4698 flags.go:64] FLAG: --version="false"
Jan 27 14:29:04 crc kubenswrapper[4698]: I0127 14:29:04.780259 4698 flags.go:64] FLAG: --vmodule=""
Jan 27 14:29:04 crc kubenswrapper[4698]: I0127 14:29:04.780265 4698 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec"
Jan 27 14:29:04 crc kubenswrapper[4698]: I0127 14:29:04.780275 4698 flags.go:64] FLAG: --volume-stats-agg-period="1m0s"
Jan 27 14:29:04 crc kubenswrapper[4698]: W0127 14:29:04.780411 4698 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Jan 27 14:29:04 crc kubenswrapper[4698]: W0127 14:29:04.780418 4698 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Jan 27 14:29:04 crc kubenswrapper[4698]: W0127 14:29:04.780425 4698 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Jan 27 14:29:04 crc kubenswrapper[4698]: W0127 14:29:04.780430 4698 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Jan 27 14:29:04 crc kubenswrapper[4698]: W0127 14:29:04.780435 4698 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Jan 27 14:29:04 crc kubenswrapper[4698]: W0127 14:29:04.780441 4698 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Jan 27 14:29:04 crc kubenswrapper[4698]: W0127 14:29:04.780445 4698 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Jan 27 14:29:04 crc kubenswrapper[4698]: W0127 14:29:04.780450 4698 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Jan 27 14:29:04 crc kubenswrapper[4698]: W0127 14:29:04.780456 4698 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Jan 27 14:29:04 crc kubenswrapper[4698]: W0127 14:29:04.780461 4698 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Jan 27 14:29:04 crc kubenswrapper[4698]: W0127 14:29:04.780466 4698 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Jan 27 14:29:04 crc kubenswrapper[4698]: W0127 14:29:04.780473 4698 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
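
[Editor's note] The flags.go:64 block above is the kubelet echoing every command-line flag with its effective value at startup, one FLAG: entry per flag, defaults included. That makes the dump a convenient anchor for diffing configuration between boots. A small sketch that pulls the name/value pairs out of a journal excerpt; the two sample lines are copied verbatim from the dump above, and the regexp assumes flag values never embed a double quote, which holds for this dump:

    package main

    import (
    	"fmt"
    	"regexp"
    )

    // Two entries copied verbatim from the flag dump above.
    const journal = `Jan 27 14:29:04 crc kubenswrapper[4698]: I0127 14:29:04.779422 4698 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig"
    Jan 27 14:29:04 crc kubenswrapper[4698]: I0127 14:29:04.780253 4698 flags.go:64] FLAG: --version="false"`

    func main() {
    	// Values in this dump contain no embedded double quotes, so a
    	// non-greedy match up to the closing quote is sufficient.
    	re := regexp.MustCompile(`flags\.go:64\] FLAG: (--[\w.-]+)="(.*?)"`)
    	flags := map[string]string{}
    	for _, m := range re.FindAllStringSubmatch(journal, -1) {
    		flags[m[1]] = m[2]
    	}
    	for name, val := range flags {
    		fmt.Printf("%s = %q\n", name, val)
    	}
    }

Run against two full journal captures, two such maps can be compared key by key to spot a flag that changed between boots.
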
Jan 27 14:29:04 crc kubenswrapper[4698]: W0127 14:29:04.780479 4698 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Jan 27 14:29:04 crc kubenswrapper[4698]: W0127 14:29:04.780484 4698 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Jan 27 14:29:04 crc kubenswrapper[4698]: W0127 14:29:04.780489 4698 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Jan 27 14:29:04 crc kubenswrapper[4698]: W0127 14:29:04.780494 4698 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Jan 27 14:29:04 crc kubenswrapper[4698]: W0127 14:29:04.780498 4698 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Jan 27 14:29:04 crc kubenswrapper[4698]: W0127 14:29:04.780502 4698 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Jan 27 14:29:04 crc kubenswrapper[4698]: W0127 14:29:04.780505 4698 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Jan 27 14:29:04 crc kubenswrapper[4698]: W0127 14:29:04.780509 4698 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Jan 27 14:29:04 crc kubenswrapper[4698]: W0127 14:29:04.780513 4698 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Jan 27 14:29:04 crc kubenswrapper[4698]: W0127 14:29:04.780516 4698 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Jan 27 14:29:04 crc kubenswrapper[4698]: W0127 14:29:04.780520 4698 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Jan 27 14:29:04 crc kubenswrapper[4698]: W0127 14:29:04.780524 4698 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Jan 27 14:29:04 crc kubenswrapper[4698]: W0127 14:29:04.780529 4698 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Jan 27 14:29:04 crc kubenswrapper[4698]: W0127 14:29:04.780532 4698 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Jan 27 14:29:04 crc kubenswrapper[4698]: W0127 14:29:04.780536 4698 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Jan 27 14:29:04 crc kubenswrapper[4698]: W0127 14:29:04.780540 4698 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Jan 27 14:29:04 crc kubenswrapper[4698]: W0127 14:29:04.780544 4698 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Jan 27 14:29:04 crc kubenswrapper[4698]: W0127 14:29:04.780548 4698 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Jan 27 14:29:04 crc kubenswrapper[4698]: W0127 14:29:04.780551 4698 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Jan 27 14:29:04 crc kubenswrapper[4698]: W0127 14:29:04.780556 4698 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Jan 27 14:29:04 crc kubenswrapper[4698]: W0127 14:29:04.780560 4698 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Jan 27 14:29:04 crc kubenswrapper[4698]: W0127 14:29:04.780564 4698 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Jan 27 14:29:04 crc kubenswrapper[4698]: W0127 14:29:04.780569 4698 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Jan 27 14:29:04 crc kubenswrapper[4698]: W0127 14:29:04.780572 4698 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Jan 27 14:29:04 crc kubenswrapper[4698]: W0127 14:29:04.780577 4698 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Jan 27 14:29:04 crc 
kubenswrapper[4698]: W0127 14:29:04.780580 4698 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Jan 27 14:29:04 crc kubenswrapper[4698]: W0127 14:29:04.780585 4698 feature_gate.go:330] unrecognized feature gate: SignatureStores Jan 27 14:29:04 crc kubenswrapper[4698]: W0127 14:29:04.780589 4698 feature_gate.go:330] unrecognized feature gate: PlatformOperators Jan 27 14:29:04 crc kubenswrapper[4698]: W0127 14:29:04.780593 4698 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Jan 27 14:29:04 crc kubenswrapper[4698]: W0127 14:29:04.780596 4698 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Jan 27 14:29:04 crc kubenswrapper[4698]: W0127 14:29:04.780600 4698 feature_gate.go:330] unrecognized feature gate: Example Jan 27 14:29:04 crc kubenswrapper[4698]: W0127 14:29:04.780604 4698 feature_gate.go:330] unrecognized feature gate: InsightsConfig Jan 27 14:29:04 crc kubenswrapper[4698]: W0127 14:29:04.780608 4698 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Jan 27 14:29:04 crc kubenswrapper[4698]: W0127 14:29:04.780611 4698 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Jan 27 14:29:04 crc kubenswrapper[4698]: W0127 14:29:04.780614 4698 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Jan 27 14:29:04 crc kubenswrapper[4698]: W0127 14:29:04.780618 4698 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Jan 27 14:29:04 crc kubenswrapper[4698]: W0127 14:29:04.780621 4698 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Jan 27 14:29:04 crc kubenswrapper[4698]: W0127 14:29:04.780625 4698 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Jan 27 14:29:04 crc kubenswrapper[4698]: W0127 14:29:04.780628 4698 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Jan 27 14:29:04 crc kubenswrapper[4698]: W0127 14:29:04.780632 4698 feature_gate.go:330] unrecognized feature gate: OVNObservability Jan 27 14:29:04 crc kubenswrapper[4698]: W0127 14:29:04.780652 4698 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Jan 27 14:29:04 crc kubenswrapper[4698]: W0127 14:29:04.780656 4698 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Jan 27 14:29:04 crc kubenswrapper[4698]: W0127 14:29:04.780659 4698 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Jan 27 14:29:04 crc kubenswrapper[4698]: W0127 14:29:04.780663 4698 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Jan 27 14:29:04 crc kubenswrapper[4698]: W0127 14:29:04.780666 4698 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Jan 27 14:29:04 crc kubenswrapper[4698]: W0127 14:29:04.780670 4698 feature_gate.go:330] unrecognized feature gate: NewOLM Jan 27 14:29:04 crc kubenswrapper[4698]: W0127 14:29:04.780673 4698 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Jan 27 14:29:04 crc kubenswrapper[4698]: W0127 14:29:04.780677 4698 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Jan 27 14:29:04 crc kubenswrapper[4698]: W0127 14:29:04.780680 4698 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Jan 27 14:29:04 crc kubenswrapper[4698]: W0127 14:29:04.780684 4698 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Jan 27 14:29:04 crc kubenswrapper[4698]: W0127 14:29:04.780688 4698 feature_gate.go:330] unrecognized feature gate: PinnedImages Jan 27 14:29:04 crc kubenswrapper[4698]: 
W0127 14:29:04.780692 4698 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Jan 27 14:29:04 crc kubenswrapper[4698]: W0127 14:29:04.780696 4698 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Jan 27 14:29:04 crc kubenswrapper[4698]: W0127 14:29:04.780700 4698 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Jan 27 14:29:04 crc kubenswrapper[4698]: W0127 14:29:04.780704 4698 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Jan 27 14:29:04 crc kubenswrapper[4698]: W0127 14:29:04.780709 4698 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Jan 27 14:29:04 crc kubenswrapper[4698]: W0127 14:29:04.780712 4698 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Jan 27 14:29:04 crc kubenswrapper[4698]: W0127 14:29:04.780716 4698 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Jan 27 14:29:04 crc kubenswrapper[4698]: W0127 14:29:04.780719 4698 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Jan 27 14:29:04 crc kubenswrapper[4698]: I0127 14:29:04.780726 4698 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Jan 27 14:29:04 crc kubenswrapper[4698]: I0127 14:29:04.791355 4698 server.go:491] "Kubelet version" kubeletVersion="v1.31.5" Jan 27 14:29:04 crc kubenswrapper[4698]: I0127 14:29:04.791403 4698 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 27 14:29:04 crc kubenswrapper[4698]: W0127 14:29:04.791470 4698 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Jan 27 14:29:04 crc kubenswrapper[4698]: W0127 14:29:04.791478 4698 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Jan 27 14:29:04 crc kubenswrapper[4698]: W0127 14:29:04.791482 4698 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Jan 27 14:29:04 crc kubenswrapper[4698]: W0127 14:29:04.791488 4698 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
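
[Editor's note] Each warning flood ends with a feature_gate.go:386 line like the one just above, which prints the gate set that actually took effect as a Go map literal. It appears to list only the gates that were explicitly set, all of them upstream names; the unrecognized OpenShift-level gates from the warnings were dropped rather than applied. If you need that line programmatically (say, to assert a gate is on before chasing behavior that depends on it), the map[...] body parses in a couple of lines. The input below is the logged line abbreviated to a few entries for readability:

    package main

    import (
    	"fmt"
    	"regexp"
    )

    // The effective-gates line as logged by feature_gate.go:386 above,
    // abbreviated to a few of its entries.
    const line = `feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true KMSv1:true NodeSwap:false ValidatingAdmissionPolicy:true]}`

    func main() {
    	gates := map[string]bool{}
    	for _, m := range regexp.MustCompile(`(\w+):(true|false)`).FindAllStringSubmatch(line, -1) {
    		gates[m[1]] = m[2] == "true"
    	}
    	fmt.Println("ValidatingAdmissionPolicy enabled:", gates["ValidatingAdmissionPolicy"])
    }
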
Jan 27 14:29:04 crc kubenswrapper[4698]: W0127 14:29:04.791495 4698 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Jan 27 14:29:04 crc kubenswrapper[4698]: W0127 14:29:04.791499 4698 feature_gate.go:330] unrecognized feature gate: PlatformOperators Jan 27 14:29:04 crc kubenswrapper[4698]: W0127 14:29:04.791503 4698 feature_gate.go:330] unrecognized feature gate: Example Jan 27 14:29:04 crc kubenswrapper[4698]: W0127 14:29:04.791507 4698 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Jan 27 14:29:04 crc kubenswrapper[4698]: W0127 14:29:04.791511 4698 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Jan 27 14:29:04 crc kubenswrapper[4698]: W0127 14:29:04.791515 4698 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Jan 27 14:29:04 crc kubenswrapper[4698]: W0127 14:29:04.791519 4698 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Jan 27 14:29:04 crc kubenswrapper[4698]: W0127 14:29:04.791523 4698 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Jan 27 14:29:04 crc kubenswrapper[4698]: W0127 14:29:04.791527 4698 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Jan 27 14:29:04 crc kubenswrapper[4698]: W0127 14:29:04.791530 4698 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Jan 27 14:29:04 crc kubenswrapper[4698]: W0127 14:29:04.791534 4698 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Jan 27 14:29:04 crc kubenswrapper[4698]: W0127 14:29:04.791537 4698 feature_gate.go:330] unrecognized feature gate: NewOLM Jan 27 14:29:04 crc kubenswrapper[4698]: W0127 14:29:04.791541 4698 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Jan 27 14:29:04 crc kubenswrapper[4698]: W0127 14:29:04.791545 4698 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Jan 27 14:29:04 crc kubenswrapper[4698]: W0127 14:29:04.791548 4698 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Jan 27 14:29:04 crc kubenswrapper[4698]: W0127 14:29:04.791552 4698 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Jan 27 14:29:04 crc kubenswrapper[4698]: W0127 14:29:04.791555 4698 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Jan 27 14:29:04 crc kubenswrapper[4698]: W0127 14:29:04.791559 4698 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Jan 27 14:29:04 crc kubenswrapper[4698]: W0127 14:29:04.791563 4698 feature_gate.go:330] unrecognized feature gate: InsightsConfig Jan 27 14:29:04 crc kubenswrapper[4698]: W0127 14:29:04.791568 4698 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Jan 27 14:29:04 crc kubenswrapper[4698]: W0127 14:29:04.791572 4698 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Jan 27 14:29:04 crc kubenswrapper[4698]: W0127 14:29:04.791576 4698 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Jan 27 14:29:04 crc kubenswrapper[4698]: W0127 14:29:04.791580 4698 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Jan 27 14:29:04 crc kubenswrapper[4698]: W0127 14:29:04.791584 4698 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Jan 27 14:29:04 crc kubenswrapper[4698]: W0127 14:29:04.791589 4698 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Jan 27 14:29:04 crc kubenswrapper[4698]: W0127 14:29:04.791594 4698 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Jan 27 14:29:04 crc kubenswrapper[4698]: W0127 14:29:04.791598 4698 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Jan 27 14:29:04 crc kubenswrapper[4698]: W0127 14:29:04.791601 4698 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Jan 27 14:29:04 crc kubenswrapper[4698]: W0127 14:29:04.791606 4698 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Jan 27 14:29:04 crc kubenswrapper[4698]: W0127 14:29:04.791609 4698 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Jan 27 14:29:04 crc kubenswrapper[4698]: W0127 14:29:04.791614 4698 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Jan 27 14:29:04 crc kubenswrapper[4698]: W0127 14:29:04.791618 4698 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Jan 27 14:29:04 crc kubenswrapper[4698]: W0127 14:29:04.791622 4698 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Jan 27 14:29:04 crc kubenswrapper[4698]: W0127 14:29:04.791626 4698 feature_gate.go:330] unrecognized feature gate: GatewayAPI Jan 27 14:29:04 crc kubenswrapper[4698]: W0127 14:29:04.791630 4698 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Jan 27 14:29:04 crc kubenswrapper[4698]: W0127 14:29:04.791656 4698 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Jan 27 14:29:04 crc kubenswrapper[4698]: W0127 14:29:04.791660 4698 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Jan 27 14:29:04 crc kubenswrapper[4698]: W0127 14:29:04.791663 4698 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Jan 27 14:29:04 crc kubenswrapper[4698]: W0127 14:29:04.791667 4698 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Jan 27 14:29:04 crc kubenswrapper[4698]: W0127 14:29:04.791671 4698 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Jan 27 14:29:04 crc kubenswrapper[4698]: W0127 14:29:04.791675 4698 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Jan 27 14:29:04 crc kubenswrapper[4698]: W0127 14:29:04.791679 4698 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Jan 27 14:29:04 crc kubenswrapper[4698]: W0127 14:29:04.791684 4698 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Jan 27 14:29:04 crc kubenswrapper[4698]: W0127 14:29:04.791688 4698 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Jan 27 14:29:04 crc kubenswrapper[4698]: W0127 14:29:04.791692 4698 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Jan 27 14:29:04 crc kubenswrapper[4698]: W0127 14:29:04.791696 4698 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Jan 27 14:29:04 crc kubenswrapper[4698]: W0127 14:29:04.791700 4698 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Jan 27 14:29:04 crc kubenswrapper[4698]: W0127 14:29:04.791706 4698 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Jan 27 14:29:04 crc kubenswrapper[4698]: W0127 14:29:04.791710 4698 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Jan 27 14:29:04 crc kubenswrapper[4698]: W0127 14:29:04.791714 4698 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Jan 27 14:29:04 crc kubenswrapper[4698]: W0127 14:29:04.791717 4698 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Jan 27 14:29:04 crc kubenswrapper[4698]: W0127 14:29:04.791721 4698 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Jan 27 14:29:04 crc kubenswrapper[4698]: W0127 14:29:04.791724 4698 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Jan 27 14:29:04 crc kubenswrapper[4698]: W0127 14:29:04.791728 4698 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Jan 27 14:29:04 crc kubenswrapper[4698]: W0127 14:29:04.791731 4698 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Jan 27 14:29:04 crc kubenswrapper[4698]: W0127 14:29:04.791735 4698 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Jan 27 14:29:04 crc kubenswrapper[4698]: W0127 14:29:04.791738 4698 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Jan 27 14:29:04 crc kubenswrapper[4698]: W0127 14:29:04.791742 4698 feature_gate.go:330] unrecognized feature gate: SignatureStores Jan 27 14:29:04 crc kubenswrapper[4698]: W0127 14:29:04.791745 4698 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Jan 27 14:29:04 crc kubenswrapper[4698]: W0127 14:29:04.791749 4698 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Jan 27 14:29:04 crc kubenswrapper[4698]: W0127 14:29:04.791752 4698 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Jan 27 14:29:04 crc kubenswrapper[4698]: W0127 14:29:04.791756 4698 feature_gate.go:330] unrecognized feature gate: OVNObservability Jan 27 14:29:04 crc kubenswrapper[4698]: W0127 14:29:04.791759 4698 feature_gate.go:330] unrecognized feature gate: PinnedImages Jan 27 14:29:04 crc kubenswrapper[4698]: W0127 14:29:04.791763 4698 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Jan 27 14:29:04 crc kubenswrapper[4698]: W0127 14:29:04.791766 4698 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Jan 27 14:29:04 crc kubenswrapper[4698]: W0127 14:29:04.791770 4698 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Jan 27 14:29:04 crc kubenswrapper[4698]: W0127 14:29:04.791774 4698 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Jan 27 14:29:04 crc kubenswrapper[4698]: I0127 14:29:04.791781 4698 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Jan 27 14:29:04 crc kubenswrapper[4698]: W0127 14:29:04.791886 4698 feature_gate.go:330] unrecognized feature gate: Example Jan 27 14:29:04 crc kubenswrapper[4698]: W0127 14:29:04.791893 4698 feature_gate.go:330] unrecognized feature gate: 
MachineAPIOperatorDisableMachineHealthCheckController Jan 27 14:29:04 crc kubenswrapper[4698]: W0127 14:29:04.791898 4698 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Jan 27 14:29:04 crc kubenswrapper[4698]: W0127 14:29:04.791902 4698 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Jan 27 14:29:04 crc kubenswrapper[4698]: W0127 14:29:04.791905 4698 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Jan 27 14:29:04 crc kubenswrapper[4698]: W0127 14:29:04.791910 4698 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Jan 27 14:29:04 crc kubenswrapper[4698]: W0127 14:29:04.791914 4698 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Jan 27 14:29:04 crc kubenswrapper[4698]: W0127 14:29:04.791918 4698 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Jan 27 14:29:04 crc kubenswrapper[4698]: W0127 14:29:04.791921 4698 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Jan 27 14:29:04 crc kubenswrapper[4698]: W0127 14:29:04.791925 4698 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Jan 27 14:29:04 crc kubenswrapper[4698]: W0127 14:29:04.791928 4698 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Jan 27 14:29:04 crc kubenswrapper[4698]: W0127 14:29:04.791932 4698 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Jan 27 14:29:04 crc kubenswrapper[4698]: W0127 14:29:04.791936 4698 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Jan 27 14:29:04 crc kubenswrapper[4698]: W0127 14:29:04.791940 4698 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Jan 27 14:29:04 crc kubenswrapper[4698]: W0127 14:29:04.791943 4698 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Jan 27 14:29:04 crc kubenswrapper[4698]: W0127 14:29:04.791947 4698 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Jan 27 14:29:04 crc kubenswrapper[4698]: W0127 14:29:04.791950 4698 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Jan 27 14:29:04 crc kubenswrapper[4698]: W0127 14:29:04.791954 4698 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Jan 27 14:29:04 crc kubenswrapper[4698]: W0127 14:29:04.791957 4698 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Jan 27 14:29:04 crc kubenswrapper[4698]: W0127 14:29:04.791961 4698 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Jan 27 14:29:04 crc kubenswrapper[4698]: W0127 14:29:04.791965 4698 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Jan 27 14:29:04 crc kubenswrapper[4698]: W0127 14:29:04.791968 4698 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Jan 27 14:29:04 crc kubenswrapper[4698]: W0127 14:29:04.791972 4698 feature_gate.go:330] unrecognized feature gate: InsightsConfig Jan 27 14:29:04 crc kubenswrapper[4698]: W0127 14:29:04.791975 4698 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Jan 27 14:29:04 crc kubenswrapper[4698]: W0127 14:29:04.791979 4698 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Jan 27 14:29:04 crc kubenswrapper[4698]: W0127 14:29:04.791982 4698 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Jan 27 14:29:04 crc kubenswrapper[4698]: W0127 14:29:04.791986 4698 feature_gate.go:330] unrecognized feature gate: NewOLM Jan 27 14:29:04 crc 
kubenswrapper[4698]: W0127 14:29:04.791990 4698 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Jan 27 14:29:04 crc kubenswrapper[4698]: W0127 14:29:04.791994 4698 feature_gate.go:330] unrecognized feature gate: PlatformOperators Jan 27 14:29:04 crc kubenswrapper[4698]: W0127 14:29:04.791999 4698 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Jan 27 14:29:04 crc kubenswrapper[4698]: W0127 14:29:04.792002 4698 feature_gate.go:330] unrecognized feature gate: PinnedImages Jan 27 14:29:04 crc kubenswrapper[4698]: W0127 14:29:04.792006 4698 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Jan 27 14:29:04 crc kubenswrapper[4698]: W0127 14:29:04.792010 4698 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Jan 27 14:29:04 crc kubenswrapper[4698]: W0127 14:29:04.792013 4698 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Jan 27 14:29:04 crc kubenswrapper[4698]: W0127 14:29:04.792018 4698 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Jan 27 14:29:04 crc kubenswrapper[4698]: W0127 14:29:04.792021 4698 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Jan 27 14:29:04 crc kubenswrapper[4698]: W0127 14:29:04.792025 4698 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Jan 27 14:29:04 crc kubenswrapper[4698]: W0127 14:29:04.792029 4698 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Jan 27 14:29:04 crc kubenswrapper[4698]: W0127 14:29:04.792034 4698 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Jan 27 14:29:04 crc kubenswrapper[4698]: W0127 14:29:04.792038 4698 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Jan 27 14:29:04 crc kubenswrapper[4698]: W0127 14:29:04.792043 4698 feature_gate.go:330] unrecognized feature gate: SignatureStores Jan 27 14:29:04 crc kubenswrapper[4698]: W0127 14:29:04.792048 4698 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Jan 27 14:29:04 crc kubenswrapper[4698]: W0127 14:29:04.792053 4698 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Jan 27 14:29:04 crc kubenswrapper[4698]: W0127 14:29:04.792057 4698 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Jan 27 14:29:04 crc kubenswrapper[4698]: W0127 14:29:04.792061 4698 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Jan 27 14:29:04 crc kubenswrapper[4698]: W0127 14:29:04.792067 4698 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Jan 27 14:29:04 crc kubenswrapper[4698]: W0127 14:29:04.792072 4698 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Jan 27 14:29:04 crc kubenswrapper[4698]: W0127 14:29:04.792076 4698 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Jan 27 14:29:04 crc kubenswrapper[4698]: W0127 14:29:04.792081 4698 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Jan 27 14:29:04 crc kubenswrapper[4698]: W0127 14:29:04.792086 4698 feature_gate.go:330] unrecognized feature gate: OVNObservability
Jan 27 14:29:04 crc kubenswrapper[4698]: W0127 14:29:04.792092 4698 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Jan 27 14:29:04 crc kubenswrapper[4698]: W0127 14:29:04.792095 4698 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Jan 27 14:29:04 crc kubenswrapper[4698]: W0127 14:29:04.792128 4698 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Jan 27 14:29:04 crc kubenswrapper[4698]: W0127 14:29:04.792133 4698 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Jan 27 14:29:04 crc kubenswrapper[4698]: W0127 14:29:04.792136 4698 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Jan 27 14:29:04 crc kubenswrapper[4698]: W0127 14:29:04.792140 4698 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Jan 27 14:29:04 crc kubenswrapper[4698]: W0127 14:29:04.792144 4698 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Jan 27 14:29:04 crc kubenswrapper[4698]: W0127 14:29:04.792147 4698 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Jan 27 14:29:04 crc kubenswrapper[4698]: W0127 14:29:04.792151 4698 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Jan 27 14:29:04 crc kubenswrapper[4698]: W0127 14:29:04.792155 4698 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Jan 27 14:29:04 crc kubenswrapper[4698]: W0127 14:29:04.792160 4698 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Jan 27 14:29:04 crc kubenswrapper[4698]: W0127 14:29:04.792164 4698 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Jan 27 14:29:04 crc kubenswrapper[4698]: W0127 14:29:04.792169 4698 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Jan 27 14:29:04 crc kubenswrapper[4698]: W0127 14:29:04.792173 4698 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Jan 27 14:29:04 crc kubenswrapper[4698]: W0127 14:29:04.792178 4698 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Jan 27 14:29:04 crc kubenswrapper[4698]: W0127 14:29:04.792182 4698 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Jan 27 14:29:04 crc kubenswrapper[4698]: W0127 14:29:04.792186 4698 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Jan 27 14:29:04 crc kubenswrapper[4698]: W0127 14:29:04.792191 4698 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Jan 27 14:29:04 crc kubenswrapper[4698]: W0127 14:29:04.792196 4698 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Jan 27 14:29:04 crc kubenswrapper[4698]: W0127 14:29:04.792201 4698 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Jan 27 14:29:04 crc kubenswrapper[4698]: W0127 14:29:04.792207 4698 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Jan 27 14:29:04 crc kubenswrapper[4698]: I0127 14:29:04.792215 4698 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Jan 27 14:29:04 crc kubenswrapper[4698]: I0127 14:29:04.792398 4698 server.go:940] "Client rotation is on, will bootstrap in background"
Jan 27 14:29:04 crc kubenswrapper[4698]: I0127 14:29:04.798431 4698 bootstrap.go:85] "Current kubeconfig file contents are still valid, no bootstrap necessary"
Jan 27 14:29:04 crc kubenswrapper[4698]: I0127 14:29:04.798561 4698 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Jan 27 14:29:04 crc kubenswrapper[4698]: I0127 14:29:04.800590 4698 server.go:997] "Starting client certificate rotation"
Jan 27 14:29:04 crc kubenswrapper[4698]: I0127 14:29:04.800616 4698 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled
Jan 27 14:29:04 crc kubenswrapper[4698]: I0127 14:29:04.800810 4698 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-02-24 05:52:08 +0000 UTC, rotation deadline is 2025-11-15 03:04:23.559804417 +0000 UTC
Jan 27 14:29:04 crc kubenswrapper[4698]: I0127 14:29:04.800920 4698 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates
Jan 27 14:29:04 crc kubenswrapper[4698]: I0127 14:29:04.830065 4698 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Jan 27 14:29:04 crc kubenswrapper[4698]: E0127 14:29:04.831918 4698 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.212:6443: connect: connection refused" logger="UnhandledError"
Jan 27 14:29:04 crc kubenswrapper[4698]: I0127 14:29:04.834324 4698 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Jan 27 14:29:04 crc kubenswrapper[4698]: I0127 14:29:04.846624 4698 log.go:25] "Validated CRI v1 runtime API"
Jan 27 14:29:04 crc kubenswrapper[4698]: I0127 14:29:04.884952 4698 log.go:25] "Validated CRI v1 image API"
Jan 27 14:29:04 crc kubenswrapper[4698]: I0127 14:29:04.886571 4698 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Jan 27 14:29:04 crc kubenswrapper[4698]: I0127 14:29:04.891894 4698 fs.go:133] Filesystem UUIDs: map[0b076daa-c26a-46d2-b3a6-72a8dbc6e257:/dev/vda4 2026-01-27-14-23-34-00:/dev/sr0 7B77-95E7:/dev/vda2 de0497b0-db1b-465a-b278-03db02455c71:/dev/vda3]
Jan 27 14:29:04 crc kubenswrapper[4698]: I0127 14:29:04.891934 4698 fs.go:134] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/user/1000:{mountpoint:/run/user/1000 major:0 minor:41 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0} /var/lib/etcd:{mountpoint:/var/lib/etcd major:0 minor:43 fsType:tmpfs blockSize:0}]
Jan 27 14:29:04 crc kubenswrapper[4698]: I0127 14:29:04.911498 4698 manager.go:217] Machine: {Timestamp:2026-01-27 14:29:04.907066573 +0000 UTC m=+0.583844058 CPUVendorID:AuthenticAMD NumCores:12 NumPhysicalCores:1 NumSockets:12 CpuFrequency:2799998 MemoryCapacity:33654120448 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:21801e6708c44f15b81395eb736a7cec SystemUUID:3b71cf61-a3fa-4076-a23c-5d695e40fc0d BootID:9d78a2be-22ac-47e6-a326-83038cc10e0c Filesystems:[{Device:/var/lib/etcd DeviceMajor:0 DeviceMinor:43 Capacity:1073741824 Type:vfs Inodes:4108169 HasInodes:true} {Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:16827060224 Type:vfs Inodes:4108169 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:6730825728 Type:vfs Inodes:819200 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:85292941312 Type:vfs Inodes:41679680 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:16827060224 Type:vfs Inodes:1048576 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:/run/user/1000 DeviceMajor:0 DeviceMinor:41 Capacity:3365408768 Type:vfs Inodes:821633 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:3e:d1:5f:d0 Speed:0 Mtu:1500} {Name:br-int MacAddress:d6:39:55:2e:22:71 Speed:0 Mtu:1400} {Name:ens3 MacAddress:fa:16:3e:d1:5f:d0 Speed:-1 Mtu:1500} {Name:ens7 MacAddress:fa:16:3e:96:21:65 Speed:-1 Mtu:1500} {Name:ens7.20 MacAddress:52:54:00:95:dc:f0 Speed:-1 Mtu:1496} {Name:ens7.21 MacAddress:52:54:00:fa:19:87 Speed:-1 Mtu:1496} {Name:ens7.22 MacAddress:52:54:00:f3:56:8b Speed:-1 Mtu:1496} {Name:eth10 MacAddress:fa:d1:73:65:d7:e2 Speed:0 Mtu:1500} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:d9:00:02 Speed:0 Mtu:1400} {Name:ovs-system MacAddress:2e:68:d0:4d:04:14 Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:33654120448 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None}
Jan 27 14:29:04 crc kubenswrapper[4698]: I0127 14:29:04.911773 4698 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available.
Jan 27 14:29:04 crc kubenswrapper[4698]: I0127 14:29:04.911942 4698 manager.go:233] Version: {KernelVersion:5.14.0-427.50.2.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202502100215-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:}
Jan 27 14:29:04 crc kubenswrapper[4698]: I0127 14:29:04.913407 4698 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Jan 27 14:29:04 crc kubenswrapper[4698]: I0127 14:29:04.913614 4698 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jan 27 14:29:04 crc kubenswrapper[4698]: I0127 14:29:04.913670 4698 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"crc","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"200m","ephemeral-storage":"350Mi","memory":"350Mi"},"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Jan 27 14:29:04 crc kubenswrapper[4698]: I0127 14:29:04.913866 4698 topology_manager.go:138] "Creating topology manager with none policy"
Jan 27 14:29:04 crc kubenswrapper[4698]: I0127 14:29:04.913875 4698 container_manager_linux.go:303] "Creating device plugin manager"
Jan 27 14:29:04 crc kubenswrapper[4698]: I0127 14:29:04.914340 4698 manager.go:142] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock"
Jan 27 14:29:04 crc kubenswrapper[4698]: I0127 14:29:04.914372 4698 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock"
Jan 27 14:29:04 crc kubenswrapper[4698]: I0127 14:29:04.915052 4698 state_mem.go:36] "Initialized new in-memory state store"
Jan 27 14:29:04 crc kubenswrapper[4698]: I0127 14:29:04.915149 4698 server.go:1245] "Using root directory" path="/var/lib/kubelet"
Jan 27 14:29:04 crc kubenswrapper[4698]: I0127 14:29:04.920056 4698 kubelet.go:418] "Attempting to sync node with API server"
Jan 27 14:29:04 crc kubenswrapper[4698]: I0127 14:29:04.920077 4698 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests"
Jan 27 14:29:04 crc kubenswrapper[4698]: I0127 14:29:04.920099 4698 file.go:69] "Watching path" path="/etc/kubernetes/manifests"
Jan 27 14:29:04 crc kubenswrapper[4698]: I0127 14:29:04.920112 4698 kubelet.go:324] "Adding apiserver pod source"
Jan 27 14:29:04 crc kubenswrapper[4698]: I0127 14:29:04.920140 4698 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jan 27 14:29:04 crc kubenswrapper[4698]: W0127 14:29:04.925508 4698 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.212:6443: connect: connection refused
Jan 27 14:29:04 crc kubenswrapper[4698]: W0127 14:29:04.925567 4698 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.212:6443: connect: connection refused
Jan 27 14:29:04 crc kubenswrapper[4698]: E0127 14:29:04.925669 4698 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.212:6443: connect: connection refused" logger="UnhandledError"
Jan 27 14:29:04 crc kubenswrapper[4698]: E0127 14:29:04.925599 4698 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.212:6443: connect: connection refused" logger="UnhandledError"
Jan 27 14:29:04 crc kubenswrapper[4698]: I0127 14:29:04.927444 4698 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.5-4.rhaos4.18.gitdad78d5.el9" apiVersion="v1"
Jan 27 14:29:04 crc kubenswrapper[4698]: I0127 14:29:04.928539 4698 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem".
Jan 27 14:29:04 crc kubenswrapper[4698]: I0127 14:29:04.929805 4698 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Jan 27 14:29:04 crc kubenswrapper[4698]: I0127 14:29:04.931293 4698 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume"
Jan 27 14:29:04 crc kubenswrapper[4698]: I0127 14:29:04.931315 4698 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir"
Jan 27 14:29:04 crc kubenswrapper[4698]: I0127 14:29:04.931322 4698 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo"
Jan 27 14:29:04 crc kubenswrapper[4698]: I0127 14:29:04.931329 4698 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path"
Jan 27 14:29:04 crc kubenswrapper[4698]: I0127 14:29:04.931339 4698 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs"
Jan 27 14:29:04 crc kubenswrapper[4698]: I0127 14:29:04.931345 4698 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret"
Jan 27 14:29:04 crc kubenswrapper[4698]: I0127 14:29:04.931352 4698 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi"
Jan 27 14:29:04 crc kubenswrapper[4698]: I0127 14:29:04.931361 4698 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/downward-api"
Jan 27 14:29:04 crc kubenswrapper[4698]: I0127 14:29:04.931370 4698 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc"
Jan 27 14:29:04 crc kubenswrapper[4698]: I0127 14:29:04.931377 4698 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap"
Jan 27 14:29:04 crc kubenswrapper[4698]: I0127 14:29:04.931392 4698 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected"
Jan 27 14:29:04 crc kubenswrapper[4698]: I0127 14:29:04.931398 4698 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume"
Jan 27 14:29:04 crc kubenswrapper[4698]: I0127 14:29:04.932777 4698 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/csi"
Jan 27 14:29:04 crc kubenswrapper[4698]: I0127 14:29:04.933555 4698 server.go:1280] "Started kubelet"
Jan 27 14:29:04 crc kubenswrapper[4698]: I0127 14:29:04.934478 4698 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.212:6443: connect: connection refused
Jan 27 14:29:04 crc kubenswrapper[4698]: I0127 14:29:04.934882 4698 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Jan 27 14:29:04 crc systemd[1]: Started Kubernetes Kubelet.
Jan 27 14:29:04 crc kubenswrapper[4698]: I0127 14:29:04.934904 4698 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Jan 27 14:29:04 crc kubenswrapper[4698]: I0127 14:29:04.938172 4698 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jan 27 14:29:04 crc kubenswrapper[4698]: I0127 14:29:04.938243 4698 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled
Jan 27 14:29:04 crc kubenswrapper[4698]: I0127 14:29:04.938297 4698 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jan 27 14:29:04 crc kubenswrapper[4698]: E0127 14:29:04.938868 4698 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 27 14:29:04 crc kubenswrapper[4698]: I0127 14:29:04.938966 4698 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-12 05:17:54.142140448 +0000 UTC
Jan 27 14:29:04 crc kubenswrapper[4698]: E0127 14:29:04.939561 4698 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.212:6443: connect: connection refused" interval="200ms"
Jan 27 14:29:04 crc kubenswrapper[4698]: I0127 14:29:04.940802 4698 factory.go:55] Registering systemd factory
Jan 27 14:29:04 crc kubenswrapper[4698]: I0127 14:29:04.940848 4698 factory.go:221] Registration of the systemd container factory successfully
Jan 27 14:29:04 crc kubenswrapper[4698]: I0127 14:29:04.940859 4698 volume_manager.go:287] "The desired_state_of_world populator starts"
Jan 27 14:29:04 crc kubenswrapper[4698]: I0127 14:29:04.940874 4698 volume_manager.go:289] "Starting Kubelet Volume Manager"
Jan 27 14:29:04 crc kubenswrapper[4698]: I0127 14:29:04.941004 4698 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Jan 27 14:29:04 crc kubenswrapper[4698]: I0127 14:29:04.941651 4698 factory.go:153] Registering CRI-O factory
Jan 27 14:29:04 crc kubenswrapper[4698]: I0127 14:29:04.941733 4698 factory.go:221] Registration of the crio container factory successfully
Jan 27 14:29:04 crc kubenswrapper[4698]: I0127 14:29:04.941862 4698 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory
Jan 27 14:29:04 crc kubenswrapper[4698]: I0127 14:29:04.941958 4698 factory.go:103] Registering Raw factory
Jan 27 14:29:04 crc kubenswrapper[4698]: W0127 14:29:04.941939 4698 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.212:6443: connect: connection refused
Jan 27 14:29:04 crc kubenswrapper[4698]: E0127 14:29:04.942089 4698 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.212:6443: connect: connection refused" logger="UnhandledError"
Jan 27 14:29:04 crc kubenswrapper[4698]: I0127 14:29:04.942023 4698 manager.go:1196] Started watching for new ooms in manager
Jan 27 14:29:04 crc kubenswrapper[4698]: I0127 14:29:04.942869 4698 manager.go:319] Starting recovery of all containers
Jan 27 14:29:04 crc kubenswrapper[4698]: I0127 14:29:04.942910 4698 server.go:460] "Adding debug handlers to kubelet server"
Jan 27 14:29:04 crc kubenswrapper[4698]: E0127 14:29:04.942377 4698 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.212:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.188e9cd5a03d4d03 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-27 14:29:04.933522691 +0000 UTC m=+0.610300146,LastTimestamp:2026-01-27 14:29:04.933522691 +0000 UTC m=+0.610300146,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Jan 27 14:29:04 crc kubenswrapper[4698]: I0127 14:29:04.953989 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" seLinuxMountContext=""
Jan 27 14:29:04 crc kubenswrapper[4698]: I0127 14:29:04.954263 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" seLinuxMountContext=""
Jan 27 14:29:04 crc kubenswrapper[4698]: I0127 14:29:04.954282 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" seLinuxMountContext=""
Jan 27 14:29:04 crc kubenswrapper[4698]: I0127 14:29:04.954295 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" seLinuxMountContext=""
Jan 27 14:29:04 crc kubenswrapper[4698]: I0127 14:29:04.954309 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" seLinuxMountContext=""
Jan 27 14:29:04 crc kubenswrapper[4698]: I0127 14:29:04.954321 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" seLinuxMountContext=""
Jan 27 14:29:04 crc kubenswrapper[4698]: I0127 14:29:04.954341 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" seLinuxMountContext=""
Jan 27 14:29:04 crc kubenswrapper[4698]: I0127 14:29:04.954353 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" seLinuxMountContext=""
Jan 27 14:29:04 crc kubenswrapper[4698]: I0127 14:29:04.954370 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" seLinuxMountContext=""
Jan 27 14:29:04 crc kubenswrapper[4698]: I0127 14:29:04.954381 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" seLinuxMountContext=""
Jan 27 14:29:04 crc kubenswrapper[4698]: I0127 14:29:04.954400 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" seLinuxMountContext=""
Jan 27 14:29:04 crc kubenswrapper[4698]: I0127 14:29:04.954413 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" seLinuxMountContext=""
Jan 27 14:29:04 crc kubenswrapper[4698]: I0127 14:29:04.954426 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" seLinuxMountContext=""
Jan 27 14:29:04 crc kubenswrapper[4698]: I0127 14:29:04.954439 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" seLinuxMountContext=""
Jan 27 14:29:04 crc kubenswrapper[4698]: I0127 14:29:04.954451 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" seLinuxMountContext=""
Jan 27 14:29:04 crc kubenswrapper[4698]: I0127 14:29:04.954464 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" seLinuxMountContext=""
Jan 27 14:29:04 crc kubenswrapper[4698]: I0127 14:29:04.954497 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" seLinuxMountContext=""
Jan 27 14:29:04 crc kubenswrapper[4698]: I0127 14:29:04.954509 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" seLinuxMountContext=""
Jan 27 14:29:04 crc kubenswrapper[4698]: I0127 14:29:04.954522 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" seLinuxMountContext=""
Jan 27 14:29:04 crc kubenswrapper[4698]: I0127 14:29:04.954537 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" seLinuxMountContext=""
Jan 27 14:29:04 crc kubenswrapper[4698]: I0127 14:29:04.954549 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" seLinuxMountContext=""
Jan 27 14:29:04 crc kubenswrapper[4698]: I0127 14:29:04.954561 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" seLinuxMountContext=""
Jan 27 14:29:04 crc kubenswrapper[4698]: I0127 14:29:04.954571 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" seLinuxMountContext=""
Jan 27 14:29:04 crc kubenswrapper[4698]: I0127 14:29:04.954584 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" seLinuxMountContext=""
Jan 27 14:29:04 crc kubenswrapper[4698]: I0127 14:29:04.954595 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" volumeName="kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" seLinuxMountContext=""
Jan 27 14:29:04 crc kubenswrapper[4698]: I0127 14:29:04.954606 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" seLinuxMountContext=""
Jan 27 14:29:04 crc kubenswrapper[4698]: I0127 14:29:04.954622 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" seLinuxMountContext=""
Jan 27 14:29:04 crc kubenswrapper[4698]: I0127 14:29:04.954651 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" seLinuxMountContext=""
Jan 27 14:29:04 crc kubenswrapper[4698]: I0127 14:29:04.954665 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" seLinuxMountContext=""
Jan 27 14:29:04 crc kubenswrapper[4698]: I0127 14:29:04.954676 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" seLinuxMountContext=""
Jan 27 14:29:04 crc kubenswrapper[4698]: I0127 14:29:04.954690 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" seLinuxMountContext=""
Jan 27 14:29:04 crc kubenswrapper[4698]: I0127 14:29:04.954702 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" seLinuxMountContext=""
Jan 27 14:29:04 crc kubenswrapper[4698]: I0127 14:29:04.954715 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" seLinuxMountContext=""
Jan 27 14:29:04 crc kubenswrapper[4698]: I0127 14:29:04.954756 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" seLinuxMountContext=""
Jan 27 14:29:04 crc kubenswrapper[4698]: I0127 14:29:04.954768 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" seLinuxMountContext=""
Jan 27 14:29:04 crc kubenswrapper[4698]: I0127 14:29:04.954778 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" seLinuxMountContext=""
Jan 27 14:29:04 crc kubenswrapper[4698]: I0127 14:29:04.954789 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" seLinuxMountContext=""
Jan 27 14:29:04 crc kubenswrapper[4698]: I0127 14:29:04.954803 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" seLinuxMountContext=""
Jan 27 14:29:04 crc kubenswrapper[4698]: I0127 14:29:04.954816 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" seLinuxMountContext=""
Jan 27 14:29:04 crc kubenswrapper[4698]: I0127 14:29:04.954829 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" seLinuxMountContext=""
Jan 27 14:29:04 crc kubenswrapper[4698]: I0127 14:29:04.954840 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" seLinuxMountContext=""
Jan 27 14:29:04 crc kubenswrapper[4698]: I0127 14:29:04.954851 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" seLinuxMountContext=""
Jan 27 14:29:04 crc kubenswrapper[4698]: I0127 14:29:04.954866 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" seLinuxMountContext=""
Jan 27 14:29:04 crc kubenswrapper[4698]: I0127 14:29:04.954878 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" seLinuxMountContext=""
Jan 27 14:29:04 crc kubenswrapper[4698]: I0127 14:29:04.954890 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" seLinuxMountContext=""
Jan 27 14:29:04 crc kubenswrapper[4698]: I0127 14:29:04.954901 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" seLinuxMountContext=""
Jan 27 14:29:04 crc kubenswrapper[4698]: I0127 14:29:04.954913 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls" seLinuxMountContext=""
Jan 27 14:29:04 crc kubenswrapper[4698]: I0127 14:29:04.954925 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" seLinuxMountContext=""
Jan 27 14:29:04 crc kubenswrapper[4698]: I0127 14:29:04.954937 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" seLinuxMountContext=""
Jan 27 14:29:04 crc kubenswrapper[4698]: I0127 14:29:04.954948 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" seLinuxMountContext=""
Jan 27 14:29:04 crc kubenswrapper[4698]: I0127 14:29:04.954961 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" seLinuxMountContext=""
Jan 27 14:29:04 crc kubenswrapper[4698]: I0127 14:29:04.954972 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" seLinuxMountContext=""
Jan 27 14:29:04 crc kubenswrapper[4698]: I0127 14:29:04.954989 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" seLinuxMountContext=""
Jan 27 14:29:04 crc kubenswrapper[4698]: I0127 14:29:04.955002 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" seLinuxMountContext=""
Jan 27 14:29:04 crc kubenswrapper[4698]: I0127 14:29:04.955014 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" seLinuxMountContext=""
Jan 27 14:29:04 crc kubenswrapper[4698]: I0127 14:29:04.955026 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" seLinuxMountContext=""
Jan 27 14:29:04 crc kubenswrapper[4698]: I0127 14:29:04.955037 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" seLinuxMountContext=""
Jan 27 14:29:04 crc kubenswrapper[4698]: I0127 14:29:04.955048 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" seLinuxMountContext=""
Jan 27 14:29:04 crc kubenswrapper[4698]: I0127 14:29:04.955060 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" seLinuxMountContext=""
Jan 27 14:29:04 crc kubenswrapper[4698]: I0127 14:29:04.955071 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" seLinuxMountContext=""
Jan 27 14:29:04 crc kubenswrapper[4698]: I0127 14:29:04.955082 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" seLinuxMountContext=""
Jan 27 14:29:04 crc kubenswrapper[4698]: I0127 14:29:04.955094 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" seLinuxMountContext=""
Jan 27 14:29:04 crc kubenswrapper[4698]: I0127 14:29:04.955105 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" seLinuxMountContext=""
Jan 27 14:29:04 crc kubenswrapper[4698]: I0127 14:29:04.955115 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" seLinuxMountContext=""
Jan 27 14:29:04 crc kubenswrapper[4698]: I0127 14:29:04.955128 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" seLinuxMountContext=""
Jan 27 14:29:04 crc kubenswrapper[4698]: I0127 14:29:04.955143 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" seLinuxMountContext=""
Jan 27 14:29:04 crc kubenswrapper[4698]: I0127 14:29:04.955155 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d751cbb-f2e2-430d-9754-c882a5e924a5" volumeName="kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl" seLinuxMountContext=""
Jan 27 14:29:04 crc kubenswrapper[4698]: I0127 14:29:04.955166 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" seLinuxMountContext=""
Jan 27 14:29:04 crc kubenswrapper[4698]: I0127 14:29:04.955176 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" seLinuxMountContext=""
Jan 27 14:29:04 crc kubenswrapper[4698]: I0127 14:29:04.955191 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" seLinuxMountContext=""
Jan 27 14:29:04 crc kubenswrapper[4698]: I0127 14:29:04.955200 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" seLinuxMountContext=""
Jan 27 14:29:04 crc kubenswrapper[4698]: I0127 14:29:04.955212 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" seLinuxMountContext=""
Jan 27 14:29:04 crc kubenswrapper[4698]: I0127 14:29:04.955222 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" seLinuxMountContext=""
Jan 27 14:29:04 crc kubenswrapper[4698]: I0127 14:29:04.955232 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" seLinuxMountContext=""
Jan 27 14:29:04 crc kubenswrapper[4698]: I0127 14:29:04.955248 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides" seLinuxMountContext=""
Jan 27 14:29:04 crc kubenswrapper[4698]: I0127 14:29:04.955259 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" seLinuxMountContext=""
Jan 27 14:29:04 crc kubenswrapper[4698]: I0127 14:29:04.955269 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" seLinuxMountContext=""
Jan 27 14:29:04 crc kubenswrapper[4698]: I0127 14:29:04.955279 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" seLinuxMountContext=""
Jan 27 14:29:04 crc kubenswrapper[4698]: I0127 14:29:04.955290 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" seLinuxMountContext=""
Jan 27 14:29:04 crc kubenswrapper[4698]: I0127 14:29:04.955301 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" seLinuxMountContext=""
Jan 27 14:29:04 crc kubenswrapper[4698]: I0127 14:29:04.955313 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" seLinuxMountContext=""
Jan 27 14:29:04 crc kubenswrapper[4698]: I0127 14:29:04.955324 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" seLinuxMountContext=""
Jan 27 14:29:04 crc kubenswrapper[4698]: I0127 14:29:04.955334 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" seLinuxMountContext=""
Jan 27 14:29:04 crc kubenswrapper[4698]: I0127 14:29:04.955345 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" seLinuxMountContext=""
Jan 27 14:29:04 crc kubenswrapper[4698]: I0127 14:29:04.955357 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" seLinuxMountContext=""
Jan 27 14:29:04 crc kubenswrapper[4698]: I0127 14:29:04.955369 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" seLinuxMountContext=""
Jan 27 14:29:04 crc kubenswrapper[4698]: I0127 14:29:04.955381 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3b6479f0-333b-4a96-9adf-2099afdc2447" volumeName="kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr" seLinuxMountContext=""
Jan 27 14:29:04 crc kubenswrapper[4698]: I0127 14:29:04.955393 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" seLinuxMountContext=""
Jan 27 14:29:04 crc kubenswrapper[4698]: I0127 14:29:04.955404 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" seLinuxMountContext=""
Jan 27 14:29:04 crc kubenswrapper[4698]: I0127 14:29:04.955417 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" seLinuxMountContext=""
Jan 27 14:29:04 crc kubenswrapper[4698]: I0127 14:29:04.955429 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" seLinuxMountContext=""
Jan 27 14:29:04 crc kubenswrapper[4698]: I0127 14:29:04.955441 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" seLinuxMountContext=""
Jan 27 14:29:04 crc kubenswrapper[4698]: I0127 14:29:04.955452 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" seLinuxMountContext=""
Jan 27 14:29:04 crc kubenswrapper[4698]: I0127 14:29:04.955464 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" seLinuxMountContext=""
Jan 27 14:29:04 crc kubenswrapper[4698]: I0127 14:29:04.955476 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" seLinuxMountContext=""
Jan 27 14:29:04 crc kubenswrapper[4698]: I0127 14:29:04.955487 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" seLinuxMountContext=""
Jan 27 14:29:04 crc kubenswrapper[4698]: I0127 14:29:04.955505 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" seLinuxMountContext=""
Jan 27 14:29:04 crc kubenswrapper[4698]: I0127 14:29:04.955518 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" seLinuxMountContext=""
Jan 27 14:29:04 crc kubenswrapper[4698]: I0127 14:29:04.955529 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" seLinuxMountContext=""
Jan 27 14:29:04 crc kubenswrapper[4698]: I0127 14:29:04.955541 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" seLinuxMountContext=""
Jan 27 14:29:04 crc kubenswrapper[4698]: I0127 14:29:04.955553 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" seLinuxMountContext=""
Jan 27 14:29:04 crc kubenswrapper[4698]: I0127 14:29:04.955569 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" seLinuxMountContext=""
Jan 27 14:29:04 crc kubenswrapper[4698]: I0127 14:29:04.955582 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" seLinuxMountContext=""
Jan 27 14:29:04 crc kubenswrapper[4698]: I0127 14:29:04.955596 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" seLinuxMountContext=""
Jan 27 14:29:04 crc kubenswrapper[4698]: I0127 14:29:04.955614 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" seLinuxMountContext=""
Jan 27 14:29:04 crc kubenswrapper[4698]: I0127 14:29:04.955626 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" seLinuxMountContext=""
Jan 27 14:29:04 crc kubenswrapper[4698]: I0127 14:29:04.955657 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" seLinuxMountContext=""
Jan 27 14:29:04 crc kubenswrapper[4698]: I0127 14:29:04.955672 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" seLinuxMountContext=""
Jan 27 14:29:04 crc kubenswrapper[4698]: I0127 14:29:04.955687 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" seLinuxMountContext=""
Jan 27 14:29:04 crc kubenswrapper[4698]: I0127 14:29:04.955701 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" seLinuxMountContext=""
Jan 27 14:29:04 crc kubenswrapper[4698]: I0127 14:29:04.955714 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" seLinuxMountContext=""
Jan 27 14:29:04 crc kubenswrapper[4698]: I0127 14:29:04.955727 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf" seLinuxMountContext=""
Jan 27 14:29:04 crc kubenswrapper[4698]: I0127 14:29:04.955742 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" seLinuxMountContext=""
Jan 27 14:29:04 crc kubenswrapper[4698]: I0127 14:29:04.955780 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" seLinuxMountContext=""
Jan 27 14:29:04 crc kubenswrapper[4698]: I0127 14:29:04.955794 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" seLinuxMountContext=""
Jan 27 14:29:04 crc kubenswrapper[4698]: I0127 14:29:04.955805 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" seLinuxMountContext=""
Jan 27 14:29:04 crc kubenswrapper[4698]: I0127 14:29:04.955816 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" seLinuxMountContext=""
Jan 27 14:29:04 crc kubenswrapper[4698]: I0127 14:29:04.955828 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" seLinuxMountContext=""
Jan 27 14:29:04 crc kubenswrapper[4698]: I0127 14:29:04.955840 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" seLinuxMountContext=""
Jan 27 14:29:04 crc kubenswrapper[4698]: I0127 14:29:04.955852 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" seLinuxMountContext=""
Jan 27 14:29:04 crc kubenswrapper[4698]: I0127 14:29:04.955864 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert" seLinuxMountContext=""
Jan 27 14:29:04 crc kubenswrapper[4698]: I0127 14:29:04.955876 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" seLinuxMountContext=""
Jan 27 14:29:04 crc kubenswrapper[4698]: I0127 14:29:04.955888 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" seLinuxMountContext=""
Jan 27 14:29:04 crc kubenswrapper[4698]: I0127 14:29:04.955899 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" seLinuxMountContext=""
Jan 27 14:29:04 crc kubenswrapper[4698]: I0127 14:29:04.955911 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" seLinuxMountContext=""
Jan 27 14:29:04 crc kubenswrapper[4698]: I0127 14:29:04.955922 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" seLinuxMountContext=""
Jan 27 14:29:04 crc kubenswrapper[4698]: I0127 14:29:04.955934 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" seLinuxMountContext=""
Jan 27 14:29:04 crc kubenswrapper[4698]: I0127 14:29:04.955947 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="44663579-783b-4372-86d6-acf235a62d72" volumeName="kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" seLinuxMountContext=""
Jan 27 14:29:04 crc kubenswrapper[4698]: I0127 14:29:04.955960 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" seLinuxMountContext=""
Jan 27 14:29:04 crc kubenswrapper[4698]: I0127 14:29:04.955971 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" seLinuxMountContext=""
Jan 27 14:29:04 crc kubenswrapper[4698]: I0127 14:29:04.955982 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" seLinuxMountContext=""
Jan 27 14:29:04 crc kubenswrapper[4698]: I0127 14:29:04.955994 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" seLinuxMountContext=""
Jan 27 14:29:04 crc kubenswrapper[4698]: I0127 14:29:04.956007 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" seLinuxMountContext=""
Jan 27 14:29:04 crc kubenswrapper[4698]: I0127 14:29:04.956022 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb" seLinuxMountContext=""
Jan 27 14:29:04 crc kubenswrapper[4698]: I0127 14:29:04.956034 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" seLinuxMountContext=""
Jan 27 14:29:04 crc kubenswrapper[4698]: I0127 14:29:04.956051 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" seLinuxMountContext=""
Jan 27 14:29:04 crc kubenswrapper[4698]: I0127 14:29:04.956063 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" seLinuxMountContext=""
Jan 27 14:29:04 crc kubenswrapper[4698]: I0127 14:29:04.956078 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" seLinuxMountContext=""
Jan 27 14:29:04 crc kubenswrapper[4698]: I0127 14:29:04.956090 4698 reconstruct.go:130] "Volume
is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" seLinuxMountContext="" Jan 27 14:29:04 crc kubenswrapper[4698]: I0127 14:29:04.956102 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" seLinuxMountContext="" Jan 27 14:29:04 crc kubenswrapper[4698]: I0127 14:29:04.956221 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" seLinuxMountContext="" Jan 27 14:29:04 crc kubenswrapper[4698]: I0127 14:29:04.956240 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf" seLinuxMountContext="" Jan 27 14:29:04 crc kubenswrapper[4698]: I0127 14:29:04.956252 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" seLinuxMountContext="" Jan 27 14:29:04 crc kubenswrapper[4698]: I0127 14:29:04.956265 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" seLinuxMountContext="" Jan 27 14:29:04 crc kubenswrapper[4698]: I0127 14:29:04.956277 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" seLinuxMountContext="" Jan 27 14:29:04 crc kubenswrapper[4698]: I0127 14:29:04.956290 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert" seLinuxMountContext="" Jan 27 14:29:04 crc kubenswrapper[4698]: I0127 14:29:04.956302 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" seLinuxMountContext="" Jan 27 14:29:04 crc kubenswrapper[4698]: I0127 14:29:04.956313 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" seLinuxMountContext="" Jan 27 14:29:04 crc kubenswrapper[4698]: I0127 14:29:04.956326 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" seLinuxMountContext="" Jan 27 14:29:04 crc kubenswrapper[4698]: I0127 14:29:04.956337 4698 reconstruct.go:130] "Volume is marked as uncertain and 
added into the actual state" pod="" podName="49ef4625-1d3a-4a9f-b595-c2433d32326d" volumeName="kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" seLinuxMountContext="" Jan 27 14:29:04 crc kubenswrapper[4698]: I0127 14:29:04.956350 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" seLinuxMountContext="" Jan 27 14:29:04 crc kubenswrapper[4698]: I0127 14:29:04.956361 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" seLinuxMountContext="" Jan 27 14:29:04 crc kubenswrapper[4698]: I0127 14:29:04.956373 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" seLinuxMountContext="" Jan 27 14:29:04 crc kubenswrapper[4698]: I0127 14:29:04.956385 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" seLinuxMountContext="" Jan 27 14:29:04 crc kubenswrapper[4698]: I0127 14:29:04.956395 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" seLinuxMountContext="" Jan 27 14:29:04 crc kubenswrapper[4698]: I0127 14:29:04.956404 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" seLinuxMountContext="" Jan 27 14:29:04 crc kubenswrapper[4698]: I0127 14:29:04.956412 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" seLinuxMountContext="" Jan 27 14:29:04 crc kubenswrapper[4698]: I0127 14:29:04.956422 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" seLinuxMountContext="" Jan 27 14:29:04 crc kubenswrapper[4698]: I0127 14:29:04.956432 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" seLinuxMountContext="" Jan 27 14:29:04 crc kubenswrapper[4698]: I0127 14:29:04.956442 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" seLinuxMountContext="" Jan 27 14:29:04 crc kubenswrapper[4698]: I0127 14:29:04.956456 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual 
state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" seLinuxMountContext="" Jan 27 14:29:04 crc kubenswrapper[4698]: I0127 14:29:04.956468 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" seLinuxMountContext="" Jan 27 14:29:04 crc kubenswrapper[4698]: I0127 14:29:04.956481 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" seLinuxMountContext="" Jan 27 14:29:04 crc kubenswrapper[4698]: I0127 14:29:04.956493 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" seLinuxMountContext="" Jan 27 14:29:04 crc kubenswrapper[4698]: I0127 14:29:04.956505 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" seLinuxMountContext="" Jan 27 14:29:04 crc kubenswrapper[4698]: I0127 14:29:04.956518 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" seLinuxMountContext="" Jan 27 14:29:04 crc kubenswrapper[4698]: I0127 14:29:04.956530 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" seLinuxMountContext="" Jan 27 14:29:04 crc kubenswrapper[4698]: I0127 14:29:04.958835 4698 reconstruct.go:144] "Volume is marked device as uncertain and added into the actual state" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" deviceMountPath="/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount" Jan 27 14:29:04 crc kubenswrapper[4698]: I0127 14:29:04.958878 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" seLinuxMountContext="" Jan 27 14:29:04 crc kubenswrapper[4698]: I0127 14:29:04.958898 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" seLinuxMountContext="" Jan 27 14:29:04 crc kubenswrapper[4698]: I0127 14:29:04.958908 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" seLinuxMountContext="" Jan 27 14:29:04 crc kubenswrapper[4698]: I0127 
14:29:04.958919 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" seLinuxMountContext="" Jan 27 14:29:04 crc kubenswrapper[4698]: I0127 14:29:04.958930 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" seLinuxMountContext="" Jan 27 14:29:04 crc kubenswrapper[4698]: I0127 14:29:04.958939 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" seLinuxMountContext="" Jan 27 14:29:04 crc kubenswrapper[4698]: I0127 14:29:04.958948 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" seLinuxMountContext="" Jan 27 14:29:04 crc kubenswrapper[4698]: I0127 14:29:04.958959 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" seLinuxMountContext="" Jan 27 14:29:04 crc kubenswrapper[4698]: I0127 14:29:04.958969 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" seLinuxMountContext="" Jan 27 14:29:04 crc kubenswrapper[4698]: I0127 14:29:04.958978 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" seLinuxMountContext="" Jan 27 14:29:04 crc kubenswrapper[4698]: I0127 14:29:04.958987 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" seLinuxMountContext="" Jan 27 14:29:04 crc kubenswrapper[4698]: I0127 14:29:04.958997 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" seLinuxMountContext="" Jan 27 14:29:04 crc kubenswrapper[4698]: I0127 14:29:04.959006 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" seLinuxMountContext="" Jan 27 14:29:04 crc kubenswrapper[4698]: I0127 14:29:04.959015 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" seLinuxMountContext="" Jan 27 14:29:04 crc kubenswrapper[4698]: I0127 14:29:04.959943 
4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" seLinuxMountContext="" Jan 27 14:29:04 crc kubenswrapper[4698]: I0127 14:29:04.959970 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" seLinuxMountContext="" Jan 27 14:29:04 crc kubenswrapper[4698]: I0127 14:29:04.959988 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" seLinuxMountContext="" Jan 27 14:29:04 crc kubenswrapper[4698]: I0127 14:29:04.960003 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" seLinuxMountContext="" Jan 27 14:29:04 crc kubenswrapper[4698]: I0127 14:29:04.960019 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" seLinuxMountContext="" Jan 27 14:29:04 crc kubenswrapper[4698]: I0127 14:29:04.960036 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script" seLinuxMountContext="" Jan 27 14:29:04 crc kubenswrapper[4698]: I0127 14:29:04.960051 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5" seLinuxMountContext="" Jan 27 14:29:04 crc kubenswrapper[4698]: I0127 14:29:04.960066 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" seLinuxMountContext="" Jan 27 14:29:04 crc kubenswrapper[4698]: I0127 14:29:04.960081 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" seLinuxMountContext="" Jan 27 14:29:04 crc kubenswrapper[4698]: I0127 14:29:04.960111 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" seLinuxMountContext="" Jan 27 14:29:04 crc kubenswrapper[4698]: I0127 14:29:04.960172 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" seLinuxMountContext="" Jan 27 14:29:04 crc kubenswrapper[4698]: I0127 14:29:04.960183 4698 reconstruct.go:130] 
"Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" seLinuxMountContext="" Jan 27 14:29:04 crc kubenswrapper[4698]: I0127 14:29:04.960195 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" volumeName="kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" seLinuxMountContext="" Jan 27 14:29:04 crc kubenswrapper[4698]: I0127 14:29:04.960205 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm" seLinuxMountContext="" Jan 27 14:29:04 crc kubenswrapper[4698]: I0127 14:29:04.960215 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" seLinuxMountContext="" Jan 27 14:29:04 crc kubenswrapper[4698]: I0127 14:29:04.960225 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" seLinuxMountContext="" Jan 27 14:29:04 crc kubenswrapper[4698]: I0127 14:29:04.960236 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" seLinuxMountContext="" Jan 27 14:29:04 crc kubenswrapper[4698]: I0127 14:29:04.960245 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" seLinuxMountContext="" Jan 27 14:29:04 crc kubenswrapper[4698]: I0127 14:29:04.960256 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" seLinuxMountContext="" Jan 27 14:29:04 crc kubenswrapper[4698]: I0127 14:29:04.960266 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" seLinuxMountContext="" Jan 27 14:29:04 crc kubenswrapper[4698]: I0127 14:29:04.960276 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" seLinuxMountContext="" Jan 27 14:29:04 crc kubenswrapper[4698]: I0127 14:29:04.960286 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" seLinuxMountContext="" Jan 27 14:29:04 crc kubenswrapper[4698]: I0127 14:29:04.960295 4698 reconstruct.go:130] "Volume is marked as uncertain and 
added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" seLinuxMountContext="" Jan 27 14:29:04 crc kubenswrapper[4698]: I0127 14:29:04.960306 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" seLinuxMountContext="" Jan 27 14:29:04 crc kubenswrapper[4698]: I0127 14:29:04.960316 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" seLinuxMountContext="" Jan 27 14:29:04 crc kubenswrapper[4698]: I0127 14:29:04.960328 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" seLinuxMountContext="" Jan 27 14:29:04 crc kubenswrapper[4698]: I0127 14:29:04.960339 4698 reconstruct.go:97] "Volume reconstruction finished" Jan 27 14:29:04 crc kubenswrapper[4698]: I0127 14:29:04.960345 4698 reconciler.go:26] "Reconciler: start to sync state" Jan 27 14:29:04 crc kubenswrapper[4698]: I0127 14:29:04.960537 4698 manager.go:324] Recovery completed Jan 27 14:29:04 crc kubenswrapper[4698]: I0127 14:29:04.972306 4698 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 14:29:04 crc kubenswrapper[4698]: I0127 14:29:04.973596 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:29:04 crc kubenswrapper[4698]: I0127 14:29:04.973656 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:29:04 crc kubenswrapper[4698]: I0127 14:29:04.973667 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:29:04 crc kubenswrapper[4698]: I0127 14:29:04.974397 4698 cpu_manager.go:225] "Starting CPU manager" policy="none" Jan 27 14:29:04 crc kubenswrapper[4698]: I0127 14:29:04.974413 4698 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Jan 27 14:29:04 crc kubenswrapper[4698]: I0127 14:29:04.974432 4698 state_mem.go:36] "Initialized new in-memory state store" Jan 27 14:29:04 crc kubenswrapper[4698]: I0127 14:29:04.989376 4698 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
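Volume reconstruction is now finished and the reconciler takes over, while the resource managers start in their least-restrictive configuration: the CPU manager runs the "none" policy (no exclusive core pinning, reconciled every 10s) and keeps its assignments in a plain in-memory state store rather than a persisted checkpoint. A toy version of such a store, assuming it is essentially a mutex-guarded map (the real state_mem.go type is kubelet-internal and tracks CPU sets per container):

package main

import (
	"fmt"
	"sync"
)

// stateMemory is a toy equivalent of the kubelet's in-memory state store:
// a guarded map from container ID to an assignment string.
type stateMemory struct {
	mu          sync.RWMutex
	assignments map[string]string
}

func newStateMemory() *stateMemory {
	fmt.Println("Initialized new in-memory state store")
	return &stateMemory{assignments: map[string]string{}}
}

func (s *stateMemory) Set(containerID, assignment string) {
	s.mu.Lock()
	defer s.mu.Unlock()
	s.assignments[containerID] = assignment
}

func (s *stateMemory) Get(containerID string) (string, bool) {
	s.mu.RLock()
	defer s.mu.RUnlock()
	a, ok := s.assignments[containerID]
	return a, ok
}

func main() {
	s := newStateMemory()
	// Under the "none" policy no exclusive CPUs are ever assigned, so in
	// practice entries like this one would never be written on this node.
	s.Set("etcd", "shared pool")
	if a, ok := s.Get("etcd"); ok {
		fmt.Println("etcd ->", a)
	}
}

Jan 27 14:29:04 crc kubenswrapper[4698]: I0127 14:29:04.990873 4698 kubelet_network_linux.go:50] "Initialized iptables rules."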
protocol="IPv6" Jan 27 14:29:04 crc kubenswrapper[4698]: I0127 14:29:04.990909 4698 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 27 14:29:04 crc kubenswrapper[4698]: I0127 14:29:04.990929 4698 kubelet.go:2335] "Starting kubelet main sync loop" Jan 27 14:29:04 crc kubenswrapper[4698]: E0127 14:29:04.990969 4698 kubelet.go:2359] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 27 14:29:04 crc kubenswrapper[4698]: W0127 14:29:04.993252 4698 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.212:6443: connect: connection refused Jan 27 14:29:04 crc kubenswrapper[4698]: E0127 14:29:04.993341 4698 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.212:6443: connect: connection refused" logger="UnhandledError" Jan 27 14:29:04 crc kubenswrapper[4698]: I0127 14:29:04.994055 4698 policy_none.go:49] "None policy: Start" Jan 27 14:29:04 crc kubenswrapper[4698]: I0127 14:29:04.994915 4698 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 27 14:29:04 crc kubenswrapper[4698]: I0127 14:29:04.994955 4698 state_mem.go:35] "Initializing new in-memory state store" Jan 27 14:29:05 crc kubenswrapper[4698]: E0127 14:29:05.039080 4698 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Jan 27 14:29:05 crc kubenswrapper[4698]: I0127 14:29:05.053836 4698 manager.go:334] "Starting Device Plugin manager" Jan 27 14:29:05 crc kubenswrapper[4698]: I0127 14:29:05.054076 4698 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 27 14:29:05 crc kubenswrapper[4698]: I0127 14:29:05.054095 4698 server.go:79] "Starting device plugin registration server" Jan 27 14:29:05 crc kubenswrapper[4698]: I0127 14:29:05.054456 4698 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 27 14:29:05 crc kubenswrapper[4698]: I0127 14:29:05.054473 4698 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 27 14:29:05 crc kubenswrapper[4698]: I0127 14:29:05.054670 4698 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry" Jan 27 14:29:05 crc kubenswrapper[4698]: I0127 14:29:05.054764 4698 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts" Jan 27 14:29:05 crc kubenswrapper[4698]: I0127 14:29:05.054778 4698 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 27 14:29:05 crc kubenswrapper[4698]: E0127 14:29:05.064591 4698 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Jan 27 14:29:05 crc kubenswrapper[4698]: I0127 14:29:05.091987 4698 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-etcd/etcd-crc","openshift-kube-apiserver/kube-apiserver-crc","openshift-kube-controller-manager/kube-controller-manager-crc","openshift-kube-scheduler/openshift-kube-scheduler-crc","openshift-machine-config-operator/kube-rbac-proxy-crio-crc"] Jan 27 14:29:05 crc kubenswrapper[4698]: 
I0127 14:29:05.092201 4698 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 14:29:05 crc kubenswrapper[4698]: I0127 14:29:05.093375 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:29:05 crc kubenswrapper[4698]: I0127 14:29:05.093403 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:29:05 crc kubenswrapper[4698]: I0127 14:29:05.093412 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:29:05 crc kubenswrapper[4698]: I0127 14:29:05.093524 4698 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 14:29:05 crc kubenswrapper[4698]: I0127 14:29:05.093878 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc" Jan 27 14:29:05 crc kubenswrapper[4698]: I0127 14:29:05.093921 4698 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 14:29:05 crc kubenswrapper[4698]: I0127 14:29:05.094321 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:29:05 crc kubenswrapper[4698]: I0127 14:29:05.094359 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:29:05 crc kubenswrapper[4698]: I0127 14:29:05.094367 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:29:05 crc kubenswrapper[4698]: I0127 14:29:05.094503 4698 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 14:29:05 crc kubenswrapper[4698]: I0127 14:29:05.094622 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:29:05 crc kubenswrapper[4698]: I0127 14:29:05.094674 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 14:29:05 crc kubenswrapper[4698]: I0127 14:29:05.094695 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:29:05 crc kubenswrapper[4698]: I0127 14:29:05.094711 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:29:05 crc kubenswrapper[4698]: I0127 14:29:05.094731 4698 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 14:29:05 crc kubenswrapper[4698]: I0127 14:29:05.096383 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:29:05 crc kubenswrapper[4698]: I0127 14:29:05.096404 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:29:05 crc kubenswrapper[4698]: I0127 14:29:05.096414 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:29:05 crc kubenswrapper[4698]: I0127 14:29:05.096419 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:29:05 crc kubenswrapper[4698]: I0127 14:29:05.096441 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:29:05 crc kubenswrapper[4698]: I0127 14:29:05.096449 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:29:05 crc kubenswrapper[4698]: I0127 14:29:05.096555 4698 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 14:29:05 crc kubenswrapper[4698]: I0127 14:29:05.096671 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 27 14:29:05 crc kubenswrapper[4698]: I0127 14:29:05.096703 4698 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 14:29:05 crc kubenswrapper[4698]: I0127 14:29:05.097877 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:29:05 crc kubenswrapper[4698]: I0127 14:29:05.097897 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:29:05 crc kubenswrapper[4698]: I0127 14:29:05.097919 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:29:05 crc kubenswrapper[4698]: I0127 14:29:05.097931 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:29:05 crc kubenswrapper[4698]: I0127 14:29:05.097902 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:29:05 crc kubenswrapper[4698]: I0127 14:29:05.097987 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:29:05 crc kubenswrapper[4698]: I0127 14:29:05.098087 4698 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 14:29:05 crc kubenswrapper[4698]: I0127 14:29:05.098096 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 27 14:29:05 crc kubenswrapper[4698]: I0127 14:29:05.098123 4698 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 14:29:05 crc kubenswrapper[4698]: I0127 14:29:05.098870 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:29:05 crc kubenswrapper[4698]: I0127 14:29:05.098893 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:29:05 crc kubenswrapper[4698]: I0127 14:29:05.098896 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:29:05 crc kubenswrapper[4698]: I0127 14:29:05.098916 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:29:05 crc kubenswrapper[4698]: I0127 14:29:05.098903 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:29:05 crc kubenswrapper[4698]: I0127 14:29:05.098928 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:29:05 crc kubenswrapper[4698]: I0127 14:29:05.099058 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 27 14:29:05 crc kubenswrapper[4698]: I0127 14:29:05.099082 4698 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 14:29:05 crc kubenswrapper[4698]: I0127 14:29:05.099762 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:29:05 crc kubenswrapper[4698]: I0127 14:29:05.099783 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:29:05 crc kubenswrapper[4698]: I0127 14:29:05.099793 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:29:05 crc kubenswrapper[4698]: E0127 14:29:05.140342 4698 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.212:6443: connect: connection refused" interval="400ms" Jan 27 14:29:05 crc kubenswrapper[4698]: I0127 14:29:05.156199 4698 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 14:29:05 crc kubenswrapper[4698]: I0127 14:29:05.157338 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:29:05 crc kubenswrapper[4698]: I0127 14:29:05.157370 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:29:05 crc kubenswrapper[4698]: I0127 14:29:05.157381 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:29:05 crc kubenswrapper[4698]: I0127 14:29:05.157424 4698 kubelet_node_status.go:76] "Attempting to register node" node="crc"
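Node self-registration follows the same retry pattern: the kubelet builds a Node object for itself, POSTs it to /api/v1/nodes, fails with connection refused while the apiserver static pod is still coming up, and tries again (the node lease is retried on its own 400ms interval, per the controller.go:145 entry above). A schematic of one attempt, with a hand-written JSON payload standing in for the real corev1.Node object and a flat one-second retry in place of the kubelet's jittered backoff:

package main

import (
	"bytes"
	"fmt"
	"net/http"
	"time"
)

func registerNode() error {
	// Minimal stand-in for the Node manifest the kubelet actually sends
	// (the real object carries labels, addresses, capacity, and so on).
	body := []byte(`{"apiVersion":"v1","kind":"Node","metadata":{"name":"crc"}}`)
	resp, err := http.Post("https://api-int.crc.testing:6443/api/v1/nodes",
		"application/json", bytes.NewReader(body))
	if err != nil {
		return err // connection refused until kube-apiserver-crc is serving
	}
	defer resp.Body.Close()
	if resp.StatusCode >= 300 {
		return fmt.Errorf("unexpected status %s", resp.Status)
	}
	return nil
}

func main() {
	for {
		if err := registerNode(); err != nil {
			fmt.Printf("Unable to register node with API server: %v\n", err)
			time.Sleep(time.Second)
			continue
		}
		fmt.Println("Successfully registered node")
		return
	}
}

Jan 27 14:29:05 crc kubenswrapper[4698]: E0127 14:29:05.158002 4698 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.212:6443: connect: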
connection refused" node="crc" Jan 27 14:29:05 crc kubenswrapper[4698]: I0127 14:29:05.163508 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 27 14:29:05 crc kubenswrapper[4698]: I0127 14:29:05.163544 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 27 14:29:05 crc kubenswrapper[4698]: I0127 14:29:05.163570 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 27 14:29:05 crc kubenswrapper[4698]: I0127 14:29:05.163594 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 27 14:29:05 crc kubenswrapper[4698]: I0127 14:29:05.163617 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 27 14:29:05 crc kubenswrapper[4698]: I0127 14:29:05.163742 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 14:29:05 crc kubenswrapper[4698]: I0127 14:29:05.163766 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 27 14:29:05 crc kubenswrapper[4698]: I0127 14:29:05.163787 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 27 14:29:05 crc kubenswrapper[4698]: I0127 14:29:05.163807 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 27 14:29:05 crc kubenswrapper[4698]: I0127 14:29:05.163825 
4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 27 14:29:05 crc kubenswrapper[4698]: I0127 14:29:05.163873 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 27 14:29:05 crc kubenswrapper[4698]: I0127 14:29:05.163938 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 27 14:29:05 crc kubenswrapper[4698]: I0127 14:29:05.163970 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 27 14:29:05 crc kubenswrapper[4698]: I0127 14:29:05.164024 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 14:29:05 crc kubenswrapper[4698]: I0127 14:29:05.164052 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 14:29:05 crc kubenswrapper[4698]: I0127 14:29:05.265557 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 27 14:29:05 crc kubenswrapper[4698]: I0127 14:29:05.265626 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 27 14:29:05 crc kubenswrapper[4698]: I0127 14:29:05.265663 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 27 14:29:05 crc kubenswrapper[4698]: I0127 14:29:05.265681 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: 
\"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 27 14:29:05 crc kubenswrapper[4698]: I0127 14:29:05.265696 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 27 14:29:05 crc kubenswrapper[4698]: I0127 14:29:05.265714 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 27 14:29:05 crc kubenswrapper[4698]: I0127 14:29:05.265729 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 27 14:29:05 crc kubenswrapper[4698]: I0127 14:29:05.265743 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 14:29:05 crc kubenswrapper[4698]: I0127 14:29:05.265759 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 14:29:05 crc kubenswrapper[4698]: I0127 14:29:05.265773 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 27 14:29:05 crc kubenswrapper[4698]: I0127 14:29:05.265773 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 27 14:29:05 crc kubenswrapper[4698]: I0127 14:29:05.265814 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 27 14:29:05 crc kubenswrapper[4698]: I0127 14:29:05.265779 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 27 14:29:05 crc kubenswrapper[4698]: I0127 14:29:05.265778 
4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 27 14:29:05 crc kubenswrapper[4698]: I0127 14:29:05.265865 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 27 14:29:05 crc kubenswrapper[4698]: I0127 14:29:05.265813 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 27 14:29:05 crc kubenswrapper[4698]: I0127 14:29:05.265832 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 14:29:05 crc kubenswrapper[4698]: I0127 14:29:05.265789 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 27 14:29:05 crc kubenswrapper[4698]: I0127 14:29:05.265832 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 14:29:05 crc kubenswrapper[4698]: I0127 14:29:05.265854 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 27 14:29:05 crc kubenswrapper[4698]: I0127 14:29:05.265812 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 27 14:29:05 crc kubenswrapper[4698]: I0127 14:29:05.265833 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 27 14:29:05 crc kubenswrapper[4698]: I0127 14:29:05.265934 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " 
pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 27 14:29:05 crc kubenswrapper[4698]: I0127 14:29:05.265953 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 27 14:29:05 crc kubenswrapper[4698]: I0127 14:29:05.265984 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 27 14:29:05 crc kubenswrapper[4698]: I0127 14:29:05.265998 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 14:29:05 crc kubenswrapper[4698]: I0127 14:29:05.266024 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 27 14:29:05 crc kubenswrapper[4698]: I0127 14:29:05.266045 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 27 14:29:05 crc kubenswrapper[4698]: I0127 14:29:05.266028 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 27 14:29:05 crc kubenswrapper[4698]: I0127 14:29:05.266097 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 14:29:05 crc kubenswrapper[4698]: E0127 14:29:05.276105 4698 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.212:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.188e9cd5a03d4d03 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-27 14:29:04.933522691 +0000 UTC m=+0.610300146,LastTimestamp:2026-01-27 14:29:04.933522691 +0000 UTC m=+0.610300146,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 27 14:29:05 crc kubenswrapper[4698]: I0127 14:29:05.358928 4698 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 14:29:05 crc kubenswrapper[4698]: I0127 
14:29:05.360302 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:29:05 crc kubenswrapper[4698]: I0127 14:29:05.360355 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:29:05 crc kubenswrapper[4698]: I0127 14:29:05.360368 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:29:05 crc kubenswrapper[4698]: I0127 14:29:05.360392 4698 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 27 14:29:05 crc kubenswrapper[4698]: E0127 14:29:05.360875 4698 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.212:6443: connect: connection refused" node="crc" Jan 27 14:29:05 crc kubenswrapper[4698]: I0127 14:29:05.437167 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc" Jan 27 14:29:05 crc kubenswrapper[4698]: I0127 14:29:05.456021 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 14:29:05 crc kubenswrapper[4698]: I0127 14:29:05.477902 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 27 14:29:05 crc kubenswrapper[4698]: I0127 14:29:05.490188 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 27 14:29:05 crc kubenswrapper[4698]: I0127 14:29:05.494928 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 27 14:29:05 crc kubenswrapper[4698]: W0127 14:29:05.508605 4698 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2139d3e2895fc6797b9c76a1b4c9886d.slice/crio-cdb2965a80897728b5d5dc605ed3f21ccd7da6831d0e814505f94e32d2c1a6b3 WatchSource:0}: Error finding container cdb2965a80897728b5d5dc605ed3f21ccd7da6831d0e814505f94e32d2c1a6b3: Status 404 returned error can't find the container with id cdb2965a80897728b5d5dc605ed3f21ccd7da6831d0e814505f94e32d2c1a6b3 Jan 27 14:29:05 crc kubenswrapper[4698]: W0127 14:29:05.520906 4698 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf4b27818a5e8e43d0dc095d08835c792.slice/crio-76da9f2f208900a91504e077d3885b9cf3d6ae80e23045e6977f41abc46f9510 WatchSource:0}: Error finding container 76da9f2f208900a91504e077d3885b9cf3d6ae80e23045e6977f41abc46f9510: Status 404 returned error can't find the container with id 76da9f2f208900a91504e077d3885b9cf3d6ae80e23045e6977f41abc46f9510 Jan 27 14:29:05 crc kubenswrapper[4698]: W0127 14:29:05.525322 4698 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3dcd261975c3d6b9a6ad6367fd4facd3.slice/crio-4e7da0959dfc7f060f047f79e0da04d6496d57290f03c2f04ac4e090ef2a3c9b WatchSource:0}: Error finding container 4e7da0959dfc7f060f047f79e0da04d6496d57290f03c2f04ac4e090ef2a3c9b: Status 404 returned error can't find the container with id 4e7da0959dfc7f060f047f79e0da04d6496d57290f03c2f04ac4e090ef2a3c9b Jan 27 14:29:05 crc kubenswrapper[4698]: W0127 14:29:05.528858 4698 manager.go:1169] Failed to process watch 
event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd1b160f5dda77d281dd8e69ec8d817f9.slice/crio-bbd0d35d81574851c8468dd7d1d88bca9a7e220d2b69630af1269b280ceb7b5d WatchSource:0}: Error finding container bbd0d35d81574851c8468dd7d1d88bca9a7e220d2b69630af1269b280ceb7b5d: Status 404 returned error can't find the container with id bbd0d35d81574851c8468dd7d1d88bca9a7e220d2b69630af1269b280ceb7b5d Jan 27 14:29:05 crc kubenswrapper[4698]: E0127 14:29:05.541690 4698 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.212:6443: connect: connection refused" interval="800ms" Jan 27 14:29:05 crc kubenswrapper[4698]: W0127 14:29:05.757328 4698 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.212:6443: connect: connection refused Jan 27 14:29:05 crc kubenswrapper[4698]: E0127 14:29:05.757416 4698 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.212:6443: connect: connection refused" logger="UnhandledError" Jan 27 14:29:05 crc kubenswrapper[4698]: I0127 14:29:05.762048 4698 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 14:29:05 crc kubenswrapper[4698]: I0127 14:29:05.764117 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:29:05 crc kubenswrapper[4698]: I0127 14:29:05.764160 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:29:05 crc kubenswrapper[4698]: I0127 14:29:05.764172 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:29:05 crc kubenswrapper[4698]: I0127 14:29:05.764204 4698 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 27 14:29:05 crc kubenswrapper[4698]: E0127 14:29:05.768021 4698 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.212:6443: connect: connection refused" node="crc" Jan 27 14:29:05 crc kubenswrapper[4698]: I0127 14:29:05.935792 4698 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.212:6443: connect: connection refused Jan 27 14:29:05 crc kubenswrapper[4698]: I0127 14:29:05.940055 4698 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-18 04:58:26.801287901 +0000 UTC Jan 27 14:29:05 crc kubenswrapper[4698]: I0127 14:29:05.994802 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"76da9f2f208900a91504e077d3885b9cf3d6ae80e23045e6977f41abc46f9510"} Jan 27 14:29:05 crc kubenswrapper[4698]: I0127 14:29:05.995570 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"cdb2965a80897728b5d5dc605ed3f21ccd7da6831d0e814505f94e32d2c1a6b3"} Jan 27 14:29:05 crc kubenswrapper[4698]: I0127 14:29:05.996534 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"bbd0d35d81574851c8468dd7d1d88bca9a7e220d2b69630af1269b280ceb7b5d"} Jan 27 14:29:05 crc kubenswrapper[4698]: I0127 14:29:05.997417 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"4e7da0959dfc7f060f047f79e0da04d6496d57290f03c2f04ac4e090ef2a3c9b"} Jan 27 14:29:05 crc kubenswrapper[4698]: I0127 14:29:05.998259 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"3e797caaf1e4d0d5c91d7460afed8b39140a43e163691ff7bdaf0119138935ab"} Jan 27 14:29:06 crc kubenswrapper[4698]: W0127 14:29:06.150705 4698 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.212:6443: connect: connection refused Jan 27 14:29:06 crc kubenswrapper[4698]: E0127 14:29:06.150851 4698 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.212:6443: connect: connection refused" logger="UnhandledError" Jan 27 14:29:06 crc kubenswrapper[4698]: W0127 14:29:06.339969 4698 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.212:6443: connect: connection refused Jan 27 14:29:06 crc kubenswrapper[4698]: E0127 14:29:06.340071 4698 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.212:6443: connect: connection refused" logger="UnhandledError" Jan 27 14:29:06 crc kubenswrapper[4698]: E0127 14:29:06.343563 4698 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.212:6443: connect: connection refused" interval="1.6s" Jan 27 14:29:06 crc kubenswrapper[4698]: W0127 14:29:06.534230 4698 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.212:6443: connect: connection refused Jan 27 14:29:06 crc kubenswrapper[4698]: E0127 14:29:06.534330 4698 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get 
\"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.212:6443: connect: connection refused" logger="UnhandledError" Jan 27 14:29:06 crc kubenswrapper[4698]: I0127 14:29:06.568492 4698 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 14:29:06 crc kubenswrapper[4698]: I0127 14:29:06.569857 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:29:06 crc kubenswrapper[4698]: I0127 14:29:06.569924 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:29:06 crc kubenswrapper[4698]: I0127 14:29:06.569937 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:29:06 crc kubenswrapper[4698]: I0127 14:29:06.569987 4698 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 27 14:29:06 crc kubenswrapper[4698]: E0127 14:29:06.570602 4698 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.212:6443: connect: connection refused" node="crc" Jan 27 14:29:06 crc kubenswrapper[4698]: I0127 14:29:06.884247 4698 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Jan 27 14:29:06 crc kubenswrapper[4698]: E0127 14:29:06.885464 4698 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.212:6443: connect: connection refused" logger="UnhandledError" Jan 27 14:29:06 crc kubenswrapper[4698]: I0127 14:29:06.936133 4698 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.212:6443: connect: connection refused Jan 27 14:29:06 crc kubenswrapper[4698]: I0127 14:29:06.940207 4698 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-22 03:52:57.224537188 +0000 UTC Jan 27 14:29:07 crc kubenswrapper[4698]: I0127 14:29:07.003276 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"94a4d586611f4ca0864567739d1ed225b7c63ebd6e6be9e5c0e385379de2ad01"} Jan 27 14:29:07 crc kubenswrapper[4698]: I0127 14:29:07.003332 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"1b1a209c31294456c43022bb9df5fd230415596fc43ecdb2b6349114989c3ad8"} Jan 27 14:29:07 crc kubenswrapper[4698]: I0127 14:29:07.003343 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"66b4b527d7836ceed419f9b8a1d851047efc8c052c110ac857316a6a73444333"} Jan 27 14:29:07 crc kubenswrapper[4698]: I0127 14:29:07.004949 4698 generic.go:334] "Generic (PLEG): container 
finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="5e1a3e5a71b3094750c736335cd70f45977566e58702231351d9d6b195a0cb4b" exitCode=0 Jan 27 14:29:07 crc kubenswrapper[4698]: I0127 14:29:07.005046 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"5e1a3e5a71b3094750c736335cd70f45977566e58702231351d9d6b195a0cb4b"} Jan 27 14:29:07 crc kubenswrapper[4698]: I0127 14:29:07.005072 4698 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 14:29:07 crc kubenswrapper[4698]: I0127 14:29:07.005916 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:29:07 crc kubenswrapper[4698]: I0127 14:29:07.005947 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:29:07 crc kubenswrapper[4698]: I0127 14:29:07.005955 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:29:07 crc kubenswrapper[4698]: I0127 14:29:07.006603 4698 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="98db7f3d3019e28062f57111220875909c51f1644b93f0e7ad4e14575cf3abcf" exitCode=0 Jan 27 14:29:07 crc kubenswrapper[4698]: I0127 14:29:07.006713 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"98db7f3d3019e28062f57111220875909c51f1644b93f0e7ad4e14575cf3abcf"} Jan 27 14:29:07 crc kubenswrapper[4698]: I0127 14:29:07.006739 4698 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 14:29:07 crc kubenswrapper[4698]: I0127 14:29:07.007403 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:29:07 crc kubenswrapper[4698]: I0127 14:29:07.007430 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:29:07 crc kubenswrapper[4698]: I0127 14:29:07.007439 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:29:07 crc kubenswrapper[4698]: I0127 14:29:07.007453 4698 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 14:29:07 crc kubenswrapper[4698]: I0127 14:29:07.008147 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:29:07 crc kubenswrapper[4698]: I0127 14:29:07.008175 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:29:07 crc kubenswrapper[4698]: I0127 14:29:07.008186 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:29:07 crc kubenswrapper[4698]: I0127 14:29:07.009445 4698 generic.go:334] "Generic (PLEG): container finished" podID="d1b160f5dda77d281dd8e69ec8d817f9" containerID="cff6cddc7b21de4968fbb9eacccd070d4e8ff2d0514da0a0b878313bb11a5188" exitCode=0 Jan 27 14:29:07 crc kubenswrapper[4698]: I0127 14:29:07.009494 4698 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 14:29:07 crc kubenswrapper[4698]: I0127 14:29:07.009518 4698 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerDied","Data":"cff6cddc7b21de4968fbb9eacccd070d4e8ff2d0514da0a0b878313bb11a5188"} Jan 27 14:29:07 crc kubenswrapper[4698]: I0127 14:29:07.010146 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:29:07 crc kubenswrapper[4698]: I0127 14:29:07.010165 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:29:07 crc kubenswrapper[4698]: I0127 14:29:07.010177 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:29:07 crc kubenswrapper[4698]: I0127 14:29:07.011509 4698 generic.go:334] "Generic (PLEG): container finished" podID="3dcd261975c3d6b9a6ad6367fd4facd3" containerID="c9cbb07f8bc0c4d5171813dd406a52fc3c99985102f65c43e8e04bf8acc21f1b" exitCode=0 Jan 27 14:29:07 crc kubenswrapper[4698]: I0127 14:29:07.011526 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerDied","Data":"c9cbb07f8bc0c4d5171813dd406a52fc3c99985102f65c43e8e04bf8acc21f1b"} Jan 27 14:29:07 crc kubenswrapper[4698]: I0127 14:29:07.011583 4698 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 14:29:07 crc kubenswrapper[4698]: I0127 14:29:07.012106 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:29:07 crc kubenswrapper[4698]: I0127 14:29:07.012126 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:29:07 crc kubenswrapper[4698]: I0127 14:29:07.012134 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:29:07 crc kubenswrapper[4698]: W0127 14:29:07.692574 4698 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.212:6443: connect: connection refused Jan 27 14:29:07 crc kubenswrapper[4698]: E0127 14:29:07.692729 4698 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.212:6443: connect: connection refused" logger="UnhandledError" Jan 27 14:29:07 crc kubenswrapper[4698]: I0127 14:29:07.935426 4698 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.212:6443: connect: connection refused Jan 27 14:29:07 crc kubenswrapper[4698]: I0127 14:29:07.940531 4698 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-02 12:55:22.393930084 +0000 UTC Jan 27 14:29:07 crc kubenswrapper[4698]: E0127 14:29:07.944576 4698 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 
38.102.83.212:6443: connect: connection refused" interval="3.2s" Jan 27 14:29:08 crc kubenswrapper[4698]: I0127 14:29:08.019940 4698 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="01a79834dc1d2246c3518e1aae6d806f0851840c259b012da305149433627fa2" exitCode=0 Jan 27 14:29:08 crc kubenswrapper[4698]: I0127 14:29:08.020058 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"01a79834dc1d2246c3518e1aae6d806f0851840c259b012da305149433627fa2"} Jan 27 14:29:08 crc kubenswrapper[4698]: I0127 14:29:08.020092 4698 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 14:29:08 crc kubenswrapper[4698]: I0127 14:29:08.021426 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:29:08 crc kubenswrapper[4698]: I0127 14:29:08.021467 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:29:08 crc kubenswrapper[4698]: I0127 14:29:08.021484 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:29:08 crc kubenswrapper[4698]: I0127 14:29:08.021910 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"9f5f7ffc9337f1ee226951fc2bac9235815704df49855f4ee6c9fe391970df0e"} Jan 27 14:29:08 crc kubenswrapper[4698]: I0127 14:29:08.021962 4698 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 14:29:08 crc kubenswrapper[4698]: I0127 14:29:08.023180 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:29:08 crc kubenswrapper[4698]: I0127 14:29:08.023206 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:29:08 crc kubenswrapper[4698]: I0127 14:29:08.023217 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:29:08 crc kubenswrapper[4698]: I0127 14:29:08.025790 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"8cd43402ec8ba658de9fc9d84d14600829a8ae019aceb606fc2bf781dbe13ddb"} Jan 27 14:29:08 crc kubenswrapper[4698]: I0127 14:29:08.025821 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"21e39904f887483a435cadd506be29c1513b2c9dbc144a61549f74f2c93fa6a9"} Jan 27 14:29:08 crc kubenswrapper[4698]: I0127 14:29:08.025837 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"b99ac884289d1dcab871d1db10e9992389170de25aeb71d84aaad1348eafd4fc"} Jan 27 14:29:08 crc kubenswrapper[4698]: I0127 14:29:08.025905 4698 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 14:29:08 crc kubenswrapper[4698]: I0127 14:29:08.027096 4698 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:29:08 crc kubenswrapper[4698]: I0127 14:29:08.027131 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:29:08 crc kubenswrapper[4698]: I0127 14:29:08.027145 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:29:08 crc kubenswrapper[4698]: I0127 14:29:08.029224 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"95357819c206b188c1fd6bb937b2a57b480a9a61e95143de444141b16e4963ea"} Jan 27 14:29:08 crc kubenswrapper[4698]: I0127 14:29:08.029321 4698 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 14:29:08 crc kubenswrapper[4698]: I0127 14:29:08.030304 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:29:08 crc kubenswrapper[4698]: I0127 14:29:08.030348 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:29:08 crc kubenswrapper[4698]: I0127 14:29:08.030363 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:29:08 crc kubenswrapper[4698]: I0127 14:29:08.033109 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"c9525d1a6372b19bc2f30eb4e8e522833badb6319bac25ecae77b3313acf1b1a"} Jan 27 14:29:08 crc kubenswrapper[4698]: I0127 14:29:08.033140 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"5e384a95fdadc32733a7213ba468af64a1a33e7dd08722cbffde3ad0c461d8ba"} Jan 27 14:29:08 crc kubenswrapper[4698]: I0127 14:29:08.033155 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"dbbe49aa918c711f9ec638a3fb42ef3be38a9ed3bc0cdde639ba8bba11eab4de"} Jan 27 14:29:08 crc kubenswrapper[4698]: I0127 14:29:08.170710 4698 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 14:29:08 crc kubenswrapper[4698]: I0127 14:29:08.171882 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:29:08 crc kubenswrapper[4698]: I0127 14:29:08.171935 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:29:08 crc kubenswrapper[4698]: I0127 14:29:08.171948 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:29:08 crc kubenswrapper[4698]: I0127 14:29:08.171976 4698 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 27 14:29:08 crc kubenswrapper[4698]: E0127 14:29:08.172434 4698 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.212:6443: connect: connection refused" node="crc" Jan 27 14:29:08 crc kubenswrapper[4698]: W0127 14:29:08.201319 4698 reflector.go:561] 
k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.212:6443: connect: connection refused Jan 27 14:29:08 crc kubenswrapper[4698]: E0127 14:29:08.201428 4698 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.212:6443: connect: connection refused" logger="UnhandledError" Jan 27 14:29:08 crc kubenswrapper[4698]: W0127 14:29:08.235065 4698 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.212:6443: connect: connection refused Jan 27 14:29:08 crc kubenswrapper[4698]: E0127 14:29:08.235135 4698 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.212:6443: connect: connection refused" logger="UnhandledError" Jan 27 14:29:08 crc kubenswrapper[4698]: W0127 14:29:08.413604 4698 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.212:6443: connect: connection refused Jan 27 14:29:08 crc kubenswrapper[4698]: E0127 14:29:08.413709 4698 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.212:6443: connect: connection refused" logger="UnhandledError" Jan 27 14:29:08 crc kubenswrapper[4698]: I0127 14:29:08.775226 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 27 14:29:08 crc kubenswrapper[4698]: I0127 14:29:08.936760 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 27 14:29:08 crc kubenswrapper[4698]: I0127 14:29:08.936823 4698 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.212:6443: connect: connection refused Jan 27 14:29:08 crc kubenswrapper[4698]: I0127 14:29:08.940925 4698 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-09 05:28:01.883777378 +0000 UTC Jan 27 14:29:09 crc kubenswrapper[4698]: I0127 14:29:09.037786 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"9e780429dcca871712b55832b0cd0b3e78b7343569cfa71a87bbb2be1bd3f129"} Jan 27 14:29:09 crc kubenswrapper[4698]: I0127 14:29:09.037835 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" 
event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"a526cfa90f2ba7d074aa6007c909e68f72d5e7a4b91d893c955e7460ff208cfb"} Jan 27 14:29:09 crc kubenswrapper[4698]: I0127 14:29:09.037857 4698 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 14:29:09 crc kubenswrapper[4698]: I0127 14:29:09.038668 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:29:09 crc kubenswrapper[4698]: I0127 14:29:09.038716 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:29:09 crc kubenswrapper[4698]: I0127 14:29:09.038729 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:29:09 crc kubenswrapper[4698]: I0127 14:29:09.039680 4698 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="34ab310837721b7012a0be773e777ae2b611be1e4143b44548ad3d4f93909a04" exitCode=0 Jan 27 14:29:09 crc kubenswrapper[4698]: I0127 14:29:09.039759 4698 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 14:29:09 crc kubenswrapper[4698]: I0127 14:29:09.039781 4698 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 14:29:09 crc kubenswrapper[4698]: I0127 14:29:09.039906 4698 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 14:29:09 crc kubenswrapper[4698]: I0127 14:29:09.039922 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"34ab310837721b7012a0be773e777ae2b611be1e4143b44548ad3d4f93909a04"} Jan 27 14:29:09 crc kubenswrapper[4698]: I0127 14:29:09.040026 4698 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 14:29:09 crc kubenswrapper[4698]: I0127 14:29:09.040484 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:29:09 crc kubenswrapper[4698]: I0127 14:29:09.040510 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:29:09 crc kubenswrapper[4698]: I0127 14:29:09.040520 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:29:09 crc kubenswrapper[4698]: I0127 14:29:09.040547 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:29:09 crc kubenswrapper[4698]: I0127 14:29:09.040568 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:29:09 crc kubenswrapper[4698]: I0127 14:29:09.040579 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:29:09 crc kubenswrapper[4698]: I0127 14:29:09.040961 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:29:09 crc kubenswrapper[4698]: I0127 14:29:09.040988 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:29:09 crc kubenswrapper[4698]: I0127 14:29:09.040998 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Jan 27 14:29:09 crc kubenswrapper[4698]: I0127 14:29:09.041460 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:29:09 crc kubenswrapper[4698]: I0127 14:29:09.041483 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:29:09 crc kubenswrapper[4698]: I0127 14:29:09.041492 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:29:09 crc kubenswrapper[4698]: I0127 14:29:09.330518 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 27 14:29:09 crc kubenswrapper[4698]: I0127 14:29:09.941083 4698 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-12 10:22:52.031520505 +0000 UTC Jan 27 14:29:10 crc kubenswrapper[4698]: I0127 14:29:10.045871 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"67a6817e50b9384743f6881c733af45511ee78ad9a3a7cea4d3e7e4e1c394e73"} Jan 27 14:29:10 crc kubenswrapper[4698]: I0127 14:29:10.045915 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"807d7a009156f31527d166edcbe520f3a479c730f56f9e946f29e49734f72826"} Jan 27 14:29:10 crc kubenswrapper[4698]: I0127 14:29:10.045929 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"d5b1f6504828ab5deb1c86b048c3e766ba0983cf813b1751a077a3105c21754a"} Jan 27 14:29:10 crc kubenswrapper[4698]: I0127 14:29:10.045940 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"5f8af55661c2961a592f176699c05742ca89cc5df26d1b96747403a59970eda5"} Jan 27 14:29:10 crc kubenswrapper[4698]: I0127 14:29:10.045947 4698 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 27 14:29:10 crc kubenswrapper[4698]: I0127 14:29:10.045978 4698 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 14:29:10 crc kubenswrapper[4698]: I0127 14:29:10.046028 4698 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 14:29:10 crc kubenswrapper[4698]: I0127 14:29:10.045950 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"4cbd77e460eb98eaa68e630886ad37c10e9b1c40828629431652eafbfea2b76b"} Jan 27 14:29:10 crc kubenswrapper[4698]: I0127 14:29:10.046032 4698 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 14:29:10 crc kubenswrapper[4698]: I0127 14:29:10.045980 4698 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 14:29:10 crc kubenswrapper[4698]: I0127 14:29:10.047219 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:29:10 crc kubenswrapper[4698]: I0127 14:29:10.047236 4698 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:29:10 crc kubenswrapper[4698]: I0127 14:29:10.047267 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:29:10 crc kubenswrapper[4698]: I0127 14:29:10.047278 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:29:10 crc kubenswrapper[4698]: I0127 14:29:10.047320 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:29:10 crc kubenswrapper[4698]: I0127 14:29:10.047338 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:29:10 crc kubenswrapper[4698]: I0127 14:29:10.047352 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:29:10 crc kubenswrapper[4698]: I0127 14:29:10.047245 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:29:10 crc kubenswrapper[4698]: I0127 14:29:10.047630 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:29:10 crc kubenswrapper[4698]: I0127 14:29:10.047896 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:29:10 crc kubenswrapper[4698]: I0127 14:29:10.047927 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:29:10 crc kubenswrapper[4698]: I0127 14:29:10.047937 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:29:10 crc kubenswrapper[4698]: I0127 14:29:10.913055 4698 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Jan 27 14:29:10 crc kubenswrapper[4698]: I0127 14:29:10.941808 4698 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-24 10:26:54.266628533 +0000 UTC Jan 27 14:29:11 crc kubenswrapper[4698]: I0127 14:29:11.048298 4698 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 14:29:11 crc kubenswrapper[4698]: I0127 14:29:11.048344 4698 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 14:29:11 crc kubenswrapper[4698]: I0127 14:29:11.049523 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:29:11 crc kubenswrapper[4698]: I0127 14:29:11.049557 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:29:11 crc kubenswrapper[4698]: I0127 14:29:11.049567 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:29:11 crc kubenswrapper[4698]: I0127 14:29:11.049923 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:29:11 crc kubenswrapper[4698]: I0127 14:29:11.050044 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:29:11 crc kubenswrapper[4698]: I0127 14:29:11.050122 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Jan 27 14:29:11 crc kubenswrapper[4698]: I0127 14:29:11.278569 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 27 14:29:11 crc kubenswrapper[4698]: I0127 14:29:11.373063 4698 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 14:29:11 crc kubenswrapper[4698]: I0127 14:29:11.374212 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:29:11 crc kubenswrapper[4698]: I0127 14:29:11.374242 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:29:11 crc kubenswrapper[4698]: I0127 14:29:11.374250 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:29:11 crc kubenswrapper[4698]: I0127 14:29:11.374274 4698 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 27 14:29:11 crc kubenswrapper[4698]: I0127 14:29:11.513493 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-crc" Jan 27 14:29:11 crc kubenswrapper[4698]: I0127 14:29:11.942770 4698 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-02 02:00:14.380181232 +0000 UTC Jan 27 14:29:12 crc kubenswrapper[4698]: I0127 14:29:12.050715 4698 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 14:29:12 crc kubenswrapper[4698]: I0127 14:29:12.050734 4698 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 14:29:12 crc kubenswrapper[4698]: I0127 14:29:12.051563 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:29:12 crc kubenswrapper[4698]: I0127 14:29:12.051603 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:29:12 crc kubenswrapper[4698]: I0127 14:29:12.051613 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:29:12 crc kubenswrapper[4698]: I0127 14:29:12.051816 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:29:12 crc kubenswrapper[4698]: I0127 14:29:12.051869 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:29:12 crc kubenswrapper[4698]: I0127 14:29:12.051881 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:29:12 crc kubenswrapper[4698]: I0127 14:29:12.687011 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 14:29:12 crc kubenswrapper[4698]: I0127 14:29:12.687187 4698 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 27 14:29:12 crc kubenswrapper[4698]: I0127 14:29:12.687224 4698 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 14:29:12 crc kubenswrapper[4698]: I0127 14:29:12.688396 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:29:12 crc kubenswrapper[4698]: I0127 14:29:12.688449 
4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:29:12 crc kubenswrapper[4698]: I0127 14:29:12.688459 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:29:12 crc kubenswrapper[4698]: I0127 14:29:12.933985 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-crc" Jan 27 14:29:12 crc kubenswrapper[4698]: I0127 14:29:12.943627 4698 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-18 21:53:16.975341343 +0000 UTC Jan 27 14:29:13 crc kubenswrapper[4698]: I0127 14:29:13.053216 4698 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 14:29:13 crc kubenswrapper[4698]: I0127 14:29:13.053965 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:29:13 crc kubenswrapper[4698]: I0127 14:29:13.053996 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:29:13 crc kubenswrapper[4698]: I0127 14:29:13.054008 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:29:13 crc kubenswrapper[4698]: I0127 14:29:13.662826 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 14:29:13 crc kubenswrapper[4698]: I0127 14:29:13.662976 4698 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 27 14:29:13 crc kubenswrapper[4698]: I0127 14:29:13.663011 4698 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 14:29:13 crc kubenswrapper[4698]: I0127 14:29:13.664159 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:29:13 crc kubenswrapper[4698]: I0127 14:29:13.664254 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:29:13 crc kubenswrapper[4698]: I0127 14:29:13.664280 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:29:13 crc kubenswrapper[4698]: I0127 14:29:13.944576 4698 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-12 02:27:48.526052751 +0000 UTC Jan 27 14:29:13 crc kubenswrapper[4698]: I0127 14:29:13.952912 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 14:29:14 crc kubenswrapper[4698]: I0127 14:29:14.055906 4698 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 14:29:14 crc kubenswrapper[4698]: I0127 14:29:14.056959 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:29:14 crc kubenswrapper[4698]: I0127 14:29:14.057006 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:29:14 crc kubenswrapper[4698]: I0127 14:29:14.057023 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:29:14 crc kubenswrapper[4698]: I0127 14:29:14.279628 
4698 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 14:29:14 crc kubenswrapper[4698]: I0127 14:29:14.279771 4698 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 14:29:14 crc kubenswrapper[4698]: I0127 14:29:14.945392 4698 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-18 01:26:37.961461247 +0000 UTC Jan 27 14:29:15 crc kubenswrapper[4698]: E0127 14:29:15.064806 4698 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Jan 27 14:29:15 crc kubenswrapper[4698]: I0127 14:29:15.946157 4698 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-29 20:35:22.9521601 +0000 UTC Jan 27 14:29:16 crc kubenswrapper[4698]: I0127 14:29:16.891996 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 27 14:29:16 crc kubenswrapper[4698]: I0127 14:29:16.892170 4698 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 14:29:16 crc kubenswrapper[4698]: I0127 14:29:16.893654 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:29:16 crc kubenswrapper[4698]: I0127 14:29:16.893697 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:29:16 crc kubenswrapper[4698]: I0127 14:29:16.893711 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:29:16 crc kubenswrapper[4698]: I0127 14:29:16.896230 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 27 14:29:16 crc kubenswrapper[4698]: I0127 14:29:16.946683 4698 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-13 23:49:54.96483971 +0000 UTC Jan 27 14:29:17 crc kubenswrapper[4698]: I0127 14:29:17.061070 4698 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 14:29:17 crc kubenswrapper[4698]: I0127 14:29:17.061898 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:29:17 crc kubenswrapper[4698]: I0127 14:29:17.061933 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:29:17 crc kubenswrapper[4698]: I0127 14:29:17.061977 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:29:17 crc kubenswrapper[4698]: I0127 14:29:17.067511 4698 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 27 14:29:17 crc kubenswrapper[4698]: I0127 14:29:17.947368 4698 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-27 09:24:20.635150787 +0000 UTC Jan 27 14:29:18 crc kubenswrapper[4698]: I0127 14:29:18.063471 4698 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 14:29:18 crc kubenswrapper[4698]: I0127 14:29:18.064477 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:29:18 crc kubenswrapper[4698]: I0127 14:29:18.064510 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:29:18 crc kubenswrapper[4698]: I0127 14:29:18.064521 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:29:18 crc kubenswrapper[4698]: I0127 14:29:18.947968 4698 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-18 18:05:03.182188365 +0000 UTC Jan 27 14:29:19 crc kubenswrapper[4698]: I0127 14:29:19.312750 4698 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Jan 27 14:29:19 crc kubenswrapper[4698]: I0127 14:29:19.312825 4698 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Jan 27 14:29:19 crc kubenswrapper[4698]: I0127 14:29:19.319031 4698 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Jan 27 14:29:19 crc kubenswrapper[4698]: I0127 14:29:19.319099 4698 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Jan 27 14:29:19 crc kubenswrapper[4698]: I0127 14:29:19.948619 4698 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-09 00:50:01.260679258 +0000 UTC Jan 27 14:29:20 crc kubenswrapper[4698]: I0127 14:29:20.949862 4698 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-07 13:11:12.05909709 +0000 UTC Jan 27 14:29:21 crc kubenswrapper[4698]: I0127 14:29:21.950505 4698 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-15 06:58:21.159170675 +0000 
UTC Jan 27 14:29:22 crc kubenswrapper[4698]: I0127 14:29:22.692061 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 14:29:22 crc kubenswrapper[4698]: I0127 14:29:22.692238 4698 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 14:29:22 crc kubenswrapper[4698]: I0127 14:29:22.693626 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:29:22 crc kubenswrapper[4698]: I0127 14:29:22.693704 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:29:22 crc kubenswrapper[4698]: I0127 14:29:22.693717 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:29:22 crc kubenswrapper[4698]: I0127 14:29:22.697579 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 14:29:22 crc kubenswrapper[4698]: I0127 14:29:22.951545 4698 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-18 09:48:22.700331504 +0000 UTC Jan 27 14:29:22 crc kubenswrapper[4698]: I0127 14:29:22.958578 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-crc" Jan 27 14:29:22 crc kubenswrapper[4698]: I0127 14:29:22.958766 4698 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 14:29:22 crc kubenswrapper[4698]: I0127 14:29:22.959806 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:29:22 crc kubenswrapper[4698]: I0127 14:29:22.959855 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:29:22 crc kubenswrapper[4698]: I0127 14:29:22.959867 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:29:22 crc kubenswrapper[4698]: I0127 14:29:22.970410 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-crc" Jan 27 14:29:23 crc kubenswrapper[4698]: I0127 14:29:23.074254 4698 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 14:29:23 crc kubenswrapper[4698]: I0127 14:29:23.074316 4698 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 14:29:23 crc kubenswrapper[4698]: I0127 14:29:23.075247 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:29:23 crc kubenswrapper[4698]: I0127 14:29:23.075280 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:29:23 crc kubenswrapper[4698]: I0127 14:29:23.075290 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:29:23 crc kubenswrapper[4698]: I0127 14:29:23.075325 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:29:23 crc kubenswrapper[4698]: I0127 14:29:23.075346 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:29:23 crc kubenswrapper[4698]: 
I0127 14:29:23.075357 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:29:23 crc kubenswrapper[4698]: I0127 14:29:23.952082 4698 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-10 23:54:08.02675394 +0000 UTC Jan 27 14:29:24 crc kubenswrapper[4698]: I0127 14:29:24.278916 4698 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 14:29:24 crc kubenswrapper[4698]: I0127 14:29:24.279012 4698 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 27 14:29:24 crc kubenswrapper[4698]: E0127 14:29:24.306172 4698 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": context deadline exceeded" interval="6.4s" Jan 27 14:29:24 crc kubenswrapper[4698]: I0127 14:29:24.309310 4698 trace.go:236] Trace[656481808]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (27-Jan-2026 14:29:12.609) (total time: 11700ms): Jan 27 14:29:24 crc kubenswrapper[4698]: Trace[656481808]: ---"Objects listed" error: 11700ms (14:29:24.309) Jan 27 14:29:24 crc kubenswrapper[4698]: Trace[656481808]: [11.700135229s] [11.700135229s] END Jan 27 14:29:24 crc kubenswrapper[4698]: I0127 14:29:24.309349 4698 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Jan 27 14:29:24 crc kubenswrapper[4698]: I0127 14:29:24.309843 4698 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Jan 27 14:29:24 crc kubenswrapper[4698]: I0127 14:29:24.310501 4698 reconstruct.go:205] "DevicePaths of reconstructed volumes updated" Jan 27 14:29:24 crc kubenswrapper[4698]: I0127 14:29:24.311711 4698 trace.go:236] Trace[1951670197]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (27-Jan-2026 14:29:12.659) (total time: 11652ms): Jan 27 14:29:24 crc kubenswrapper[4698]: Trace[1951670197]: ---"Objects listed" error: 11652ms (14:29:24.311) Jan 27 14:29:24 crc kubenswrapper[4698]: Trace[1951670197]: [11.652645162s] [11.652645162s] END Jan 27 14:29:24 crc kubenswrapper[4698]: I0127 14:29:24.311844 4698 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Jan 27 14:29:24 crc kubenswrapper[4698]: E0127 14:29:24.312131 4698 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes \"crc\" is forbidden: autoscaling.openshift.io/ManagedNode infra config cache not synchronized" node="crc" Jan 27 14:29:24 crc kubenswrapper[4698]: I0127 14:29:24.312239 4698 trace.go:236] Trace[1483624092]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (27-Jan-2026 14:29:12.134) (total time: 12177ms): Jan 27 14:29:24 crc kubenswrapper[4698]: 
Trace[1483624092]: ---"Objects listed" error: 12177ms (14:29:24.312) Jan 27 14:29:24 crc kubenswrapper[4698]: Trace[1483624092]: [12.177439294s] [12.177439294s] END Jan 27 14:29:24 crc kubenswrapper[4698]: I0127 14:29:24.312264 4698 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Jan 27 14:29:24 crc kubenswrapper[4698]: I0127 14:29:24.319802 4698 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Jan 27 14:29:24 crc kubenswrapper[4698]: I0127 14:29:24.337712 4698 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:48418->192.168.126.11:17697: read: connection reset by peer" start-of-body= Jan 27 14:29:24 crc kubenswrapper[4698]: I0127 14:29:24.337788 4698 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:48418->192.168.126.11:17697: read: connection reset by peer" Jan 27 14:29:24 crc kubenswrapper[4698]: I0127 14:29:24.338143 4698 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body= Jan 27 14:29:24 crc kubenswrapper[4698]: I0127 14:29:24.338192 4698 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" Jan 27 14:29:24 crc kubenswrapper[4698]: I0127 14:29:24.338552 4698 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body= Jan 27 14:29:24 crc kubenswrapper[4698]: I0127 14:29:24.338605 4698 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" Jan 27 14:29:24 crc kubenswrapper[4698]: I0127 14:29:24.338705 4698 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Liveness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:48416->192.168.126.11:17697: read: connection reset by peer" start-of-body= Jan 27 14:29:24 crc kubenswrapper[4698]: I0127 14:29:24.338726 4698 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get 
\"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:48416->192.168.126.11:17697: read: connection reset by peer" Jan 27 14:29:24 crc kubenswrapper[4698]: I0127 14:29:24.932104 4698 apiserver.go:52] "Watching apiserver" Jan 27 14:29:24 crc kubenswrapper[4698]: I0127 14:29:24.935418 4698 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Jan 27 14:29:24 crc kubenswrapper[4698]: I0127 14:29:24.935702 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-operator/network-operator-58b4c7f79c-55gtf","openshift-network-console/networking-console-plugin-85b44fc459-gdk6g","openshift-network-diagnostics/network-check-source-55646444c4-trplf","openshift-network-diagnostics/network-check-target-xd92c","openshift-network-node-identity/network-node-identity-vrzqb","openshift-network-operator/iptables-alerter-4ln5h"] Jan 27 14:29:24 crc kubenswrapper[4698]: I0127 14:29:24.937692 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 14:29:24 crc kubenswrapper[4698]: E0127 14:29:24.937796 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 14:29:24 crc kubenswrapper[4698]: I0127 14:29:24.938024 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 27 14:29:24 crc kubenswrapper[4698]: I0127 14:29:24.940055 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 14:29:24 crc kubenswrapper[4698]: I0127 14:29:24.940146 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 27 14:29:24 crc kubenswrapper[4698]: I0127 14:29:24.940272 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 27 14:29:24 crc kubenswrapper[4698]: E0127 14:29:24.940148 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 14:29:24 crc kubenswrapper[4698]: I0127 14:29:24.940464 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 14:29:24 crc kubenswrapper[4698]: E0127 14:29:24.940766 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 14:29:24 crc kubenswrapper[4698]: I0127 14:29:24.941887 4698 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Jan 27 14:29:24 crc kubenswrapper[4698]: I0127 14:29:24.942587 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Jan 27 14:29:24 crc kubenswrapper[4698]: I0127 14:29:24.942816 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Jan 27 14:29:24 crc kubenswrapper[4698]: I0127 14:29:24.942934 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Jan 27 14:29:24 crc kubenswrapper[4698]: I0127 14:29:24.943052 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Jan 27 14:29:24 crc kubenswrapper[4698]: I0127 14:29:24.943188 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Jan 27 14:29:24 crc kubenswrapper[4698]: I0127 14:29:24.943327 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Jan 27 14:29:24 crc kubenswrapper[4698]: I0127 14:29:24.943411 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Jan 27 14:29:24 crc kubenswrapper[4698]: I0127 14:29:24.943521 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Jan 27 14:29:24 crc kubenswrapper[4698]: I0127 14:29:24.944500 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Jan 27 14:29:24 crc kubenswrapper[4698]: I0127 14:29:24.953043 4698 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-03 23:36:52.168500133 +0000 UTC Jan 27 14:29:24 crc kubenswrapper[4698]: I0127 14:29:24.973133 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:24Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 14:29:24 crc kubenswrapper[4698]: I0127 14:29:24.987333 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:24Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 14:29:24 crc kubenswrapper[4698]: I0127 14:29:24.999701 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:24Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.009221 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.015598 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.015661 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.015691 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.015721 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.015745 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.015767 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.015789 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.015810 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sb6h7\" (UniqueName: 
\"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.015832 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.015857 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.015877 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.015899 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.015921 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.015942 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.015964 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.015959 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "bound-sa-token". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.015987 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.015959 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" (OuterVolumeSpecName: "kube-api-access-mnrrd") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "kube-api-access-mnrrd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.016011 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.016033 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.016054 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.016076 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.016096 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.016116 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.016140 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.016162 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: 
\"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.016185 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.016205 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.016224 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.016243 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.016278 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.016302 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.016324 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.016348 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.016369 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.016423 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.016449 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.016471 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.016523 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.016549 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.016572 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.016595 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.016618 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.016656 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.016678 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.016706 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.016759 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.016782 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.016805 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.016829 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.016854 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.016875 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.016895 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.016917 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.016942 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.016963 4698 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.016987 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.017012 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.017037 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.017058 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.017079 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.017103 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.017125 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.017147 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.017175 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.017304 4698 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.017333 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.017358 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") "
Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.017381 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") "
Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.017402 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.017425 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") "
Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.017448 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") "
Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.017475 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") "
Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.017496 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") "
Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.017518 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") "
Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.017542 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.017568 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") "
Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.017590 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") "
Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.017652 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") "
Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.017675 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") "
Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.016114 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.017700 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") "
Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.016138 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" (OuterVolumeSpecName: "kube-api-access-lz9wn") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "kube-api-access-lz9wn". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.016121 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" (OuterVolumeSpecName: "kube-api-access-gf66m") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "kube-api-access-gf66m". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.016249 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" (OuterVolumeSpecName: "kube-api-access-sb6h7") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "kube-api-access-sb6h7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.017735 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") "
Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.016272 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.016286 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "stats-auth". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.016373 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "machine-api-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.016408 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" (OuterVolumeSpecName: "kube-api-access-v47cf") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "kube-api-access-v47cf". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.017763 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") "
Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.017787 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") "
Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.017808 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") "
Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.017827 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") "
Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.017848 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") "
Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.017873 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.017895 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") "
Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.017917 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") "
Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.017940 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") "
Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.017964 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") "
Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.017985 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") "
Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.018006 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") "
Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.018029 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") "
Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.018051 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.018079 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") "
Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.018103 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.018125 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") "
Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.018150 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") "
Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.018173 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") "
Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.018196 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") "
Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.018218 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") "
Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.018240 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") "
Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.018262 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") "
Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.018284 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.018306 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") "
Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.018329 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") "
Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.018351 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") "
Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.018376 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") "
Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.018397 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") "
Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.018419 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.018445 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") "
Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.018467 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") "
Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.018489 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.018514 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") "
Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.018536 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") "
Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.018558 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") "
Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.018583 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.018605 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") "
Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.018628 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") "
Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.018672 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") "
Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.018732 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.018755 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") "
Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.018778 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.018802 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") "
Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.018826 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.018851 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") "
Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.018876 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.018898 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") "
Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.018920 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") "
Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.018943 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") "
Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.018965 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") "
Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.018990 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") "
Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.019013 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.019036 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") "
Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.019064 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") "
Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.019085 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") "
Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.019106 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") "
Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.019129 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") "
Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.019152 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") "
Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.019175 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") "
Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.019200 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.019223 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") pod \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\" (UID: \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\") "
Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.019247 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.019273 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") "
Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.019309 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") "
Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.019334 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") "
Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.019358 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") "
Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.019382 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") pod \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\" (UID: \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\") "
Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.019406 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.019429 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") "
Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.019453 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") "
Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.019477 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.019527 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") "
Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.021801 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") "
Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.021851 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") "
Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.021893 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") "
Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.021926 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") pod \"49ef4625-1d3a-4a9f-b595-c2433d32326d\" (UID: \"49ef4625-1d3a-4a9f-b595-c2433d32326d\") "
Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.021961 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") "
Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.021996 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") "
Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.022024 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") "
Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.022063 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") "
Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.022097 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") "
Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.022123 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") "
Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.022157 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") "
Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.022190 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") "
Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.022221 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") "
Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.022251 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") "
Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.022286 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") "
Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.022318 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") "
Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.022344 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") "
Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.022379 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") "
Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.022411 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.022443 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") "
Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.022471 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") "
Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.022507 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") "
Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.022542 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") "
Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.022576 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") "
Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.022609 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") "
Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.022660 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") "
Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.022733 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") pod \"44663579-783b-4372-86d6-acf235a62d72\" (UID: \"44663579-783b-4372-86d6-acf235a62d72\") "
Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.022767 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") "
Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.022801 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") "
Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.022831 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") "
Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.022857 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.022888 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") "
Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.022920 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") "
Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.022948 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") "
Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.022977 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") "
Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.023010 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.023040 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") "
Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.023065 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") "
Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.023096 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") "
Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.023128 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.023156 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") "
Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.023187 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") "
Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.023247 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb"
Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.023287 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf"
Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.023322 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf"
Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.023355 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb"
Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.023383 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb"
Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.023418 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.023448 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.023479 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.023516 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb"
Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.023547 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h"
Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.023582 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.023618 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h"
Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.023680 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h"
Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.023713 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf"
Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.023794 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") on node \"crc\" DevicePath \"\""
Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.023813 4698 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") on node \"crc\" DevicePath \"\""
Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.023834 4698 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.023850 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") on node \"crc\" DevicePath \"\""
Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.023865 4698 reconciler_common.go:293] "Volume detached for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") on node \"crc\" DevicePath \"\""
Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.023880 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") on node \"crc\" DevicePath \"\""
Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.023900 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") on node \"crc\" DevicePath \"\""
Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.023914 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") on node \"crc\" DevicePath \"\""
Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.023932 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") on node \"crc\" DevicePath \"\""
Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.024196 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:24Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.016555 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.016707 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" (OuterVolumeSpecName: "service-ca") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.016742 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.016951 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" (OuterVolumeSpecName: "certs") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.016972 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.017071 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.017134 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.017205 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" (OuterVolumeSpecName: "kube-api-access-249nr") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "kube-api-access-249nr". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.017284 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.021439 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.017348 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" (OuterVolumeSpecName: "client-ca") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.017384 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" (OuterVolumeSpecName: "config") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.017502 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.017613 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.017676 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "mcd-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.017684 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "apiservice-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.017869 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "image-registry-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.017884 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "mcc-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.018057 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.018132 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.018256 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" (OuterVolumeSpecName: "kube-api-access-xcgwh") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "kube-api-access-xcgwh". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.018293 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" (OuterVolumeSpecName: "cert") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.018334 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" (OuterVolumeSpecName: "kube-api-access-dbsvg") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "kube-api-access-dbsvg". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.018457 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" (OuterVolumeSpecName: "kube-api-access-w4xd4") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "kube-api-access-w4xd4". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.018517 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" (OuterVolumeSpecName: "kube-api-access-9xfj7") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "kube-api-access-9xfj7".
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.018556 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" (OuterVolumeSpecName: "console-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.018583 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.018746 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.018783 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.018814 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.018837 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.018902 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" (OuterVolumeSpecName: "kube-api-access-ngvvp") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "kube-api-access-ngvvp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.018940 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.018969 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.018994 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" (OuterVolumeSpecName: "kube-api-access-6ccd8") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "kube-api-access-6ccd8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.019161 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" (OuterVolumeSpecName: "kube-api-access-lzf88") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "kube-api-access-lzf88". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.019198 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.019228 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" (OuterVolumeSpecName: "utilities") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.024818 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" (OuterVolumeSpecName: "utilities") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.019311 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "node-bootstrap-token". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.020855 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.020879 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.020939 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" (OuterVolumeSpecName: "kube-api-access-bf2bz") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "kube-api-access-bf2bz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.020947 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" (OuterVolumeSpecName: "kube-api-access-zkvpv") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "kube-api-access-zkvpv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.020995 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.021029 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" (OuterVolumeSpecName: "config") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.021433 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" (OuterVolumeSpecName: "config") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.021463 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" (OuterVolumeSpecName: "signing-key") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). 
InnerVolumeSpecName "signing-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.021622 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.021656 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" (OuterVolumeSpecName: "config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.021677 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.021662 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.021749 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.021885 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "tmpfs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.025359 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" (OuterVolumeSpecName: "kube-api-access-d6qdx") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "kube-api-access-d6qdx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.025392 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" (OuterVolumeSpecName: "kube-api-access-rnphk") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). 
InnerVolumeSpecName "kube-api-access-rnphk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.025395 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" (OuterVolumeSpecName: "kube-api-access-7c4vf") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "kube-api-access-7c4vf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.025550 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-cabundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.025551 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.025834 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" (OuterVolumeSpecName: "kube-api-access-xcphl") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "kube-api-access-xcphl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.025896 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.022463 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.022512 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-serving-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.023147 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" (OuterVolumeSpecName: "kube-api-access-279lb") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "kube-api-access-279lb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.023455 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "samples-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.023535 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.023575 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.023787 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "webhook-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.024050 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.017310 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.026035 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" (OuterVolumeSpecName: "kube-api-access-pcxfs") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "kube-api-access-pcxfs". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.026075 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" (OuterVolumeSpecName: "kube-api-access-x7zkh") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "kube-api-access-x7zkh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:29:25 crc kubenswrapper[4698]: E0127 14:29:25.026095 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 14:29:25.525720216 +0000 UTC m=+21.202497701 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.021934 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.026468 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.026633 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" (OuterVolumeSpecName: "kube-api-access-pj782") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "kube-api-access-pj782". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.026662 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" (OuterVolumeSpecName: "kube-api-access-2w9zh") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). 
InnerVolumeSpecName "kube-api-access-2w9zh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.027072 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" (OuterVolumeSpecName: "kube-api-access-zgdk5") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "kube-api-access-zgdk5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.027627 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.027666 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "package-server-manager-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.027676 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" (OuterVolumeSpecName: "service-ca") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.027829 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "default-certificate". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.028317 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.028678 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" (OuterVolumeSpecName: "config") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.024736 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "control-plane-machine-set-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.028678 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.028710 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" (OuterVolumeSpecName: "config") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.028754 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" (OuterVolumeSpecName: "config") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.029041 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.029108 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.029870 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.029882 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.030064 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" (OuterVolumeSpecName: "kube-api-access-qg5z5") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "kube-api-access-qg5z5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.031686 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.031820 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" (OuterVolumeSpecName: "kube-api-access-pjr6v") pod "49ef4625-1d3a-4a9f-b595-c2433d32326d" (UID: "49ef4625-1d3a-4a9f-b595-c2433d32326d"). InnerVolumeSpecName "kube-api-access-pjr6v". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.031858 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.032292 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" (OuterVolumeSpecName: "kube-api-access-wxkg8") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "kube-api-access-wxkg8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.032514 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.032776 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.032725 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" (OuterVolumeSpecName: "kube-api-access-nzwt7") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). 
InnerVolumeSpecName "kube-api-access-nzwt7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.032950 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" (OuterVolumeSpecName: "client-ca") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.033032 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.033023 4698 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.033092 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" (OuterVolumeSpecName: "config") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.033128 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.033180 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" (OuterVolumeSpecName: "audit") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "audit". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.033246 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.033521 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.033527 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.033861 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.033867 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.033883 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" (OuterVolumeSpecName: "config") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.033925 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" (OuterVolumeSpecName: "kube-api-access-vt5rc") pod "44663579-783b-4372-86d6-acf235a62d72" (UID: "44663579-783b-4372-86d6-acf235a62d72"). InnerVolumeSpecName "kube-api-access-vt5rc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.034032 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" (OuterVolumeSpecName: "kube-api-access-4d4hj") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "kube-api-access-4d4hj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.034064 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" (OuterVolumeSpecName: "utilities") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.034110 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.034294 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.034296 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.034416 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.034448 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.034462 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.034542 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" (OuterVolumeSpecName: "kube-api-access-fcqwp") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "kube-api-access-fcqwp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.034550 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "metrics-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.034812 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.034822 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.034966 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.034867 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" (OuterVolumeSpecName: "kube-api-access-x4zgh") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "kube-api-access-x4zgh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.034826 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.034888 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" (OuterVolumeSpecName: "kube-api-access-8tdtz") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "kube-api-access-8tdtz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.034895 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" (OuterVolumeSpecName: "images") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.034901 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). 
InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.034931 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.034951 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.035122 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.035169 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.035300 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "available-featuregates". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 14:29:25 crc kubenswrapper[4698]: E0127 14:29:25.035612 4698 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.035684 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:29:25 crc kubenswrapper[4698]: E0127 14:29:25.035696 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-27 14:29:25.53567341 +0000 UTC m=+21.212450955 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.035711 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.035778 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" (OuterVolumeSpecName: "kube-api-access-2d4wz") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "kube-api-access-2d4wz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.035829 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 27 14:29:25 crc kubenswrapper[4698]: E0127 14:29:25.035864 4698 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 27 14:29:25 crc kubenswrapper[4698]: E0127 14:29:25.035927 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-27 14:29:25.535917026 +0000 UTC m=+21.212694491 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.035955 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.036190 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" (OuterVolumeSpecName: "kube-api-access-kfwg7") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "kube-api-access-kfwg7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.036321 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.036325 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.036592 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" (OuterVolumeSpecName: "kube-api-access-x2m85") pod "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" (UID: "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d"). InnerVolumeSpecName "kube-api-access-x2m85". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.036756 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.036770 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.036816 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.036887 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "installation-pull-secrets". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.036892 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "machine-approver-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.037040 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" (OuterVolumeSpecName: "config") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.037271 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" (OuterVolumeSpecName: "serviceca") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "serviceca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.037353 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" (OuterVolumeSpecName: "kube-api-access-jkwtn") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "kube-api-access-jkwtn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.037513 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" (OuterVolumeSpecName: "kube-api-access-d4lsv") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "kube-api-access-d4lsv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.037837 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" (OuterVolumeSpecName: "kube-api-access-htfz6") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "kube-api-access-htfz6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.037911 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.037977 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). 
InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.038132 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.037730 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" (OuterVolumeSpecName: "config-volume") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.038975 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" (OuterVolumeSpecName: "kube-api-access-cfbct") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "kube-api-access-cfbct". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.039285 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" (OuterVolumeSpecName: "kube-api-access-w9rds") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "kube-api-access-w9rds". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.043223 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.043391 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.043811 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" (OuterVolumeSpecName: "utilities") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.044191 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.045453 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.045927 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:24Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.046294 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" (OuterVolumeSpecName: "config") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.046943 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" (OuterVolumeSpecName: "config") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.047068 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" (OuterVolumeSpecName: "images") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:29:25 crc kubenswrapper[4698]: E0127 14:29:25.048463 4698 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 27 14:29:25 crc kubenswrapper[4698]: E0127 14:29:25.048490 4698 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 27 14:29:25 crc kubenswrapper[4698]: E0127 14:29:25.048504 4698 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 14:29:25 crc kubenswrapper[4698]: E0127 14:29:25.048566 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-27 14:29:25.548548014 +0000 UTC m=+21.225325569 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.049966 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.050289 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" (OuterVolumeSpecName: "kube-api-access-6g6sz") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "kube-api-access-6g6sz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.050471 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" (OuterVolumeSpecName: "kube-api-access-mg5zb") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "kube-api-access-mg5zb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:29:25 crc kubenswrapper[4698]: E0127 14:29:25.050848 4698 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 27 14:29:25 crc kubenswrapper[4698]: E0127 14:29:25.050875 4698 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 27 14:29:25 crc kubenswrapper[4698]: E0127 14:29:25.050886 4698 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 14:29:25 crc kubenswrapper[4698]: E0127 14:29:25.050946 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-27 14:29:25.550925991 +0000 UTC m=+21.227703446 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.051712 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" (OuterVolumeSpecName: "kube-api-access-qs4fp") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "kube-api-access-qs4fp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.051558 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.051902 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.052145 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" (OuterVolumeSpecName: "kube-api-access-jhbk2") pod "bd23aa5c-e532-4e53-bccf-e79f130c5ae8" (UID: "bd23aa5c-e532-4e53-bccf-e79f130c5ae8"). InnerVolumeSpecName "kube-api-access-jhbk2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.052218 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.052336 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" (OuterVolumeSpecName: "config") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.052620 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" (OuterVolumeSpecName: "kube-api-access-s4n52") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). 
InnerVolumeSpecName "kube-api-access-s4n52". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.052667 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" (OuterVolumeSpecName: "kube-api-access-tk88c") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "kube-api-access-tk88c". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.053002 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "multus-daemon-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.053249 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.051569 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.055921 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.056474 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" (OuterVolumeSpecName: "kube-api-access-w7l8j") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "kube-api-access-w7l8j". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.056599 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "image-import-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.056367 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.056764 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" (OuterVolumeSpecName: "kube-api-access-fqsjt") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "kube-api-access-fqsjt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.057076 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" (OuterVolumeSpecName: "config") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.057308 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.058975 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" (OuterVolumeSpecName: "config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.064144 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.068276 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.071401 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.073282 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.077169 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:24Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.078334 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.080757 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.082069 4698 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="9e780429dcca871712b55832b0cd0b3e78b7343569cfa71a87bbb2be1bd3f129" exitCode=255 Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.082105 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"9e780429dcca871712b55832b0cd0b3e78b7343569cfa71a87bbb2be1bd3f129"} Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.085162 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:24Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.093605 4698 scope.go:117] "RemoveContainer" containerID="9e780429dcca871712b55832b0cd0b3e78b7343569cfa71a87bbb2be1bd3f129" Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.093972 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:24Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.094944 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.101757 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:24Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.111127 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:24Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.119938 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.124984 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.125083 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.125146 4698 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.125244 4698 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") on node \"crc\" DevicePath \"\"" Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.125282 4698 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.125286 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.125295 4698 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.125325 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") on node \"crc\" DevicePath \"\"" Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.125340 4698 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: 
\"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.125351 4698 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.125361 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") on node \"crc\" DevicePath \"\"" Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.125372 4698 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.125381 4698 reconciler_common.go:293] "Volume detached for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") on node \"crc\" DevicePath \"\"" Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.125392 4698 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") on node \"crc\" DevicePath \"\"" Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.125408 4698 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.125423 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") on node \"crc\" DevicePath \"\"" Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.125434 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") on node \"crc\" DevicePath \"\"" Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.125444 4698 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.125455 4698 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.125466 4698 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.125478 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") on node \"crc\" DevicePath \"\"" Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.125489 4698 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.125502 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") on node \"crc\" DevicePath \"\"" Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.125514 4698 reconciler_common.go:293] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") on node \"crc\" DevicePath \"\"" Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.125534 4698 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.125546 4698 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.125557 4698 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.125571 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") on node \"crc\" DevicePath \"\"" Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.125584 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") on node \"crc\" DevicePath \"\"" Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.125596 4698 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.125608 4698 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.125619 4698 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.125648 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") on node \"crc\" DevicePath \"\"" Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.125661 4698 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.125673 4698 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" 
(UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.125683 4698 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.125694 4698 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.125707 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.125718 4698 reconciler_common.go:293] "Volume detached for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") on node \"crc\" DevicePath \"\"" Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.125730 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") on node \"crc\" DevicePath \"\"" Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.125741 4698 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.125752 4698 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") on node \"crc\" DevicePath \"\"" Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.125763 4698 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") on node \"crc\" DevicePath \"\"" Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.125775 4698 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") on node \"crc\" DevicePath \"\"" Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.125787 4698 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.125798 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") on node \"crc\" DevicePath \"\"" Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.125810 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") on node \"crc\" DevicePath \"\"" Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.125821 4698 reconciler_common.go:293] "Volume detached for volume 
\"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") on node \"crc\" DevicePath \"\"" Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.125833 4698 reconciler_common.go:293] "Volume detached for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.125844 4698 reconciler_common.go:293] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") on node \"crc\" DevicePath \"\"" Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.125855 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.125867 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") on node \"crc\" DevicePath \"\"" Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.125879 4698 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.125890 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") on node \"crc\" DevicePath \"\"" Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.125901 4698 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.125911 4698 reconciler_common.go:293] "Volume detached for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") on node \"crc\" DevicePath \"\"" Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.125923 4698 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.125934 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") on node \"crc\" DevicePath \"\"" Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.125945 4698 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.125956 4698 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") on node \"crc\" DevicePath \"\"" Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.125967 4698 reconciler_common.go:293] "Volume detached for volume 
\"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.125978 4698 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") on node \"crc\" DevicePath \"\"" Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.125990 4698 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.126001 4698 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.126012 4698 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.126024 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") on node \"crc\" DevicePath \"\"" Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.126034 4698 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") on node \"crc\" DevicePath \"\"" Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.126045 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") on node \"crc\" DevicePath \"\"" Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.126058 4698 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.126069 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") on node \"crc\" DevicePath \"\"" Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.126080 4698 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.126091 4698 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.126103 4698 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.126114 4698 reconciler_common.go:293] 
"Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") on node \"crc\" DevicePath \"\"" Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.126125 4698 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.126137 4698 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.126199 4698 reconciler_common.go:293] "Volume detached for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") on node \"crc\" DevicePath \"\"" Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.126220 4698 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.126235 4698 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") on node \"crc\" DevicePath \"\"" Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.126233 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.126246 4698 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") on node \"crc\" DevicePath \"\"" Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.126330 4698 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.126343 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") on node \"crc\" DevicePath \"\"" Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.126353 4698 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.126362 4698 reconciler_common.go:293] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") on node \"crc\" DevicePath \"\"" Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.126372 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") on node \"crc\" DevicePath 
\"\"" Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.126382 4698 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.126392 4698 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.126401 4698 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.126415 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") on node \"crc\" DevicePath \"\"" Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.126427 4698 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.126439 4698 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.126455 4698 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.126466 4698 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") on node \"crc\" DevicePath \"\"" Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.126475 4698 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") on node \"crc\" DevicePath \"\"" Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.126484 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") on node \"crc\" DevicePath \"\"" Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.126493 4698 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.126504 4698 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.126513 4698 reconciler_common.go:293] "Volume detached for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") on node \"crc\" DevicePath \"\"" Jan 27 14:29:25 crc 
kubenswrapper[4698]: I0127 14:29:25.126522 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") on node \"crc\" DevicePath \"\"" Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.126577 4698 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.126591 4698 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.126648 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") on node \"crc\" DevicePath \"\"" Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.126664 4698 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") on node \"crc\" DevicePath \"\"" Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.128207 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") on node \"crc\" DevicePath \"\"" Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.128252 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") on node \"crc\" DevicePath \"\"" Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.128262 4698 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.128278 4698 reconciler_common.go:293] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") on node \"crc\" DevicePath \"\"" Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.128287 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") on node \"crc\" DevicePath \"\"" Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.128295 4698 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.128309 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") on node \"crc\" DevicePath \"\"" Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.128318 4698 reconciler_common.go:293] "Volume detached for volume \"machine-approver-tls\" (UniqueName: 
\"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") on node \"crc\" DevicePath \"\"" Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.128349 4698 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.128358 4698 reconciler_common.go:293] "Volume detached for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.128367 4698 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.128383 4698 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") on node \"crc\" DevicePath \"\"" Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.128391 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") on node \"crc\" DevicePath \"\"" Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.128400 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") on node \"crc\" DevicePath \"\"" Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.128408 4698 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.128416 4698 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") on node \"crc\" DevicePath \"\"" Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.128427 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") on node \"crc\" DevicePath \"\"" Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.128437 4698 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.128450 4698 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.128461 4698 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.128695 4698 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.128707 4698 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") on node \"crc\" DevicePath \"\"" Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.128717 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") on node \"crc\" DevicePath \"\"" Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.128727 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") on node \"crc\" DevicePath \"\"" Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.128739 4698 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") on node \"crc\" DevicePath \"\"" Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.128748 4698 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") on node \"crc\" DevicePath \"\"" Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.128759 4698 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") on node \"crc\" DevicePath \"\"" Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.128769 4698 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") on node \"crc\" DevicePath \"\"" Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.128780 4698 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") on node \"crc\" DevicePath \"\"" Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.128790 4698 reconciler_common.go:293] "Volume detached for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") on node \"crc\" DevicePath \"\"" Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.128801 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") on node \"crc\" DevicePath \"\"" Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.128811 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") on node \"crc\" DevicePath \"\"" Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.128822 4698 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.128833 4698 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") on 
node \"crc\" DevicePath \"\"" Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.128842 4698 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.128851 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") on node \"crc\" DevicePath \"\"" Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.128859 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") on node \"crc\" DevicePath \"\"" Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.129092 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") on node \"crc\" DevicePath \"\"" Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.129105 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") on node \"crc\" DevicePath \"\"" Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.129183 4698 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") on node \"crc\" DevicePath \"\"" Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.129193 4698 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.129202 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") on node \"crc\" DevicePath \"\"" Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.129211 4698 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.129219 4698 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") on node \"crc\" DevicePath \"\"" Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.129237 4698 reconciler_common.go:293] "Volume detached for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") on node \"crc\" DevicePath \"\"" Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.129246 4698 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") on node \"crc\" DevicePath \"\"" Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.129254 4698 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 27 14:29:25 crc kubenswrapper[4698]: 
I0127 14:29:25.129262 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") on node \"crc\" DevicePath \"\"" Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.129272 4698 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") on node \"crc\" DevicePath \"\"" Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.129280 4698 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.129289 4698 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.129299 4698 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.129308 4698 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") on node \"crc\" DevicePath \"\"" Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.129316 4698 reconciler_common.go:293] "Volume detached for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") on node \"crc\" DevicePath \"\"" Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.129324 4698 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.129334 4698 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.129344 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.129355 4698 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.129366 4698 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.129377 4698 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") on node \"crc\" DevicePath \"\"" Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 
14:29:25.129392 4698 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.129401 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") on node \"crc\" DevicePath \"\"" Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.129409 4698 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") on node \"crc\" DevicePath \"\"" Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.129417 4698 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.129428 4698 reconciler_common.go:293] "Volume detached for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.129438 4698 reconciler_common.go:293] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") on node \"crc\" DevicePath \"\"" Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.129449 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") on node \"crc\" DevicePath \"\"" Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.129457 4698 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.129499 4698 reconciler_common.go:293] "Volume detached for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") on node \"crc\" DevicePath \"\"" Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.129508 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") on node \"crc\" DevicePath \"\"" Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.129516 4698 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") on node \"crc\" DevicePath \"\"" Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.129524 4698 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.129533 4698 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.129543 4698 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.129551 4698 reconciler_common.go:293] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.129560 4698 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") on node \"crc\" DevicePath \"\"" Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.129568 4698 reconciler_common.go:293] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.129576 4698 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.129585 4698 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") on node \"crc\" DevicePath \"\"" Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.129593 4698 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") on node \"crc\" DevicePath \"\"" Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.129601 4698 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.129609 4698 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") on node \"crc\" DevicePath \"\"" Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.129618 4698 reconciler_common.go:293] "Volume detached for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.129627 4698 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") on node \"crc\" DevicePath \"\"" Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.129649 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") on node \"crc\" DevicePath \"\"" Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.129777 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:24Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.139473 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:24Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.152539 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.162502 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:24Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.171997 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:24Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.183464 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0ce1118e-e5ad-4adb-8d50-c758116b45ec\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dbbe49aa918c711f9ec638a3fb42ef3be38a9ed3bc0cdde639ba8bba11eab4de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9525d1a6372b19bc2f30eb4e8e522833badb6319bac25ecae77b3313acf1b1a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e384a95fdadc32733a7213ba468af64a1a33e7dd08722cbffde3ad0c461d8ba\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e780429dcca871712b55832b0cd0b3e78b7343569cfa71a87bbb2be1bd3f129\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e780429dcca871712b55832b0cd0b3e78b7343569cfa71a87bbb2be1bd3f129\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27
T14:29:24Z\\\",\\\"message\\\":\\\"espace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0127 14:29:09.491666 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 14:29:09.493926 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-900123067/tls.crt::/tmp/serving-cert-900123067/tls.key\\\\\\\"\\\\nI0127 14:29:24.319898 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 14:29:24.322681 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 14:29:24.322759 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 14:29:24.322814 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 14:29:24.322847 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 14:29:24.331581 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 14:29:24.331608 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 14:29:24.331613 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 14:29:24.331618 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 14:29:24.331622 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 14:29:24.331626 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 14:29:24.331630 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 14:29:24.331748 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 14:29:24.333757 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T14:29:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a526cfa90f2ba7d074aa6007c909e68f72d5e7a4b91d893c955e7460ff208cfb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:08Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e1a3e5a71b3094750c736335cd70f45977566e58702231351d9d6b195a0cb4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5e1a3e5a71b3094750c736335cd70f45977566e58702231351d9d6b195a0cb4b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:29:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:29:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:29:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.255716 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.263492 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 27 14:29:25 crc kubenswrapper[4698]: W0127 14:29:25.265411 4698 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod37a5e44f_9a88_4405_be8a_b645485e7312.slice/crio-52ed454cd177d94783f49b0aa93c4c11ce648077724dc550ebf2462aee254fb1 WatchSource:0}: Error finding container 52ed454cd177d94783f49b0aa93c4c11ce648077724dc550ebf2462aee254fb1: Status 404 returned error can't find the container with id 52ed454cd177d94783f49b0aa93c4c11ce648077724dc550ebf2462aee254fb1 Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.268989 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 27 14:29:25 crc kubenswrapper[4698]: W0127 14:29:25.276882 4698 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podef543e1b_8068_4ea3_b32a_61027b32e95d.slice/crio-b27f0e184fdef57ac9ffda4b481a554e81cc44199b8d27cb3bc241ed681bf88c WatchSource:0}: Error finding container b27f0e184fdef57ac9ffda4b481a554e81cc44199b8d27cb3bc241ed681bf88c: Status 404 returned error can't find the container with id b27f0e184fdef57ac9ffda4b481a554e81cc44199b8d27cb3bc241ed681bf88c Jan 27 14:29:25 crc kubenswrapper[4698]: W0127 14:29:25.280530 4698 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd75a4c96_2883_4a0b_bab2_0fab2b6c0b49.slice/crio-fb0ddb52a9a4acb17df521fe771f1d1fea746f956397b3ade680b4953222b9d2 WatchSource:0}: Error finding container fb0ddb52a9a4acb17df521fe771f1d1fea746f956397b3ade680b4953222b9d2: Status 404 returned error can't find the container with id fb0ddb52a9a4acb17df521fe771f1d1fea746f956397b3ade680b4953222b9d2 Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.533105 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 14:29:25 crc kubenswrapper[4698]: E0127 14:29:25.533306 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 14:29:26.533277909 +0000 UTC m=+22.210055404 (durationBeforeRetry 1s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.634382 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.634429 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.634451 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.634479 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 14:29:25 crc kubenswrapper[4698]: E0127 14:29:25.634584 4698 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 27 14:29:25 crc kubenswrapper[4698]: E0127 14:29:25.634611 4698 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 27 14:29:25 crc kubenswrapper[4698]: E0127 14:29:25.634661 4698 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 27 14:29:25 crc kubenswrapper[4698]: E0127 14:29:25.634677 4698 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 14:29:25 crc kubenswrapper[4698]: E0127 14:29:25.634617 4698 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 27 14:29:25 crc 
kubenswrapper[4698]: E0127 14:29:25.634746 4698 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 27 14:29:25 crc kubenswrapper[4698]: E0127 14:29:25.634752 4698 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 14:29:25 crc kubenswrapper[4698]: E0127 14:29:25.634735 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-27 14:29:26.634718705 +0000 UTC m=+22.311496170 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 14:29:25 crc kubenswrapper[4698]: E0127 14:29:25.634586 4698 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 27 14:29:25 crc kubenswrapper[4698]: E0127 14:29:25.634897 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-27 14:29:26.634876329 +0000 UTC m=+22.311653794 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 27 14:29:25 crc kubenswrapper[4698]: E0127 14:29:25.634912 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-27 14:29:26.634905831 +0000 UTC m=+22.311683296 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 14:29:25 crc kubenswrapper[4698]: E0127 14:29:25.634924 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-27 14:29:26.634918481 +0000 UTC m=+22.311695946 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.953987 4698 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-13 02:23:30.205438438 +0000 UTC Jan 27 14:29:25 crc kubenswrapper[4698]: I0127 14:29:25.991301 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 14:29:25 crc kubenswrapper[4698]: E0127 14:29:25.991434 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 14:29:26 crc kubenswrapper[4698]: I0127 14:29:26.087299 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Jan 27 14:29:26 crc kubenswrapper[4698]: I0127 14:29:26.088806 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"b9ee5f5f6bb400359196a28fe605c173c38223c8982a991750c91601aeace150"} Jan 27 14:29:26 crc kubenswrapper[4698]: I0127 14:29:26.089072 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 14:29:26 crc kubenswrapper[4698]: I0127 14:29:26.089564 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"fb0ddb52a9a4acb17df521fe771f1d1fea746f956397b3ade680b4953222b9d2"} Jan 27 14:29:26 crc kubenswrapper[4698]: I0127 14:29:26.090854 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"625a1f8c29f9277903045e66df7a55a4d791239b141aba99572db228ea2735ea"} Jan 27 14:29:26 crc kubenswrapper[4698]: I0127 14:29:26.090883 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"314dcaad0f8949a4671b47667f654bdfacfb6b200438cfbd70401455f6e188b6"} Jan 27 14:29:26 crc kubenswrapper[4698]: I0127 14:29:26.090898 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"b27f0e184fdef57ac9ffda4b481a554e81cc44199b8d27cb3bc241ed681bf88c"} Jan 27 14:29:26 crc kubenswrapper[4698]: I0127 14:29:26.092207 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" 
event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"ae7402df1ba165eb747293c1d23e4ea89c50cf2342bf52db437c11c283f68755"} Jan 27 14:29:26 crc kubenswrapper[4698]: I0127 14:29:26.092227 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"52ed454cd177d94783f49b0aa93c4c11ce648077724dc550ebf2462aee254fb1"} Jan 27 14:29:26 crc kubenswrapper[4698]: I0127 14:29:26.102945 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:24Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:29:26Z is after 2025-08-24T17:21:41Z" Jan 27 14:29:26 crc kubenswrapper[4698]: I0127 14:29:26.114218 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:24Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:29:26Z is after 2025-08-24T17:21:41Z" Jan 27 14:29:26 crc kubenswrapper[4698]: I0127 14:29:26.128675 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:29:26Z is after 2025-08-24T17:21:41Z" Jan 27 14:29:26 crc kubenswrapper[4698]: I0127 14:29:26.160553 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0ce1118e-e5ad-4adb-8d50-c758116b45ec\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dbbe49aa918c711f9ec638a3fb42ef3be38a9ed3bc0cdde639ba8bba11eab4de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9525d1a6372b19bc2f30eb4e8e522833badb6319bac25ecae77b3313acf1b1a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e384a95fdadc32733a7213ba468af64a1a33e7dd08722cbffde3ad0c461d8ba\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9ee5f5f6bb400359196a28fe605c173c38223c8982a991750c91601aeace150\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e780429dcca871712b55832b0cd0b3e78b7343569cfa71a87bbb2be1bd3f129\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T14:29:24Z\\\",\\\"message\\\":\\\"espace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0127 14:29:09.491666 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 14:29:09.493926 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-900123067/tls.crt::/tmp/serving-cert-900123067/tls.key\\\\\\\"\\\\nI0127 14:29:24.319898 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 14:29:24.322681 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 14:29:24.322759 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 14:29:24.322814 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 14:29:24.322847 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 14:29:24.331581 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 14:29:24.331608 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 14:29:24.331613 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 14:29:24.331618 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 14:29:24.331622 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 14:29:24.331626 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 14:29:24.331630 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 14:29:24.331748 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 14:29:24.333757 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T14:29:08Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a526cfa90f2ba7d074aa6007c909e68f72d5e7a4b91d893c955e7460ff208cfb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:08Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e1a3e5a71b3094750c736335cd70f45977566e58702231351d9d6b195a0cb4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5e1a3e5a71b3094750c736335cd70f45977566e58702231351d9d6b195a0cb4b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:29:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:29:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:29:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:29:26Z is after 2025-08-24T17:21:41Z" Jan 27 14:29:26 crc kubenswrapper[4698]: I0127 14:29:26.184248 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:29:26Z is after 2025-08-24T17:21:41Z" Jan 27 14:29:26 crc kubenswrapper[4698]: I0127 14:29:26.200610 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:24Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:29:26Z is after 2025-08-24T17:21:41Z" Jan 27 14:29:26 crc kubenswrapper[4698]: I0127 14:29:26.215314 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:24Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:29:26Z is after 2025-08-24T17:21:41Z" Jan 27 14:29:26 crc kubenswrapper[4698]: I0127 14:29:26.230907 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:24Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:29:26Z is after 2025-08-24T17:21:41Z" Jan 27 14:29:26 crc kubenswrapper[4698]: I0127 14:29:26.245156 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://625a1f8c29f9277903045e66df7a55a4d791239b141aba99572db228ea2735ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://314dcaad0f8949a4671b47667f654bdfacfb6b200438cfbd70401455f6e188b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mount
Path\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:29:26Z is after 2025-08-24T17:21:41Z" Jan 27 14:29:26 crc kubenswrapper[4698]: I0127 14:29:26.258834 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:29:26Z is after 2025-08-24T17:21:41Z" Jan 27 14:29:26 crc kubenswrapper[4698]: I0127 14:29:26.273172 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:24Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:29:26Z is after 2025-08-24T17:21:41Z" Jan 27 14:29:26 crc kubenswrapper[4698]: I0127 14:29:26.284569 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:24Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:29:26Z is after 2025-08-24T17:21:41Z" Jan 27 14:29:26 crc kubenswrapper[4698]: I0127 14:29:26.301455 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0ce1118e-e5ad-4adb-8d50-c758116b45ec\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dbbe49aa918c711f9ec638a3fb42ef3be38a9ed3bc0cdde639ba8bba11eab4de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9525d1a6372b19bc2f30eb4e8e522833badb6319bac25ecae77b3313acf1b1a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e384a95fdadc32733a7213ba468af64a1a33e7dd08722cbffde3ad0c461d8ba\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9ee5f5f6bb400359196a28fe605c173c38223c8982a991750c91601aeace150\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e780429dcca871712b55832b0cd0b3e78b7343569cfa71a87bbb2be1bd3f129\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T14:29:24Z\\\",\\\"message\\\":\\\"espace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0127 14:29:09.491666 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 14:29:09.493926 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-900123067/tls.crt::/tmp/serving-cert-900123067/tls.key\\\\\\\"\\\\nI0127 14:29:24.319898 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 14:29:24.322681 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 14:29:24.322759 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 14:29:24.322814 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 14:29:24.322847 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 14:29:24.331581 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 14:29:24.331608 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 14:29:24.331613 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 14:29:24.331618 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 14:29:24.331622 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 14:29:24.331626 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 14:29:24.331630 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 14:29:24.331748 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 14:29:24.333757 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T14:29:08Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a526cfa90f2ba7d074aa6007c909e68f72d5e7a4b91d893c955e7460ff208cfb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:08Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e1a3e5a71b3094750c736335cd70f45977566e58702231351d9d6b195a0cb4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5e1a3e5a71b3094750c736335cd70f45977566e58702231351d9d6b195a0cb4b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:29:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:29:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:29:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:29:26Z is after 2025-08-24T17:21:41Z" Jan 27 14:29:26 crc kubenswrapper[4698]: I0127 14:29:26.317490 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae7402df1ba165eb747293c1d23e4ea89c50cf2342bf52db437c11c283f68755\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:29:26Z is after 2025-08-24T17:21:41Z" Jan 27 14:29:26 crc kubenswrapper[4698]: I0127 14:29:26.541440 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 14:29:26 crc kubenswrapper[4698]: E0127 14:29:26.541581 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 14:29:28.541557627 +0000 UTC m=+24.218335092 (durationBeforeRetry 2s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 14:29:26 crc kubenswrapper[4698]: I0127 14:29:26.642659 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 14:29:26 crc kubenswrapper[4698]: I0127 14:29:26.642703 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 14:29:26 crc kubenswrapper[4698]: I0127 14:29:26.642726 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 14:29:26 crc kubenswrapper[4698]: I0127 14:29:26.642754 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 14:29:26 crc kubenswrapper[4698]: E0127 14:29:26.642847 4698 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 27 14:29:26 crc kubenswrapper[4698]: E0127 14:29:26.642878 4698 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 27 14:29:26 crc kubenswrapper[4698]: E0127 14:29:26.642880 4698 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 27 14:29:26 crc kubenswrapper[4698]: E0127 14:29:26.642901 4698 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 14:29:26 crc kubenswrapper[4698]: E0127 14:29:26.642929 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 
nodeName:}" failed. No retries permitted until 2026-01-27 14:29:28.642914021 +0000 UTC m=+24.319691476 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 27 14:29:26 crc kubenswrapper[4698]: E0127 14:29:26.642873 4698 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 27 14:29:26 crc kubenswrapper[4698]: E0127 14:29:26.642946 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-27 14:29:28.642939572 +0000 UTC m=+24.319717037 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 14:29:26 crc kubenswrapper[4698]: E0127 14:29:26.642971 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-27 14:29:28.642956282 +0000 UTC m=+24.319733747 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 27 14:29:26 crc kubenswrapper[4698]: E0127 14:29:26.643088 4698 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 27 14:29:26 crc kubenswrapper[4698]: E0127 14:29:26.643135 4698 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 27 14:29:26 crc kubenswrapper[4698]: E0127 14:29:26.643152 4698 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 14:29:26 crc kubenswrapper[4698]: E0127 14:29:26.643230 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-27 14:29:28.643206789 +0000 UTC m=+24.319984294 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 14:29:26 crc kubenswrapper[4698]: I0127 14:29:26.955160 4698 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-01 22:40:10.834653228 +0000 UTC Jan 27 14:29:26 crc kubenswrapper[4698]: I0127 14:29:26.991597 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 14:29:26 crc kubenswrapper[4698]: I0127 14:29:26.991614 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 14:29:26 crc kubenswrapper[4698]: E0127 14:29:26.991933 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 14:29:26 crc kubenswrapper[4698]: E0127 14:29:26.992007 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
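[Note] The "Error syncing pod, skipping" entries will repeat until the network plugin writes a config file into /etc/kubernetes/cni/net.d/. A rough sketch of the readiness test implied by the message is below; the file-extension matching is an assumption for illustration — the real runtime also parses and validates the config it finds.

// cni_ready.go: report NetworkReady=false when no CNI config file is present.
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func networkReady(confDir string) error {
	entries, err := os.ReadDir(confDir)
	if err != nil {
		return fmt.Errorf("reading %s: %w", confDir, err)
	}
	for _, e := range entries {
		switch filepath.Ext(e.Name()) {
		case ".conf", ".conflist", ".json":
			return nil // found a CNI network config
		}
	}
	return fmt.Errorf("no CNI configuration file in %s. Has your network provider started?", confDir)
}

func main() {
	if err := networkReady("/etc/kubernetes/cni/net.d"); err != nil {
		fmt.Println("NetworkReady=false:", err)
		return
	}
	fmt.Println("NetworkReady=true")
}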
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 14:29:26 crc kubenswrapper[4698]: I0127 14:29:26.995441 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01ab3dd5-8196-46d0-ad33-122e2ca51def" path="/var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes" Jan 27 14:29:26 crc kubenswrapper[4698]: I0127 14:29:26.996176 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" path="/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes" Jan 27 14:29:26 crc kubenswrapper[4698]: I0127 14:29:26.997062 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09efc573-dbb6-4249-bd59-9b87aba8dd28" path="/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes" Jan 27 14:29:26 crc kubenswrapper[4698]: I0127 14:29:26.997735 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b574797-001e-440a-8f4e-c0be86edad0f" path="/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes" Jan 27 14:29:26 crc kubenswrapper[4698]: I0127 14:29:26.998313 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b78653f-4ff9-4508-8672-245ed9b561e3" path="/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes" Jan 27 14:29:26 crc kubenswrapper[4698]: I0127 14:29:26.998806 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1386a44e-36a2-460c-96d0-0359d2b6f0f5" path="/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes" Jan 27 14:29:26 crc kubenswrapper[4698]: I0127 14:29:26.999454 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bf7eb37-55a3-4c65-b768-a94c82151e69" path="/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes" Jan 27 14:29:27 crc kubenswrapper[4698]: I0127 14:29:27.000169 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d611f23-29be-4491-8495-bee1670e935f" path="/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes" Jan 27 14:29:27 crc kubenswrapper[4698]: I0127 14:29:27.001007 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20b0d48f-5fd6-431c-a545-e3c800c7b866" path="/var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/volumes" Jan 27 14:29:27 crc kubenswrapper[4698]: I0127 14:29:27.001670 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" path="/var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes" Jan 27 14:29:27 crc kubenswrapper[4698]: I0127 14:29:27.002378 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22c825df-677d-4ca6-82db-3454ed06e783" path="/var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes" Jan 27 14:29:27 crc kubenswrapper[4698]: I0127 14:29:27.004569 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25e176fe-21b4-4974-b1ed-c8b94f112a7f" path="/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes" Jan 27 14:29:27 crc kubenswrapper[4698]: I0127 14:29:27.005434 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" path="/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes" Jan 27 14:29:27 crc kubenswrapper[4698]: I0127 14:29:27.006124 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31d8b7a1-420e-4252-a5b7-eebe8a111292" 
path="/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes" Jan 27 14:29:27 crc kubenswrapper[4698]: I0127 14:29:27.007438 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ab1a177-2de0-46d9-b765-d0d0649bb42e" path="/var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/volumes" Jan 27 14:29:27 crc kubenswrapper[4698]: I0127 14:29:27.008720 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" path="/var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes" Jan 27 14:29:27 crc kubenswrapper[4698]: I0127 14:29:27.009903 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43509403-f426-496e-be36-56cef71462f5" path="/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes" Jan 27 14:29:27 crc kubenswrapper[4698]: I0127 14:29:27.010391 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44663579-783b-4372-86d6-acf235a62d72" path="/var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/volumes" Jan 27 14:29:27 crc kubenswrapper[4698]: I0127 14:29:27.011072 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="496e6271-fb68-4057-954e-a0d97a4afa3f" path="/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes" Jan 27 14:29:27 crc kubenswrapper[4698]: I0127 14:29:27.012254 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" path="/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes" Jan 27 14:29:27 crc kubenswrapper[4698]: I0127 14:29:27.012834 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49ef4625-1d3a-4a9f-b595-c2433d32326d" path="/var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/volumes" Jan 27 14:29:27 crc kubenswrapper[4698]: I0127 14:29:27.013954 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bb40260-dbaa-4fb0-84df-5e680505d512" path="/var/lib/kubelet/pods/4bb40260-dbaa-4fb0-84df-5e680505d512/volumes" Jan 27 14:29:27 crc kubenswrapper[4698]: I0127 14:29:27.014453 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5225d0e4-402f-4861-b410-819f433b1803" path="/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes" Jan 27 14:29:27 crc kubenswrapper[4698]: I0127 14:29:27.015623 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5441d097-087c-4d9a-baa8-b210afa90fc9" path="/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes" Jan 27 14:29:27 crc kubenswrapper[4698]: I0127 14:29:27.016159 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57a731c4-ef35-47a8-b875-bfb08a7f8011" path="/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes" Jan 27 14:29:27 crc kubenswrapper[4698]: I0127 14:29:27.016884 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b88f790-22fa-440e-b583-365168c0b23d" path="/var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/volumes" Jan 27 14:29:27 crc kubenswrapper[4698]: I0127 14:29:27.018158 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fe579f8-e8a6-4643-bce5-a661393c4dde" path="/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/volumes" Jan 27 14:29:27 crc kubenswrapper[4698]: I0127 14:29:27.018730 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6402fda4-df10-493c-b4e5-d0569419652d" 
path="/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes" Jan 27 14:29:27 crc kubenswrapper[4698]: I0127 14:29:27.019858 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6509e943-70c6-444c-bc41-48a544e36fbd" path="/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes" Jan 27 14:29:27 crc kubenswrapper[4698]: I0127 14:29:27.020440 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6731426b-95fe-49ff-bb5f-40441049fde2" path="/var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/volumes" Jan 27 14:29:27 crc kubenswrapper[4698]: I0127 14:29:27.021424 4698 kubelet_volumes.go:152] "Cleaned up orphaned volume subpath from pod" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volume-subpaths/run-systemd/ovnkube-controller/6" Jan 27 14:29:27 crc kubenswrapper[4698]: I0127 14:29:27.021564 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volumes" Jan 27 14:29:27 crc kubenswrapper[4698]: I0127 14:29:27.023787 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7539238d-5fe0-46ed-884e-1c3b566537ec" path="/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes" Jan 27 14:29:27 crc kubenswrapper[4698]: I0127 14:29:27.024421 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7583ce53-e0fe-4a16-9e4d-50516596a136" path="/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes" Jan 27 14:29:27 crc kubenswrapper[4698]: I0127 14:29:27.025419 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bb08738-c794-4ee8-9972-3a62ca171029" path="/var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes" Jan 27 14:29:27 crc kubenswrapper[4698]: I0127 14:29:27.027185 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87cf06ed-a83f-41a7-828d-70653580a8cb" path="/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes" Jan 27 14:29:27 crc kubenswrapper[4698]: I0127 14:29:27.028107 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" path="/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes" Jan 27 14:29:27 crc kubenswrapper[4698]: I0127 14:29:27.028715 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="925f1c65-6136-48ba-85aa-3a3b50560753" path="/var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes" Jan 27 14:29:27 crc kubenswrapper[4698]: I0127 14:29:27.029411 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" path="/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/volumes" Jan 27 14:29:27 crc kubenswrapper[4698]: I0127 14:29:27.030188 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d4552c7-cd75-42dd-8880-30dd377c49a4" path="/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes" Jan 27 14:29:27 crc kubenswrapper[4698]: I0127 14:29:27.030837 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" path="/var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/volumes" Jan 27 14:29:27 crc kubenswrapper[4698]: I0127 14:29:27.031464 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a31745f5-9847-4afe-82a5-3161cc66ca93" 
path="/var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes" Jan 27 14:29:27 crc kubenswrapper[4698]: I0127 14:29:27.032194 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" path="/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes" Jan 27 14:29:27 crc kubenswrapper[4698]: I0127 14:29:27.032848 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6312bbd-5731-4ea0-a20f-81d5a57df44a" path="/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/volumes" Jan 27 14:29:27 crc kubenswrapper[4698]: I0127 14:29:27.033331 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" path="/var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes" Jan 27 14:29:27 crc kubenswrapper[4698]: I0127 14:29:27.034138 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" path="/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes" Jan 27 14:29:27 crc kubenswrapper[4698]: I0127 14:29:27.034742 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" path="/var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/volumes" Jan 27 14:29:27 crc kubenswrapper[4698]: I0127 14:29:27.035509 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf126b07-da06-4140-9a57-dfd54fc6b486" path="/var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes" Jan 27 14:29:27 crc kubenswrapper[4698]: I0127 14:29:27.036049 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c03ee662-fb2f-4fc4-a2c1-af487c19d254" path="/var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes" Jan 27 14:29:27 crc kubenswrapper[4698]: I0127 14:29:27.036560 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" path="/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/volumes" Jan 27 14:29:27 crc kubenswrapper[4698]: I0127 14:29:27.040242 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7e6199b-1264-4501-8953-767f51328d08" path="/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes" Jan 27 14:29:27 crc kubenswrapper[4698]: I0127 14:29:27.041040 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efdd0498-1daa-4136-9a4a-3b948c2293fc" path="/var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/volumes" Jan 27 14:29:27 crc kubenswrapper[4698]: I0127 14:29:27.041696 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" path="/var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/volumes" Jan 27 14:29:27 crc kubenswrapper[4698]: I0127 14:29:27.042589 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fda69060-fa79-4696-b1a6-7980f124bf7c" path="/var/lib/kubelet/pods/fda69060-fa79-4696-b1a6-7980f124bf7c/volumes" Jan 27 14:29:27 crc kubenswrapper[4698]: I0127 14:29:27.955339 4698 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-23 02:36:22.151410821 +0000 UTC Jan 27 14:29:27 crc kubenswrapper[4698]: I0127 14:29:27.991847 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 14:29:27 crc kubenswrapper[4698]: E0127 14:29:27.991969 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 14:29:28 crc kubenswrapper[4698]: I0127 14:29:28.098404 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"8dae57de05788973b988fee332168db475b66bb71b4a0570e7d95341dfe84a96"} Jan 27 14:29:28 crc kubenswrapper[4698]: I0127 14:29:28.141267 4698 csr.go:261] certificate signing request csr-ft9tr is approved, waiting to be issued Jan 27 14:29:28 crc kubenswrapper[4698]: I0127 14:29:28.141759 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8dae57de05788973b988fee332168db475b66bb71b4a0570e7d95341dfe84a96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:29:28Z is after 2025-08-24T17:21:41Z" Jan 27 14:29:28 crc kubenswrapper[4698]: I0127 14:29:28.164427 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0ce1118e-e5ad-4adb-8d50-c758116b45ec\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dbbe49aa918c711f9ec638a3fb42ef3be38a9ed3bc0cdde639ba8bba11eab4de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9525d1a6372b19bc2f30eb4e8e522833badb6319bac25ecae77b3313acf1b1a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e384a95fdadc32733a7213ba468af64a1a33e7dd08722cbffde3ad0c461d8ba\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}
,{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9ee5f5f6bb400359196a28fe605c173c38223c8982a991750c91601aeace150\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e780429dcca871712b55832b0cd0b3e78b7343569cfa71a87bbb2be1bd3f129\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T14:29:24Z\\\",\\\"message\\\":\\\"espace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0127 14:29:09.491666 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 14:29:09.493926 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-900123067/tls.crt::/tmp/serving-cert-900123067/tls.key\\\\\\\"\\\\nI0127 14:29:24.319898 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 14:29:24.322681 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 14:29:24.322759 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 14:29:24.322814 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 14:29:24.322847 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 14:29:24.331581 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 14:29:24.331608 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 14:29:24.331613 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 14:29:24.331618 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 14:29:24.331622 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 14:29:24.331626 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 14:29:24.331630 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 14:29:24.331748 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 14:29:24.333757 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T14:29:08Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a526cfa90f2ba7d074aa6007c909e68f72d5e7a4b91d893c955e7460ff208cfb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:08Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e1a3e5a71b3094750c736335cd70f45977566e58702231351d9d6b195a0cb4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5e1a3e5a71b3094750c736335cd70f45977566e58702231351d9d6b195a0cb4b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:29:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:29:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:29:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:29:28Z is after 2025-08-24T17:21:41Z" Jan 27 14:29:28 crc kubenswrapper[4698]: I0127 14:29:28.175674 4698 csr.go:257] certificate signing request csr-ft9tr is issued Jan 27 14:29:28 crc kubenswrapper[4698]: I0127 14:29:28.184998 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae7402df1ba165eb747293c1d23e4ea89c50cf2342bf52db437c11c283f68755\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:29:28Z is after 2025-08-24T17:21:41Z" Jan 27 14:29:28 crc kubenswrapper[4698]: I0127 14:29:28.198759 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:24Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:29:28Z is after 2025-08-24T17:21:41Z" Jan 27 14:29:28 crc kubenswrapper[4698]: I0127 14:29:28.215301 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:24Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:29:28Z is after 2025-08-24T17:21:41Z" Jan 27 14:29:28 crc kubenswrapper[4698]: I0127 14:29:28.232606 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://625a1f8c29f9277903045e66df7a55a4d791239b141aba99572db228ea2735ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://314dcaad0f8949a4671b47667f654bdfacfb6b200438cfbd70401455f6e188b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mount
Path\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:29:28Z is after 2025-08-24T17:21:41Z" Jan 27 14:29:28 crc kubenswrapper[4698]: I0127 14:29:28.245552 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:29:28Z is after 2025-08-24T17:21:41Z" Jan 27 14:29:28 crc kubenswrapper[4698]: I0127 14:29:28.559924 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 14:29:28 crc kubenswrapper[4698]: E0127 14:29:28.560066 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 14:29:32.560047271 +0000 UTC m=+28.236824726 (durationBeforeRetry 4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 14:29:28 crc kubenswrapper[4698]: I0127 14:29:28.661309 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 14:29:28 crc kubenswrapper[4698]: I0127 14:29:28.661361 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 14:29:28 crc kubenswrapper[4698]: I0127 14:29:28.661385 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 14:29:28 crc kubenswrapper[4698]: I0127 14:29:28.661410 4698 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 14:29:28 crc kubenswrapper[4698]: E0127 14:29:28.661480 4698 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 27 14:29:28 crc kubenswrapper[4698]: E0127 14:29:28.661506 4698 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 27 14:29:28 crc kubenswrapper[4698]: E0127 14:29:28.661526 4698 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 27 14:29:28 crc kubenswrapper[4698]: E0127 14:29:28.661557 4698 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 14:29:28 crc kubenswrapper[4698]: E0127 14:29:28.661562 4698 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 27 14:29:28 crc kubenswrapper[4698]: E0127 14:29:28.661576 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-27 14:29:32.661561989 +0000 UTC m=+28.338339454 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 27 14:29:28 crc kubenswrapper[4698]: E0127 14:29:28.661694 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-27 14:29:32.661666462 +0000 UTC m=+28.338443937 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 14:29:28 crc kubenswrapper[4698]: E0127 14:29:28.661708 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. 
No retries permitted until 2026-01-27 14:29:32.661701502 +0000 UTC m=+28.338478957 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 27 14:29:28 crc kubenswrapper[4698]: E0127 14:29:28.661485 4698 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 27 14:29:28 crc kubenswrapper[4698]: E0127 14:29:28.661732 4698 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 27 14:29:28 crc kubenswrapper[4698]: E0127 14:29:28.661746 4698 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 14:29:28 crc kubenswrapper[4698]: E0127 14:29:28.661784 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-27 14:29:32.661770094 +0000 UTC m=+28.338547559 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 14:29:28 crc kubenswrapper[4698]: I0127 14:29:28.882981 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-daemon-ndrd6"] Jan 27 14:29:28 crc kubenswrapper[4698]: I0127 14:29:28.883312 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/node-resolver-g9vj8"] Jan 27 14:29:28 crc kubenswrapper[4698]: I0127 14:29:28.883489 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-g9vj8" Jan 27 14:29:28 crc kubenswrapper[4698]: I0127 14:29:28.883538 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-ndrd6" Jan 27 14:29:28 crc kubenswrapper[4698]: I0127 14:29:28.883712 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-additional-cni-plugins-vg6nd"] Jan 27 14:29:28 crc kubenswrapper[4698]: I0127 14:29:28.884322 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-g2kkn"] Jan 27 14:29:28 crc kubenswrapper[4698]: I0127 14:29:28.884473 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-vg6nd" Jan 27 14:29:28 crc kubenswrapper[4698]: I0127 14:29:28.884563 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-g2kkn" Jan 27 14:29:28 crc kubenswrapper[4698]: I0127 14:29:28.885421 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Jan 27 14:29:28 crc kubenswrapper[4698]: I0127 14:29:28.885913 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Jan 27 14:29:28 crc kubenswrapper[4698]: I0127 14:29:28.887016 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Jan 27 14:29:28 crc kubenswrapper[4698]: I0127 14:29:28.887188 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Jan 27 14:29:28 crc kubenswrapper[4698]: I0127 14:29:28.887525 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Jan 27 14:29:28 crc kubenswrapper[4698]: I0127 14:29:28.887830 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Jan 27 14:29:28 crc kubenswrapper[4698]: I0127 14:29:28.887872 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Jan 27 14:29:28 crc kubenswrapper[4698]: I0127 14:29:28.888140 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Jan 27 14:29:28 crc kubenswrapper[4698]: I0127 14:29:28.888267 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Jan 27 14:29:28 crc kubenswrapper[4698]: I0127 14:29:28.888290 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Jan 27 14:29:28 crc kubenswrapper[4698]: I0127 14:29:28.887832 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Jan 27 14:29:28 crc kubenswrapper[4698]: I0127 14:29:28.888388 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Jan 27 14:29:28 crc kubenswrapper[4698]: I0127 14:29:28.888454 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Jan 27 14:29:28 crc kubenswrapper[4698]: I0127 14:29:28.888476 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Jan 27 14:29:28 crc kubenswrapper[4698]: I0127 14:29:28.888540 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Jan 27 14:29:28 crc kubenswrapper[4698]: I0127 14:29:28.900855 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:24Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:29:28Z is after 2025-08-24T17:21:41Z" Jan 27 14:29:28 crc kubenswrapper[4698]: I0127 14:29:28.915064 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://625a1f8c29f9277903045e66df7a55a4d791239b141aba99572db228ea2735ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://314dcaad0f8949a4671b47667f654bdfacfb6b200438cfbd70401455f6e188b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:29:28Z is after 2025-08-24T17:21:41Z" Jan 27 14:29:28 crc kubenswrapper[4698]: I0127 14:29:28.929711 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:29:28Z is after 2025-08-24T17:21:41Z" Jan 27 14:29:28 crc kubenswrapper[4698]: I0127 14:29:28.943261 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-g9vj8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2776dfc9-913b-42b0-9cf2-6fea98d83bc9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:28Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bk7ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:29:28Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-g9vj8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:29:28Z is after 2025-08-24T17:21:41Z" Jan 27 14:29:28 crc kubenswrapper[4698]: I0127 14:29:28.956216 4698 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-26 01:09:29.595895123 +0000 UTC Jan 27 14:29:28 crc kubenswrapper[4698]: I0127 14:29:28.965863 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-ndrd6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e403fc5-7005-474c-8c75-b7906b481677\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:28Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:28Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tt2dz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tt2dz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:29:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-ndrd6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:29:28Z is after 2025-08-24T17:21:41Z" Jan 27 14:29:28 crc kubenswrapper[4698]: I0127 14:29:28.983732 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0ce1118e-e5ad-4adb-8d50-c758116b45ec\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dbbe49aa918c711f9ec638a3fb42ef3be38a9ed3bc0cdde639ba8bba11eab4de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9525d1a6372b19bc2f30eb4e8e522833badb6319bac25ecae77b3313acf1b1a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e384a95fdadc32733a7213ba468af64a1a33e7dd08722cbffde3ad0c461d8ba\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9ee5f5f6bb400359196a28fe605c173c38223c8982a991750c91601aeace150\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e780429dcca871712b55832b0cd0b3e78b7343569cfa71a87bbb2be1bd3f129\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T14:29:24Z\\\",\\\"message\\\":\\\"espace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0127 14:29:09.491666 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 14:29:09.493926 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-900123067/tls.crt::/tmp/serving-cert-900123067/tls.key\\\\\\\"\\\\nI0127 14:29:24.319898 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 14:29:24.322681 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 14:29:24.322759 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 14:29:24.322814 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 14:29:24.322847 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 14:29:24.331581 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 14:29:24.331608 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 14:29:24.331613 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 14:29:24.331618 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 14:29:24.331622 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 14:29:24.331626 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 14:29:24.331630 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 14:29:24.331748 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 14:29:24.333757 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T14:29:08Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a526cfa90f2ba7d074aa6007c909e68f72d5e7a4b91d893c955e7460ff208cfb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:08Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e1a3e5a71b3094750c736335cd70f45977566e58702231351d9d6b195a0cb4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5e1a3e5a71b3094750c736335cd70f45977566e58702231351d9d6b195a0cb4b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:29:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:29:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:29:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:29:28Z is after 2025-08-24T17:21:41Z" Jan 27 14:29:28 crc kubenswrapper[4698]: I0127 14:29:28.991154 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 14:29:28 crc kubenswrapper[4698]: I0127 14:29:28.991233 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 14:29:28 crc kubenswrapper[4698]: E0127 14:29:28.991295 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 14:29:28 crc kubenswrapper[4698]: E0127 14:29:28.991709 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 14:29:29 crc kubenswrapper[4698]: I0127 14:29:29.000213 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae7402df1ba165eb747293c1d23e4ea89c50cf2342bf52db437c11c283f68755\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:29:28Z is after 2025-08-24T17:21:41Z" Jan 27 14:29:29 crc kubenswrapper[4698]: I0127 14:29:29.017481 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:24Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:29:29Z is after 2025-08-24T17:21:41Z" Jan 27 14:29:29 crc kubenswrapper[4698]: I0127 14:29:29.034618 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8dae57de05788973b988fee332168db475b66bb71b4a0570e7d95341dfe84a96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:29:29Z is after 2025-08-24T17:21:41Z" Jan 27 14:29:29 crc kubenswrapper[4698]: I0127 14:29:29.048254 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-g2kkn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4e135f0c-0c36-44f4-afeb-06994affb352\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97l66\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:29:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-g2kkn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:29:29Z is after 2025-08-24T17:21:41Z" Jan 27 14:29:29 crc kubenswrapper[4698]: I0127 14:29:29.062817 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0ce1118e-e5ad-4adb-8d50-c758116b45ec\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dbbe49aa918c711f9ec638a3fb42ef3be38a9ed3bc0cdde639ba8bba11eab4de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9525d1a6372b19bc2f30eb4e8e522833badb6319bac25ecae77b3313acf1b1a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e384a95fdadc32733a7213ba468af64a1a33e7dd08722cbffde3ad0c461d8ba\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9ee5f5f6bb400359196a28fe605c173c38223c8982a991750c91601aeace150\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e780429dcca871712b55832b0cd0b3e78b7343569cfa71a87bbb2be1bd3f129\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T14:29:24Z\\\",\\\"message\\\":\\\"espace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0127 14:29:09.491666 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 14:29:09.493926 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-900123067/tls.crt::/tmp/serving-cert-900123067/tls.key\\\\\\\"\\\\nI0127 14:29:24.319898 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 14:29:24.322681 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 14:29:24.322759 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 14:29:24.322814 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 14:29:24.322847 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 14:29:24.331581 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 14:29:24.331608 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 14:29:24.331613 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 14:29:24.331618 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 14:29:24.331622 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 14:29:24.331626 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 14:29:24.331630 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 14:29:24.331748 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 14:29:24.333757 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T14:29:08Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a526cfa90f2ba7d074aa6007c909e68f72d5e7a4b91d893c955e7460ff208cfb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:08Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e1a3e5a71b3094750c736335cd70f45977566e58702231351d9d6b195a0cb4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5e1a3e5a71b3094750c736335cd70f45977566e58702231351d9d6b195a0cb4b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:29:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:29:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:29:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:29:29Z is after 2025-08-24T17:21:41Z"
Jan 27 14:29:29 crc kubenswrapper[4698]: I0127 14:29:29.065348 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/e045926d-2303-47ea-b25d-dc23982427e4-tuning-conf-dir\") pod \"multus-additional-cni-plugins-vg6nd\" (UID: \"e045926d-2303-47ea-b25d-dc23982427e4\") " pod="openshift-multus/multus-additional-cni-plugins-vg6nd"
Jan 27 14:29:29 crc kubenswrapper[4698]: I0127 14:29:29.065418 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/4e135f0c-0c36-44f4-afeb-06994affb352-cnibin\") pod \"multus-g2kkn\" (UID: \"4e135f0c-0c36-44f4-afeb-06994affb352\") " pod="openshift-multus/multus-g2kkn"
Jan 27 14:29:29 crc kubenswrapper[4698]: I0127 14:29:29.065453 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/4e135f0c-0c36-44f4-afeb-06994affb352-multus-conf-dir\") pod \"multus-g2kkn\" (UID: \"4e135f0c-0c36-44f4-afeb-06994affb352\") " pod="openshift-multus/multus-g2kkn"
Jan 27 14:29:29 crc kubenswrapper[4698]: I0127 14:29:29.065479 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/3e403fc5-7005-474c-8c75-b7906b481677-rootfs\") pod \"machine-config-daemon-ndrd6\" (UID: \"3e403fc5-7005-474c-8c75-b7906b481677\") " pod="openshift-machine-config-operator/machine-config-daemon-ndrd6"
Jan 27 14:29:29 crc kubenswrapper[4698]: I0127 14:29:29.065506 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/4e135f0c-0c36-44f4-afeb-06994affb352-etc-kubernetes\") pod \"multus-g2kkn\" (UID: \"4e135f0c-0c36-44f4-afeb-06994affb352\") " pod="openshift-multus/multus-g2kkn"
Jan 27 14:29:29 crc kubenswrapper[4698]: I0127 14:29:29.065529 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/e045926d-2303-47ea-b25d-dc23982427e4-system-cni-dir\") pod \"multus-additional-cni-plugins-vg6nd\" (UID: \"e045926d-2303-47ea-b25d-dc23982427e4\") " pod="openshift-multus/multus-additional-cni-plugins-vg6nd"
Jan 27 14:29:29 crc kubenswrapper[4698]: I0127 14:29:29.065632 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/e045926d-2303-47ea-b25d-dc23982427e4-os-release\") pod \"multus-additional-cni-plugins-vg6nd\" (UID: \"e045926d-2303-47ea-b25d-dc23982427e4\") " pod="openshift-multus/multus-additional-cni-plugins-vg6nd"
Jan 27 14:29:29 crc kubenswrapper[4698]: I0127 14:29:29.065704 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/e045926d-2303-47ea-b25d-dc23982427e4-cni-binary-copy\") pod \"multus-additional-cni-plugins-vg6nd\" (UID: \"e045926d-2303-47ea-b25d-dc23982427e4\") " pod="openshift-multus/multus-additional-cni-plugins-vg6nd"
Jan 27 14:29:29 crc kubenswrapper[4698]: I0127 14:29:29.065723 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/e045926d-2303-47ea-b25d-dc23982427e4-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-vg6nd\" (UID: \"e045926d-2303-47ea-b25d-dc23982427e4\") " pod="openshift-multus/multus-additional-cni-plugins-vg6nd"
Jan 27 14:29:29 crc kubenswrapper[4698]: I0127 14:29:29.065741 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/3e403fc5-7005-474c-8c75-b7906b481677-proxy-tls\") pod \"machine-config-daemon-ndrd6\" (UID: \"3e403fc5-7005-474c-8c75-b7906b481677\") " pod="openshift-machine-config-operator/machine-config-daemon-ndrd6"
Jan 27 14:29:29 crc kubenswrapper[4698]: I0127 14:29:29.065761 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/4e135f0c-0c36-44f4-afeb-06994affb352-multus-cni-dir\") pod \"multus-g2kkn\" (UID: \"4e135f0c-0c36-44f4-afeb-06994affb352\") " pod="openshift-multus/multus-g2kkn"
Jan 27 14:29:29 crc kubenswrapper[4698]: I0127 14:29:29.065781 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-97l66\" (UniqueName: \"kubernetes.io/projected/4e135f0c-0c36-44f4-afeb-06994affb352-kube-api-access-97l66\") pod \"multus-g2kkn\" (UID: \"4e135f0c-0c36-44f4-afeb-06994affb352\") " pod="openshift-multus/multus-g2kkn"
Jan 27 14:29:29 crc kubenswrapper[4698]: I0127 14:29:29.065814 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/4e135f0c-0c36-44f4-afeb-06994affb352-host-run-k8s-cni-cncf-io\") pod \"multus-g2kkn\" (UID: \"4e135f0c-0c36-44f4-afeb-06994affb352\") " pod="openshift-multus/multus-g2kkn"
Jan 27 14:29:29 crc kubenswrapper[4698]: I0127 14:29:29.065833 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/4e135f0c-0c36-44f4-afeb-06994affb352-host-var-lib-cni-multus\") pod \"multus-g2kkn\" (UID: \"4e135f0c-0c36-44f4-afeb-06994affb352\") " pod="openshift-multus/multus-g2kkn"
Jan 27 14:29:29 crc kubenswrapper[4698]: I0127 14:29:29.065889 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-66snv\" (UniqueName: \"kubernetes.io/projected/e045926d-2303-47ea-b25d-dc23982427e4-kube-api-access-66snv\") pod \"multus-additional-cni-plugins-vg6nd\" (UID: \"e045926d-2303-47ea-b25d-dc23982427e4\") " pod="openshift-multus/multus-additional-cni-plugins-vg6nd"
Jan 27 14:29:29 crc kubenswrapper[4698]: I0127 14:29:29.065921 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/2776dfc9-913b-42b0-9cf2-6fea98d83bc9-hosts-file\") pod \"node-resolver-g9vj8\" (UID: \"2776dfc9-913b-42b0-9cf2-6fea98d83bc9\") " pod="openshift-dns/node-resolver-g9vj8"
Jan 27 14:29:29 crc kubenswrapper[4698]: I0127 14:29:29.065948 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4e135f0c-0c36-44f4-afeb-06994affb352-cni-binary-copy\") pod \"multus-g2kkn\" (UID: \"4e135f0c-0c36-44f4-afeb-06994affb352\") " pod="openshift-multus/multus-g2kkn"
Jan 27 14:29:29 crc kubenswrapper[4698]: I0127 14:29:29.065970 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/4e135f0c-0c36-44f4-afeb-06994affb352-host-run-multus-certs\") pod \"multus-g2kkn\" (UID: \"4e135f0c-0c36-44f4-afeb-06994affb352\") " pod="openshift-multus/multus-g2kkn"
Jan 27 14:29:29 crc kubenswrapper[4698]: I0127 14:29:29.066029 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/4e135f0c-0c36-44f4-afeb-06994affb352-multus-socket-dir-parent\") pod \"multus-g2kkn\" (UID: \"4e135f0c-0c36-44f4-afeb-06994affb352\") " pod="openshift-multus/multus-g2kkn"
Jan 27 14:29:29 crc kubenswrapper[4698]: I0127 14:29:29.066075 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4e135f0c-0c36-44f4-afeb-06994affb352-multus-daemon-config\") pod \"multus-g2kkn\" (UID: \"4e135f0c-0c36-44f4-afeb-06994affb352\") " pod="openshift-multus/multus-g2kkn"
Jan 27 14:29:29 crc kubenswrapper[4698]: I0127 14:29:29.066097 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/4e135f0c-0c36-44f4-afeb-06994affb352-host-var-lib-kubelet\") pod \"multus-g2kkn\" (UID: \"4e135f0c-0c36-44f4-afeb-06994affb352\") " pod="openshift-multus/multus-g2kkn"
Jan 27 14:29:29 crc kubenswrapper[4698]: I0127 14:29:29.066136 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bk7ft\" (UniqueName: \"kubernetes.io/projected/2776dfc9-913b-42b0-9cf2-6fea98d83bc9-kube-api-access-bk7ft\") pod \"node-resolver-g9vj8\" (UID: \"2776dfc9-913b-42b0-9cf2-6fea98d83bc9\") " pod="openshift-dns/node-resolver-g9vj8"
Jan 27 14:29:29 crc kubenswrapper[4698]: I0127 14:29:29.066164 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tt2dz\" (UniqueName: \"kubernetes.io/projected/3e403fc5-7005-474c-8c75-b7906b481677-kube-api-access-tt2dz\") pod \"machine-config-daemon-ndrd6\" (UID: \"3e403fc5-7005-474c-8c75-b7906b481677\") " pod="openshift-machine-config-operator/machine-config-daemon-ndrd6"
Jan 27 14:29:29 crc kubenswrapper[4698]: I0127 14:29:29.066185 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/4e135f0c-0c36-44f4-afeb-06994affb352-host-var-lib-cni-bin\") pod \"multus-g2kkn\" (UID: \"4e135f0c-0c36-44f4-afeb-06994affb352\") " pod="openshift-multus/multus-g2kkn"
Jan 27 14:29:29 crc kubenswrapper[4698]: I0127 14:29:29.066207 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/4e135f0c-0c36-44f4-afeb-06994affb352-host-run-netns\") pod \"multus-g2kkn\" (UID: \"4e135f0c-0c36-44f4-afeb-06994affb352\") " pod="openshift-multus/multus-g2kkn"
Jan 27 14:29:29 crc kubenswrapper[4698]: I0127 14:29:29.066240 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/3e403fc5-7005-474c-8c75-b7906b481677-mcd-auth-proxy-config\") pod \"machine-config-daemon-ndrd6\" (UID: \"3e403fc5-7005-474c-8c75-b7906b481677\") " pod="openshift-machine-config-operator/machine-config-daemon-ndrd6"
Jan 27 14:29:29 crc kubenswrapper[4698]: I0127 14:29:29.066292 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/4e135f0c-0c36-44f4-afeb-06994affb352-os-release\") pod \"multus-g2kkn\" (UID: \"4e135f0c-0c36-44f4-afeb-06994affb352\") " pod="openshift-multus/multus-g2kkn"
Jan 27 14:29:29 crc kubenswrapper[4698]: I0127 14:29:29.066330 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/e045926d-2303-47ea-b25d-dc23982427e4-cnibin\") pod \"multus-additional-cni-plugins-vg6nd\" (UID: \"e045926d-2303-47ea-b25d-dc23982427e4\") " pod="openshift-multus/multus-additional-cni-plugins-vg6nd"
Jan 27 14:29:29 crc kubenswrapper[4698]: I0127 14:29:29.066371 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/4e135f0c-0c36-44f4-afeb-06994affb352-system-cni-dir\") pod \"multus-g2kkn\" (UID: \"4e135f0c-0c36-44f4-afeb-06994affb352\") " pod="openshift-multus/multus-g2kkn"
Jan 27 14:29:29 crc kubenswrapper[4698]: I0127 14:29:29.066439 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/4e135f0c-0c36-44f4-afeb-06994affb352-hostroot\") pod \"multus-g2kkn\" (UID: \"4e135f0c-0c36-44f4-afeb-06994affb352\") " pod="openshift-multus/multus-g2kkn"
Jan 27 14:29:29 crc kubenswrapper[4698]: I0127 14:29:29.080279 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-ndrd6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e403fc5-7005-474c-8c75-b7906b481677\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:28Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:28Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tt2dz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tt2dz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:29:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-ndrd6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:29:29Z is after 2025-08-24T17:21:41Z"
Jan 27 14:29:29 crc kubenswrapper[4698]: I0127 14:29:29.095565 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-vg6nd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e045926d-2303-47ea-b25d-dc23982427e4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:28Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66snv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66snv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66snv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66snv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66snv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66snv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66snv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:29:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-vg6nd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:29:29Z is after 2025-08-24T17:21:41Z"
Jan 27 14:29:29 crc kubenswrapper[4698]: I0127 14:29:29.113506 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:29:29Z is after 2025-08-24T17:21:41Z"
Jan 27 14:29:29 crc kubenswrapper[4698]: I0127 14:29:29.126169 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-g9vj8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2776dfc9-913b-42b0-9cf2-6fea98d83bc9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:28Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:28Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bk7ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:29:28Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-g9vj8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:29:29Z is after 2025-08-24T17:21:41Z"
Jan 27 14:29:29 crc kubenswrapper[4698]: I0127 14:29:29.140106 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae7402df1ba165eb747293c1d23e4ea89c50cf2342bf52db437c11c283f68755\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:29:29Z is after 2025-08-24T17:21:41Z"
Jan 27 14:29:29 crc kubenswrapper[4698]: I0127 14:29:29.167630 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/e045926d-2303-47ea-b25d-dc23982427e4-tuning-conf-dir\") pod \"multus-additional-cni-plugins-vg6nd\" (UID: \"e045926d-2303-47ea-b25d-dc23982427e4\") " pod="openshift-multus/multus-additional-cni-plugins-vg6nd"
Jan 27 14:29:29 crc kubenswrapper[4698]: I0127 14:29:29.167708 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/4e135f0c-0c36-44f4-afeb-06994affb352-cnibin\") pod \"multus-g2kkn\" (UID: \"4e135f0c-0c36-44f4-afeb-06994affb352\") " pod="openshift-multus/multus-g2kkn"
Jan 27 14:29:29 crc kubenswrapper[4698]: I0127 14:29:29.167733 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/4e135f0c-0c36-44f4-afeb-06994affb352-multus-conf-dir\") pod \"multus-g2kkn\" (UID: \"4e135f0c-0c36-44f4-afeb-06994affb352\") " pod="openshift-multus/multus-g2kkn"
Jan 27 14:29:29 crc kubenswrapper[4698]: I0127 14:29:29.167759 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/3e403fc5-7005-474c-8c75-b7906b481677-rootfs\") pod \"machine-config-daemon-ndrd6\" (UID: \"3e403fc5-7005-474c-8c75-b7906b481677\") " pod="openshift-machine-config-operator/machine-config-daemon-ndrd6"
Jan 27 14:29:29 crc kubenswrapper[4698]: I0127 14:29:29.167782 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/4e135f0c-0c36-44f4-afeb-06994affb352-etc-kubernetes\") pod \"multus-g2kkn\" (UID: \"4e135f0c-0c36-44f4-afeb-06994affb352\") " pod="openshift-multus/multus-g2kkn"
Jan 27 14:29:29 crc kubenswrapper[4698]: I0127 14:29:29.167806 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/e045926d-2303-47ea-b25d-dc23982427e4-system-cni-dir\") pod \"multus-additional-cni-plugins-vg6nd\" (UID: \"e045926d-2303-47ea-b25d-dc23982427e4\") " pod="openshift-multus/multus-additional-cni-plugins-vg6nd"
Jan 27 14:29:29 crc kubenswrapper[4698]: I0127 14:29:29.167828 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/e045926d-2303-47ea-b25d-dc23982427e4-os-release\") pod \"multus-additional-cni-plugins-vg6nd\" (UID: \"e045926d-2303-47ea-b25d-dc23982427e4\") " pod="openshift-multus/multus-additional-cni-plugins-vg6nd"
Jan 27 14:29:29 crc kubenswrapper[4698]: I0127 14:29:29.167850 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/e045926d-2303-47ea-b25d-dc23982427e4-cni-binary-copy\") pod \"multus-additional-cni-plugins-vg6nd\" (UID: \"e045926d-2303-47ea-b25d-dc23982427e4\") " pod="openshift-multus/multus-additional-cni-plugins-vg6nd"
Jan 27 14:29:29 crc kubenswrapper[4698]: I0127 14:29:29.167872 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/e045926d-2303-47ea-b25d-dc23982427e4-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-vg6nd\" (UID: \"e045926d-2303-47ea-b25d-dc23982427e4\") " pod="openshift-multus/multus-additional-cni-plugins-vg6nd"
Jan 27 14:29:29 crc kubenswrapper[4698]: I0127 14:29:29.167901 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/3e403fc5-7005-474c-8c75-b7906b481677-proxy-tls\") pod \"machine-config-daemon-ndrd6\" (UID: \"3e403fc5-7005-474c-8c75-b7906b481677\") " pod="openshift-machine-config-operator/machine-config-daemon-ndrd6"
Jan 27 14:29:29 crc kubenswrapper[4698]: I0127 14:29:29.167899 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/4e135f0c-0c36-44f4-afeb-06994affb352-cnibin\") pod \"multus-g2kkn\" (UID: \"4e135f0c-0c36-44f4-afeb-06994affb352\") " pod="openshift-multus/multus-g2kkn"
Jan 27 14:29:29 crc kubenswrapper[4698]: I0127 14:29:29.167921 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/4e135f0c-0c36-44f4-afeb-06994affb352-multus-cni-dir\") pod \"multus-g2kkn\" (UID: \"4e135f0c-0c36-44f4-afeb-06994affb352\") " pod="openshift-multus/multus-g2kkn"
Jan 27 14:29:29 crc kubenswrapper[4698]: I0127 14:29:29.167996 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/3e403fc5-7005-474c-8c75-b7906b481677-rootfs\") pod \"machine-config-daemon-ndrd6\" (UID: \"3e403fc5-7005-474c-8c75-b7906b481677\") " pod="openshift-machine-config-operator/machine-config-daemon-ndrd6"
Jan 27 14:29:29 crc kubenswrapper[4698]: I0127 14:29:29.167987 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/4e135f0c-0c36-44f4-afeb-06994affb352-etc-kubernetes\") pod \"multus-g2kkn\" (UID: \"4e135f0c-0c36-44f4-afeb-06994affb352\") " pod="openshift-multus/multus-g2kkn"
Jan 27 14:29:29 crc kubenswrapper[4698]: I0127 14:29:29.167987 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/4e135f0c-0c36-44f4-afeb-06994affb352-multus-conf-dir\") pod \"multus-g2kkn\" (UID: \"4e135f0c-0c36-44f4-afeb-06994affb352\") " pod="openshift-multus/multus-g2kkn"
Jan 27 14:29:29 crc kubenswrapper[4698]: I0127 14:29:29.168005 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/4e135f0c-0c36-44f4-afeb-06994affb352-multus-cni-dir\") pod \"multus-g2kkn\" (UID: \"4e135f0c-0c36-44f4-afeb-06994affb352\") " pod="openshift-multus/multus-g2kkn"
Jan 27 14:29:29 crc kubenswrapper[4698]: I0127 14:29:29.167987 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/e045926d-2303-47ea-b25d-dc23982427e4-system-cni-dir\") pod \"multus-additional-cni-plugins-vg6nd\" (UID: \"e045926d-2303-47ea-b25d-dc23982427e4\") " pod="openshift-multus/multus-additional-cni-plugins-vg6nd"
Jan 27 14:29:29 crc kubenswrapper[4698]: I0127 14:29:29.168097 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-97l66\" (UniqueName: \"kubernetes.io/projected/4e135f0c-0c36-44f4-afeb-06994affb352-kube-api-access-97l66\") pod \"multus-g2kkn\" (UID: \"4e135f0c-0c36-44f4-afeb-06994affb352\") " pod="openshift-multus/multus-g2kkn"
Jan 27 14:29:29 crc kubenswrapper[4698]: I0127 14:29:29.168241 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/4e135f0c-0c36-44f4-afeb-06994affb352-host-run-k8s-cni-cncf-io\") pod \"multus-g2kkn\" (UID: \"4e135f0c-0c36-44f4-afeb-06994affb352\") " pod="openshift-multus/multus-g2kkn"
Jan 27 14:29:29 crc kubenswrapper[4698]: I0127 14:29:29.168266 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/4e135f0c-0c36-44f4-afeb-06994affb352-host-run-k8s-cni-cncf-io\") pod \"multus-g2kkn\" (UID: \"4e135f0c-0c36-44f4-afeb-06994affb352\") " pod="openshift-multus/multus-g2kkn"
Jan 27 14:29:29 crc kubenswrapper[4698]: I0127 14:29:29.168277 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/4e135f0c-0c36-44f4-afeb-06994affb352-host-var-lib-cni-multus\") pod \"multus-g2kkn\" (UID: \"4e135f0c-0c36-44f4-afeb-06994affb352\") " pod="openshift-multus/multus-g2kkn"
Jan 27 14:29:29 crc kubenswrapper[4698]: I0127 14:29:29.168308 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-66snv\" (UniqueName: \"kubernetes.io/projected/e045926d-2303-47ea-b25d-dc23982427e4-kube-api-access-66snv\") pod \"multus-additional-cni-plugins-vg6nd\" (UID: \"e045926d-2303-47ea-b25d-dc23982427e4\") " pod="openshift-multus/multus-additional-cni-plugins-vg6nd"
Jan 27 14:29:29 crc kubenswrapper[4698]: I0127 14:29:29.168339 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/2776dfc9-913b-42b0-9cf2-6fea98d83bc9-hosts-file\") pod \"node-resolver-g9vj8\" (UID: \"2776dfc9-913b-42b0-9cf2-6fea98d83bc9\") " pod="openshift-dns/node-resolver-g9vj8"
Jan 27 14:29:29 crc kubenswrapper[4698]: I0127 14:29:29.168369 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4e135f0c-0c36-44f4-afeb-06994affb352-cni-binary-copy\") pod \"multus-g2kkn\" (UID: \"4e135f0c-0c36-44f4-afeb-06994affb352\") " pod="openshift-multus/multus-g2kkn"
Jan 27 14:29:29 crc kubenswrapper[4698]: I0127 14:29:29.168370 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/4e135f0c-0c36-44f4-afeb-06994affb352-host-var-lib-cni-multus\") pod \"multus-g2kkn\" (UID: \"4e135f0c-0c36-44f4-afeb-06994affb352\") " pod="openshift-multus/multus-g2kkn"
Jan 27 14:29:29 crc kubenswrapper[4698]: I0127 14:29:29.168394 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/4e135f0c-0c36-44f4-afeb-06994affb352-host-run-multus-certs\") pod \"multus-g2kkn\" (UID: \"4e135f0c-0c36-44f4-afeb-06994affb352\") " pod="openshift-multus/multus-g2kkn"
Jan 27 14:29:29 crc kubenswrapper[4698]: I0127 14:29:29.168401 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/2776dfc9-913b-42b0-9cf2-6fea98d83bc9-hosts-file\") pod \"node-resolver-g9vj8\" (UID: \"2776dfc9-913b-42b0-9cf2-6fea98d83bc9\") " pod="openshift-dns/node-resolver-g9vj8"
Jan 27 14:29:29 crc kubenswrapper[4698]: I0127 14:29:29.168477 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/4e135f0c-0c36-44f4-afeb-06994affb352-multus-socket-dir-parent\") pod \"multus-g2kkn\" (UID: \"4e135f0c-0c36-44f4-afeb-06994affb352\") " pod="openshift-multus/multus-g2kkn"
Jan 27 14:29:29 crc kubenswrapper[4698]: I0127 14:29:29.168505 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4e135f0c-0c36-44f4-afeb-06994affb352-multus-daemon-config\") pod \"multus-g2kkn\" (UID: \"4e135f0c-0c36-44f4-afeb-06994affb352\") " pod="openshift-multus/multus-g2kkn"
Jan 27 14:29:29 crc kubenswrapper[4698]: I0127 14:29:29.168530 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/4e135f0c-0c36-44f4-afeb-06994affb352-host-var-lib-kubelet\") pod \"multus-g2kkn\" (UID: \"4e135f0c-0c36-44f4-afeb-06994affb352\") " pod="openshift-multus/multus-g2kkn"
Jan 27 14:29:29 crc kubenswrapper[4698]: I0127 14:29:29.168536 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/e045926d-2303-47ea-b25d-dc23982427e4-os-release\") pod \"multus-additional-cni-plugins-vg6nd\" (UID: \"e045926d-2303-47ea-b25d-dc23982427e4\") " pod="openshift-multus/multus-additional-cni-plugins-vg6nd"
Jan 27 14:29:29 crc kubenswrapper[4698]: I0127 14:29:29.168571 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bk7ft\" (UniqueName: \"kubernetes.io/projected/2776dfc9-913b-42b0-9cf2-6fea98d83bc9-kube-api-access-bk7ft\") pod \"node-resolver-g9vj8\" (UID: \"2776dfc9-913b-42b0-9cf2-6fea98d83bc9\") " pod="openshift-dns/node-resolver-g9vj8"
Jan 27 14:29:29 crc kubenswrapper[4698]: I0127 14:29:29.168583 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/4e135f0c-0c36-44f4-afeb-06994affb352-host-var-lib-kubelet\") pod \"multus-g2kkn\" (UID: \"4e135f0c-0c36-44f4-afeb-06994affb352\") " pod="openshift-multus/multus-g2kkn"
Jan 27 14:29:29 crc kubenswrapper[4698]: I0127 14:29:29.168600 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tt2dz\" (UniqueName: \"kubernetes.io/projected/3e403fc5-7005-474c-8c75-b7906b481677-kube-api-access-tt2dz\") pod \"machine-config-daemon-ndrd6\" (UID: \"3e403fc5-7005-474c-8c75-b7906b481677\") " pod="openshift-machine-config-operator/machine-config-daemon-ndrd6"
Jan 27 14:29:29 crc kubenswrapper[4698]: I0127 14:29:29.168566 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/4e135f0c-0c36-44f4-afeb-06994affb352-multus-socket-dir-parent\") pod \"multus-g2kkn\" (UID: \"4e135f0c-0c36-44f4-afeb-06994affb352\") " pod="openshift-multus/multus-g2kkn"
Jan 27 14:29:29 crc kubenswrapper[4698]: I0127 14:29:29.168556 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/4e135f0c-0c36-44f4-afeb-06994affb352-host-run-multus-certs\") pod \"multus-g2kkn\" (UID: \"4e135f0c-0c36-44f4-afeb-06994affb352\") " pod="openshift-multus/multus-g2kkn"
Jan 27 14:29:29 crc kubenswrapper[4698]: I0127 14:29:29.168605 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/e045926d-2303-47ea-b25d-dc23982427e4-cni-binary-copy\") pod \"multus-additional-cni-plugins-vg6nd\" (UID: \"e045926d-2303-47ea-b25d-dc23982427e4\") " pod="openshift-multus/multus-additional-cni-plugins-vg6nd"
Jan 27 14:29:29 crc kubenswrapper[4698]: I0127 14:29:29.168629 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/4e135f0c-0c36-44f4-afeb-06994affb352-host-var-lib-cni-bin\") pod \"multus-g2kkn\" (UID: \"4e135f0c-0c36-44f4-afeb-06994affb352\") " pod="openshift-multus/multus-g2kkn"
Jan 27 14:29:29 crc kubenswrapper[4698]: I0127 14:29:29.168692 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/4e135f0c-0c36-44f4-afeb-06994affb352-host-var-lib-cni-bin\") pod \"multus-g2kkn\" (UID: \"4e135f0c-0c36-44f4-afeb-06994affb352\") " pod="openshift-multus/multus-g2kkn"
Jan 27 14:29:29 crc kubenswrapper[4698]: I0127 14:29:29.168708 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/e045926d-2303-47ea-b25d-dc23982427e4-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-vg6nd\" (UID: \"e045926d-2303-47ea-b25d-dc23982427e4\") " pod="openshift-multus/multus-additional-cni-plugins-vg6nd"
Jan 27 14:29:29 crc kubenswrapper[4698]: I0127 14:29:29.168721 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/4e135f0c-0c36-44f4-afeb-06994affb352-host-run-netns\") pod \"multus-g2kkn\" (UID: \"4e135f0c-0c36-44f4-afeb-06994affb352\") " pod="openshift-multus/multus-g2kkn"
Jan 27 14:29:29 crc kubenswrapper[4698]: I0127 14:29:29.168751 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/4e135f0c-0c36-44f4-afeb-06994affb352-host-run-netns\") pod \"multus-g2kkn\" (UID: \"4e135f0c-0c36-44f4-afeb-06994affb352\") " pod="openshift-multus/multus-g2kkn"
Jan 27 14:29:29 crc kubenswrapper[4698]: I0127 14:29:29.168763 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/3e403fc5-7005-474c-8c75-b7906b481677-mcd-auth-proxy-config\") pod \"machine-config-daemon-ndrd6\" (UID: \"3e403fc5-7005-474c-8c75-b7906b481677\") " pod="openshift-machine-config-operator/machine-config-daemon-ndrd6"
Jan 27 14:29:29 crc kubenswrapper[4698]: I0127 14:29:29.168783 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/4e135f0c-0c36-44f4-afeb-06994affb352-os-release\") pod \"multus-g2kkn\" (UID: \"4e135f0c-0c36-44f4-afeb-06994affb352\") " pod="openshift-multus/multus-g2kkn"
Jan 27 14:29:29 crc kubenswrapper[4698]: I0127 14:29:29.168803 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/e045926d-2303-47ea-b25d-dc23982427e4-cnibin\") pod \"multus-additional-cni-plugins-vg6nd\" (UID: \"e045926d-2303-47ea-b25d-dc23982427e4\") " pod="openshift-multus/multus-additional-cni-plugins-vg6nd"
Jan 27 14:29:29 crc kubenswrapper[4698]: I0127 14:29:29.168823 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/4e135f0c-0c36-44f4-afeb-06994affb352-system-cni-dir\") pod \"multus-g2kkn\" (UID: \"4e135f0c-0c36-44f4-afeb-06994affb352\") " pod="openshift-multus/multus-g2kkn"
Jan 27 14:29:29 crc kubenswrapper[4698]: I0127 14:29:29.168838 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/4e135f0c-0c36-44f4-afeb-06994affb352-hostroot\") pod \"multus-g2kkn\" (UID: \"4e135f0c-0c36-44f4-afeb-06994affb352\") " pod="openshift-multus/multus-g2kkn"
Jan 27 14:29:29 crc kubenswrapper[4698]: I0127 14:29:29.168875 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/4e135f0c-0c36-44f4-afeb-06994affb352-os-release\") pod \"multus-g2kkn\" (UID: \"4e135f0c-0c36-44f4-afeb-06994affb352\") " pod="openshift-multus/multus-g2kkn"
Jan 27 14:29:29 crc kubenswrapper[4698]: I0127 14:29:29.168888 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/e045926d-2303-47ea-b25d-dc23982427e4-cnibin\") pod \"multus-additional-cni-plugins-vg6nd\" (UID: \"e045926d-2303-47ea-b25d-dc23982427e4\") " pod="openshift-multus/multus-additional-cni-plugins-vg6nd"
Jan 27 14:29:29 crc kubenswrapper[4698]: I0127 14:29:29.168907 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/4e135f0c-0c36-44f4-afeb-06994affb352-hostroot\") pod \"multus-g2kkn\" (UID: \"4e135f0c-0c36-44f4-afeb-06994affb352\") " pod="openshift-multus/multus-g2kkn"
Jan 27 14:29:29 crc kubenswrapper[4698]: I0127 14:29:29.168975 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/4e135f0c-0c36-44f4-afeb-06994affb352-system-cni-dir\") pod \"multus-g2kkn\" (UID: \"4e135f0c-0c36-44f4-afeb-06994affb352\") " pod="openshift-multus/multus-g2kkn"
Jan 27 14:29:29 crc kubenswrapper[4698]: I0127 14:29:29.169206 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4e135f0c-0c36-44f4-afeb-06994affb352-cni-binary-copy\") pod \"multus-g2kkn\" (UID: \"4e135f0c-0c36-44f4-afeb-06994affb352\") " pod="openshift-multus/multus-g2kkn"
Jan 27 14:29:29 crc kubenswrapper[4698]: I0127 14:29:29.169249 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4e135f0c-0c36-44f4-afeb-06994affb352-multus-daemon-config\") pod \"multus-g2kkn\" (UID: \"4e135f0c-0c36-44f4-afeb-06994affb352\") " pod="openshift-multus/multus-g2kkn"
Jan 27 14:29:29 crc kubenswrapper[4698]: I0127 14:29:29.169604 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/3e403fc5-7005-474c-8c75-b7906b481677-mcd-auth-proxy-config\") pod \"machine-config-daemon-ndrd6\" (UID: \"3e403fc5-7005-474c-8c75-b7906b481677\") " pod="openshift-machine-config-operator/machine-config-daemon-ndrd6"
Jan 27 14:29:29 crc kubenswrapper[4698]: I0127 14:29:29.173266 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/3e403fc5-7005-474c-8c75-b7906b481677-proxy-tls\") pod \"machine-config-daemon-ndrd6\" (UID: \"3e403fc5-7005-474c-8c75-b7906b481677\") " pod="openshift-machine-config-operator/machine-config-daemon-ndrd6"
Jan 27 14:29:29 crc kubenswrapper[4698]: I0127 14:29:29.178598 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/e045926d-2303-47ea-b25d-dc23982427e4-tuning-conf-dir\") pod \"multus-additional-cni-plugins-vg6nd\" (UID: \"e045926d-2303-47ea-b25d-dc23982427e4\") " pod="openshift-multus/multus-additional-cni-plugins-vg6nd"
Jan 27 14:29:29 crc kubenswrapper[4698]: I0127 14:29:29.178781 4698 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2027-01-27 14:24:28 +0000 UTC, rotation deadline is 2026-12-14 00:32:51.04028342 +0000 UTC
Jan 27 14:29:29 crc kubenswrapper[4698]: I0127 14:29:29.178840 4698 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 7690h3m21.86144667s for next certificate rotation
Jan 27 14:29:29 crc kubenswrapper[4698]: I0127 14:29:29.180139 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:24Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:29:29Z is after 2025-08-24T17:21:41Z"
Jan 27 14:29:29 crc kubenswrapper[4698]: I0127 14:29:29.197301 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-66snv\" (UniqueName: \"kubernetes.io/projected/e045926d-2303-47ea-b25d-dc23982427e4-kube-api-access-66snv\") pod \"multus-additional-cni-plugins-vg6nd\" (UID: \"e045926d-2303-47ea-b25d-dc23982427e4\") " pod="openshift-multus/multus-additional-cni-plugins-vg6nd"
Jan 27 14:29:29 crc kubenswrapper[4698]: I0127 14:29:29.201728 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-97l66\" (UniqueName: \"kubernetes.io/projected/4e135f0c-0c36-44f4-afeb-06994affb352-kube-api-access-97l66\") pod \"multus-g2kkn\" (UID: \"4e135f0c-0c36-44f4-afeb-06994affb352\") " pod="openshift-multus/multus-g2kkn"
Jan 27 14:29:29 crc kubenswrapper[4698]: I0127 14:29:29.210978 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8dae57de05788973b988fee332168db475b66bb71b4a0570e7d95341dfe84a96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:29:29Z is after 2025-08-24T17:21:41Z" Jan 27 14:29:29 crc kubenswrapper[4698]: I0127 14:29:29.211347 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tt2dz\" (UniqueName: \"kubernetes.io/projected/3e403fc5-7005-474c-8c75-b7906b481677-kube-api-access-tt2dz\") pod \"machine-config-daemon-ndrd6\" (UID: \"3e403fc5-7005-474c-8c75-b7906b481677\") " pod="openshift-machine-config-operator/machine-config-daemon-ndrd6" Jan 27 14:29:29 crc kubenswrapper[4698]: I0127 14:29:29.211425 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bk7ft\" (UniqueName: \"kubernetes.io/projected/2776dfc9-913b-42b0-9cf2-6fea98d83bc9-kube-api-access-bk7ft\") pod \"node-resolver-g9vj8\" (UID: \"2776dfc9-913b-42b0-9cf2-6fea98d83bc9\") " pod="openshift-dns/node-resolver-g9vj8" Jan 27 14:29:29 crc kubenswrapper[4698]: I0127 14:29:29.211822 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-g2kkn" Jan 27 14:29:29 crc kubenswrapper[4698]: I0127 14:29:29.220798 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-vg6nd" Jan 27 14:29:29 crc kubenswrapper[4698]: I0127 14:29:29.229826 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://625a1f8c29f9277903045e66df7a55a4d791239b141aba99572db228ea2735ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://314dcaad0f8949a4671b47667f654bdfacfb6b200438cfbd70401455f6e188b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:29:29Z is after 2025-08-24T17:21:41Z" Jan 27 14:29:29 crc kubenswrapper[4698]: W0127 14:29:29.240675 4698 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode045926d_2303_47ea_b25d_dc23982427e4.slice/crio-68f521792f52fa83374668e39f448446577220f8a0e2ddc842859abaa3d20f60 WatchSource:0}: Error finding container 68f521792f52fa83374668e39f448446577220f8a0e2ddc842859abaa3d20f60: Status 404 returned error can't find the container with id 68f521792f52fa83374668e39f448446577220f8a0e2ddc842859abaa3d20f60 Jan 27 14:29:29 crc kubenswrapper[4698]: I0127 14:29:29.253251 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-xmpm6"] Jan 27 14:29:29 crc kubenswrapper[4698]: I0127 14:29:29.254183 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-xmpm6" Jan 27 14:29:29 crc kubenswrapper[4698]: I0127 14:29:29.263047 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Jan 27 14:29:29 crc kubenswrapper[4698]: I0127 14:29:29.263383 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Jan 27 14:29:29 crc kubenswrapper[4698]: I0127 14:29:29.263594 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Jan 27 14:29:29 crc kubenswrapper[4698]: I0127 14:29:29.263757 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Jan 27 14:29:29 crc kubenswrapper[4698]: I0127 14:29:29.263912 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Jan 27 14:29:29 crc kubenswrapper[4698]: I0127 14:29:29.263785 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Jan 27 14:29:29 crc kubenswrapper[4698]: I0127 14:29:29.264205 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Jan 27 14:29:29 crc kubenswrapper[4698]: I0127 14:29:29.279694 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:24Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:29:29Z is after 2025-08-24T17:21:41Z" Jan 27 14:29:29 crc kubenswrapper[4698]: I0127 14:29:29.301420 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae7402df1ba165eb747293c1d23e4ea89c50cf2342bf52db437c11c283f68755\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:29:29Z is after 2025-08-24T17:21:41Z" Jan 27 14:29:29 crc kubenswrapper[4698]: I0127 14:29:29.312932 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:24Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:29:29Z is after 2025-08-24T17:21:41Z" Jan 27 14:29:29 crc kubenswrapper[4698]: I0127 14:29:29.328475 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8dae57de05788973b988fee332168db475b66bb71b4a0570e7d95341dfe84a96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:29:29Z is after 2025-08-24T17:21:41Z" Jan 27 14:29:29 crc kubenswrapper[4698]: I0127 14:29:29.343087 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:24Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:29:29Z is after 2025-08-24T17:21:41Z" Jan 27 14:29:29 crc kubenswrapper[4698]: I0127 14:29:29.355186 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://625a1f8c29f9277903045e66df7a55a4d791239b141aba99572db228ea2735ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://314dcaad0f8949a4671b47667f654bdfacfb6b200438cfbd70401455f6e188b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mount
Path\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:29:29Z is after 2025-08-24T17:21:41Z" Jan 27 14:29:29 crc kubenswrapper[4698]: I0127 14:29:29.370349 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0ce1118e-e5ad-4adb-8d50-c758116b45ec\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dbbe49aa918c711f9ec638a3fb42ef3be38a9ed3bc0cdde639ba8bba11eab4de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9525d1a6372b19bc2f30eb4e8e522833badb6319bac25ecae77b3313acf1b1a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:07Z\\\"}},
\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e384a95fdadc32733a7213ba468af64a1a33e7dd08722cbffde3ad0c461d8ba\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9ee5f5f6bb400359196a28fe605c173c38223c8982a991750c91601aeace150\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e780429dcca871712b55832b0cd0b3e78b7343569cfa71a87bbb2be1bd3f129\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T14:29:24Z\\\",\\\"message\\\":\\\"espace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0127 14:29:09.491666 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 14:29:09.493926 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-900123067/tls.crt::/tmp/serving-cert-900123067/tls.key\\\\\\\"\\\\nI0127 14:29:24.319898 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 14:29:24.322681 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 14:29:24.322759 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 14:29:24.322814 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 14:29:24.322847 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 14:29:24.331581 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 14:29:24.331608 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 14:29:24.331613 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 14:29:24.331618 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 14:29:24.331622 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 14:29:24.331626 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 14:29:24.331630 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 14:29:24.331748 1 genericapiserver.go:533] 
MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 14:29:24.333757 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T14:29:08Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a526cfa90f2ba7d074aa6007c909e68f72d5e7a4b91d893c955e7460ff208cfb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:08Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e1a3e5a71b3094750c736335cd70f45977566e58702231351d9d6b195a0cb4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5e1a3e5a71b3094750c736335cd70f45977566e58702231351d9d6b195a0cb4b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:29:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:29:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:29:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:29:29Z is after 2025-08-24T17:21:41Z" Jan 27 14:29:29 crc kubenswrapper[4698]: I0127 14:29:29.371629 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r7gpg\" (UniqueName: \"kubernetes.io/projected/c59a9d01-79ce-42d9-a41d-39d7d73cb03e-kube-api-access-r7gpg\") pod \"ovnkube-node-xmpm6\" (UID: \"c59a9d01-79ce-42d9-a41d-39d7d73cb03e\") " pod="openshift-ovn-kubernetes/ovnkube-node-xmpm6" Jan 27 14:29:29 crc kubenswrapper[4698]: I0127 14:29:29.371690 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: 
\"kubernetes.io/host-path/c59a9d01-79ce-42d9-a41d-39d7d73cb03e-host-run-ovn-kubernetes\") pod \"ovnkube-node-xmpm6\" (UID: \"c59a9d01-79ce-42d9-a41d-39d7d73cb03e\") " pod="openshift-ovn-kubernetes/ovnkube-node-xmpm6" Jan 27 14:29:29 crc kubenswrapper[4698]: I0127 14:29:29.371706 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/c59a9d01-79ce-42d9-a41d-39d7d73cb03e-host-cni-bin\") pod \"ovnkube-node-xmpm6\" (UID: \"c59a9d01-79ce-42d9-a41d-39d7d73cb03e\") " pod="openshift-ovn-kubernetes/ovnkube-node-xmpm6" Jan 27 14:29:29 crc kubenswrapper[4698]: I0127 14:29:29.371723 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c59a9d01-79ce-42d9-a41d-39d7d73cb03e-etc-openvswitch\") pod \"ovnkube-node-xmpm6\" (UID: \"c59a9d01-79ce-42d9-a41d-39d7d73cb03e\") " pod="openshift-ovn-kubernetes/ovnkube-node-xmpm6" Jan 27 14:29:29 crc kubenswrapper[4698]: I0127 14:29:29.371756 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c59a9d01-79ce-42d9-a41d-39d7d73cb03e-var-lib-openvswitch\") pod \"ovnkube-node-xmpm6\" (UID: \"c59a9d01-79ce-42d9-a41d-39d7d73cb03e\") " pod="openshift-ovn-kubernetes/ovnkube-node-xmpm6" Jan 27 14:29:29 crc kubenswrapper[4698]: I0127 14:29:29.371793 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/c59a9d01-79ce-42d9-a41d-39d7d73cb03e-systemd-units\") pod \"ovnkube-node-xmpm6\" (UID: \"c59a9d01-79ce-42d9-a41d-39d7d73cb03e\") " pod="openshift-ovn-kubernetes/ovnkube-node-xmpm6" Jan 27 14:29:29 crc kubenswrapper[4698]: I0127 14:29:29.371812 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/c59a9d01-79ce-42d9-a41d-39d7d73cb03e-ovnkube-config\") pod \"ovnkube-node-xmpm6\" (UID: \"c59a9d01-79ce-42d9-a41d-39d7d73cb03e\") " pod="openshift-ovn-kubernetes/ovnkube-node-xmpm6" Jan 27 14:29:29 crc kubenswrapper[4698]: I0127 14:29:29.371856 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c59a9d01-79ce-42d9-a41d-39d7d73cb03e-run-openvswitch\") pod \"ovnkube-node-xmpm6\" (UID: \"c59a9d01-79ce-42d9-a41d-39d7d73cb03e\") " pod="openshift-ovn-kubernetes/ovnkube-node-xmpm6" Jan 27 14:29:29 crc kubenswrapper[4698]: I0127 14:29:29.371873 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/c59a9d01-79ce-42d9-a41d-39d7d73cb03e-run-ovn\") pod \"ovnkube-node-xmpm6\" (UID: \"c59a9d01-79ce-42d9-a41d-39d7d73cb03e\") " pod="openshift-ovn-kubernetes/ovnkube-node-xmpm6" Jan 27 14:29:29 crc kubenswrapper[4698]: I0127 14:29:29.371887 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/c59a9d01-79ce-42d9-a41d-39d7d73cb03e-ovn-node-metrics-cert\") pod \"ovnkube-node-xmpm6\" (UID: \"c59a9d01-79ce-42d9-a41d-39d7d73cb03e\") " pod="openshift-ovn-kubernetes/ovnkube-node-xmpm6" Jan 27 14:29:29 crc kubenswrapper[4698]: I0127 14:29:29.371908 4698 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/c59a9d01-79ce-42d9-a41d-39d7d73cb03e-host-kubelet\") pod \"ovnkube-node-xmpm6\" (UID: \"c59a9d01-79ce-42d9-a41d-39d7d73cb03e\") " pod="openshift-ovn-kubernetes/ovnkube-node-xmpm6" Jan 27 14:29:29 crc kubenswrapper[4698]: I0127 14:29:29.371922 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/c59a9d01-79ce-42d9-a41d-39d7d73cb03e-host-run-netns\") pod \"ovnkube-node-xmpm6\" (UID: \"c59a9d01-79ce-42d9-a41d-39d7d73cb03e\") " pod="openshift-ovn-kubernetes/ovnkube-node-xmpm6" Jan 27 14:29:29 crc kubenswrapper[4698]: I0127 14:29:29.371937 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/c59a9d01-79ce-42d9-a41d-39d7d73cb03e-node-log\") pod \"ovnkube-node-xmpm6\" (UID: \"c59a9d01-79ce-42d9-a41d-39d7d73cb03e\") " pod="openshift-ovn-kubernetes/ovnkube-node-xmpm6" Jan 27 14:29:29 crc kubenswrapper[4698]: I0127 14:29:29.371951 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/c59a9d01-79ce-42d9-a41d-39d7d73cb03e-log-socket\") pod \"ovnkube-node-xmpm6\" (UID: \"c59a9d01-79ce-42d9-a41d-39d7d73cb03e\") " pod="openshift-ovn-kubernetes/ovnkube-node-xmpm6" Jan 27 14:29:29 crc kubenswrapper[4698]: I0127 14:29:29.371965 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c59a9d01-79ce-42d9-a41d-39d7d73cb03e-host-cni-netd\") pod \"ovnkube-node-xmpm6\" (UID: \"c59a9d01-79ce-42d9-a41d-39d7d73cb03e\") " pod="openshift-ovn-kubernetes/ovnkube-node-xmpm6" Jan 27 14:29:29 crc kubenswrapper[4698]: I0127 14:29:29.371980 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c59a9d01-79ce-42d9-a41d-39d7d73cb03e-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-xmpm6\" (UID: \"c59a9d01-79ce-42d9-a41d-39d7d73cb03e\") " pod="openshift-ovn-kubernetes/ovnkube-node-xmpm6" Jan 27 14:29:29 crc kubenswrapper[4698]: I0127 14:29:29.371994 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/c59a9d01-79ce-42d9-a41d-39d7d73cb03e-env-overrides\") pod \"ovnkube-node-xmpm6\" (UID: \"c59a9d01-79ce-42d9-a41d-39d7d73cb03e\") " pod="openshift-ovn-kubernetes/ovnkube-node-xmpm6" Jan 27 14:29:29 crc kubenswrapper[4698]: I0127 14:29:29.372011 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/c59a9d01-79ce-42d9-a41d-39d7d73cb03e-run-systemd\") pod \"ovnkube-node-xmpm6\" (UID: \"c59a9d01-79ce-42d9-a41d-39d7d73cb03e\") " pod="openshift-ovn-kubernetes/ovnkube-node-xmpm6" Jan 27 14:29:29 crc kubenswrapper[4698]: I0127 14:29:29.372030 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/c59a9d01-79ce-42d9-a41d-39d7d73cb03e-host-slash\") pod \"ovnkube-node-xmpm6\" (UID: \"c59a9d01-79ce-42d9-a41d-39d7d73cb03e\") " pod="openshift-ovn-kubernetes/ovnkube-node-xmpm6" Jan 27 
14:29:29 crc kubenswrapper[4698]: I0127 14:29:29.372046 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/c59a9d01-79ce-42d9-a41d-39d7d73cb03e-ovnkube-script-lib\") pod \"ovnkube-node-xmpm6\" (UID: \"c59a9d01-79ce-42d9-a41d-39d7d73cb03e\") " pod="openshift-ovn-kubernetes/ovnkube-node-xmpm6" Jan 27 14:29:29 crc kubenswrapper[4698]: I0127 14:29:29.383685 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-g2kkn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4e135f0c-0c36-44f4-afeb-06994affb352\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\
\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97l66\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:29:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-g2kkn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:29:29Z is after 2025-08-24T17:21:41Z" Jan 27 14:29:29 crc kubenswrapper[4698]: I0127 14:29:29.397122 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:29:29Z is after 2025-08-24T17:21:41Z" Jan 27 14:29:29 crc kubenswrapper[4698]: I0127 14:29:29.409133 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-g9vj8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2776dfc9-913b-42b0-9cf2-6fea98d83bc9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:28Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:28Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bk7ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:29:28Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-g9vj8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:29:29Z is 
after 2025-08-24T17:21:41Z" Jan 27 14:29:29 crc kubenswrapper[4698]: I0127 14:29:29.424434 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-ndrd6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e403fc5-7005-474c-8c75-b7906b481677\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:28Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:28Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tt2dz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tt2dz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:29:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-ndrd6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:29:29Z is after 2025-08-24T17:21:41Z" Jan 27 14:29:29 crc kubenswrapper[4698]: I0127 
14:29:29.444589 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-vg6nd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e045926d-2303-47ea-b25d-dc23982427e4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:28Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66snv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66snv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodIn
itializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66snv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66snv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66snv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66snv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount
\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66snv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:29:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-vg6nd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:29:29Z is after 2025-08-24T17:21:41Z" Jan 27 14:29:29 crc kubenswrapper[4698]: I0127 14:29:29.466993 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xmpm6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c59a9d01-79ce-42d9-a41d-39d7d73cb03e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:29Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:29Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:29Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r7gpg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r7gpg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r7gpg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-r7gpg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r7gpg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r7gpg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r7gpg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r7gpg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r7gpg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:29:29Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-xmpm6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:29:29Z is after 2025-08-24T17:21:41Z" Jan 27 14:29:29 crc kubenswrapper[4698]: I0127 14:29:29.473236 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/c59a9d01-79ce-42d9-a41d-39d7d73cb03e-host-cni-bin\") pod \"ovnkube-node-xmpm6\" (UID: \"c59a9d01-79ce-42d9-a41d-39d7d73cb03e\") " pod="openshift-ovn-kubernetes/ovnkube-node-xmpm6" Jan 27 14:29:29 crc 
kubenswrapper[4698]: I0127 14:29:29.473295 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c59a9d01-79ce-42d9-a41d-39d7d73cb03e-host-run-ovn-kubernetes\") pod \"ovnkube-node-xmpm6\" (UID: \"c59a9d01-79ce-42d9-a41d-39d7d73cb03e\") " pod="openshift-ovn-kubernetes/ovnkube-node-xmpm6" Jan 27 14:29:29 crc kubenswrapper[4698]: I0127 14:29:29.473325 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c59a9d01-79ce-42d9-a41d-39d7d73cb03e-etc-openvswitch\") pod \"ovnkube-node-xmpm6\" (UID: \"c59a9d01-79ce-42d9-a41d-39d7d73cb03e\") " pod="openshift-ovn-kubernetes/ovnkube-node-xmpm6" Jan 27 14:29:29 crc kubenswrapper[4698]: I0127 14:29:29.473352 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c59a9d01-79ce-42d9-a41d-39d7d73cb03e-var-lib-openvswitch\") pod \"ovnkube-node-xmpm6\" (UID: \"c59a9d01-79ce-42d9-a41d-39d7d73cb03e\") " pod="openshift-ovn-kubernetes/ovnkube-node-xmpm6" Jan 27 14:29:29 crc kubenswrapper[4698]: I0127 14:29:29.473361 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/c59a9d01-79ce-42d9-a41d-39d7d73cb03e-host-cni-bin\") pod \"ovnkube-node-xmpm6\" (UID: \"c59a9d01-79ce-42d9-a41d-39d7d73cb03e\") " pod="openshift-ovn-kubernetes/ovnkube-node-xmpm6" Jan 27 14:29:29 crc kubenswrapper[4698]: I0127 14:29:29.473393 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c59a9d01-79ce-42d9-a41d-39d7d73cb03e-etc-openvswitch\") pod \"ovnkube-node-xmpm6\" (UID: \"c59a9d01-79ce-42d9-a41d-39d7d73cb03e\") " pod="openshift-ovn-kubernetes/ovnkube-node-xmpm6" Jan 27 14:29:29 crc kubenswrapper[4698]: I0127 14:29:29.473370 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c59a9d01-79ce-42d9-a41d-39d7d73cb03e-host-run-ovn-kubernetes\") pod \"ovnkube-node-xmpm6\" (UID: \"c59a9d01-79ce-42d9-a41d-39d7d73cb03e\") " pod="openshift-ovn-kubernetes/ovnkube-node-xmpm6" Jan 27 14:29:29 crc kubenswrapper[4698]: I0127 14:29:29.473372 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/c59a9d01-79ce-42d9-a41d-39d7d73cb03e-ovnkube-config\") pod \"ovnkube-node-xmpm6\" (UID: \"c59a9d01-79ce-42d9-a41d-39d7d73cb03e\") " pod="openshift-ovn-kubernetes/ovnkube-node-xmpm6" Jan 27 14:29:29 crc kubenswrapper[4698]: I0127 14:29:29.473436 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/c59a9d01-79ce-42d9-a41d-39d7d73cb03e-systemd-units\") pod \"ovnkube-node-xmpm6\" (UID: \"c59a9d01-79ce-42d9-a41d-39d7d73cb03e\") " pod="openshift-ovn-kubernetes/ovnkube-node-xmpm6" Jan 27 14:29:29 crc kubenswrapper[4698]: I0127 14:29:29.473460 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c59a9d01-79ce-42d9-a41d-39d7d73cb03e-run-openvswitch\") pod \"ovnkube-node-xmpm6\" (UID: \"c59a9d01-79ce-42d9-a41d-39d7d73cb03e\") " pod="openshift-ovn-kubernetes/ovnkube-node-xmpm6" Jan 27 14:29:29 crc kubenswrapper[4698]: I0127 14:29:29.473488 4698 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/c59a9d01-79ce-42d9-a41d-39d7d73cb03e-ovn-node-metrics-cert\") pod \"ovnkube-node-xmpm6\" (UID: \"c59a9d01-79ce-42d9-a41d-39d7d73cb03e\") " pod="openshift-ovn-kubernetes/ovnkube-node-xmpm6" Jan 27 14:29:29 crc kubenswrapper[4698]: I0127 14:29:29.473510 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/c59a9d01-79ce-42d9-a41d-39d7d73cb03e-run-ovn\") pod \"ovnkube-node-xmpm6\" (UID: \"c59a9d01-79ce-42d9-a41d-39d7d73cb03e\") " pod="openshift-ovn-kubernetes/ovnkube-node-xmpm6" Jan 27 14:29:29 crc kubenswrapper[4698]: I0127 14:29:29.473512 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/c59a9d01-79ce-42d9-a41d-39d7d73cb03e-systemd-units\") pod \"ovnkube-node-xmpm6\" (UID: \"c59a9d01-79ce-42d9-a41d-39d7d73cb03e\") " pod="openshift-ovn-kubernetes/ovnkube-node-xmpm6" Jan 27 14:29:29 crc kubenswrapper[4698]: I0127 14:29:29.473417 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c59a9d01-79ce-42d9-a41d-39d7d73cb03e-var-lib-openvswitch\") pod \"ovnkube-node-xmpm6\" (UID: \"c59a9d01-79ce-42d9-a41d-39d7d73cb03e\") " pod="openshift-ovn-kubernetes/ovnkube-node-xmpm6" Jan 27 14:29:29 crc kubenswrapper[4698]: I0127 14:29:29.473531 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/c59a9d01-79ce-42d9-a41d-39d7d73cb03e-host-kubelet\") pod \"ovnkube-node-xmpm6\" (UID: \"c59a9d01-79ce-42d9-a41d-39d7d73cb03e\") " pod="openshift-ovn-kubernetes/ovnkube-node-xmpm6" Jan 27 14:29:29 crc kubenswrapper[4698]: I0127 14:29:29.473552 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/c59a9d01-79ce-42d9-a41d-39d7d73cb03e-host-run-netns\") pod \"ovnkube-node-xmpm6\" (UID: \"c59a9d01-79ce-42d9-a41d-39d7d73cb03e\") " pod="openshift-ovn-kubernetes/ovnkube-node-xmpm6" Jan 27 14:29:29 crc kubenswrapper[4698]: I0127 14:29:29.473570 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c59a9d01-79ce-42d9-a41d-39d7d73cb03e-host-cni-netd\") pod \"ovnkube-node-xmpm6\" (UID: \"c59a9d01-79ce-42d9-a41d-39d7d73cb03e\") " pod="openshift-ovn-kubernetes/ovnkube-node-xmpm6" Jan 27 14:29:29 crc kubenswrapper[4698]: I0127 14:29:29.473588 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/c59a9d01-79ce-42d9-a41d-39d7d73cb03e-node-log\") pod \"ovnkube-node-xmpm6\" (UID: \"c59a9d01-79ce-42d9-a41d-39d7d73cb03e\") " pod="openshift-ovn-kubernetes/ovnkube-node-xmpm6" Jan 27 14:29:29 crc kubenswrapper[4698]: I0127 14:29:29.473619 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/c59a9d01-79ce-42d9-a41d-39d7d73cb03e-log-socket\") pod \"ovnkube-node-xmpm6\" (UID: \"c59a9d01-79ce-42d9-a41d-39d7d73cb03e\") " pod="openshift-ovn-kubernetes/ovnkube-node-xmpm6" Jan 27 14:29:29 crc kubenswrapper[4698]: I0127 14:29:29.473665 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: 
\"kubernetes.io/host-path/c59a9d01-79ce-42d9-a41d-39d7d73cb03e-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-xmpm6\" (UID: \"c59a9d01-79ce-42d9-a41d-39d7d73cb03e\") " pod="openshift-ovn-kubernetes/ovnkube-node-xmpm6" Jan 27 14:29:29 crc kubenswrapper[4698]: I0127 14:29:29.473687 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/c59a9d01-79ce-42d9-a41d-39d7d73cb03e-env-overrides\") pod \"ovnkube-node-xmpm6\" (UID: \"c59a9d01-79ce-42d9-a41d-39d7d73cb03e\") " pod="openshift-ovn-kubernetes/ovnkube-node-xmpm6" Jan 27 14:29:29 crc kubenswrapper[4698]: I0127 14:29:29.473706 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/c59a9d01-79ce-42d9-a41d-39d7d73cb03e-run-systemd\") pod \"ovnkube-node-xmpm6\" (UID: \"c59a9d01-79ce-42d9-a41d-39d7d73cb03e\") " pod="openshift-ovn-kubernetes/ovnkube-node-xmpm6" Jan 27 14:29:29 crc kubenswrapper[4698]: I0127 14:29:29.473736 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/c59a9d01-79ce-42d9-a41d-39d7d73cb03e-host-slash\") pod \"ovnkube-node-xmpm6\" (UID: \"c59a9d01-79ce-42d9-a41d-39d7d73cb03e\") " pod="openshift-ovn-kubernetes/ovnkube-node-xmpm6" Jan 27 14:29:29 crc kubenswrapper[4698]: I0127 14:29:29.473758 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/c59a9d01-79ce-42d9-a41d-39d7d73cb03e-ovnkube-script-lib\") pod \"ovnkube-node-xmpm6\" (UID: \"c59a9d01-79ce-42d9-a41d-39d7d73cb03e\") " pod="openshift-ovn-kubernetes/ovnkube-node-xmpm6" Jan 27 14:29:29 crc kubenswrapper[4698]: I0127 14:29:29.473800 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r7gpg\" (UniqueName: \"kubernetes.io/projected/c59a9d01-79ce-42d9-a41d-39d7d73cb03e-kube-api-access-r7gpg\") pod \"ovnkube-node-xmpm6\" (UID: \"c59a9d01-79ce-42d9-a41d-39d7d73cb03e\") " pod="openshift-ovn-kubernetes/ovnkube-node-xmpm6" Jan 27 14:29:29 crc kubenswrapper[4698]: I0127 14:29:29.474063 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/c59a9d01-79ce-42d9-a41d-39d7d73cb03e-log-socket\") pod \"ovnkube-node-xmpm6\" (UID: \"c59a9d01-79ce-42d9-a41d-39d7d73cb03e\") " pod="openshift-ovn-kubernetes/ovnkube-node-xmpm6" Jan 27 14:29:29 crc kubenswrapper[4698]: I0127 14:29:29.474096 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c59a9d01-79ce-42d9-a41d-39d7d73cb03e-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-xmpm6\" (UID: \"c59a9d01-79ce-42d9-a41d-39d7d73cb03e\") " pod="openshift-ovn-kubernetes/ovnkube-node-xmpm6" Jan 27 14:29:29 crc kubenswrapper[4698]: I0127 14:29:29.474117 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/c59a9d01-79ce-42d9-a41d-39d7d73cb03e-run-ovn\") pod \"ovnkube-node-xmpm6\" (UID: \"c59a9d01-79ce-42d9-a41d-39d7d73cb03e\") " pod="openshift-ovn-kubernetes/ovnkube-node-xmpm6" Jan 27 14:29:29 crc kubenswrapper[4698]: I0127 14:29:29.474146 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: 
\"kubernetes.io/host-path/c59a9d01-79ce-42d9-a41d-39d7d73cb03e-host-kubelet\") pod \"ovnkube-node-xmpm6\" (UID: \"c59a9d01-79ce-42d9-a41d-39d7d73cb03e\") " pod="openshift-ovn-kubernetes/ovnkube-node-xmpm6" Jan 27 14:29:29 crc kubenswrapper[4698]: I0127 14:29:29.474173 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/c59a9d01-79ce-42d9-a41d-39d7d73cb03e-host-run-netns\") pod \"ovnkube-node-xmpm6\" (UID: \"c59a9d01-79ce-42d9-a41d-39d7d73cb03e\") " pod="openshift-ovn-kubernetes/ovnkube-node-xmpm6" Jan 27 14:29:29 crc kubenswrapper[4698]: I0127 14:29:29.474198 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c59a9d01-79ce-42d9-a41d-39d7d73cb03e-host-cni-netd\") pod \"ovnkube-node-xmpm6\" (UID: \"c59a9d01-79ce-42d9-a41d-39d7d73cb03e\") " pod="openshift-ovn-kubernetes/ovnkube-node-xmpm6" Jan 27 14:29:29 crc kubenswrapper[4698]: I0127 14:29:29.474208 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/c59a9d01-79ce-42d9-a41d-39d7d73cb03e-ovnkube-config\") pod \"ovnkube-node-xmpm6\" (UID: \"c59a9d01-79ce-42d9-a41d-39d7d73cb03e\") " pod="openshift-ovn-kubernetes/ovnkube-node-xmpm6" Jan 27 14:29:29 crc kubenswrapper[4698]: I0127 14:29:29.474222 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/c59a9d01-79ce-42d9-a41d-39d7d73cb03e-node-log\") pod \"ovnkube-node-xmpm6\" (UID: \"c59a9d01-79ce-42d9-a41d-39d7d73cb03e\") " pod="openshift-ovn-kubernetes/ovnkube-node-xmpm6" Jan 27 14:29:29 crc kubenswrapper[4698]: I0127 14:29:29.474273 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/c59a9d01-79ce-42d9-a41d-39d7d73cb03e-host-slash\") pod \"ovnkube-node-xmpm6\" (UID: \"c59a9d01-79ce-42d9-a41d-39d7d73cb03e\") " pod="openshift-ovn-kubernetes/ovnkube-node-xmpm6" Jan 27 14:29:29 crc kubenswrapper[4698]: I0127 14:29:29.474241 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c59a9d01-79ce-42d9-a41d-39d7d73cb03e-run-openvswitch\") pod \"ovnkube-node-xmpm6\" (UID: \"c59a9d01-79ce-42d9-a41d-39d7d73cb03e\") " pod="openshift-ovn-kubernetes/ovnkube-node-xmpm6" Jan 27 14:29:29 crc kubenswrapper[4698]: I0127 14:29:29.474296 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/c59a9d01-79ce-42d9-a41d-39d7d73cb03e-run-systemd\") pod \"ovnkube-node-xmpm6\" (UID: \"c59a9d01-79ce-42d9-a41d-39d7d73cb03e\") " pod="openshift-ovn-kubernetes/ovnkube-node-xmpm6" Jan 27 14:29:29 crc kubenswrapper[4698]: I0127 14:29:29.474501 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/c59a9d01-79ce-42d9-a41d-39d7d73cb03e-env-overrides\") pod \"ovnkube-node-xmpm6\" (UID: \"c59a9d01-79ce-42d9-a41d-39d7d73cb03e\") " pod="openshift-ovn-kubernetes/ovnkube-node-xmpm6" Jan 27 14:29:29 crc kubenswrapper[4698]: I0127 14:29:29.475495 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/c59a9d01-79ce-42d9-a41d-39d7d73cb03e-ovnkube-script-lib\") pod \"ovnkube-node-xmpm6\" (UID: \"c59a9d01-79ce-42d9-a41d-39d7d73cb03e\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-xmpm6" Jan 27 14:29:29 crc kubenswrapper[4698]: I0127 14:29:29.476514 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/c59a9d01-79ce-42d9-a41d-39d7d73cb03e-ovn-node-metrics-cert\") pod \"ovnkube-node-xmpm6\" (UID: \"c59a9d01-79ce-42d9-a41d-39d7d73cb03e\") " pod="openshift-ovn-kubernetes/ovnkube-node-xmpm6" Jan 27 14:29:29 crc kubenswrapper[4698]: I0127 14:29:29.489482 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r7gpg\" (UniqueName: \"kubernetes.io/projected/c59a9d01-79ce-42d9-a41d-39d7d73cb03e-kube-api-access-r7gpg\") pod \"ovnkube-node-xmpm6\" (UID: \"c59a9d01-79ce-42d9-a41d-39d7d73cb03e\") " pod="openshift-ovn-kubernetes/ovnkube-node-xmpm6" Jan 27 14:29:29 crc kubenswrapper[4698]: I0127 14:29:29.497198 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-g9vj8" Jan 27 14:29:29 crc kubenswrapper[4698]: I0127 14:29:29.504450 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-ndrd6" Jan 27 14:29:29 crc kubenswrapper[4698]: W0127 14:29:29.519755 4698 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2776dfc9_913b_42b0_9cf2_6fea98d83bc9.slice/crio-555ac553b4476939b38bf9f55356af5789ac8677b97a3874cdadc0df1678b925 WatchSource:0}: Error finding container 555ac553b4476939b38bf9f55356af5789ac8677b97a3874cdadc0df1678b925: Status 404 returned error can't find the container with id 555ac553b4476939b38bf9f55356af5789ac8677b97a3874cdadc0df1678b925 Jan 27 14:29:29 crc kubenswrapper[4698]: W0127 14:29:29.521322 4698 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3e403fc5_7005_474c_8c75_b7906b481677.slice/crio-a2ddecf5b14a80886c455814185b6129b3523b798c490656404e5ebd5fbe3f5b WatchSource:0}: Error finding container a2ddecf5b14a80886c455814185b6129b3523b798c490656404e5ebd5fbe3f5b: Status 404 returned error can't find the container with id a2ddecf5b14a80886c455814185b6129b3523b798c490656404e5ebd5fbe3f5b Jan 27 14:29:29 crc kubenswrapper[4698]: I0127 14:29:29.583698 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-xmpm6" Jan 27 14:29:29 crc kubenswrapper[4698]: W0127 14:29:29.650144 4698 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc59a9d01_79ce_42d9_a41d_39d7d73cb03e.slice/crio-def4956a0c71ce17e5e156a028a92f25cde69642c1da1c485d5532854ba70206 WatchSource:0}: Error finding container def4956a0c71ce17e5e156a028a92f25cde69642c1da1c485d5532854ba70206: Status 404 returned error can't find the container with id def4956a0c71ce17e5e156a028a92f25cde69642c1da1c485d5532854ba70206 Jan 27 14:29:29 crc kubenswrapper[4698]: I0127 14:29:29.957254 4698 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-21 12:30:26.424471652 +0000 UTC Jan 27 14:29:29 crc kubenswrapper[4698]: I0127 14:29:29.991870 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 14:29:29 crc kubenswrapper[4698]: E0127 14:29:29.992017 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 14:29:30 crc kubenswrapper[4698]: I0127 14:29:30.104412 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-g9vj8" event={"ID":"2776dfc9-913b-42b0-9cf2-6fea98d83bc9","Type":"ContainerStarted","Data":"f471cea0dc934e7c77ac6ead7ec77f8feb7379b50a7bb5255d7100fbea635a3a"} Jan 27 14:29:30 crc kubenswrapper[4698]: I0127 14:29:30.104476 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-g9vj8" event={"ID":"2776dfc9-913b-42b0-9cf2-6fea98d83bc9","Type":"ContainerStarted","Data":"555ac553b4476939b38bf9f55356af5789ac8677b97a3874cdadc0df1678b925"} Jan 27 14:29:30 crc kubenswrapper[4698]: I0127 14:29:30.106857 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ndrd6" event={"ID":"3e403fc5-7005-474c-8c75-b7906b481677","Type":"ContainerStarted","Data":"e6f9227503b5cf0962f3fb8ece18f54d84ff2cdfe5628ebf0fa0cef7fe5695c8"} Jan 27 14:29:30 crc kubenswrapper[4698]: I0127 14:29:30.106898 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ndrd6" event={"ID":"3e403fc5-7005-474c-8c75-b7906b481677","Type":"ContainerStarted","Data":"c533539135e25b310541c9f0b12adcf526b4651a2f6272bce12ea9686dd39ac9"} Jan 27 14:29:30 crc kubenswrapper[4698]: I0127 14:29:30.106913 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ndrd6" event={"ID":"3e403fc5-7005-474c-8c75-b7906b481677","Type":"ContainerStarted","Data":"a2ddecf5b14a80886c455814185b6129b3523b798c490656404e5ebd5fbe3f5b"} Jan 27 14:29:30 crc kubenswrapper[4698]: I0127 14:29:30.108605 4698 generic.go:334] "Generic (PLEG): container finished" podID="e045926d-2303-47ea-b25d-dc23982427e4" containerID="c935f517f9ca140d63d1637ef8c01a4a41b94fa72e1086eb6184ae78a6d8da8c" exitCode=0 Jan 27 14:29:30 crc kubenswrapper[4698]: I0127 14:29:30.108673 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-vg6nd" event={"ID":"e045926d-2303-47ea-b25d-dc23982427e4","Type":"ContainerDied","Data":"c935f517f9ca140d63d1637ef8c01a4a41b94fa72e1086eb6184ae78a6d8da8c"} Jan 27 14:29:30 crc kubenswrapper[4698]: I0127 14:29:30.108721 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-vg6nd" event={"ID":"e045926d-2303-47ea-b25d-dc23982427e4","Type":"ContainerStarted","Data":"68f521792f52fa83374668e39f448446577220f8a0e2ddc842859abaa3d20f60"} Jan 27 14:29:30 crc kubenswrapper[4698]: I0127 14:29:30.110312 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-g2kkn" event={"ID":"4e135f0c-0c36-44f4-afeb-06994affb352","Type":"ContainerStarted","Data":"6d17862aabd67b485023ca018499f7224e0bd91ec0aed6bbce5a187f319e3140"} Jan 27 14:29:30 crc kubenswrapper[4698]: I0127 14:29:30.110347 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-multus/multus-g2kkn" event={"ID":"4e135f0c-0c36-44f4-afeb-06994affb352","Type":"ContainerStarted","Data":"90e74a599cce55dd00407646716625418442bcffcb2639a5747210d96692909a"} Jan 27 14:29:30 crc kubenswrapper[4698]: I0127 14:29:30.111746 4698 generic.go:334] "Generic (PLEG): container finished" podID="c59a9d01-79ce-42d9-a41d-39d7d73cb03e" containerID="b3a80917737e61fdb510dabf332c3bd87fb858d27849c42b77703a8becb66b4c" exitCode=0 Jan 27 14:29:30 crc kubenswrapper[4698]: I0127 14:29:30.111774 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xmpm6" event={"ID":"c59a9d01-79ce-42d9-a41d-39d7d73cb03e","Type":"ContainerDied","Data":"b3a80917737e61fdb510dabf332c3bd87fb858d27849c42b77703a8becb66b4c"} Jan 27 14:29:30 crc kubenswrapper[4698]: I0127 14:29:30.111789 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xmpm6" event={"ID":"c59a9d01-79ce-42d9-a41d-39d7d73cb03e","Type":"ContainerStarted","Data":"def4956a0c71ce17e5e156a028a92f25cde69642c1da1c485d5532854ba70206"} Jan 27 14:29:30 crc kubenswrapper[4698]: I0127 14:29:30.121861 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae7402df1ba165eb747293c1d23e4ea89c50cf2342bf52db437c11c283f68755\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:29:30Z is after 2025-08-24T17:21:41Z" Jan 27 14:29:30 crc kubenswrapper[4698]: I0127 14:29:30.141667 4698 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:24Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:29:30Z is after 2025-08-24T17:21:41Z" Jan 27 14:29:30 crc kubenswrapper[4698]: I0127 14:29:30.154017 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8dae57de05788973b988fee332168db475b66bb71b4a0570e7d95341dfe84a96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:29:30Z is after 2025-08-24T17:21:41Z" Jan 27 14:29:30 crc kubenswrapper[4698]: I0127 14:29:30.166486 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://625a1f8c29f9277903045e66df7a55a4d791239b141aba99572db228ea2735ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://314dcaad0f8949a4671b47667f654bdfacfb6b200438cfbd70401455f6e188b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:29:30Z is after 2025-08-24T17:21:41Z" Jan 27 14:29:30 crc kubenswrapper[4698]: I0127 14:29:30.179451 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:24Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:29:30Z is after 2025-08-24T17:21:41Z" Jan 27 14:29:30 crc kubenswrapper[4698]: I0127 14:29:30.193854 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-g2kkn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4e135f0c-0c36-44f4-afeb-06994affb352\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97l66\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:29:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-g2kkn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:29:30Z is after 2025-08-24T17:21:41Z" Jan 27 14:29:30 crc kubenswrapper[4698]: I0127 14:29:30.210018 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0ce1118e-e5ad-4adb-8d50-c758116b45ec\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dbbe49aa918c711f9ec638a3fb42ef3be38a9ed3bc0cdde639ba8bba11eab4de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9525d1a6372b19bc2f30eb4e8e522833badb6319bac25ecae77b3313acf1b1a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e384a95fdadc32733a7213ba468af64a1a33e7dd08722cbffde3ad0c461d8ba\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9ee5f5f6bb400359196a28fe605c173c38223c8982a991750c91601aeace150\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\
\":\\\"cri-o://9e780429dcca871712b55832b0cd0b3e78b7343569cfa71a87bbb2be1bd3f129\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T14:29:24Z\\\",\\\"message\\\":\\\"espace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0127 14:29:09.491666 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 14:29:09.493926 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-900123067/tls.crt::/tmp/serving-cert-900123067/tls.key\\\\\\\"\\\\nI0127 14:29:24.319898 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 14:29:24.322681 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 14:29:24.322759 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 14:29:24.322814 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 14:29:24.322847 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 14:29:24.331581 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 14:29:24.331608 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 14:29:24.331613 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 14:29:24.331618 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 14:29:24.331622 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 14:29:24.331626 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 14:29:24.331630 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 14:29:24.331748 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 14:29:24.333757 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T14:29:08Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a526cfa90f2ba7d074aa6007c909e68f72d5e7a4b91d893c955e7460ff208cfb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:08Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e1a3e5a71b3094750c736335cd70f45977566e58702231351d9d6b195a0cb4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5e1a3e5a71b3094750c736335cd70f45977566e58702231351d9d6b195a0cb4b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:29:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:29:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:29:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:29:30Z is after 2025-08-24T17:21:41Z" Jan 27 14:29:30 crc kubenswrapper[4698]: I0127 14:29:30.221710 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-ndrd6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e403fc5-7005-474c-8c75-b7906b481677\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:28Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:28Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tt2dz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tt2dz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:29:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-ndrd6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:29:30Z is after 2025-08-24T17:21:41Z" Jan 27 14:29:30 crc kubenswrapper[4698]: I0127 14:29:30.236372 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-vg6nd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e045926d-2303-47ea-b25d-dc23982427e4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:28Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66snv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66snv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"
name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66snv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66snv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66snv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66snv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\
\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66snv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:29:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-vg6nd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:29:30Z is after 2025-08-24T17:21:41Z" Jan 27 14:29:30 crc kubenswrapper[4698]: I0127 14:29:30.256472 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xmpm6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c59a9d01-79ce-42d9-a41d-39d7d73cb03e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:29Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:29Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:29Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r7gpg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r7gpg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r7gpg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-r7gpg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r7gpg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r7gpg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r7gpg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r7gpg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r7gpg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:29:29Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-xmpm6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:29:30Z is after 2025-08-24T17:21:41Z" Jan 27 14:29:30 crc kubenswrapper[4698]: I0127 14:29:30.271019 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:29:30Z is after 2025-08-24T17:21:41Z" Jan 27 14:29:30 crc kubenswrapper[4698]: I0127 14:29:30.283944 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-g9vj8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2776dfc9-913b-42b0-9cf2-6fea98d83bc9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f471cea0dc934e7c77ac6ead7ec77f8feb7379b50a7bb5255d7100fbea635a3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bk7ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:29:28Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-g9vj8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:29:30Z is after 2025-08-24T17:21:41Z" Jan 27 14:29:30 crc kubenswrapper[4698]: I0127 14:29:30.301671 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-g2kkn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4e135f0c-0c36-44f4-afeb-06994affb352\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d17862aabd67b485023ca018499f7224e0bd91ec0aed6bbce5a187f319e3140\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97l66\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:29:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-g2kkn\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:29:30Z is after 2025-08-24T17:21:41Z" Jan 27 14:29:30 crc kubenswrapper[4698]: I0127 14:29:30.319257 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0ce1118e-e5ad-4adb-8d50-c758116b45ec\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dbbe49aa918c711f9ec638a3fb42ef3be38a9ed3bc0cdde639ba8bba11eab4de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9525d1a6372b19bc2f30eb4e8e522833badb6319bac25ecae77b3313acf1b1a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e384a95fdadc32733a7213ba468af64a1a33e7dd08722cbffde3ad0c461d8ba\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.i
o/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9ee5f5f6bb400359196a28fe605c173c38223c8982a991750c91601aeace150\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e780429dcca871712b55832b0cd0b3e78b7343569cfa71a87bbb2be1bd3f129\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T14:29:24Z\\\",\\\"message\\\":\\\"espace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0127 14:29:09.491666 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 14:29:09.493926 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-900123067/tls.crt::/tmp/serving-cert-900123067/tls.key\\\\\\\"\\\\nI0127 14:29:24.319898 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 14:29:24.322681 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 14:29:24.322759 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 14:29:24.322814 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 14:29:24.322847 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 14:29:24.331581 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 14:29:24.331608 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 14:29:24.331613 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 14:29:24.331618 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 14:29:24.331622 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 14:29:24.331626 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 14:29:24.331630 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 14:29:24.331748 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 14:29:24.333757 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T14:29:08Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a526cfa90f2ba7d074aa6007c909e68f72d5e7a4b91d893c955e7460ff208cfb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:08Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e1a3e5a71b3094750c736335cd70f45977566e58702231351d9d6b195a0cb4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5e1a3e5a71b3094750c736335cd70f45977566e58702231351d9d6b195a0cb4b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:29:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:29:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:29:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:29:30Z is after 2025-08-24T17:21:41Z" Jan 27 14:29:30 crc kubenswrapper[4698]: I0127 14:29:30.333369 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-g9vj8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2776dfc9-913b-42b0-9cf2-6fea98d83bc9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f471cea0dc934e7c77ac6ead7ec77f8feb7379b50a7bb5255d7100fbea635a3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bk7ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:29:28Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-g9vj8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:29:30Z is after 2025-08-24T17:21:41Z" Jan 27 14:29:30 crc kubenswrapper[4698]: I0127 14:29:30.345253 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-ndrd6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e403fc5-7005-474c-8c75-b7906b481677\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6f9227503b5cf0962f3fb8ece18f54d84ff2cdfe5628ebf0fa0cef7fe5695c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tt2dz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c533539135e25b310541c9f0b12adcf526b4651a2f6272bce12ea9686dd39ac9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tt2dz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:29:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-ndrd6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:29:30Z is after 2025-08-24T17:21:41Z" Jan 27 14:29:30 crc kubenswrapper[4698]: I0127 14:29:30.359146 4698 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-additional-cni-plugins-vg6nd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e045926d-2303-47ea-b25d-dc23982427e4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:28Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66snv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c935f517f9ca140d63d1637ef8c01a4a41b94fa72e1086eb6184ae78a6d8da8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c935f517f9ca140d63d1637ef8c01a4a41b94fa72e1086eb6184ae78a6d8da8c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:29:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:29:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66snv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\
\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66snv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66snv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66snv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-ap
i-access-66snv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66snv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:29:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-vg6nd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:29:30Z is after 2025-08-24T17:21:41Z" Jan 27 14:29:30 crc kubenswrapper[4698]: I0127 14:29:30.376568 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xmpm6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c59a9d01-79ce-42d9-a41d-39d7d73cb03e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:29Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:29Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r7gpg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r7gpg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r7gpg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-r7gpg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r7gpg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r7gpg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r7gpg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r7gpg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3a80917737e61fdb510dabf332c3bd87fb858d27849c42b77703a8becb66b4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b3a80917737e61fdb510dabf332c3bd87fb858d27849c42b77703a8becb66b4c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:29:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:29:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r7gpg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:29:29Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-xmpm6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:29:30Z 
is after 2025-08-24T17:21:41Z" Jan 27 14:29:30 crc kubenswrapper[4698]: I0127 14:29:30.389139 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:29:30Z is after 2025-08-24T17:21:41Z" Jan 27 14:29:30 crc kubenswrapper[4698]: I0127 14:29:30.399058 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae7402df1ba165eb747293c1d23e4ea89c50cf2342bf52db437c11c283f68755\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:29:30Z is after 2025-08-24T17:21:41Z" Jan 27 14:29:30 crc kubenswrapper[4698]: I0127 14:29:30.410134 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:24Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:29:30Z is after 2025-08-24T17:21:41Z" Jan 27 14:29:30 crc kubenswrapper[4698]: I0127 14:29:30.420937 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8dae57de05788973b988fee332168db475b66bb71b4a0570e7d95341dfe84a96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:29:30Z is after 2025-08-24T17:21:41Z" Jan 27 14:29:30 crc kubenswrapper[4698]: I0127 14:29:30.433745 4698 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:24Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:29:30Z is after 2025-08-24T17:21:41Z" Jan 27 14:29:30 crc kubenswrapper[4698]: I0127 14:29:30.447046 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://625a1f8c29f9277903045e66df7a55a4d791239b141aba99572db228ea2735ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://314dcaad0f8949a4671b47667f654bdfacfb6b200438cfbd70401455f6e188b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:29:30Z is after 2025-08-24T17:21:41Z" Jan 27 14:29:30 crc kubenswrapper[4698]: I0127 14:29:30.712315 4698 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 14:29:30 crc kubenswrapper[4698]: I0127 14:29:30.714077 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:29:30 crc kubenswrapper[4698]: I0127 14:29:30.714205 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 27 14:29:30 crc kubenswrapper[4698]: I0127 14:29:30.714295 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:29:30 crc kubenswrapper[4698]: I0127 14:29:30.714472 4698 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 27 14:29:30 crc kubenswrapper[4698]: I0127 14:29:30.720723 4698 kubelet_node_status.go:115] "Node was previously registered" node="crc" Jan 27 14:29:30 crc kubenswrapper[4698]: I0127 14:29:30.720958 4698 kubelet_node_status.go:79] "Successfully registered node" node="crc" Jan 27 14:29:30 crc kubenswrapper[4698]: I0127 14:29:30.722481 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:29:30 crc kubenswrapper[4698]: I0127 14:29:30.722532 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:29:30 crc kubenswrapper[4698]: I0127 14:29:30.722544 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:29:30 crc kubenswrapper[4698]: I0127 14:29:30.722562 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:29:30 crc kubenswrapper[4698]: I0127 14:29:30.722575 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:29:30Z","lastTransitionTime":"2026-01-27T14:29:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:29:30 crc kubenswrapper[4698]: E0127 14:29:30.739741 4698 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T14:29:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T14:29:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:30Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T14:29:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T14:29:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:30Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI 
configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154a
fa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-
release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9d78a2be-22ac-47e6-a326-83038cc10e0c\\\",\\\"systemUUID\\\":\\\"3b71cf61-a3fa-4076-a23c-5d695e40fc0d\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:29:30Z is after 2025-08-24T17:21:41Z" Jan 27 14:29:30 crc kubenswrapper[4698]: I0127 14:29:30.743651 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:29:30 crc kubenswrapper[4698]: I0127 14:29:30.743692 4698 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:29:30 crc kubenswrapper[4698]: I0127 14:29:30.743706 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:29:30 crc kubenswrapper[4698]: I0127 14:29:30.743722 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:29:30 crc kubenswrapper[4698]: I0127 14:29:30.743734 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:29:30Z","lastTransitionTime":"2026-01-27T14:29:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:29:30 crc kubenswrapper[4698]: E0127 14:29:30.756001 4698 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T14:29:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T14:29:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:30Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T14:29:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T14:29:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:30Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9d78a2be-22ac-47e6-a326-83038cc10e0c\\\",\\\"systemUUID\\\":\\\"3b71cf61-a3fa-4076-a23c-5d695e40fc0d\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:29:30Z is after 2025-08-24T17:21:41Z" Jan 27 14:29:30 crc kubenswrapper[4698]: I0127 14:29:30.759446 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:29:30 crc kubenswrapper[4698]: I0127 14:29:30.759691 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 27 14:29:30 crc kubenswrapper[4698]: I0127 14:29:30.759980 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:29:30 crc kubenswrapper[4698]: I0127 14:29:30.760068 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:29:30 crc kubenswrapper[4698]: I0127 14:29:30.760273 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:29:30Z","lastTransitionTime":"2026-01-27T14:29:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:29:30 crc kubenswrapper[4698]: E0127 14:29:30.773383 4698 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T14:29:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T14:29:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:30Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T14:29:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T14:29:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:30Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9d78a2be-22ac-47e6-a326-83038cc10e0c\\\",\\\"systemUUID\\\":\\\"3b71cf61-a3fa-4076-a23c-5d695e40fc0d\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:29:30Z is after 2025-08-24T17:21:41Z" Jan 27 14:29:30 crc kubenswrapper[4698]: I0127 14:29:30.777984 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:29:30 crc kubenswrapper[4698]: I0127 14:29:30.778032 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 27 14:29:30 crc kubenswrapper[4698]: I0127 14:29:30.778041 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:29:30 crc kubenswrapper[4698]: I0127 14:29:30.778057 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:29:30 crc kubenswrapper[4698]: I0127 14:29:30.778068 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:29:30Z","lastTransitionTime":"2026-01-27T14:29:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:29:30 crc kubenswrapper[4698]: E0127 14:29:30.791235 4698 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T14:29:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T14:29:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:30Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T14:29:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T14:29:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:30Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9d78a2be-22ac-47e6-a326-83038cc10e0c\\\",\\\"systemUUID\\\":\\\"3b71cf61-a3fa-4076-a23c-5d695e40fc0d\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:29:30Z is after 2025-08-24T17:21:41Z" Jan 27 14:29:30 crc kubenswrapper[4698]: I0127 14:29:30.795275 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:29:30 crc kubenswrapper[4698]: I0127 14:29:30.795305 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 27 14:29:30 crc kubenswrapper[4698]: I0127 14:29:30.795313 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:29:30 crc kubenswrapper[4698]: I0127 14:29:30.795327 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:29:30 crc kubenswrapper[4698]: I0127 14:29:30.795339 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:29:30Z","lastTransitionTime":"2026-01-27T14:29:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:29:30 crc kubenswrapper[4698]: E0127 14:29:30.807192 4698 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T14:29:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T14:29:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:30Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T14:29:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T14:29:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:30Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9d78a2be-22ac-47e6-a326-83038cc10e0c\\\",\\\"systemUUID\\\":\\\"3b71cf61-a3fa-4076-a23c-5d695e40fc0d\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:29:30Z is after 2025-08-24T17:21:41Z" Jan 27 14:29:30 crc kubenswrapper[4698]: E0127 14:29:30.807369 4698 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 27 14:29:30 crc kubenswrapper[4698]: I0127 14:29:30.808895 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 27 14:29:30 crc kubenswrapper[4698]: I0127 14:29:30.808925 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:29:30 crc kubenswrapper[4698]: I0127 14:29:30.808935 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:29:30 crc kubenswrapper[4698]: I0127 14:29:30.808953 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:29:30 crc kubenswrapper[4698]: I0127 14:29:30.808964 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:29:30Z","lastTransitionTime":"2026-01-27T14:29:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:29:30 crc kubenswrapper[4698]: I0127 14:29:30.911331 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:29:30 crc kubenswrapper[4698]: I0127 14:29:30.911402 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:29:30 crc kubenswrapper[4698]: I0127 14:29:30.911414 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:29:30 crc kubenswrapper[4698]: I0127 14:29:30.911445 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:29:30 crc kubenswrapper[4698]: I0127 14:29:30.911476 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:29:30Z","lastTransitionTime":"2026-01-27T14:29:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:29:30 crc kubenswrapper[4698]: I0127 14:29:30.958266 4698 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-10 13:43:04.848188115 +0000 UTC Jan 27 14:29:30 crc kubenswrapper[4698]: I0127 14:29:30.991924 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 14:29:30 crc kubenswrapper[4698]: I0127 14:29:30.991937 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 14:29:30 crc kubenswrapper[4698]: E0127 14:29:30.992054 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 14:29:30 crc kubenswrapper[4698]: E0127 14:29:30.992169 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 14:29:31 crc kubenswrapper[4698]: I0127 14:29:31.014311 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:29:31 crc kubenswrapper[4698]: I0127 14:29:31.014549 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:29:31 crc kubenswrapper[4698]: I0127 14:29:31.014566 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:29:31 crc kubenswrapper[4698]: I0127 14:29:31.014584 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:29:31 crc kubenswrapper[4698]: I0127 14:29:31.014595 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:29:31Z","lastTransitionTime":"2026-01-27T14:29:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:29:31 crc kubenswrapper[4698]: I0127 14:29:31.117110 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:29:31 crc kubenswrapper[4698]: I0127 14:29:31.117152 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:29:31 crc kubenswrapper[4698]: I0127 14:29:31.117163 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:29:31 crc kubenswrapper[4698]: I0127 14:29:31.117176 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:29:31 crc kubenswrapper[4698]: I0127 14:29:31.117185 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:29:31Z","lastTransitionTime":"2026-01-27T14:29:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:29:31 crc kubenswrapper[4698]: I0127 14:29:31.118684 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-vg6nd" event={"ID":"e045926d-2303-47ea-b25d-dc23982427e4","Type":"ContainerStarted","Data":"aa061cb18143e07dc72a6b75d78b5959b40a9969030ff21d402863b85eba6f91"} Jan 27 14:29:31 crc kubenswrapper[4698]: I0127 14:29:31.123591 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xmpm6" event={"ID":"c59a9d01-79ce-42d9-a41d-39d7d73cb03e","Type":"ContainerStarted","Data":"c98fd6e3005c979f95419686ec2143aa282fa56d7306ecaff766759e34b68b2c"} Jan 27 14:29:31 crc kubenswrapper[4698]: I0127 14:29:31.123696 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xmpm6" event={"ID":"c59a9d01-79ce-42d9-a41d-39d7d73cb03e","Type":"ContainerStarted","Data":"7e38585fe9db52e6e7cc5c3a41e23a0bb761977a3b25fd281d86458f422e3791"} Jan 27 14:29:31 crc kubenswrapper[4698]: I0127 14:29:31.123710 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xmpm6" event={"ID":"c59a9d01-79ce-42d9-a41d-39d7d73cb03e","Type":"ContainerStarted","Data":"ea6d17f5861064198539f0b84b559f9d28cfa5decaa89a9059469af5a3e12e09"} Jan 27 14:29:31 crc kubenswrapper[4698]: I0127 14:29:31.123723 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xmpm6" event={"ID":"c59a9d01-79ce-42d9-a41d-39d7d73cb03e","Type":"ContainerStarted","Data":"d81778ae40167de932debf7266f3c793d77edeace2116220c21cf89e15e4cc98"} Jan 27 14:29:31 crc kubenswrapper[4698]: I0127 14:29:31.123733 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xmpm6" event={"ID":"c59a9d01-79ce-42d9-a41d-39d7d73cb03e","Type":"ContainerStarted","Data":"b80c023f92c9854f8c16b6b50c10dac58f968938615375019ba2875d9cd69e5b"} Jan 27 14:29:31 crc kubenswrapper[4698]: I0127 14:29:31.123743 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xmpm6" event={"ID":"c59a9d01-79ce-42d9-a41d-39d7d73cb03e","Type":"ContainerStarted","Data":"87e7499ab247dff50eee2a5e113ed36ff00c467e8e1bcdfee45e8835815d4b81"} Jan 27 14:29:31 crc kubenswrapper[4698]: I0127 14:29:31.142996 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-vg6nd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e045926d-2303-47ea-b25d-dc23982427e4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:28Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66snv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c935f517f9ca140d63d1637ef8c01a4a41b94fa72e1086eb6184ae78a6d8da8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c935f517f9ca140d63d1637ef8c01a4a41b94fa72e1086eb6184ae78a6d8da8c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:29:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:29:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66snv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa061cb18143e07dc72a6b75d78b5959b40a9969030ff21d402863b85eba6f91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa061cb18143e07dc72a6b75d78b5959b40a9969030ff21d402863b85eba6f91\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:29:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:29:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66snv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66snv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66snv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66snv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/r
un/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66snv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:29:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-vg6nd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:29:31Z is after 2025-08-24T17:21:41Z" Jan 27 14:29:31 crc kubenswrapper[4698]: I0127 14:29:31.167667 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xmpm6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c59a9d01-79ce-42d9-a41d-39d7d73cb03e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:29Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:29Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r7gpg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r7gpg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r7gpg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-r7gpg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r7gpg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r7gpg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r7gpg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r7gpg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3a80917737e61fdb510dabf332c3bd87fb858d27849c42b77703a8becb66b4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b3a80917737e61fdb510dabf332c3bd87fb858d27849c42b77703a8becb66b4c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:29:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:29:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r7gpg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:29:29Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-xmpm6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:29:31Z 
is after 2025-08-24T17:21:41Z" Jan 27 14:29:31 crc kubenswrapper[4698]: I0127 14:29:31.184893 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:29:31Z is after 2025-08-24T17:21:41Z" Jan 27 14:29:31 crc kubenswrapper[4698]: I0127 14:29:31.198988 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-g9vj8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2776dfc9-913b-42b0-9cf2-6fea98d83bc9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f471cea0dc934e7c77ac6ead7ec77f8feb7379b50a7bb5255d7100fbea635a3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bk7ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:29:28Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-g9vj8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:29:31Z is after 2025-08-24T17:21:41Z" Jan 27 14:29:31 crc kubenswrapper[4698]: I0127 14:29:31.214621 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-ndrd6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e403fc5-7005-474c-8c75-b7906b481677\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6f9227503b5cf0962f3fb8ece18f54d84ff2cdfe5628ebf0fa0cef7fe5695c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tt2dz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c533539135e25b310541c9f0b12adcf526b4651a2f6272bce12ea9686dd39ac9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tt2dz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:29:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-ndrd6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:29:31Z is after 2025-08-24T17:21:41Z" Jan 27 14:29:31 crc kubenswrapper[4698]: I0127 14:29:31.219992 4698 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:29:31 crc kubenswrapper[4698]: I0127 14:29:31.220034 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:29:31 crc kubenswrapper[4698]: I0127 14:29:31.220045 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:29:31 crc kubenswrapper[4698]: I0127 14:29:31.220062 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:29:31 crc kubenswrapper[4698]: I0127 14:29:31.220075 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:29:31Z","lastTransitionTime":"2026-01-27T14:29:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:29:31 crc kubenswrapper[4698]: I0127 14:29:31.235003 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:24Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:29:31Z is after 2025-08-24T17:21:41Z" Jan 27 14:29:31 crc kubenswrapper[4698]: I0127 14:29:31.259830 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8dae57de05788973b988fee332168db475b66bb71b4a0570e7d95341dfe84a96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:29:31Z is after 2025-08-24T17:21:41Z" Jan 27 14:29:31 crc kubenswrapper[4698]: I0127 14:29:31.291132 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 27 14:29:31 crc kubenswrapper[4698]: I0127 14:29:31.300392 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 27 14:29:31 crc kubenswrapper[4698]: I0127 14:29:31.301030 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae7402df1ba165eb747293c1d23e4ea89c50cf2342bf52db437c11c283f68755\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:29:31Z is after 2025-08-24T17:21:41Z" Jan 27 14:29:31 crc kubenswrapper[4698]: I0127 14:29:31.309082 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-crc"] Jan 27 14:29:31 crc kubenswrapper[4698]: I0127 14:29:31.322720 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:29:31 crc kubenswrapper[4698]: I0127 14:29:31.322775 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:29:31 crc kubenswrapper[4698]: I0127 14:29:31.322790 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:29:31 crc kubenswrapper[4698]: I0127 14:29:31.322809 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:29:31 crc kubenswrapper[4698]: I0127 14:29:31.322820 4698 setters.go:603] 
"Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:29:31Z","lastTransitionTime":"2026-01-27T14:29:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:29:31 crc kubenswrapper[4698]: I0127 14:29:31.324062 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:24Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:29:31Z is after 2025-08-24T17:21:41Z" Jan 27 14:29:31 crc kubenswrapper[4698]: I0127 14:29:31.336485 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://625a1f8c29f9277903045e66df7a55a4d791239b141aba99572db228ea2735ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://314dcaad0f8949a4671b47667f654bdfacfb6b200438cfbd70401455f6e188b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mount
Path\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:29:31Z is after 2025-08-24T17:21:41Z" Jan 27 14:29:31 crc kubenswrapper[4698]: I0127 14:29:31.351913 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0ce1118e-e5ad-4adb-8d50-c758116b45ec\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dbbe49aa918c711f9ec638a3fb42ef3be38a9ed3bc0cdde639ba8bba11eab4de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9525d1a6372b19bc2f30eb4e8e522833badb6319bac25ecae77b3313acf1b1a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:07Z\\\"}},
\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e384a95fdadc32733a7213ba468af64a1a33e7dd08722cbffde3ad0c461d8ba\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9ee5f5f6bb400359196a28fe605c173c38223c8982a991750c91601aeace150\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e780429dcca871712b55832b0cd0b3e78b7343569cfa71a87bbb2be1bd3f129\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T14:29:24Z\\\",\\\"message\\\":\\\"espace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0127 14:29:09.491666 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 14:29:09.493926 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-900123067/tls.crt::/tmp/serving-cert-900123067/tls.key\\\\\\\"\\\\nI0127 14:29:24.319898 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 14:29:24.322681 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 14:29:24.322759 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 14:29:24.322814 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 14:29:24.322847 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 14:29:24.331581 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 14:29:24.331608 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 14:29:24.331613 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 14:29:24.331618 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 14:29:24.331622 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 14:29:24.331626 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 14:29:24.331630 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 14:29:24.331748 1 genericapiserver.go:533] 
MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 14:29:24.333757 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T14:29:08Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a526cfa90f2ba7d074aa6007c909e68f72d5e7a4b91d893c955e7460ff208cfb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:08Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e1a3e5a71b3094750c736335cd70f45977566e58702231351d9d6b195a0cb4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5e1a3e5a71b3094750c736335cd70f45977566e58702231351d9d6b195a0cb4b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:29:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:29:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:29:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:29:31Z is after 2025-08-24T17:21:41Z" Jan 27 14:29:31 crc kubenswrapper[4698]: I0127 14:29:31.367693 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-g2kkn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4e135f0c-0c36-44f4-afeb-06994affb352\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d17862aabd67b485023ca018499f7224e0bd91ec0aed6bbce5a187f319e3140\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97l66\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:29:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-g2kkn\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:29:31Z is after 2025-08-24T17:21:41Z" Jan 27 14:29:31 crc kubenswrapper[4698]: I0127 14:29:31.382829 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0ce1118e-e5ad-4adb-8d50-c758116b45ec\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dbbe49aa918c711f9ec638a3fb42ef3be38a9ed3bc0cdde639ba8bba11eab4de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9525d1a6372b19bc2f30eb4e8e522833badb6319bac25ecae77b3313acf1b1a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e384a95fdadc32733a7213ba468af64a1a33e7dd08722cbffde3ad0c461d8ba\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.i
o/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9ee5f5f6bb400359196a28fe605c173c38223c8982a991750c91601aeace150\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e780429dcca871712b55832b0cd0b3e78b7343569cfa71a87bbb2be1bd3f129\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T14:29:24Z\\\",\\\"message\\\":\\\"espace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0127 14:29:09.491666 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 14:29:09.493926 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-900123067/tls.crt::/tmp/serving-cert-900123067/tls.key\\\\\\\"\\\\nI0127 14:29:24.319898 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 14:29:24.322681 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 14:29:24.322759 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 14:29:24.322814 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 14:29:24.322847 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 14:29:24.331581 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 14:29:24.331608 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 14:29:24.331613 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 14:29:24.331618 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 14:29:24.331622 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 14:29:24.331626 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 14:29:24.331630 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 14:29:24.331748 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 14:29:24.333757 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T14:29:08Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a526cfa90f2ba7d074aa6007c909e68f72d5e7a4b91d893c955e7460ff208cfb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:08Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e1a3e5a71b3094750c736335cd70f45977566e58702231351d9d6b195a0cb4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5e1a3e5a71b3094750c736335cd70f45977566e58702231351d9d6b195a0cb4b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:29:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:29:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:29:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:29:31Z is after 2025-08-24T17:21:41Z" Jan 27 14:29:31 crc kubenswrapper[4698]: I0127 14:29:31.395570 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"44c70587-abb1-4e02-ab7e-223d57817925\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b1a209c31294456c43022bb9df5fd230415596fc43ecdb2b6349114989c3ad8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://66b4b527d7836ceed419f9b8a1d851047efc8c052c110ac857316a6a73444333\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://94a4d586611f4ca0864567739d1ed225b7c63ebd6e6be9e5c0e385379de2ad01\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://95357819c206b188c1fd6bb937b2a57b480a9a61e95143de444141b16e4963ea\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:29:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:29:31Z is after 2025-08-24T17:21:41Z" Jan 27 14:29:31 crc kubenswrapper[4698]: I0127 14:29:31.409165 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-g2kkn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4e135f0c-0c36-44f4-afeb-06994affb352\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d17862aabd67b485023ca018499f7224e0bd91ec0aed6bbce5a187f319e3140\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run
/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97l66\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:29:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-g2kkn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:29:31Z is after 2025-08-24T17:21:41Z" Jan 27 14:29:31 crc kubenswrapper[4698]: I0127 14:29:31.423624 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:29:31Z is after 2025-08-24T17:21:41Z" Jan 27 14:29:31 crc kubenswrapper[4698]: I0127 14:29:31.425873 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:29:31 crc kubenswrapper[4698]: I0127 14:29:31.425902 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:29:31 crc kubenswrapper[4698]: I0127 14:29:31.425913 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:29:31 crc kubenswrapper[4698]: I0127 14:29:31.425958 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:29:31 crc kubenswrapper[4698]: I0127 14:29:31.425969 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:29:31Z","lastTransitionTime":"2026-01-27T14:29:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:29:31 crc kubenswrapper[4698]: I0127 14:29:31.434310 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-g9vj8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2776dfc9-913b-42b0-9cf2-6fea98d83bc9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f471cea0dc934e7c77ac6ead7ec77f8feb7379b50a7bb5255d7100fbea635a3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bk7ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:29:28Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-g9vj8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:29:31Z is after 2025-08-24T17:21:41Z" Jan 27 14:29:31 crc kubenswrapper[4698]: I0127 14:29:31.445792 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-ndrd6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e403fc5-7005-474c-8c75-b7906b481677\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6f9227503b5cf0962f3fb8ece18f54d84ff2cdfe5628ebf0fa0cef7fe5695c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tt2dz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c533539135e25b310541c9f0b12adcf526b4651a2f6272bce12ea9686dd39ac9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tt2dz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:29:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-ndrd6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:29:31Z is after 2025-08-24T17:21:41Z" Jan 27 14:29:31 crc kubenswrapper[4698]: I0127 14:29:31.461182 4698 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-additional-cni-plugins-vg6nd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e045926d-2303-47ea-b25d-dc23982427e4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:28Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66snv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c935f517f9ca140d63d1637ef8c01a4a41b94fa72e1086eb6184ae78a6d8da8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c935f517f9ca140d63d1637ef8c01a4a41b94fa72e1086eb6184ae78a6d8da8c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:29:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:29:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66snv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"c
ontainerID\\\":\\\"cri-o://aa061cb18143e07dc72a6b75d78b5959b40a9969030ff21d402863b85eba6f91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa061cb18143e07dc72a6b75d78b5959b40a9969030ff21d402863b85eba6f91\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:29:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:29:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66snv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66snv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66snv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}
},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66snv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66snv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:29:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-vg6nd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:29:31Z is after 2025-08-24T17:21:41Z" Jan 27 14:29:31 crc kubenswrapper[4698]: I0127 14:29:31.481752 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xmpm6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c59a9d01-79ce-42d9-a41d-39d7d73cb03e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:29Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:29Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r7gpg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r7gpg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r7gpg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-r7gpg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r7gpg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r7gpg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r7gpg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r7gpg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3a80917737e61fdb510dabf332c3bd87fb858d27849c42b77703a8becb66b4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b3a80917737e61fdb510dabf332c3bd87fb858d27849c42b77703a8becb66b4c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:29:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:29:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r7gpg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:29:29Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-xmpm6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:29:31Z 
is after 2025-08-24T17:21:41Z" Jan 27 14:29:31 crc kubenswrapper[4698]: I0127 14:29:31.497091 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae7402df1ba165eb747293c1d23e4ea89c50cf2342bf52db437c11c283f68755\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:29:31Z is after 2025-08-24T17:21:41Z" Jan 27 14:29:31 crc kubenswrapper[4698]: I0127 14:29:31.512414 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:24Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:29:31Z is after 2025-08-24T17:21:41Z" Jan 27 14:29:31 crc kubenswrapper[4698]: I0127 14:29:31.526594 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8dae57de05788973b988fee332168db475b66bb71b4a0570e7d95341dfe84a96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:29:31Z is after 2025-08-24T17:21:41Z" Jan 27 14:29:31 crc kubenswrapper[4698]: I0127 14:29:31.528948 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:29:31 crc kubenswrapper[4698]: I0127 14:29:31.528989 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:29:31 crc kubenswrapper[4698]: I0127 14:29:31.529000 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:29:31 crc kubenswrapper[4698]: I0127 14:29:31.529015 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:29:31 crc kubenswrapper[4698]: I0127 14:29:31.529027 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:29:31Z","lastTransitionTime":"2026-01-27T14:29:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:29:31 crc kubenswrapper[4698]: I0127 14:29:31.539258 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:24Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:29:31Z is after 2025-08-24T17:21:41Z" Jan 27 14:29:31 crc kubenswrapper[4698]: I0127 14:29:31.553675 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://625a1f8c29f9277903045e66df7a55a4d791239b141aba99572db228ea2735ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://314dcaad0f8949a4671b47667f654bdfacfb6b200438cfbd70401455f6e188b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mount
Path\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:29:31Z is after 2025-08-24T17:21:41Z" Jan 27 14:29:31 crc kubenswrapper[4698]: I0127 14:29:31.632053 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:29:31 crc kubenswrapper[4698]: I0127 14:29:31.632101 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:29:31 crc kubenswrapper[4698]: I0127 14:29:31.632112 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:29:31 crc kubenswrapper[4698]: I0127 14:29:31.632128 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:29:31 crc kubenswrapper[4698]: I0127 14:29:31.632141 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:29:31Z","lastTransitionTime":"2026-01-27T14:29:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:29:31 crc kubenswrapper[4698]: I0127 14:29:31.735001 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:29:31 crc kubenswrapper[4698]: I0127 14:29:31.735039 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:29:31 crc kubenswrapper[4698]: I0127 14:29:31.735050 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:29:31 crc kubenswrapper[4698]: I0127 14:29:31.735065 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:29:31 crc kubenswrapper[4698]: I0127 14:29:31.735077 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:29:31Z","lastTransitionTime":"2026-01-27T14:29:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:29:31 crc kubenswrapper[4698]: I0127 14:29:31.827565 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/node-ca-flx9b"] Jan 27 14:29:31 crc kubenswrapper[4698]: I0127 14:29:31.827989 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-flx9b" Jan 27 14:29:31 crc kubenswrapper[4698]: I0127 14:29:31.829708 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Jan 27 14:29:31 crc kubenswrapper[4698]: I0127 14:29:31.829749 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Jan 27 14:29:31 crc kubenswrapper[4698]: I0127 14:29:31.829806 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Jan 27 14:29:31 crc kubenswrapper[4698]: I0127 14:29:31.831053 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Jan 27 14:29:31 crc kubenswrapper[4698]: I0127 14:29:31.837241 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:29:31 crc kubenswrapper[4698]: I0127 14:29:31.837285 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:29:31 crc kubenswrapper[4698]: I0127 14:29:31.837298 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:29:31 crc kubenswrapper[4698]: I0127 14:29:31.837316 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:29:31 crc kubenswrapper[4698]: I0127 14:29:31.837331 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:29:31Z","lastTransitionTime":"2026-01-27T14:29:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:29:31 crc kubenswrapper[4698]: I0127 14:29:31.841209 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:24Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:29:31Z is after 2025-08-24T17:21:41Z" Jan 27 14:29:31 crc kubenswrapper[4698]: I0127 14:29:31.855299 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://625a1f8c29f9277903045e66df7a55a4d791239b141aba99572db228ea2735ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://314dcaad0f8949a4671b47667f654bdfacfb6b200438cfbd70401455f6e188b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mount
Path\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:29:31Z is after 2025-08-24T17:21:41Z" Jan 27 14:29:31 crc kubenswrapper[4698]: I0127 14:29:31.867805 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0ce1118e-e5ad-4adb-8d50-c758116b45ec\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dbbe49aa918c711f9ec638a3fb42ef3be38a9ed3bc0cdde639ba8bba11eab4de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9525d1a6372b19bc2f30eb4e8e522833badb6319bac25ecae77b3313acf1b1a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:07Z\\\"}},
\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e384a95fdadc32733a7213ba468af64a1a33e7dd08722cbffde3ad0c461d8ba\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9ee5f5f6bb400359196a28fe605c173c38223c8982a991750c91601aeace150\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e780429dcca871712b55832b0cd0b3e78b7343569cfa71a87bbb2be1bd3f129\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T14:29:24Z\\\",\\\"message\\\":\\\"espace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0127 14:29:09.491666 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 14:29:09.493926 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-900123067/tls.crt::/tmp/serving-cert-900123067/tls.key\\\\\\\"\\\\nI0127 14:29:24.319898 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 14:29:24.322681 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 14:29:24.322759 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 14:29:24.322814 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 14:29:24.322847 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 14:29:24.331581 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 14:29:24.331608 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 14:29:24.331613 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 14:29:24.331618 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 14:29:24.331622 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 14:29:24.331626 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 14:29:24.331630 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 14:29:24.331748 1 genericapiserver.go:533] 
MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 14:29:24.333757 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T14:29:08Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a526cfa90f2ba7d074aa6007c909e68f72d5e7a4b91d893c955e7460ff208cfb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:08Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e1a3e5a71b3094750c736335cd70f45977566e58702231351d9d6b195a0cb4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5e1a3e5a71b3094750c736335cd70f45977566e58702231351d9d6b195a0cb4b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:29:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:29:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:29:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:29:31Z is after 2025-08-24T17:21:41Z" Jan 27 14:29:31 crc kubenswrapper[4698]: I0127 14:29:31.880807 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"44c70587-abb1-4e02-ab7e-223d57817925\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b1a209c31294456c43022bb9df5fd230415596fc43ecdb2b6349114989c3ad8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://66b4b527d7836ceed419f9b8a1d851047efc8c052c110ac857316a6a73444333\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://94a4d586611f4ca0864567739d1ed225b7c63ebd6e6be9e5c0e385379de2ad01\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://95357819c206b188c1fd6bb937b2a57b480a9a61e95143de444141b16e4963ea\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:29:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:29:31Z is after 2025-08-24T17:21:41Z" Jan 27 14:29:31 crc kubenswrapper[4698]: I0127 14:29:31.895745 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-g2kkn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4e135f0c-0c36-44f4-afeb-06994affb352\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d17862aabd67b485023ca018499f7224e0bd91ec0aed6bbce5a187f319e3140\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run
/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97l66\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:29:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-g2kkn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:29:31Z is after 2025-08-24T17:21:41Z" Jan 27 14:29:31 crc kubenswrapper[4698]: I0127 14:29:31.896456 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/2c9fcf55-4a50-4a87-937b-975bc7e00bfa-serviceca\") pod \"node-ca-flx9b\" (UID: \"2c9fcf55-4a50-4a87-937b-975bc7e00bfa\") " pod="openshift-image-registry/node-ca-flx9b" Jan 27 14:29:31 crc kubenswrapper[4698]: I0127 14:29:31.896503 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/2c9fcf55-4a50-4a87-937b-975bc7e00bfa-host\") pod \"node-ca-flx9b\" (UID: \"2c9fcf55-4a50-4a87-937b-975bc7e00bfa\") " pod="openshift-image-registry/node-ca-flx9b" Jan 27 14:29:31 crc kubenswrapper[4698]: I0127 14:29:31.896525 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-spqpw\" (UniqueName: \"kubernetes.io/projected/2c9fcf55-4a50-4a87-937b-975bc7e00bfa-kube-api-access-spqpw\") pod \"node-ca-flx9b\" (UID: \"2c9fcf55-4a50-4a87-937b-975bc7e00bfa\") " pod="openshift-image-registry/node-ca-flx9b" Jan 27 14:29:31 crc kubenswrapper[4698]: I0127 14:29:31.915652 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xmpm6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c59a9d01-79ce-42d9-a41d-39d7d73cb03e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:29Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:29Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r7gpg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r7gpg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-op
envswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r7gpg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r7gpg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r7gpg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r7gpg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{
},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r7gpg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r7gpg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3a80917737e61fdb510dabf332c3bd87fb858d27849c42b77703a8becb66b4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36
cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b3a80917737e61fdb510dabf332c3bd87fb858d27849c42b77703a8becb66b4c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:29:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:29:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r7gpg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:29:29Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-xmpm6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:29:31Z is after 2025-08-24T17:21:41Z" Jan 27 14:29:31 crc kubenswrapper[4698]: I0127 14:29:31.930262 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:29:31Z is after 2025-08-24T17:21:41Z" Jan 27 14:29:31 crc kubenswrapper[4698]: I0127 14:29:31.940245 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:29:31 crc kubenswrapper[4698]: I0127 14:29:31.940293 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:29:31 crc kubenswrapper[4698]: I0127 14:29:31.940306 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:29:31 crc kubenswrapper[4698]: I0127 14:29:31.940324 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:29:31 crc kubenswrapper[4698]: I0127 14:29:31.940336 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:29:31Z","lastTransitionTime":"2026-01-27T14:29:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:29:31 crc kubenswrapper[4698]: I0127 14:29:31.942537 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-g9vj8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2776dfc9-913b-42b0-9cf2-6fea98d83bc9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f471cea0dc934e7c77ac6ead7ec77f8feb7379b50a7bb5255d7100fbea635a3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bk7ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:29:28Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-g9vj8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:29:31Z is after 2025-08-24T17:21:41Z" Jan 27 14:29:31 crc kubenswrapper[4698]: I0127 14:29:31.955795 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-ndrd6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e403fc5-7005-474c-8c75-b7906b481677\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6f9227503b5cf0962f3fb8ece18f54d84ff2cdfe5628ebf0fa0cef7fe5695c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tt2dz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c533539135e25b310541c9f0b12adcf526b4651a2f6272bce12ea9686dd39ac9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tt2dz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:29:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-ndrd6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:29:31Z is after 2025-08-24T17:21:41Z" Jan 27 14:29:31 crc kubenswrapper[4698]: I0127 14:29:31.958374 4698 certificate_manager.go:356] 
kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-17 10:32:29.240899337 +0000 UTC Jan 27 14:29:31 crc kubenswrapper[4698]: I0127 14:29:31.971460 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-vg6nd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e045926d-2303-47ea-b25d-dc23982427e4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:28Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66snv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c935f517f9ca140d63d1637ef8c01a4a41b94fa72e1086eb6184ae78a6d8da8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c935f517f9ca140d63d1637ef8c01a4a41b94fa72e1086eb6184ae78a6d8da8c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:29:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:29:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"r
eadOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66snv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa061cb18143e07dc72a6b75d78b5959b40a9969030ff21d402863b85eba6f91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa061cb18143e07dc72a6b75d78b5959b40a9969030ff21d402863b85eba6f91\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:29:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:29:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66snv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66snv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66snv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e
8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66snv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66snv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:29:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-vg6nd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:29:31Z is after 2025-08-24T17:21:41Z" Jan 27 14:29:31 crc kubenswrapper[4698]: I0127 14:29:31.983144 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8dae57de05788973b988fee332168db475b66bb71b4a0570e7d95341dfe84a96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:29:31Z is after 2025-08-24T17:21:41Z" Jan 27 14:29:31 crc kubenswrapper[4698]: I0127 14:29:31.991811 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 14:29:31 crc kubenswrapper[4698]: E0127 14:29:31.991928 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 14:29:31 crc kubenswrapper[4698]: I0127 14:29:31.995857 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-flx9b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2c9fcf55-4a50-4a87-937b-975bc7e00bfa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:31Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:31Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-spqpw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:29:31Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-flx9b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:29:31Z is after 2025-08-24T17:21:41Z" Jan 27 14:29:31 crc kubenswrapper[4698]: I0127 14:29:31.997455 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-spqpw\" (UniqueName: \"kubernetes.io/projected/2c9fcf55-4a50-4a87-937b-975bc7e00bfa-kube-api-access-spqpw\") pod \"node-ca-flx9b\" (UID: \"2c9fcf55-4a50-4a87-937b-975bc7e00bfa\") " pod="openshift-image-registry/node-ca-flx9b" Jan 27 14:29:31 crc kubenswrapper[4698]: I0127 14:29:31.997542 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/2c9fcf55-4a50-4a87-937b-975bc7e00bfa-serviceca\") pod \"node-ca-flx9b\" (UID: \"2c9fcf55-4a50-4a87-937b-975bc7e00bfa\") " 
pod="openshift-image-registry/node-ca-flx9b" Jan 27 14:29:31 crc kubenswrapper[4698]: I0127 14:29:31.997576 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/2c9fcf55-4a50-4a87-937b-975bc7e00bfa-host\") pod \"node-ca-flx9b\" (UID: \"2c9fcf55-4a50-4a87-937b-975bc7e00bfa\") " pod="openshift-image-registry/node-ca-flx9b" Jan 27 14:29:31 crc kubenswrapper[4698]: I0127 14:29:31.997692 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/2c9fcf55-4a50-4a87-937b-975bc7e00bfa-host\") pod \"node-ca-flx9b\" (UID: \"2c9fcf55-4a50-4a87-937b-975bc7e00bfa\") " pod="openshift-image-registry/node-ca-flx9b" Jan 27 14:29:31 crc kubenswrapper[4698]: I0127 14:29:31.999095 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/2c9fcf55-4a50-4a87-937b-975bc7e00bfa-serviceca\") pod \"node-ca-flx9b\" (UID: \"2c9fcf55-4a50-4a87-937b-975bc7e00bfa\") " pod="openshift-image-registry/node-ca-flx9b" Jan 27 14:29:32 crc kubenswrapper[4698]: I0127 14:29:32.013139 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae7402df1ba165eb747293c1d23e4ea89c50cf2342bf52db437c11c283f68755\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:29:32Z is after 2025-08-24T17:21:41Z" Jan 27 14:29:32 crc kubenswrapper[4698]: I0127 14:29:32.025206 4698 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-spqpw\" (UniqueName: \"kubernetes.io/projected/2c9fcf55-4a50-4a87-937b-975bc7e00bfa-kube-api-access-spqpw\") pod \"node-ca-flx9b\" (UID: \"2c9fcf55-4a50-4a87-937b-975bc7e00bfa\") " pod="openshift-image-registry/node-ca-flx9b" Jan 27 14:29:32 crc kubenswrapper[4698]: I0127 14:29:32.026919 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:24Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:29:32Z is after 2025-08-24T17:21:41Z" Jan 27 14:29:32 crc kubenswrapper[4698]: I0127 14:29:32.042620 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:29:32 crc kubenswrapper[4698]: I0127 14:29:32.042681 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:29:32 crc kubenswrapper[4698]: I0127 14:29:32.042693 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:29:32 crc kubenswrapper[4698]: I0127 14:29:32.042709 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:29:32 crc kubenswrapper[4698]: I0127 14:29:32.042721 4698 
setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:29:32Z","lastTransitionTime":"2026-01-27T14:29:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:29:32 crc kubenswrapper[4698]: I0127 14:29:32.129525 4698 generic.go:334] "Generic (PLEG): container finished" podID="e045926d-2303-47ea-b25d-dc23982427e4" containerID="aa061cb18143e07dc72a6b75d78b5959b40a9969030ff21d402863b85eba6f91" exitCode=0 Jan 27 14:29:32 crc kubenswrapper[4698]: I0127 14:29:32.129554 4698 generic.go:334] "Generic (PLEG): container finished" podID="e045926d-2303-47ea-b25d-dc23982427e4" containerID="be40eff60b17b2c010a5e4394fb6843ceb8ee2752781ccbf0030afdd212b81e2" exitCode=0 Jan 27 14:29:32 crc kubenswrapper[4698]: I0127 14:29:32.129611 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-vg6nd" event={"ID":"e045926d-2303-47ea-b25d-dc23982427e4","Type":"ContainerDied","Data":"aa061cb18143e07dc72a6b75d78b5959b40a9969030ff21d402863b85eba6f91"} Jan 27 14:29:32 crc kubenswrapper[4698]: I0127 14:29:32.129706 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-vg6nd" event={"ID":"e045926d-2303-47ea-b25d-dc23982427e4","Type":"ContainerDied","Data":"be40eff60b17b2c010a5e4394fb6843ceb8ee2752781ccbf0030afdd212b81e2"} Jan 27 14:29:32 crc kubenswrapper[4698]: I0127 14:29:32.144433 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-ndrd6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e403fc5-7005-474c-8c75-b7906b481677\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6f9227503b5cf0962f3fb8ece18f54d84ff2cdfe5628ebf0fa0cef7fe5695c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tt2dz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c533539135e25b310541c9f0b12adcf526b4651a2f6272bce12ea9686dd39ac9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tt2dz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:29:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-ndrd6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:29:32Z is after 2025-08-24T17:21:41Z" Jan 27 14:29:32 crc kubenswrapper[4698]: I0127 14:29:32.144879 4698 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:29:32 crc kubenswrapper[4698]: I0127 14:29:32.144911 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:29:32 crc kubenswrapper[4698]: I0127 14:29:32.144920 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:29:32 crc kubenswrapper[4698]: I0127 14:29:32.144934 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:29:32 crc kubenswrapper[4698]: I0127 14:29:32.144943 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:29:32Z","lastTransitionTime":"2026-01-27T14:29:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:29:32 crc kubenswrapper[4698]: I0127 14:29:32.169852 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-vg6nd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e045926d-2303-47ea-b25d-dc23982427e4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:28Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66snv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c935f517f9ca140d63d1637ef8c01a4a41b94fa72e1086eb6184ae78a6d8da8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c935f517f9ca140d63d1637ef8c01a4a41b94fa72e1086eb6184ae78a6d8da8c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:29:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:29:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66snv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa061cb18143e07dc72a6b75d78b5959b40a9969030ff21d402863b85eba6f91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa061cb18143e07dc72a6b75d78b5959b40a9969030ff21d402863b85eba6f91\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:29:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:29:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-66snv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be40eff60b17b2c010a5e4394fb6843ceb8ee2752781ccbf0030afdd212b81e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be40eff60b17b2c010a5e4394fb6843ceb8ee2752781ccbf0030afdd212b81e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:29:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:29:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66snv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66snv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66snv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/
cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66snv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:29:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-vg6nd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:29:32Z is after 2025-08-24T17:21:41Z" Jan 27 14:29:32 crc kubenswrapper[4698]: I0127 14:29:32.192369 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xmpm6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c59a9d01-79ce-42d9-a41d-39d7d73cb03e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:29Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:29Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r7gpg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r7gpg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r7gpg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-r7gpg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r7gpg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r7gpg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r7gpg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r7gpg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3a80917737e61fdb510dabf332c3bd87fb858d27849c42b77703a8becb66b4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b3a80917737e61fdb510dabf332c3bd87fb858d27849c42b77703a8becb66b4c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:29:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:29:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r7gpg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:29:29Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-xmpm6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:29:32Z 
is after 2025-08-24T17:21:41Z" Jan 27 14:29:32 crc kubenswrapper[4698]: I0127 14:29:32.195385 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-flx9b" Jan 27 14:29:32 crc kubenswrapper[4698]: W0127 14:29:32.213279 4698 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2c9fcf55_4a50_4a87_937b_975bc7e00bfa.slice/crio-8e556e2592292efda632f282da2228a5e414ed97dc1cf8a8803e86e405179342 WatchSource:0}: Error finding container 8e556e2592292efda632f282da2228a5e414ed97dc1cf8a8803e86e405179342: Status 404 returned error can't find the container with id 8e556e2592292efda632f282da2228a5e414ed97dc1cf8a8803e86e405179342 Jan 27 14:29:32 crc kubenswrapper[4698]: I0127 14:29:32.220551 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:29:32Z is after 2025-08-24T17:21:41Z" Jan 27 14:29:32 crc kubenswrapper[4698]: I0127 14:29:32.235511 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-g9vj8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2776dfc9-913b-42b0-9cf2-6fea98d83bc9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f471cea0dc934e7c77ac6ead7ec77f8feb7379b50a7bb5255d7100fbea635a3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bk7ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:29:28Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-g9vj8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-01-27T14:29:32Z is after 2025-08-24T17:21:41Z" Jan 27 14:29:32 crc kubenswrapper[4698]: I0127 14:29:32.248600 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:29:32 crc kubenswrapper[4698]: I0127 14:29:32.248651 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:29:32 crc kubenswrapper[4698]: I0127 14:29:32.248660 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:29:32 crc kubenswrapper[4698]: I0127 14:29:32.248674 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:29:32 crc kubenswrapper[4698]: I0127 14:29:32.248685 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:29:32Z","lastTransitionTime":"2026-01-27T14:29:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:29:32 crc kubenswrapper[4698]: I0127 14:29:32.250252 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae7402df1ba165eb747293c1d23e4ea89c50cf2342bf52db437c11c283f68755\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:29:32Z is after 
2025-08-24T17:21:41Z" Jan 27 14:29:32 crc kubenswrapper[4698]: I0127 14:29:32.261300 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:24Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:29:32Z is after 2025-08-24T17:21:41Z" Jan 27 14:29:32 crc kubenswrapper[4698]: I0127 14:29:32.277667 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8dae57de05788973b988fee332168db475b66bb71b4a0570e7d95341dfe84a96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:29:32Z is after 2025-08-24T17:21:41Z" Jan 27 14:29:32 crc kubenswrapper[4698]: I0127 14:29:32.288207 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-flx9b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2c9fcf55-4a50-4a87-937b-975bc7e00bfa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:31Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-spqpw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:29:31Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-flx9b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:29:32Z is after 2025-08-24T17:21:41Z" Jan 27 14:29:32 crc kubenswrapper[4698]: I0127 14:29:32.302389 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://625a1f8c29f9277903045e66df7a55a4d791239b141aba99572db228ea2735ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://314dcaad0f8949a4671b47667f654bdfacfb6b200438cfbd70401455f6e188b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc
32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:29:32Z is after 2025-08-24T17:21:41Z" Jan 27 14:29:32 crc kubenswrapper[4698]: I0127 14:29:32.320109 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:24Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:29:32Z is after 2025-08-24T17:21:41Z" Jan 27 14:29:32 crc kubenswrapper[4698]: I0127 14:29:32.336188 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-g2kkn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4e135f0c-0c36-44f4-afeb-06994affb352\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d17862aabd67b485023ca018499f7224e0bd91ec0aed6bbce5a187f319e3140\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountP
ath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97l66\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:29:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-g2kkn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:29:32Z is after 2025-08-24T17:21:41Z" Jan 27 14:29:32 crc kubenswrapper[4698]: I0127 14:29:32.350125 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:29:32 crc kubenswrapper[4698]: I0127 14:29:32.350152 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:29:32 crc kubenswrapper[4698]: I0127 14:29:32.350159 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:29:32 crc kubenswrapper[4698]: I0127 14:29:32.350173 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:29:32 crc kubenswrapper[4698]: I0127 14:29:32.350181 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:29:32Z","lastTransitionTime":"2026-01-27T14:29:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:29:32 crc kubenswrapper[4698]: I0127 14:29:32.352878 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0ce1118e-e5ad-4adb-8d50-c758116b45ec\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dbbe49aa918c711f9ec638a3fb42ef3be38a9ed3bc0cdde639ba8bba11eab4de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9525d1a6372b19bc2f30eb4e8e522833badb6319bac25ecae77b3313acf1b1a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e384a95fdadc32733a7213ba468af64a1a33e7dd08722cbffde3ad0c461d8ba\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartC
ount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9ee5f5f6bb400359196a28fe605c173c38223c8982a991750c91601aeace150\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e780429dcca871712b55832b0cd0b3e78b7343569cfa71a87bbb2be1bd3f129\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T14:29:24Z\\\",\\\"message\\\":\\\"espace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0127 14:29:09.491666 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 14:29:09.493926 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-900123067/tls.crt::/tmp/serving-cert-900123067/tls.key\\\\\\\"\\\\nI0127 14:29:24.319898 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 14:29:24.322681 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 14:29:24.322759 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 14:29:24.322814 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 14:29:24.322847 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 14:29:24.331581 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 14:29:24.331608 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 14:29:24.331613 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 14:29:24.331618 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 14:29:24.331622 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 14:29:24.331626 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 14:29:24.331630 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 14:29:24.331748 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 14:29:24.333757 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T14:29:08Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a526cfa90f2ba7d074aa6007c909e68f72d5e7a4b91d893c955e7460ff208cfb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:08Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e1a3e5a71b3094750c736335cd70f45977566e58702231351d9d6b195a0cb4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5e1a3e5a71b3094750c736335cd70f45977566e58702231351d9d6b195a0cb4b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:29:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:29:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:29:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:29:32Z is after 2025-08-24T17:21:41Z" Jan 27 14:29:32 crc kubenswrapper[4698]: I0127 14:29:32.366368 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"44c70587-abb1-4e02-ab7e-223d57817925\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b1a209c31294456c43022bb9df5fd230415596fc43ecdb2b6349114989c3ad8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://66b4b527d7836ceed419f9b8a1d851047efc8c052c110ac857316a6a73444333\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://94a4d586611f4ca0864567739d1ed225b7c63ebd6e6be9e5c0e385379de2ad01\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://95357819c206b188c1fd6bb937b2a57b480a9a61e95143de444141b16e4963ea\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:29:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:29:32Z is after 2025-08-24T17:21:41Z"
Jan 27 14:29:32 crc kubenswrapper[4698]: I0127 14:29:32.452827 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 14:29:32 crc kubenswrapper[4698]: I0127 14:29:32.452860 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 14:29:32 crc kubenswrapper[4698]: I0127 14:29:32.452868 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 14:29:32 crc kubenswrapper[4698]: I0127 14:29:32.452882 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 14:29:32 crc kubenswrapper[4698]: I0127 14:29:32.452891 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:29:32Z","lastTransitionTime":"2026-01-27T14:29:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 14:29:32 crc kubenswrapper[4698]: I0127 14:29:32.555221 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 14:29:32 crc kubenswrapper[4698]: I0127 14:29:32.555256 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 14:29:32 crc kubenswrapper[4698]: I0127 14:29:32.555265 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 14:29:32 crc kubenswrapper[4698]: I0127 14:29:32.555279 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 14:29:32 crc kubenswrapper[4698]: I0127 14:29:32.555289 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:29:32Z","lastTransitionTime":"2026-01-27T14:29:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 14:29:32 crc kubenswrapper[4698]: I0127 14:29:32.601809 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 27 14:29:32 crc kubenswrapper[4698]: E0127 14:29:32.601918 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 14:29:40.601897227 +0000 UTC m=+36.278674702 (durationBeforeRetry 8s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 14:29:32 crc kubenswrapper[4698]: I0127 14:29:32.659362 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 14:29:32 crc kubenswrapper[4698]: I0127 14:29:32.659421 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 14:29:32 crc kubenswrapper[4698]: I0127 14:29:32.659432 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 14:29:32 crc kubenswrapper[4698]: I0127 14:29:32.659449 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 14:29:32 crc kubenswrapper[4698]: I0127 14:29:32.659460 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:29:32Z","lastTransitionTime":"2026-01-27T14:29:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 14:29:32 crc kubenswrapper[4698]: I0127 14:29:32.702325 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 27 14:29:32 crc kubenswrapper[4698]: I0127 14:29:32.702389 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 27 14:29:32 crc kubenswrapper[4698]: I0127 14:29:32.702414 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 27 14:29:32 crc kubenswrapper[4698]: I0127 14:29:32.702444 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 27 14:29:32 crc kubenswrapper[4698]: E0127 14:29:32.702567 4698 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Jan 27 14:29:32 crc kubenswrapper[4698]: E0127 14:29:32.702590 4698 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Jan 27 14:29:32 crc kubenswrapper[4698]: E0127 14:29:32.702602 4698 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 27 14:29:32 crc kubenswrapper[4698]: E0127 14:29:32.702622 4698 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered
Jan 27 14:29:32 crc kubenswrapper[4698]: E0127 14:29:32.702687 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-27 14:29:40.702669257 +0000 UTC m=+36.379446722 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 27 14:29:32 crc kubenswrapper[4698]: E0127 14:29:32.702625 4698 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Jan 27 14:29:32 crc kubenswrapper[4698]: E0127 14:29:32.702768 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-27 14:29:40.702736499 +0000 UTC m=+36.379513974 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered
Jan 27 14:29:32 crc kubenswrapper[4698]: E0127 14:29:32.702787 4698 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Jan 27 14:29:32 crc kubenswrapper[4698]: E0127 14:29:32.702809 4698 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 27 14:29:32 crc kubenswrapper[4698]: E0127 14:29:32.702875 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-27 14:29:40.702853652 +0000 UTC m=+36.379631187 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 27 14:29:32 crc kubenswrapper[4698]: E0127 14:29:32.702987 4698 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered
Jan 27 14:29:32 crc kubenswrapper[4698]: E0127 14:29:32.703108 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-27 14:29:40.703088218 +0000 UTC m=+36.379865733 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered
Jan 27 14:29:32 crc kubenswrapper[4698]: I0127 14:29:32.762366 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 14:29:32 crc kubenswrapper[4698]: I0127 14:29:32.762434 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 14:29:32 crc kubenswrapper[4698]: I0127 14:29:32.762451 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 14:29:32 crc kubenswrapper[4698]: I0127 14:29:32.762476 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 14:29:32 crc kubenswrapper[4698]: I0127 14:29:32.762494 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:29:32Z","lastTransitionTime":"2026-01-27T14:29:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 14:29:32 crc kubenswrapper[4698]: I0127 14:29:32.865285 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 14:29:32 crc kubenswrapper[4698]: I0127 14:29:32.865340 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 14:29:32 crc kubenswrapper[4698]: I0127 14:29:32.865353 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 14:29:32 crc kubenswrapper[4698]: I0127 14:29:32.865375 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 14:29:32 crc kubenswrapper[4698]: I0127 14:29:32.865388 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:29:32Z","lastTransitionTime":"2026-01-27T14:29:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 14:29:32 crc kubenswrapper[4698]: I0127 14:29:32.958903 4698 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-15 18:40:11.438378446 +0000 UTC
Jan 27 14:29:32 crc kubenswrapper[4698]: I0127 14:29:32.968085 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 14:29:32 crc kubenswrapper[4698]: I0127 14:29:32.968437 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 14:29:32 crc kubenswrapper[4698]: I0127 14:29:32.968449 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 14:29:32 crc kubenswrapper[4698]: I0127 14:29:32.968467 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 14:29:32 crc kubenswrapper[4698]: I0127 14:29:32.968480 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:29:32Z","lastTransitionTime":"2026-01-27T14:29:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 14:29:32 crc kubenswrapper[4698]: I0127 14:29:32.991430 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 27 14:29:32 crc kubenswrapper[4698]: I0127 14:29:32.991543 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 27 14:29:32 crc kubenswrapper[4698]: E0127 14:29:32.991575 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 27 14:29:32 crc kubenswrapper[4698]: E0127 14:29:32.991732 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 27 14:29:33 crc kubenswrapper[4698]: I0127 14:29:33.072044 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 14:29:33 crc kubenswrapper[4698]: I0127 14:29:33.072093 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 14:29:33 crc kubenswrapper[4698]: I0127 14:29:33.072103 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 14:29:33 crc kubenswrapper[4698]: I0127 14:29:33.072120 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 14:29:33 crc kubenswrapper[4698]: I0127 14:29:33.072131 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:29:33Z","lastTransitionTime":"2026-01-27T14:29:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 14:29:33 crc kubenswrapper[4698]: I0127 14:29:33.135208 4698 generic.go:334] "Generic (PLEG): container finished" podID="e045926d-2303-47ea-b25d-dc23982427e4" containerID="3018d87c9354e8b8d90c59a4a45e982d0b35b89a0b0a9fde703e9452cf5f0024" exitCode=0
Jan 27 14:29:33 crc kubenswrapper[4698]: I0127 14:29:33.135281 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-vg6nd" event={"ID":"e045926d-2303-47ea-b25d-dc23982427e4","Type":"ContainerDied","Data":"3018d87c9354e8b8d90c59a4a45e982d0b35b89a0b0a9fde703e9452cf5f0024"}
Jan 27 14:29:33 crc kubenswrapper[4698]: I0127 14:29:33.137712 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-flx9b" event={"ID":"2c9fcf55-4a50-4a87-937b-975bc7e00bfa","Type":"ContainerStarted","Data":"0cb86e0c67991a076cebc33eb390f49c0f4c40332c423936dd8f3c59b9f7474b"}
Jan 27 14:29:33 crc kubenswrapper[4698]: I0127 14:29:33.137742 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-flx9b" event={"ID":"2c9fcf55-4a50-4a87-937b-975bc7e00bfa","Type":"ContainerStarted","Data":"8e556e2592292efda632f282da2228a5e414ed97dc1cf8a8803e86e405179342"}
Jan 27 14:29:33 crc kubenswrapper[4698]: I0127 14:29:33.148122 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:24Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:29:33Z is after 2025-08-24T17:21:41Z" Jan 27 14:29:33 crc kubenswrapper[4698]: I0127 14:29:33.161748 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://625a1f8c29f9277903045e66df7a55a4d791239b141aba99572db228ea2735ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://314dcaad0f8949a4671b47667f654bdfacfb6b200438cfbd70401455f6e188b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:29:33Z is after 2025-08-24T17:21:41Z" Jan 27 14:29:33 crc kubenswrapper[4698]: I0127 14:29:33.174375 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:29:33 crc kubenswrapper[4698]: I0127 14:29:33.174414 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:29:33 crc kubenswrapper[4698]: I0127 14:29:33.174425 4698 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Jan 27 14:29:33 crc kubenswrapper[4698]: I0127 14:29:33.174440 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:29:33 crc kubenswrapper[4698]: I0127 14:29:33.174451 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:29:33Z","lastTransitionTime":"2026-01-27T14:29:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:29:33 crc kubenswrapper[4698]: I0127 14:29:33.177322 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0ce1118e-e5ad-4adb-8d50-c758116b45ec\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dbbe49aa918c711f9ec638a3fb42ef3be38a9ed3bc0cdde639ba8bba11eab4de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9525d1a6372b19bc2f30eb4e8e522833badb6319bac25ecae77b3313acf1b1a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\
":\\\"2026-01-27T14:29:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e384a95fdadc32733a7213ba468af64a1a33e7dd08722cbffde3ad0c461d8ba\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9ee5f5f6bb400359196a28fe605c173c38223c8982a991750c91601aeace150\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e780429dcca871712b55832b0cd0b3e78b7343569cfa71a87bbb2be1bd3f129\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T14:29:24Z\\\",\\\"message\\\":\\\"espace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0127 14:29:09.491666 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 14:29:09.493926 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-900123067/tls.crt::/tmp/serving-cert-900123067/tls.key\\\\\\\"\\\\nI0127 14:29:24.319898 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 14:29:24.322681 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 14:29:24.322759 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 14:29:24.322814 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 14:29:24.322847 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 14:29:24.331581 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 14:29:24.331608 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 14:29:24.331613 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 14:29:24.331618 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 14:29:24.331622 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 14:29:24.331626 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 14:29:24.331630 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 
14:29:24.331748 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 14:29:24.333757 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T14:29:08Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a526cfa90f2ba7d074aa6007c909e68f72d5e7a4b91d893c955e7460ff208cfb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:08Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e1a3e5a71b3094750c736335cd70f45977566e58702231351d9d6b195a0cb4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5e1a3e5a71b3094750c736335cd70f45977566e58702231351d9d6b195a0cb4b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:29:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:29:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:29:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:29:33Z is after 2025-08-24T17:21:41Z" Jan 27 14:29:33 crc kubenswrapper[4698]: I0127 14:29:33.190630 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"44c70587-abb1-4e02-ab7e-223d57817925\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b1a209c31294456c43022bb9df5fd230415596fc43ecdb2b6349114989c3ad8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://66b4b527d7836ceed419f9b8a1d851047efc8c052c110ac857316a6a73444333\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://94a4d586611f4ca0864567739d1ed225b7c63ebd6e6be9e5c0e385379de2ad01\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://95357819c206b188c1fd6bb937b2a57b480a9a61e95143de444141b16e4963ea\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:29:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:29:33Z is after 2025-08-24T17:21:41Z" Jan 27 14:29:33 crc kubenswrapper[4698]: I0127 14:29:33.204501 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-g2kkn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4e135f0c-0c36-44f4-afeb-06994affb352\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d17862aabd67b485023ca018499f7224e0bd91ec0aed6bbce5a187f319e3140\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run
/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97l66\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:29:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-g2kkn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:29:33Z is after 2025-08-24T17:21:41Z" Jan 27 14:29:33 crc kubenswrapper[4698]: I0127 14:29:33.217352 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:29:33Z is after 2025-08-24T17:21:41Z" Jan 27 14:29:33 crc kubenswrapper[4698]: I0127 14:29:33.230099 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-g9vj8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2776dfc9-913b-42b0-9cf2-6fea98d83bc9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f471cea0dc934e7c77ac6ead7ec77f8feb7379b50a7bb5255d7100fbea635a3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bk7ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:29:28Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-g9vj8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-01-27T14:29:33Z is after 2025-08-24T17:21:41Z" Jan 27 14:29:33 crc kubenswrapper[4698]: I0127 14:29:33.243937 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-ndrd6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e403fc5-7005-474c-8c75-b7906b481677\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6f9227503b5cf0962f3fb8ece18f54d84ff2cdfe5628ebf0fa0cef7fe5695c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tt2dz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c533539135e25b310541c9f0b12adcf526b4651a2f6272bce12ea9686dd39ac9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tt2dz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:29:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-ndrd6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:29:33Z is after 2025-08-24T17:21:41Z" Jan 27 14:29:33 crc kubenswrapper[4698]: I0127 14:29:33.263042 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-vg6nd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e045926d-2303-47ea-b25d-dc23982427e4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:28Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66snv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c935f517f9ca140d63d1637ef8c01a4a41b94fa72e1086eb6184ae78a6d8da8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c935f517f9ca140d63d1637ef8c01a4a41b94fa72e1086eb6184ae78a6d8da8c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:29:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:29:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-relea
se\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66snv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa061cb18143e07dc72a6b75d78b5959b40a9969030ff21d402863b85eba6f91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa061cb18143e07dc72a6b75d78b5959b40a9969030ff21d402863b85eba6f91\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:29:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:29:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66snv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be40eff60b17b2c010a5e4394fb6843ceb8ee2752781ccbf0030afdd212b81e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be40eff60b17b2c010a5e4394fb6843ceb8ee2752781ccbf0030afdd212b81e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:29:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:29:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66snv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3018d87c9354e8b8d90c59a4a45e982d0b35b89a0b0a9fde703e9452cf5f0024\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":
{\\\"containerID\\\":\\\"cri-o://3018d87c9354e8b8d90c59a4a45e982d0b35b89a0b0a9fde703e9452cf5f0024\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:29:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:29:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66snv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66snv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66snv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:29:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-vg6nd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:29:33Z is after 2025-08-24T17:21:41Z" Jan 27 14:29:33 crc kubenswrapper[4698]: I0127 14:29:33.277749 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:29:33 crc kubenswrapper[4698]: I0127 14:29:33.277786 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:29:33 crc kubenswrapper[4698]: I0127 14:29:33.277796 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:29:33 crc kubenswrapper[4698]: I0127 14:29:33.277813 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeNotReady" Jan 27 14:29:33 crc kubenswrapper[4698]: I0127 14:29:33.277828 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:29:33Z","lastTransitionTime":"2026-01-27T14:29:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:29:33 crc kubenswrapper[4698]: I0127 14:29:33.282745 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xmpm6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c59a9d01-79ce-42d9-a41d-39d7d73cb03e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:29Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:29Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r7gpg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r7gpg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r7gpg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-r7gpg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r7gpg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r7gpg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r7gpg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r7gpg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3a80917737e61fdb510dabf332c3bd87fb858d27849c42b77703a8becb66b4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b3a80917737e61fdb510dabf332c3bd87fb858d27849c42b77703a8becb66b4c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:29:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:29:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r7gpg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:29:29Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-xmpm6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:29:33Z 
is after 2025-08-24T17:21:41Z" Jan 27 14:29:33 crc kubenswrapper[4698]: I0127 14:29:33.296972 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae7402df1ba165eb747293c1d23e4ea89c50cf2342bf52db437c11c283f68755\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:29:33Z is after 2025-08-24T17:21:41Z" Jan 27 14:29:33 crc kubenswrapper[4698]: I0127 14:29:33.311238 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:24Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:29:33Z is after 2025-08-24T17:21:41Z" Jan 27 14:29:33 crc kubenswrapper[4698]: I0127 14:29:33.325006 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8dae57de05788973b988fee332168db475b66bb71b4a0570e7d95341dfe84a96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:29:33Z is after 2025-08-24T17:21:41Z" Jan 27 14:29:33 crc kubenswrapper[4698]: I0127 14:29:33.336624 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-flx9b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2c9fcf55-4a50-4a87-937b-975bc7e00bfa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:31Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:31Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-spqpw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:29:31Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-flx9b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:29:33Z is after 2025-08-24T17:21:41Z" Jan 27 14:29:33 crc kubenswrapper[4698]: I0127 14:29:33.354222 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:24Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:29:33Z is after 2025-08-24T17:21:41Z" Jan 27 14:29:33 crc kubenswrapper[4698]: I0127 14:29:33.370168 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://625a1f8c29f9277903045e66df7a55a4d791239b141aba99572db228ea2735ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://314dcaad0f8949a4671b47667f654bdfacfb6b200438cfbd70401455f6e188b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:29:33Z is after 2025-08-24T17:21:41Z" Jan 27 14:29:33 crc kubenswrapper[4698]: I0127 14:29:33.380585 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:29:33 crc kubenswrapper[4698]: I0127 14:29:33.380620 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:29:33 crc kubenswrapper[4698]: I0127 14:29:33.380630 4698 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Jan 27 14:29:33 crc kubenswrapper[4698]: I0127 14:29:33.380668 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:29:33 crc kubenswrapper[4698]: I0127 14:29:33.380682 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:29:33Z","lastTransitionTime":"2026-01-27T14:29:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:29:33 crc kubenswrapper[4698]: I0127 14:29:33.393946 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0ce1118e-e5ad-4adb-8d50-c758116b45ec\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dbbe49aa918c711f9ec638a3fb42ef3be38a9ed3bc0cdde639ba8bba11eab4de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9525d1a6372b19bc2f30eb4e8e522833badb6319bac25ecae77b3313acf1b1a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\
":\\\"2026-01-27T14:29:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e384a95fdadc32733a7213ba468af64a1a33e7dd08722cbffde3ad0c461d8ba\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9ee5f5f6bb400359196a28fe605c173c38223c8982a991750c91601aeace150\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e780429dcca871712b55832b0cd0b3e78b7343569cfa71a87bbb2be1bd3f129\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T14:29:24Z\\\",\\\"message\\\":\\\"espace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0127 14:29:09.491666 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 14:29:09.493926 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-900123067/tls.crt::/tmp/serving-cert-900123067/tls.key\\\\\\\"\\\\nI0127 14:29:24.319898 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 14:29:24.322681 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 14:29:24.322759 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 14:29:24.322814 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 14:29:24.322847 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 14:29:24.331581 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 14:29:24.331608 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 14:29:24.331613 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 14:29:24.331618 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 14:29:24.331622 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 14:29:24.331626 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 14:29:24.331630 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 
14:29:24.331748 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 14:29:24.333757 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T14:29:08Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a526cfa90f2ba7d074aa6007c909e68f72d5e7a4b91d893c955e7460ff208cfb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:08Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e1a3e5a71b3094750c736335cd70f45977566e58702231351d9d6b195a0cb4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5e1a3e5a71b3094750c736335cd70f45977566e58702231351d9d6b195a0cb4b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:29:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:29:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:29:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:29:33Z is after 2025-08-24T17:21:41Z" Jan 27 14:29:33 crc kubenswrapper[4698]: I0127 14:29:33.409056 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"44c70587-abb1-4e02-ab7e-223d57817925\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b1a209c31294456c43022bb9df5fd230415596fc43ecdb2b6349114989c3ad8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://66b4b527d7836ceed419f9b8a1d851047efc8c052c110ac857316a6a73444333\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://94a4d586611f4ca0864567739d1ed225b7c63ebd6e6be9e5c0e385379de2ad01\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://95357819c206b188c1fd6bb937b2a57b480a9a61e95143de444141b16e4963ea\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:29:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:29:33Z is after 2025-08-24T17:21:41Z" Jan 27 14:29:33 crc kubenswrapper[4698]: I0127 14:29:33.420706 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-g2kkn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4e135f0c-0c36-44f4-afeb-06994affb352\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d17862aabd67b485023ca018499f7224e0bd91ec0aed6bbce5a187f319e3140\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run
/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97l66\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:29:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-g2kkn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:29:33Z is after 2025-08-24T17:21:41Z" Jan 27 14:29:33 crc kubenswrapper[4698]: I0127 14:29:33.435297 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-vg6nd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e045926d-2303-47ea-b25d-dc23982427e4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:28Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66snv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c935f517f9ca140d63d1637ef8c01a4a41b94fa72e1086eb6184ae78a6d8da8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c935f517f9ca140d63d1637ef8c01a4a41b94fa72e1086eb6184ae78a6d8da8c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:29:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:29:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66snv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa061cb18143e07dc72a6b75d78b5959b40a9969030ff21d402863b85eba6f91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa061cb18143e07dc72a6b75d78b5959b40a9969030ff21d402863b85eba6f91\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:29:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:29:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-66snv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be40eff60b17b2c010a5e4394fb6843ceb8ee2752781ccbf0030afdd212b81e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be40eff60b17b2c010a5e4394fb6843ceb8ee2752781ccbf0030afdd212b81e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:29:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:29:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66snv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3018d87c9354e8b8d90c59a4a45e982d0b35b89a0b0a9fde703e9452cf5f0024\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3018d87c9354e8b8d90c59a4a45e982d0b35b89a0b0a9fde703e9452cf5f0024\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:29:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:29:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66snv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66snv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disa
bled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66snv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:29:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-vg6nd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:29:33Z is after 2025-08-24T17:21:41Z" Jan 27 14:29:33 crc kubenswrapper[4698]: I0127 14:29:33.455910 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xmpm6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c59a9d01-79ce-42d9-a41d-39d7d73cb03e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:29Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:29Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r7gpg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r7gpg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r7gpg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-r7gpg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r7gpg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r7gpg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r7gpg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r7gpg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3a80917737e61fdb510dabf332c3bd87fb858d27849c42b77703a8becb66b4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b3a80917737e61fdb510dabf332c3bd87fb858d27849c42b77703a8becb66b4c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:29:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:29:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r7gpg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:29:29Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-xmpm6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:29:33Z 
is after 2025-08-24T17:21:41Z" Jan 27 14:29:33 crc kubenswrapper[4698]: I0127 14:29:33.469105 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:29:33Z is after 2025-08-24T17:21:41Z" Jan 27 14:29:33 crc kubenswrapper[4698]: I0127 14:29:33.479886 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-g9vj8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2776dfc9-913b-42b0-9cf2-6fea98d83bc9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f471cea0dc934e7c77ac6ead7ec77f8feb7379b50a7bb5255d7100fbea635a3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bk7ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:29:28Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-g9vj8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:29:33Z is after 2025-08-24T17:21:41Z" Jan 27 14:29:33 crc kubenswrapper[4698]: I0127 14:29:33.483342 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:29:33 crc kubenswrapper[4698]: I0127 14:29:33.483374 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:29:33 crc kubenswrapper[4698]: I0127 14:29:33.483385 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:29:33 crc kubenswrapper[4698]: I0127 14:29:33.483401 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:29:33 crc kubenswrapper[4698]: I0127 14:29:33.483411 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:29:33Z","lastTransitionTime":"2026-01-27T14:29:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:29:33 crc kubenswrapper[4698]: I0127 14:29:33.491963 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-ndrd6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e403fc5-7005-474c-8c75-b7906b481677\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6f9227503b5cf0962f3fb8ece18f54d84ff2cdfe5628ebf0fa0cef7fe5695c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tt2dz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c533539135e25b310541c9f0b12adcf526b4651a2f6272bce12ea9686dd39ac9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tt2dz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:29:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-ndrd6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to 
call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:29:33Z is after 2025-08-24T17:21:41Z" Jan 27 14:29:33 crc kubenswrapper[4698]: I0127 14:29:33.504778 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:24Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:29:33Z is after 2025-08-24T17:21:41Z" Jan 27 14:29:33 crc kubenswrapper[4698]: I0127 14:29:33.517245 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8dae57de05788973b988fee332168db475b66bb71b4a0570e7d95341dfe84a96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:29:33Z is after 2025-08-24T17:21:41Z" Jan 27 14:29:33 crc kubenswrapper[4698]: I0127 14:29:33.527726 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-flx9b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2c9fcf55-4a50-4a87-937b-975bc7e00bfa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0cb86e0c67991a076cebc33eb390f49c0f4c40332c423936dd8f3c59b9f7474b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-spqpw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:29:31Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-flx9b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:29:33Z is after 2025-08-24T17:21:41Z" Jan 27 14:29:33 crc kubenswrapper[4698]: I0127 14:29:33.542503 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae7402df1ba165eb747293c1d23e4ea89c50cf2342bf52db437c11c283f68755\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:29:33Z is after 2025-08-24T17:21:41Z" Jan 27 14:29:33 crc kubenswrapper[4698]: I0127 14:29:33.586211 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:29:33 crc kubenswrapper[4698]: I0127 14:29:33.586248 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:29:33 crc kubenswrapper[4698]: I0127 14:29:33.586260 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:29:33 crc kubenswrapper[4698]: I0127 14:29:33.586276 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:29:33 crc kubenswrapper[4698]: I0127 14:29:33.586287 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:29:33Z","lastTransitionTime":"2026-01-27T14:29:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:29:33 crc kubenswrapper[4698]: I0127 14:29:33.689824 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:29:33 crc kubenswrapper[4698]: I0127 14:29:33.689861 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:29:33 crc kubenswrapper[4698]: I0127 14:29:33.689870 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:29:33 crc kubenswrapper[4698]: I0127 14:29:33.689885 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:29:33 crc kubenswrapper[4698]: I0127 14:29:33.689897 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:29:33Z","lastTransitionTime":"2026-01-27T14:29:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:29:33 crc kubenswrapper[4698]: I0127 14:29:33.792072 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:29:33 crc kubenswrapper[4698]: I0127 14:29:33.792115 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:29:33 crc kubenswrapper[4698]: I0127 14:29:33.792125 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:29:33 crc kubenswrapper[4698]: I0127 14:29:33.792141 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:29:33 crc kubenswrapper[4698]: I0127 14:29:33.792152 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:29:33Z","lastTransitionTime":"2026-01-27T14:29:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:29:33 crc kubenswrapper[4698]: I0127 14:29:33.894577 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:29:33 crc kubenswrapper[4698]: I0127 14:29:33.894620 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:29:33 crc kubenswrapper[4698]: I0127 14:29:33.894631 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:29:33 crc kubenswrapper[4698]: I0127 14:29:33.894690 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:29:33 crc kubenswrapper[4698]: I0127 14:29:33.894705 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:29:33Z","lastTransitionTime":"2026-01-27T14:29:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:29:33 crc kubenswrapper[4698]: I0127 14:29:33.959566 4698 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-06 13:44:02.852218437 +0000 UTC Jan 27 14:29:33 crc kubenswrapper[4698]: I0127 14:29:33.991907 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 14:29:33 crc kubenswrapper[4698]: E0127 14:29:33.992022 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 14:29:33 crc kubenswrapper[4698]: I0127 14:29:33.996256 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:29:33 crc kubenswrapper[4698]: I0127 14:29:33.996292 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:29:33 crc kubenswrapper[4698]: I0127 14:29:33.996305 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:29:33 crc kubenswrapper[4698]: I0127 14:29:33.996320 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:29:33 crc kubenswrapper[4698]: I0127 14:29:33.996331 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:29:33Z","lastTransitionTime":"2026-01-27T14:29:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:29:34 crc kubenswrapper[4698]: I0127 14:29:34.099376 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:29:34 crc kubenswrapper[4698]: I0127 14:29:34.099415 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:29:34 crc kubenswrapper[4698]: I0127 14:29:34.099425 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:29:34 crc kubenswrapper[4698]: I0127 14:29:34.099440 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:29:34 crc kubenswrapper[4698]: I0127 14:29:34.099450 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:29:34Z","lastTransitionTime":"2026-01-27T14:29:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:29:34 crc kubenswrapper[4698]: I0127 14:29:34.147051 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-vg6nd" event={"ID":"e045926d-2303-47ea-b25d-dc23982427e4","Type":"ContainerStarted","Data":"de493498bbac49791f6b7409916fbbb62f64bed0def214ea6a06f5182bf7ad3b"} Jan 27 14:29:34 crc kubenswrapper[4698]: I0127 14:29:34.151093 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xmpm6" event={"ID":"c59a9d01-79ce-42d9-a41d-39d7d73cb03e","Type":"ContainerStarted","Data":"0356c9c90e9c8e65b9452361aa486ffe95757bd69803077b32fe7ba0223ae1da"} Jan 27 14:29:34 crc kubenswrapper[4698]: I0127 14:29:34.163073 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-g2kkn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4e135f0c-0c36-44f4-afeb-06994affb352\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d17862aabd67b485023ca018499f7224e0bd91ec0aed6bbce5a187f319e3140\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":
\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97l66\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:29:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-g2kkn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:29:34Z is after 2025-08-24T17:21:41Z" Jan 27 14:29:34 crc kubenswrapper[4698]: I0127 14:29:34.176249 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0ce1118e-e5ad-4adb-8d50-c758116b45ec\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dbbe49aa918c711f9ec638a3fb42ef3be38a9ed3bc0cdde639ba8bba11eab4de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9525d1a6372b19bc2f30eb4e8e522833badb6319bac25ecae77b3313acf1b1a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e384a95fdadc32733a7213ba468af64a1a33e7dd08722cbffde3ad0c461d8ba\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9ee5f5f6bb400359196a28fe605c173c38223c8982a991750c91601aeace150\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e780429dcca871712b55832b0cd0b3e78b7343569cfa71a87bbb2be1bd3f129\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T14:29:24Z\\\",\\\"message\\\":\\\"espace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0127 14:29:09.491666 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 14:29:09.493926 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-900123067/tls.crt::/tmp/serving-cert-900123067/tls.key\\\\\\\"\\\\nI0127 14:29:24.319898 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 14:29:24.322681 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 14:29:24.322759 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 14:29:24.322814 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 14:29:24.322847 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 14:29:24.331581 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 14:29:24.331608 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 14:29:24.331613 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 14:29:24.331618 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 14:29:24.331622 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 14:29:24.331626 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 14:29:24.331630 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 14:29:24.331748 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 14:29:24.333757 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T14:29:08Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a526cfa90f2ba7d074aa6007c909e68f72d5e7a4b91d893c955e7460ff208cfb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:08Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e1a3e5a71b3094750c736335cd70f45977566e58702231351d9d6b195a0cb4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5e1a3e5a71b3094750c736335cd70f45977566e58702231351d9d6b195a0cb4b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:29:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:29:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:29:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:29:34Z is after 2025-08-24T17:21:41Z" Jan 27 14:29:34 crc kubenswrapper[4698]: I0127 14:29:34.188821 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"44c70587-abb1-4e02-ab7e-223d57817925\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b1a209c31294456c43022bb9df5fd230415596fc43ecdb2b6349114989c3ad8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://66b4b527d7836ceed419f9b8a1d851047efc8c052c110ac857316a6a73444333\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://94a4d586611f4ca0864567739d1ed225b7c63ebd6e6be9e5c0e385379de2ad01\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://95357819c206b188c1fd6bb937b2a57b480a9a61e95143de444141b16e4963ea\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:29:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:29:34Z is after 2025-08-24T17:21:41Z" Jan 27 14:29:34 crc kubenswrapper[4698]: I0127 14:29:34.199258 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-ndrd6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e403fc5-7005-474c-8c75-b7906b481677\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6f9227503b5cf0962f3fb8ece18f54d84ff2cdfe5628ebf0fa0cef7fe5695c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tt2dz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c533539135e25b3105
41c9f0b12adcf526b4651a2f6272bce12ea9686dd39ac9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tt2dz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:29:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-ndrd6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:29:34Z is after 2025-08-24T17:21:41Z" Jan 27 14:29:34 crc kubenswrapper[4698]: I0127 14:29:34.201219 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:29:34 crc kubenswrapper[4698]: I0127 14:29:34.201257 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:29:34 crc kubenswrapper[4698]: I0127 14:29:34.201266 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:29:34 crc kubenswrapper[4698]: I0127 14:29:34.201280 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:29:34 crc kubenswrapper[4698]: I0127 14:29:34.201294 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:29:34Z","lastTransitionTime":"2026-01-27T14:29:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:29:34 crc kubenswrapper[4698]: I0127 14:29:34.213242 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-vg6nd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e045926d-2303-47ea-b25d-dc23982427e4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:28Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66snv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c935f517f9ca140d63d1637ef8c01a4a41b94fa72e1086eb6184ae78a6d8da8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c935f517f9ca140d63d1637ef8c01a4a41b94fa72e1086eb6184ae78a6d8da8c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:29:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:29:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-66snv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa061cb18143e07dc72a6b75d78b5959b40a9969030ff21d402863b85eba6f91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa061cb18143e07dc72a6b75d78b5959b40a9969030ff21d402863b85eba6f91\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:29:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:29:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66snv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be40eff60b17b2c010a5e4394fb6843ceb8ee2752781ccbf0030afdd212b81e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be40eff60b17b2c010a5e4394fb6843ceb8ee2752781ccbf0030afdd212b81e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:29:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:29:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66snv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3018d87c9354e8b8d90c59a4a45e982d0b35b89a0b0a9fde703e9452cf5f0024\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3018d87c9354e8b8d90c59a4a45e982d0b35b89a0b0a9fde703e9452cf5f0024\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:
29:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:29:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66snv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de493498bbac49791f6b7409916fbbb62f64bed0def214ea6a06f5182bf7ad3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66snv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66snv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:29:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-vg6nd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:29:34Z is after 2025-08-24T17:21:41Z" Jan 27 14:29:34 crc kubenswrapper[4698]: I0127 14:29:34.231101 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xmpm6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c59a9d01-79ce-42d9-a41d-39d7d73cb03e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:29Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:29Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r7gpg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r7gpg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-op
envswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r7gpg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r7gpg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r7gpg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r7gpg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{
},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r7gpg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r7gpg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3a80917737e61fdb510dabf332c3bd87fb858d27849c42b77703a8becb66b4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36
cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b3a80917737e61fdb510dabf332c3bd87fb858d27849c42b77703a8becb66b4c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:29:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:29:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r7gpg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:29:29Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-xmpm6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:29:34Z is after 2025-08-24T17:21:41Z" Jan 27 14:29:34 crc kubenswrapper[4698]: I0127 14:29:34.245237 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:29:34Z is after 2025-08-24T17:21:41Z" Jan 27 14:29:34 crc kubenswrapper[4698]: I0127 14:29:34.256508 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-g9vj8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2776dfc9-913b-42b0-9cf2-6fea98d83bc9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f471cea0dc934e7c77ac6ead7ec77f8feb7379b50a7bb5255d7100fbea635a3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bk7ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:29:28Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-g9vj8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-01-27T14:29:34Z is after 2025-08-24T17:21:41Z" Jan 27 14:29:34 crc kubenswrapper[4698]: I0127 14:29:34.270848 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae7402df1ba165eb747293c1d23e4ea89c50cf2342bf52db437c11c283f68755\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:29:34Z is after 2025-08-24T17:21:41Z" Jan 27 14:29:34 crc kubenswrapper[4698]: I0127 14:29:34.282582 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:24Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:29:34Z is after 2025-08-24T17:21:41Z" Jan 27 14:29:34 crc kubenswrapper[4698]: I0127 14:29:34.293926 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8dae57de05788973b988fee332168db475b66bb71b4a0570e7d95341dfe84a96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:29:34Z is after 2025-08-24T17:21:41Z" Jan 27 14:29:34 crc kubenswrapper[4698]: I0127 14:29:34.304091 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:29:34 crc kubenswrapper[4698]: I0127 14:29:34.304134 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:29:34 crc kubenswrapper[4698]: I0127 14:29:34.304145 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:29:34 crc kubenswrapper[4698]: I0127 14:29:34.304165 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:29:34 crc kubenswrapper[4698]: I0127 14:29:34.304179 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:29:34Z","lastTransitionTime":"2026-01-27T14:29:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:29:34 crc kubenswrapper[4698]: I0127 14:29:34.305585 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-flx9b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2c9fcf55-4a50-4a87-937b-975bc7e00bfa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0cb86e0c67991a076cebc33eb390f49c0f4c40332c423936dd8f3c59b9f7474b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-spqpw\\\",\\\"readOnly\\\":true,\\\"
recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:29:31Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-flx9b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:29:34Z is after 2025-08-24T17:21:41Z" Jan 27 14:29:34 crc kubenswrapper[4698]: I0127 14:29:34.319540 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://625a1f8c29f9277903045e66df7a55a4d791239b141aba99572db228ea2735ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://314dcaad0f8949a4671b47667f654bdfacfb6b200438cfbd70401455f6e188b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\
\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:29:34Z is after 2025-08-24T17:21:41Z" Jan 27 14:29:34 crc kubenswrapper[4698]: I0127 14:29:34.334792 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:24Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:29:34Z is after 2025-08-24T17:21:41Z" Jan 27 14:29:34 crc kubenswrapper[4698]: I0127 14:29:34.406323 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:29:34 crc kubenswrapper[4698]: I0127 14:29:34.406362 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:29:34 crc kubenswrapper[4698]: I0127 14:29:34.406372 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:29:34 crc kubenswrapper[4698]: I0127 14:29:34.406389 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:29:34 crc kubenswrapper[4698]: I0127 14:29:34.406401 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:29:34Z","lastTransitionTime":"2026-01-27T14:29:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:29:34 crc kubenswrapper[4698]: I0127 14:29:34.509359 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:29:34 crc kubenswrapper[4698]: I0127 14:29:34.509405 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:29:34 crc kubenswrapper[4698]: I0127 14:29:34.509418 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:29:34 crc kubenswrapper[4698]: I0127 14:29:34.509434 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:29:34 crc kubenswrapper[4698]: I0127 14:29:34.509460 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:29:34Z","lastTransitionTime":"2026-01-27T14:29:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:29:34 crc kubenswrapper[4698]: I0127 14:29:34.612331 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:29:34 crc kubenswrapper[4698]: I0127 14:29:34.612375 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:29:34 crc kubenswrapper[4698]: I0127 14:29:34.612386 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:29:34 crc kubenswrapper[4698]: I0127 14:29:34.612401 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:29:34 crc kubenswrapper[4698]: I0127 14:29:34.612411 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:29:34Z","lastTransitionTime":"2026-01-27T14:29:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:29:34 crc kubenswrapper[4698]: I0127 14:29:34.714788 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:29:34 crc kubenswrapper[4698]: I0127 14:29:34.714834 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:29:34 crc kubenswrapper[4698]: I0127 14:29:34.714846 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:29:34 crc kubenswrapper[4698]: I0127 14:29:34.714863 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:29:34 crc kubenswrapper[4698]: I0127 14:29:34.714874 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:29:34Z","lastTransitionTime":"2026-01-27T14:29:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:29:34 crc kubenswrapper[4698]: I0127 14:29:34.802205 4698 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Jan 27 14:29:34 crc kubenswrapper[4698]: I0127 14:29:34.817905 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:29:34 crc kubenswrapper[4698]: I0127 14:29:34.818417 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:29:34 crc kubenswrapper[4698]: I0127 14:29:34.818429 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:29:34 crc kubenswrapper[4698]: I0127 14:29:34.818449 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:29:34 crc kubenswrapper[4698]: I0127 14:29:34.818479 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:29:34Z","lastTransitionTime":"2026-01-27T14:29:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:29:34 crc kubenswrapper[4698]: I0127 14:29:34.921062 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:29:34 crc kubenswrapper[4698]: I0127 14:29:34.921115 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:29:34 crc kubenswrapper[4698]: I0127 14:29:34.921127 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:29:34 crc kubenswrapper[4698]: I0127 14:29:34.921143 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:29:34 crc kubenswrapper[4698]: I0127 14:29:34.921155 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:29:34Z","lastTransitionTime":"2026-01-27T14:29:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:29:34 crc kubenswrapper[4698]: I0127 14:29:34.959984 4698 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-10 11:21:14.962452604 +0000 UTC Jan 27 14:29:34 crc kubenswrapper[4698]: I0127 14:29:34.991710 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 14:29:34 crc kubenswrapper[4698]: I0127 14:29:34.991836 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 14:29:34 crc kubenswrapper[4698]: E0127 14:29:34.991895 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 14:29:34 crc kubenswrapper[4698]: E0127 14:29:34.991977 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 14:29:35 crc kubenswrapper[4698]: I0127 14:29:35.009395 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae7402df1ba165eb747293c1d23e4ea89c50cf2342bf52db437c11c283f68755\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:29:35Z is after 2025-08-24T17:21:41Z" Jan 27 14:29:35 crc kubenswrapper[4698]: I0127 14:29:35.023301 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 
14:29:35 crc kubenswrapper[4698]: I0127 14:29:35.023334 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:29:35 crc kubenswrapper[4698]: I0127 14:29:35.023346 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:29:35 crc kubenswrapper[4698]: I0127 14:29:35.023362 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:29:35 crc kubenswrapper[4698]: I0127 14:29:35.023372 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:29:35Z","lastTransitionTime":"2026-01-27T14:29:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:29:35 crc kubenswrapper[4698]: I0127 14:29:35.023579 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:24Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:29:35Z is after 2025-08-24T17:21:41Z" Jan 27 14:29:35 crc kubenswrapper[4698]: I0127 14:29:35.038353 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8dae57de05788973b988fee332168db475b66bb71b4a0570e7d95341dfe84a96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:29:35Z is after 2025-08-24T17:21:41Z" Jan 27 14:29:35 crc kubenswrapper[4698]: I0127 14:29:35.049290 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-flx9b" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2c9fcf55-4a50-4a87-937b-975bc7e00bfa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0cb86e0c67991a076cebc33eb390f49c0f4c40332c423936dd8f3c59b9f7474b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-spqpw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:29:31Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-flx9b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:29:35Z is after 2025-08-24T17:21:41Z" Jan 27 14:29:35 crc kubenswrapper[4698]: I0127 14:29:35.062802 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:24Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:29:35Z is after 2025-08-24T17:21:41Z" Jan 27 14:29:35 crc kubenswrapper[4698]: I0127 14:29:35.077855 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://625a1f8c29f9277903045e66df7a55a4d791239b141aba99572db228ea2735ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://314dcaad0f8949a4671b47667f654bdfacfb6b200438cfbd70401455f6e188b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io
/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:29:35Z is after 2025-08-24T17:21:41Z" Jan 27 14:29:35 crc kubenswrapper[4698]: I0127 14:29:35.089445 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"44c70587-abb1-4e02-ab7e-223d57817925\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b1a209c31294456c43022bb9df5fd230415596fc43ecdb2b6349114989c3ad8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://66b4b527d7836ceed419f9b8a1d851047efc8c052c110ac857316a6a73444333\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,
\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://94a4d586611f4ca0864567739d1ed225b7c63ebd6e6be9e5c0e385379de2ad01\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://95357819c206b188c1fd6bb937b2a57b480a9a61e95143de444141b16e4963ea\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:29:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:29:35Z is after 2025-08-24T17:21:41Z" Jan 27 14:29:35 crc kubenswrapper[4698]: I0127 14:29:35.102132 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-g2kkn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4e135f0c-0c36-44f4-afeb-06994affb352\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d17862aabd67b485023ca018499f7224e0bd91ec0aed6bbce5a187f319e3140\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97l66\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:29:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-g2kkn\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:29:35Z is after 2025-08-24T17:21:41Z" Jan 27 14:29:35 crc kubenswrapper[4698]: I0127 14:29:35.116411 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0ce1118e-e5ad-4adb-8d50-c758116b45ec\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dbbe49aa918c711f9ec638a3fb42ef3be38a9ed3bc0cdde639ba8bba11eab4de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9525d1a6372b19bc2f30eb4e8e522833badb6319bac25ecae77b3313acf1b1a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e384a95fdadc32733a7213ba468af64a1a33e7dd08722cbffde3ad0c461d8ba\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.i
o/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9ee5f5f6bb400359196a28fe605c173c38223c8982a991750c91601aeace150\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e780429dcca871712b55832b0cd0b3e78b7343569cfa71a87bbb2be1bd3f129\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T14:29:24Z\\\",\\\"message\\\":\\\"espace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0127 14:29:09.491666 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 14:29:09.493926 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-900123067/tls.crt::/tmp/serving-cert-900123067/tls.key\\\\\\\"\\\\nI0127 14:29:24.319898 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 14:29:24.322681 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 14:29:24.322759 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 14:29:24.322814 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 14:29:24.322847 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 14:29:24.331581 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 14:29:24.331608 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 14:29:24.331613 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 14:29:24.331618 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 14:29:24.331622 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 14:29:24.331626 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 14:29:24.331630 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 14:29:24.331748 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 14:29:24.333757 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T14:29:08Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a526cfa90f2ba7d074aa6007c909e68f72d5e7a4b91d893c955e7460ff208cfb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:08Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e1a3e5a71b3094750c736335cd70f45977566e58702231351d9d6b195a0cb4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5e1a3e5a71b3094750c736335cd70f45977566e58702231351d9d6b195a0cb4b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:29:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:29:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:29:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:29:35Z is after 2025-08-24T17:21:41Z" Jan 27 14:29:35 crc kubenswrapper[4698]: I0127 14:29:35.125694 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:29:35 crc kubenswrapper[4698]: I0127 14:29:35.125722 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:29:35 crc kubenswrapper[4698]: I0127 14:29:35.125735 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:29:35 crc kubenswrapper[4698]: I0127 14:29:35.125749 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:29:35 crc kubenswrapper[4698]: I0127 14:29:35.125760 4698 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:29:35Z","lastTransitionTime":"2026-01-27T14:29:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:29:35 crc kubenswrapper[4698]: I0127 14:29:35.132835 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-g9vj8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2776dfc9-913b-42b0-9cf2-6fea98d83bc9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f471cea0dc934e7c77ac6ead7ec77f8feb7379b50a7bb5255d7100fbea635a3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bk7ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:29:28Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-g9vj8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:29:35Z is after 2025-08-24T17:21:41Z" Jan 27 14:29:35 crc kubenswrapper[4698]: I0127 14:29:35.145751 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-ndrd6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e403fc5-7005-474c-8c75-b7906b481677\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6f9227503b5cf0962f3fb8ece18f54d84ff2cdfe5628ebf0fa0cef7fe5695c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tt2dz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c533539135e25b310541c9f0b12adcf526b4651a2f6272bce12ea9686dd39ac9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tt2dz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:29:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-ndrd6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:29:35Z is after 2025-08-24T17:21:41Z" Jan 27 14:29:35 crc kubenswrapper[4698]: I0127 14:29:35.156928 4698 generic.go:334] "Generic (PLEG): 
container finished" podID="e045926d-2303-47ea-b25d-dc23982427e4" containerID="de493498bbac49791f6b7409916fbbb62f64bed0def214ea6a06f5182bf7ad3b" exitCode=0 Jan 27 14:29:35 crc kubenswrapper[4698]: I0127 14:29:35.156983 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-vg6nd" event={"ID":"e045926d-2303-47ea-b25d-dc23982427e4","Type":"ContainerDied","Data":"de493498bbac49791f6b7409916fbbb62f64bed0def214ea6a06f5182bf7ad3b"} Jan 27 14:29:35 crc kubenswrapper[4698]: I0127 14:29:35.159788 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-vg6nd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e045926d-2303-47ea-b25d-dc23982427e4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:28Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66snv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c935f517f9ca140d63d1637ef8c01a4a41b94fa72e1086eb6184ae78a6d8da8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c935f517f9ca140d63d1637ef8c01a4a41b94fa72e1086eb6184ae78a6d8da8c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:29:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:29:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66snv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa061cb18143e07dc72a6b75d78b5959b40a9969030ff21d402863b85eba6f91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa061cb18143e07dc72a6b75d78b5959b40a9969030ff21d402863b85eba6f91\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:29:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:29:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-66snv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be40eff60b17b2c010a5e4394fb6843ceb8ee2752781ccbf0030afdd212b81e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be40eff60b17b2c010a5e4394fb6843ceb8ee2752781ccbf0030afdd212b81e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:29:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:29:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66snv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3018d87c9354e8b8d90c59a4a45e982d0b35b89a0b0a9fde703e9452cf5f0024\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3018d87c9354e8b8d90c59a4a45e982d0b35b89a0b0a9fde703e9452cf5f0024\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:29:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:29:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66snv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de493498bbac49791f6b7409916fbbb62f64bed0def214ea6a06f5182bf7ad3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly
\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66snv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66snv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:29:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-vg6nd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:29:35Z is after 2025-08-24T17:21:41Z" Jan 27 14:29:35 crc kubenswrapper[4698]: I0127 14:29:35.175200 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xmpm6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c59a9d01-79ce-42d9-a41d-39d7d73cb03e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:29Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:29Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r7gpg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r7gpg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r7gpg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-r7gpg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r7gpg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r7gpg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r7gpg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r7gpg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3a80917737e61fdb510dabf332c3bd87fb858d27849c42b77703a8becb66b4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b3a80917737e61fdb510dabf332c3bd87fb858d27849c42b77703a8becb66b4c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:29:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:29:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r7gpg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:29:29Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-xmpm6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:29:35Z 
is after 2025-08-24T17:21:41Z" Jan 27 14:29:35 crc kubenswrapper[4698]: I0127 14:29:35.188529 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:29:35Z is after 2025-08-24T17:21:41Z" Jan 27 14:29:35 crc kubenswrapper[4698]: I0127 14:29:35.200325 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://625a1f8c29f9277903045e66df7a55a4d791239b141aba99572db228ea2735ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://314dcaad0f8949a4671b47667f654bdfacfb6b200438cfbd70401455f6e188b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:29:35Z is after 2025-08-24T17:21:41Z" Jan 27 14:29:35 crc kubenswrapper[4698]: I0127 14:29:35.210962 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:24Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:29:35Z is after 2025-08-24T17:21:41Z" Jan 27 14:29:35 crc kubenswrapper[4698]: I0127 14:29:35.225097 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-g2kkn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4e135f0c-0c36-44f4-afeb-06994affb352\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d17862aabd67b485023ca018499f7224e0bd91ec0aed6bbce5a187f319e3140\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97l66\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:29:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-g2kkn\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:29:35Z is after 2025-08-24T17:21:41Z" Jan 27 14:29:35 crc kubenswrapper[4698]: I0127 14:29:35.229062 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:29:35 crc kubenswrapper[4698]: I0127 14:29:35.229105 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:29:35 crc kubenswrapper[4698]: I0127 14:29:35.229119 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:29:35 crc kubenswrapper[4698]: I0127 14:29:35.229136 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:29:35 crc kubenswrapper[4698]: I0127 14:29:35.229147 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:29:35Z","lastTransitionTime":"2026-01-27T14:29:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:29:35 crc kubenswrapper[4698]: I0127 14:29:35.238418 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0ce1118e-e5ad-4adb-8d50-c758116b45ec\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dbbe49aa918c711f9ec638a3fb42ef3be38a9ed3bc0cdde639ba8bba11eab4de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9525d1a6372b19bc2f30eb4e8e522833badb6319bac25ecae77b3313acf1b1a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e384a95fdadc32733a7213ba468af64a1a33e7dd08722cbffde3ad0c461d8ba\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9ee5f5f6bb400359196a28fe605c173c38223c8982a991750c91601aeace150\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e780429dcca871712b55832b0cd0b3e78b7343569cfa71a87bbb2be1bd3f129\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T14:29:24Z\\\",\\\"message\\\":\\\"espace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0127 14:29:09.491666 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 14:29:09.493926 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-900123067/tls.crt::/tmp/serving-cert-900123067/tls.key\\\\\\\"\\\\nI0127 14:29:24.319898 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 14:29:24.322681 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 14:29:24.322759 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 14:29:24.322814 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 14:29:24.322847 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 14:29:24.331581 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 14:29:24.331608 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 14:29:24.331613 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 14:29:24.331618 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 14:29:24.331622 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 14:29:24.331626 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 14:29:24.331630 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 14:29:24.331748 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 14:29:24.333757 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T14:29:08Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a526cfa90f2ba7d074aa6007c909e68f72d5e7a4b91d893c955e7460ff208cfb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:08Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e1a3e5a71b3094750c736335cd70f45977566e58702231351d9d6b195a0cb4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5e1a3e5a71b3094750c736335cd70f45977566e58702231351d9d6b195a0cb4b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:29:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:29:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:29:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:29:35Z is after 2025-08-24T17:21:41Z" Jan 27 14:29:35 crc kubenswrapper[4698]: I0127 14:29:35.252307 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"44c70587-abb1-4e02-ab7e-223d57817925\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b1a209c31294456c43022bb9df5fd230415596fc43ecdb2b6349114989c3ad8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://66b4b527d7836ceed419f9b8a1d851047efc8c052c110ac857316a6a73444333\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://94a4d586611f4ca0864567739d1ed225b7c63ebd6e6be9e5c0e385379de2ad01\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://95357819c206b188c1fd6bb937b2a57b480a9a61e95143de444141b16e4963ea\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:29:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:29:35Z is after 2025-08-24T17:21:41Z" Jan 27 14:29:35 crc kubenswrapper[4698]: I0127 14:29:35.265470 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-ndrd6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e403fc5-7005-474c-8c75-b7906b481677\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6f9227503b5cf0962f3fb8ece18f54d84ff2cdfe5628ebf0fa0cef7fe5695c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tt2dz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c533539135e25b3105
41c9f0b12adcf526b4651a2f6272bce12ea9686dd39ac9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tt2dz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:29:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-ndrd6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:29:35Z is after 2025-08-24T17:21:41Z" Jan 27 14:29:35 crc kubenswrapper[4698]: I0127 14:29:35.281364 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-vg6nd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e045926d-2303-47ea-b25d-dc23982427e4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:28Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66snv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c935f517f9ca140d63d1637ef8c01a4a41b94fa72e1086eb6184ae78a6d8da8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c935f517f9ca140d63d1637ef8c01a4a41b94fa72e1086eb6184ae78a6d8da8c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:29:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:29:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66snv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa061cb18143e07dc72a6b75d78b5959b40a9969030ff21d402863b85eba6f91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa061cb18143e07dc72a6b75d78b5959b40a9969030ff21d402863b85eba6f91\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:29:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:29:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-66snv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be40eff60b17b2c010a5e4394fb6843ceb8ee2752781ccbf0030afdd212b81e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be40eff60b17b2c010a5e4394fb6843ceb8ee2752781ccbf0030afdd212b81e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:29:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:29:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66snv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3018d87c9354e8b8d90c59a4a45e982d0b35b89a0b0a9fde703e9452cf5f0024\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3018d87c9354e8b8d90c59a4a45e982d0b35b89a0b0a9fde703e9452cf5f0024\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:29:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:29:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66snv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de493498bbac49791f6b7409916fbbb62f64bed0def214ea6a06f5182bf7ad3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://de493498bbac49791f6b7409916fbbb62f64bed0def214ea6a06f5182bf7ad3b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:29:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:29:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",
\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66snv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66snv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:29:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-vg6nd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:29:35Z is after 2025-08-24T17:21:41Z" Jan 27 14:29:35 crc kubenswrapper[4698]: I0127 14:29:35.302046 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xmpm6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c59a9d01-79ce-42d9-a41d-39d7d73cb03e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:29Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:29Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r7gpg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r7gpg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r7gpg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-r7gpg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r7gpg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r7gpg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r7gpg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r7gpg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3a80917737e61fdb510dabf332c3bd87fb858d27849c42b77703a8becb66b4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b3a80917737e61fdb510dabf332c3bd87fb858d27849c42b77703a8becb66b4c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:29:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:29:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r7gpg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:29:29Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-xmpm6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:29:35Z 
is after 2025-08-24T17:21:41Z" Jan 27 14:29:35 crc kubenswrapper[4698]: I0127 14:29:35.318130 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:29:35Z is after 2025-08-24T17:21:41Z" Jan 27 14:29:35 crc kubenswrapper[4698]: I0127 14:29:35.330931 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-g9vj8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2776dfc9-913b-42b0-9cf2-6fea98d83bc9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f471cea0dc934e7c77ac6ead7ec77f8feb7379b50a7bb5255d7100fbea635a3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bk7ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:29:28Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-g9vj8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:29:35Z is after 2025-08-24T17:21:41Z" Jan 27 14:29:35 crc kubenswrapper[4698]: I0127 14:29:35.332950 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:29:35 crc kubenswrapper[4698]: I0127 14:29:35.332998 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:29:35 crc kubenswrapper[4698]: I0127 14:29:35.333012 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:29:35 crc kubenswrapper[4698]: I0127 14:29:35.333032 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:29:35 crc kubenswrapper[4698]: I0127 14:29:35.333044 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:29:35Z","lastTransitionTime":"2026-01-27T14:29:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:29:35 crc kubenswrapper[4698]: I0127 14:29:35.344987 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae7402df1ba165eb747293c1d23e4ea89c50cf2342bf52db437c11c283f68755\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:29:35Z is after 2025-08-24T17:21:41Z" Jan 27 14:29:35 crc kubenswrapper[4698]: I0127 14:29:35.358125 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:24Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:29:35Z is after 2025-08-24T17:21:41Z" Jan 27 14:29:35 crc kubenswrapper[4698]: I0127 14:29:35.371119 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8dae57de05788973b988fee332168db475b66bb71b4a0570e7d95341dfe84a96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:29:35Z is after 2025-08-24T17:21:41Z" Jan 27 14:29:35 crc kubenswrapper[4698]: I0127 14:29:35.383288 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-flx9b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2c9fcf55-4a50-4a87-937b-975bc7e00bfa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0cb86e0c67991a076cebc33eb390f49c0f4c40332c423936dd8f3c59b9f7474b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-spqpw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:29:31Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-flx9b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:29:35Z is after 2025-08-24T17:21:41Z" Jan 27 14:29:35 crc kubenswrapper[4698]: I0127 14:29:35.435487 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:29:35 crc kubenswrapper[4698]: I0127 14:29:35.435532 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:29:35 crc kubenswrapper[4698]: I0127 14:29:35.435549 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Jan 27 14:29:35 crc kubenswrapper[4698]: I0127 14:29:35.435565 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:29:35 crc kubenswrapper[4698]: I0127 14:29:35.435577 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:29:35Z","lastTransitionTime":"2026-01-27T14:29:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:29:35 crc kubenswrapper[4698]: I0127 14:29:35.538126 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:29:35 crc kubenswrapper[4698]: I0127 14:29:35.538160 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:29:35 crc kubenswrapper[4698]: I0127 14:29:35.538177 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:29:35 crc kubenswrapper[4698]: I0127 14:29:35.538221 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:29:35 crc kubenswrapper[4698]: I0127 14:29:35.538234 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:29:35Z","lastTransitionTime":"2026-01-27T14:29:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:29:35 crc kubenswrapper[4698]: I0127 14:29:35.640021 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:29:35 crc kubenswrapper[4698]: I0127 14:29:35.640091 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:29:35 crc kubenswrapper[4698]: I0127 14:29:35.640106 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:29:35 crc kubenswrapper[4698]: I0127 14:29:35.640123 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:29:35 crc kubenswrapper[4698]: I0127 14:29:35.640132 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:29:35Z","lastTransitionTime":"2026-01-27T14:29:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:29:35 crc kubenswrapper[4698]: I0127 14:29:35.742354 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:29:35 crc kubenswrapper[4698]: I0127 14:29:35.742395 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:29:35 crc kubenswrapper[4698]: I0127 14:29:35.742407 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:29:35 crc kubenswrapper[4698]: I0127 14:29:35.742421 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:29:35 crc kubenswrapper[4698]: I0127 14:29:35.742430 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:29:35Z","lastTransitionTime":"2026-01-27T14:29:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:29:35 crc kubenswrapper[4698]: I0127 14:29:35.846143 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:29:35 crc kubenswrapper[4698]: I0127 14:29:35.846220 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:29:35 crc kubenswrapper[4698]: I0127 14:29:35.846234 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:29:35 crc kubenswrapper[4698]: I0127 14:29:35.846262 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:29:35 crc kubenswrapper[4698]: I0127 14:29:35.846278 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:29:35Z","lastTransitionTime":"2026-01-27T14:29:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:29:35 crc kubenswrapper[4698]: I0127 14:29:35.949713 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:29:35 crc kubenswrapper[4698]: I0127 14:29:35.949755 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:29:35 crc kubenswrapper[4698]: I0127 14:29:35.949766 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:29:35 crc kubenswrapper[4698]: I0127 14:29:35.949784 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:29:35 crc kubenswrapper[4698]: I0127 14:29:35.949796 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:29:35Z","lastTransitionTime":"2026-01-27T14:29:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 27 14:29:35 crc kubenswrapper[4698]: I0127 14:29:35.961099 4698 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-16 09:11:40.661096063 +0000 UTC
Jan 27 14:29:35 crc kubenswrapper[4698]: I0127 14:29:35.991410 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 27 14:29:35 crc kubenswrapper[4698]: E0127 14:29:35.991567 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 27 14:29:36 crc kubenswrapper[4698]: I0127 14:29:36.051910 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 14:29:36 crc kubenswrapper[4698]: I0127 14:29:36.051950 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 14:29:36 crc kubenswrapper[4698]: I0127 14:29:36.051964 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 14:29:36 crc kubenswrapper[4698]: I0127 14:29:36.051989 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 14:29:36 crc kubenswrapper[4698]: I0127 14:29:36.051999 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:29:36Z","lastTransitionTime":"2026-01-27T14:29:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 14:29:36 crc kubenswrapper[4698]: I0127 14:29:36.154785 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 14:29:36 crc kubenswrapper[4698]: I0127 14:29:36.154856 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 14:29:36 crc kubenswrapper[4698]: I0127 14:29:36.154874 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 14:29:36 crc kubenswrapper[4698]: I0127 14:29:36.154897 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 14:29:36 crc kubenswrapper[4698]: I0127 14:29:36.154914 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:29:36Z","lastTransitionTime":"2026-01-27T14:29:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 14:29:36 crc kubenswrapper[4698]: I0127 14:29:36.260934 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 14:29:36 crc kubenswrapper[4698]: I0127 14:29:36.260975 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 14:29:36 crc kubenswrapper[4698]: I0127 14:29:36.260984 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 14:29:36 crc kubenswrapper[4698]: I0127 14:29:36.260999 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 14:29:36 crc kubenswrapper[4698]: I0127 14:29:36.261008 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:29:36Z","lastTransitionTime":"2026-01-27T14:29:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 14:29:36 crc kubenswrapper[4698]: I0127 14:29:36.363035 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 14:29:36 crc kubenswrapper[4698]: I0127 14:29:36.363088 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 14:29:36 crc kubenswrapper[4698]: I0127 14:29:36.363097 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 14:29:36 crc kubenswrapper[4698]: I0127 14:29:36.363113 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 14:29:36 crc kubenswrapper[4698]: I0127 14:29:36.363123 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:29:36Z","lastTransitionTime":"2026-01-27T14:29:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 14:29:36 crc kubenswrapper[4698]: I0127 14:29:36.465166 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 14:29:36 crc kubenswrapper[4698]: I0127 14:29:36.465207 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 14:29:36 crc kubenswrapper[4698]: I0127 14:29:36.465215 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 14:29:36 crc kubenswrapper[4698]: I0127 14:29:36.465229 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 14:29:36 crc kubenswrapper[4698]: I0127 14:29:36.465239 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:29:36Z","lastTransitionTime":"2026-01-27T14:29:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 14:29:36 crc kubenswrapper[4698]: I0127 14:29:36.567589 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 14:29:36 crc kubenswrapper[4698]: I0127 14:29:36.567979 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 14:29:36 crc kubenswrapper[4698]: I0127 14:29:36.567992 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 14:29:36 crc kubenswrapper[4698]: I0127 14:29:36.568008 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 14:29:36 crc kubenswrapper[4698]: I0127 14:29:36.568019 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:29:36Z","lastTransitionTime":"2026-01-27T14:29:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 14:29:36 crc kubenswrapper[4698]: I0127 14:29:36.670026 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 14:29:36 crc kubenswrapper[4698]: I0127 14:29:36.670070 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 14:29:36 crc kubenswrapper[4698]: I0127 14:29:36.670080 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 14:29:36 crc kubenswrapper[4698]: I0127 14:29:36.670097 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 14:29:36 crc kubenswrapper[4698]: I0127 14:29:36.670111 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:29:36Z","lastTransitionTime":"2026-01-27T14:29:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 14:29:36 crc kubenswrapper[4698]: I0127 14:29:36.772537 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 14:29:36 crc kubenswrapper[4698]: I0127 14:29:36.772567 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 14:29:36 crc kubenswrapper[4698]: I0127 14:29:36.772575 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 14:29:36 crc kubenswrapper[4698]: I0127 14:29:36.772590 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 14:29:36 crc kubenswrapper[4698]: I0127 14:29:36.772599 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:29:36Z","lastTransitionTime":"2026-01-27T14:29:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 14:29:36 crc kubenswrapper[4698]: I0127 14:29:36.874626 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 14:29:36 crc kubenswrapper[4698]: I0127 14:29:36.874691 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 14:29:36 crc kubenswrapper[4698]: I0127 14:29:36.874705 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 14:29:36 crc kubenswrapper[4698]: I0127 14:29:36.874724 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 14:29:36 crc kubenswrapper[4698]: I0127 14:29:36.874735 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:29:36Z","lastTransitionTime":"2026-01-27T14:29:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 14:29:36 crc kubenswrapper[4698]: I0127 14:29:36.961805 4698 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-16 07:08:38.35988459 +0000 UTC
Jan 27 14:29:36 crc kubenswrapper[4698]: I0127 14:29:36.976994 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 14:29:36 crc kubenswrapper[4698]: I0127 14:29:36.977029 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 14:29:36 crc kubenswrapper[4698]: I0127 14:29:36.977040 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 14:29:36 crc kubenswrapper[4698]: I0127 14:29:36.977055 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 14:29:36 crc kubenswrapper[4698]: I0127 14:29:36.977069 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:29:36Z","lastTransitionTime":"2026-01-27T14:29:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 14:29:36 crc kubenswrapper[4698]: I0127 14:29:36.991343 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 27 14:29:36 crc kubenswrapper[4698]: E0127 14:29:36.991472 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 27 14:29:36 crc kubenswrapper[4698]: I0127 14:29:36.991348 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 27 14:29:36 crc kubenswrapper[4698]: E0127 14:29:36.991542 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 27 14:29:37 crc kubenswrapper[4698]: I0127 14:29:37.079458 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 14:29:37 crc kubenswrapper[4698]: I0127 14:29:37.079495 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 14:29:37 crc kubenswrapper[4698]: I0127 14:29:37.079504 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 14:29:37 crc kubenswrapper[4698]: I0127 14:29:37.079519 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 14:29:37 crc kubenswrapper[4698]: I0127 14:29:37.079528 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:29:37Z","lastTransitionTime":"2026-01-27T14:29:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 14:29:37 crc kubenswrapper[4698]: I0127 14:29:37.166313 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xmpm6" event={"ID":"c59a9d01-79ce-42d9-a41d-39d7d73cb03e","Type":"ContainerStarted","Data":"d5c9782c1425d3337bda4ba4d775ed4c8cf828f4a58294f9a917b443b66e04ab"}
Jan 27 14:29:37 crc kubenswrapper[4698]: I0127 14:29:37.166913 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-xmpm6"
Jan 27 14:29:37 crc kubenswrapper[4698]: I0127 14:29:37.167108 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-xmpm6"
Jan 27 14:29:37 crc kubenswrapper[4698]: I0127 14:29:37.171051 4698 generic.go:334] "Generic (PLEG): container finished" podID="e045926d-2303-47ea-b25d-dc23982427e4" containerID="14af7d5dd3526aef22f7e95409ff88c3bbbc29e6dd9dd7c41b9be5bab49bf074" exitCode=0
Jan 27 14:29:37 crc kubenswrapper[4698]: I0127 14:29:37.171097 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-vg6nd" event={"ID":"e045926d-2303-47ea-b25d-dc23982427e4","Type":"ContainerDied","Data":"14af7d5dd3526aef22f7e95409ff88c3bbbc29e6dd9dd7c41b9be5bab49bf074"}
Jan 27 14:29:37 crc kubenswrapper[4698]: I0127 14:29:37.180838 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0ce1118e-e5ad-4adb-8d50-c758116b45ec\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dbbe49aa918c711f9ec638a3fb42ef3be38a9ed3bc0cdde639ba8bba11eab4de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9525d1a6372b19bc2f30eb4e8e522833badb6319bac25ecae77b3313acf1b1a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e384a95fdadc32733a7213ba468af64a1a33e7dd08722cbffde3ad0c461d8ba\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9ee5f5f6bb400359196a28fe605c173c38223c8982a991750c91601aeace150\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e780429dcca871712b55832b0cd0b3e78b7343569cfa71a87bbb2be1bd3f129\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T14:29:24Z\\\",\\\"message\\\":\\\"espace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0127 14:29:09.491666 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 14:29:09.493926 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-900123067/tls.crt::/tmp/serving-cert-900123067/tls.key\\\\\\\"\\\\nI0127 14:29:24.319898 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 14:29:24.322681 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 14:29:24.322759 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 14:29:24.322814 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 14:29:24.322847 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 14:29:24.331581 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 14:29:24.331608 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 14:29:24.331613 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 14:29:24.331618 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 14:29:24.331622 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 14:29:24.331626 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 14:29:24.331630 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 14:29:24.331748 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 14:29:24.333757 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T14:29:08Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a526cfa90f2ba7d074aa6007c909e68f72d5e7a4b91d893c955e7460ff208cfb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:08Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e1a3e5a71b3094750c736335cd70f45977566e58702231351d9d6b195a0cb4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5e1a3e5a71b3094750c736335cd70f45977566e58702231351d9d6b195a0cb4b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:29:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:29:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:29:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:29:37Z is after 2025-08-24T17:21:41Z"
Jan 27 14:29:37 crc kubenswrapper[4698]: I0127 14:29:37.182985 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 14:29:37 crc kubenswrapper[4698]: I0127 14:29:37.183013 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 14:29:37 crc kubenswrapper[4698]: I0127 14:29:37.183023 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 14:29:37 crc kubenswrapper[4698]: I0127 14:29:37.183038 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 14:29:37 crc kubenswrapper[4698]: I0127 14:29:37.183048 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:29:37Z","lastTransitionTime":"2026-01-27T14:29:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 14:29:37 crc kubenswrapper[4698]: I0127 14:29:37.196896 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"44c70587-abb1-4e02-ab7e-223d57817925\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b1a209c31294456c43022bb9df5fd230415596fc43ecdb2b6349114989c3ad8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://66b4b527d7836ceed419f9b8a1d851047efc8c052c110ac857316a6a73444333\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://94a4d586611f4ca0864567739d1ed225b7c63ebd6e6be9e5c0e385379de2ad01\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://95357819c206b188c1fd6bb937b2a57b480a9a61e95143de444141b16e4963ea\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:29:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:29:37Z is after 2025-08-24T17:21:41Z"
Jan 27 14:29:37 crc kubenswrapper[4698]: I0127 14:29:37.203918 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-xmpm6"
Jan 27 14:29:37 crc kubenswrapper[4698]: I0127 14:29:37.204874 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-xmpm6"
Jan 27 14:29:37 crc kubenswrapper[4698]: I0127 14:29:37.213105 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-g2kkn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4e135f0c-0c36-44f4-afeb-06994affb352\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d17862aabd67b485023ca018499f7224e0bd91ec0aed6bbce5a187f319e3140\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97l66\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:29:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-g2kkn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:29:37Z is after 2025-08-24T17:21:41Z"
Jan 27 14:29:37 crc kubenswrapper[4698]: I0127 14:29:37.227483 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:29:37Z is after 2025-08-24T17:21:41Z"
Jan 27 14:29:37 crc kubenswrapper[4698]: I0127 14:29:37.237364 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-g9vj8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2776dfc9-913b-42b0-9cf2-6fea98d83bc9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f471cea0dc934e7c77ac6ead7ec77f8feb7379b50a7bb5255d7100fbea635a3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bk7ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:29:28Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-g9vj8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:29:37Z is after 2025-08-24T17:21:41Z"
Jan 27 14:29:37 crc kubenswrapper[4698]: I0127 14:29:37.250078 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-ndrd6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e403fc5-7005-474c-8c75-b7906b481677\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6f9227503b5cf0962f3fb8ece18f54d84ff2cdfe5628ebf0fa0cef7fe5695c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tt2dz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c533539135e25b310541c9f0b12adcf526b4651a2f6272bce12ea9686dd39ac9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tt2dz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:29:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-ndrd6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:29:37Z is after 2025-08-24T17:21:41Z"
Jan 27 14:29:37 crc kubenswrapper[4698]: I0127 14:29:37.266491 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-vg6nd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e045926d-2303-47ea-b25d-dc23982427e4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:28Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66snv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c935f517f9ca140d63d1637ef8c01a4a41b94fa72e1086eb6184ae78a6d8da8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c935f517f9ca140d63d1637ef8c01a4a41b94fa72e1086eb6184ae78a6d8da8c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:29:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:29:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66snv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa061cb18143e07dc72a6b75d78b5959b40a9969030ff21d402863b85eba6f91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa061cb18143e07dc72a6b75d78b5959b40a9969030ff21d402863b85eba6f91\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:29:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:29:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66snv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be40eff60b17b2c010a5e4394fb6843ceb8ee2752781ccbf0030afdd212b81e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be40eff60b17b2c010a5e4394fb6843ceb8ee2752781ccbf0030afdd212b81e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:29:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:29:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66snv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3018d87c9354e8b8d90c59a4a45e982d0b35b89a0b0a9fde703e9452cf5f0024\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3018d87c9354e8b8d90c59a4a45e982d0b35b89a0b0a9fde703e9452cf5f0024\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:29:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:29:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66snv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de493498bbac49791f6b7409916fbbb62f64bed0def214ea6a06f5182bf7ad3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://de493498bbac49791f6b7409916fbbb62f64bed0def214ea6a06f5182bf7ad3b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:29:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:29:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66snv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66snv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:29:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-vg6nd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:29:37Z is after 2025-08-24T17:21:41Z"
Jan 27 14:29:37 crc kubenswrapper[4698]: I0127 14:29:37.284938 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xmpm6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c59a9d01-79ce-42d9-a41d-39d7d73cb03e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:29Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:29Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d81778ae40167de932debf7266f3c793d77edeace2116220c21cf89e15e4cc98\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r7gpg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea6d17f5861064198539f0b84b559f9d28cfa5decaa89a9059469af5a3e12e09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r7gpg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c98fd6e3005c979f95419686ec2143aa282fa56d7306ecaff766759e34b68b2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r7gpg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e38585fe9db52e6e7cc5c3a41e23a0bb761977a3b25fd281d86458f422e3791\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r7gpg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b80c023f92c9854f8c16b6b50c10dac58f968938615375019ba2875d9cd69e5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r7gpg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://87e7499ab247dff50eee2a5e113ed36ff00c467e8e1bcdfee45e8835815d4b81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r7gpg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5c9782c1425d3337bda4ba4d775ed4c8cf828f4a58294f9a917b443b66e04ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r7gpg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0356c9c90e9c8e65b9452361aa486ffe95757bd69803077b32fe7ba0223ae1da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r7gpg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3a80917737e61fdb510dabf332c3bd87fb858d27849c42b77703a8becb66b4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b3a80917737e61fdb510dabf332c3bd87fb858d27849c42b77703a8becb66b4c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:29:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:29:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r7gpg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:29:29Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-xmpm6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:29:37Z is after 2025-08-24T17:21:41Z"
Jan 27 14:29:37 crc kubenswrapper[4698]: I0127 14:29:37.286646 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 14:29:37 crc kubenswrapper[4698]: I0127 14:29:37.286729 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 14:29:37 crc kubenswrapper[4698]: I0127 14:29:37.286742 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 14:29:37 crc kubenswrapper[4698]: I0127 14:29:37.286818 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 14:29:37 crc kubenswrapper[4698]: I0127 14:29:37.286842 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:29:37Z","lastTransitionTime":"2026-01-27T14:29:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 14:29:37 crc kubenswrapper[4698]: I0127 14:29:37.298926 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae7402df1ba165eb747293c1d23e4ea89c50cf2342bf52db437c11c283f68755\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:29:37Z is after 2025-08-24T17:21:41Z"
Jan 27 14:29:37 crc kubenswrapper[4698]: I0127 14:29:37.310937 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:24Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:29:37Z is after 2025-08-24T17:21:41Z" Jan 27 14:29:37 crc kubenswrapper[4698]: I0127 14:29:37.324498 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8dae57de05788973b988fee332168db475b66bb71b4a0570e7d95341dfe84a96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:29:37Z is after 2025-08-24T17:21:41Z" Jan 27 14:29:37 crc kubenswrapper[4698]: I0127 14:29:37.335456 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-flx9b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2c9fcf55-4a50-4a87-937b-975bc7e00bfa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0cb86e0c67991a076cebc33eb390f49c0f4c40332c423936dd8f3c59b9f7474b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-spqpw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:29:31Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-flx9b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:29:37Z is after 2025-08-24T17:21:41Z" Jan 27 14:29:37 crc kubenswrapper[4698]: I0127 14:29:37.349493 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:24Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:29:37Z is after 2025-08-24T17:21:41Z" Jan 27 14:29:37 crc kubenswrapper[4698]: I0127 14:29:37.364478 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://625a1f8c29f9277903045e66df7a55a4d791239b141aba99572db228ea2735ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://314dcaad0f8949a4671b47667f654bdfacfb6b200438cfbd70401455f6e188b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io
/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:29:37Z is after 2025-08-24T17:21:41Z" Jan 27 14:29:37 crc kubenswrapper[4698]: I0127 14:29:37.379853 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae7402df1ba165eb747293c1d23e4ea89c50cf2342bf52db437c11c283f68755\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:29:37Z is after 2025-08-24T17:21:41Z" Jan 27 14:29:37 crc kubenswrapper[4698]: I0127 
14:29:37.394838 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:29:37 crc kubenswrapper[4698]: I0127 14:29:37.394878 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:29:37 crc kubenswrapper[4698]: I0127 14:29:37.394890 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:29:37 crc kubenswrapper[4698]: I0127 14:29:37.394908 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:29:37 crc kubenswrapper[4698]: I0127 14:29:37.394920 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:29:37Z","lastTransitionTime":"2026-01-27T14:29:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:29:37 crc kubenswrapper[4698]: I0127 14:29:37.397988 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:24Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:29:37Z is after 2025-08-24T17:21:41Z" Jan 27 14:29:37 crc kubenswrapper[4698]: I0127 14:29:37.411225 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8dae57de05788973b988fee332168db475b66bb71b4a0570e7d95341dfe84a96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:29:37Z is after 2025-08-24T17:21:41Z" Jan 27 14:29:37 crc kubenswrapper[4698]: I0127 14:29:37.424085 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-flx9b" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2c9fcf55-4a50-4a87-937b-975bc7e00bfa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0cb86e0c67991a076cebc33eb390f49c0f4c40332c423936dd8f3c59b9f7474b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-spqpw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:29:31Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-flx9b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:29:37Z is after 2025-08-24T17:21:41Z" Jan 27 14:29:37 crc kubenswrapper[4698]: I0127 14:29:37.438464 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:24Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:29:37Z is after 2025-08-24T17:21:41Z" Jan 27 14:29:37 crc kubenswrapper[4698]: I0127 14:29:37.452026 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://625a1f8c29f9277903045e66df7a55a4d791239b141aba99572db228ea2735ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://314dcaad0f8949a4671b47667f654bdfacfb6b200438cfbd70401455f6e188b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io
/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:29:37Z is after 2025-08-24T17:21:41Z" Jan 27 14:29:37 crc kubenswrapper[4698]: I0127 14:29:37.467308 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0ce1118e-e5ad-4adb-8d50-c758116b45ec\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dbbe49aa918c711f9ec638a3fb42ef3be38a9ed3bc0cdde639ba8bba11eab4de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9525d1a6372b19bc2f30eb4e8e522833badb6319bac25ecae77b3313acf1b1a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e384a95fdadc32733a7213ba468af64a1a33e7dd08722cbffde3ad0c461d8ba\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9ee5f5f6bb400359196a28fe605c173c38223c8982a991750c91601aeace150\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e780429dcca871712b55832b0cd0b3e78b7343569cfa71a87bbb2be1bd3f129\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T14:29:24Z\\\",\\\"message\\\":\\\"espace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0127 14:29:09.491666 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 14:29:09.493926 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-900123067/tls.crt::/tmp/serving-cert-900123067/tls.key\\\\\\\"\\\\nI0127 14:29:24.319898 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 14:29:24.322681 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 14:29:24.322759 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 14:29:24.322814 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 14:29:24.322847 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 14:29:24.331581 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 14:29:24.331608 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 14:29:24.331613 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 14:29:24.331618 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 14:29:24.331622 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 14:29:24.331626 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 14:29:24.331630 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 14:29:24.331748 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 14:29:24.333757 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T14:29:08Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a526cfa90f2ba7d074aa6007c909e68f72d5e7a4b91d893c955e7460ff208cfb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:08Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e1a3e5a71b3094750c736335cd70f45977566e58702231351d9d6b195a0cb4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5e1a3e5a71b3094750c736335cd70f45977566e58702231351d9d6b195a0cb4b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:29:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:29:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:29:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:29:37Z is after 2025-08-24T17:21:41Z" Jan 27 14:29:37 crc kubenswrapper[4698]: I0127 14:29:37.482046 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"44c70587-abb1-4e02-ab7e-223d57817925\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b1a209c31294456c43022bb9df5fd230415596fc43ecdb2b6349114989c3ad8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://66b4b527d7836ceed419f9b8a1d851047efc8c052c110ac857316a6a73444333\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://94a4d586611f4ca0864567739d1ed225b7c63ebd6e6be9e5c0e385379de2ad01\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://95357819c206b188c1fd6bb937b2a57b480a9a61e95143de444141b16e4963ea\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:29:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:29:37Z is after 2025-08-24T17:21:41Z" Jan 27 14:29:37 crc kubenswrapper[4698]: I0127 14:29:37.497116 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-g2kkn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4e135f0c-0c36-44f4-afeb-06994affb352\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d17862aabd67b485023ca018499f7224e0bd91ec0aed6bbce5a187f319e3140\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run
/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97l66\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:29:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-g2kkn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:29:37Z is after 2025-08-24T17:21:41Z" Jan 27 14:29:37 crc kubenswrapper[4698]: I0127 14:29:37.497328 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:29:37 crc kubenswrapper[4698]: I0127 14:29:37.497377 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:29:37 crc kubenswrapper[4698]: I0127 14:29:37.497390 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:29:37 crc kubenswrapper[4698]: I0127 14:29:37.497408 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:29:37 crc kubenswrapper[4698]: I0127 14:29:37.497420 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:29:37Z","lastTransitionTime":"2026-01-27T14:29:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:29:37 crc kubenswrapper[4698]: I0127 14:29:37.510713 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:29:37Z is after 2025-08-24T17:21:41Z" Jan 27 14:29:37 crc kubenswrapper[4698]: I0127 14:29:37.526025 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-g9vj8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2776dfc9-913b-42b0-9cf2-6fea98d83bc9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f471cea0dc934e7c77ac6ead7ec77f8feb7379b50a7bb5255d7100fbea635a3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bk7ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:29:28Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-g9vj8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:29:37Z is after 2025-08-24T17:21:41Z" Jan 27 14:29:37 crc kubenswrapper[4698]: I0127 14:29:37.537188 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-ndrd6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e403fc5-7005-474c-8c75-b7906b481677\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6f9227503b5cf0962f3fb8ece18f54d84ff2cdfe5628ebf0fa0cef7fe5695c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tt2dz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c533539135e25b310541c9f0b12adcf526b4651a2f6272bce12ea9686dd39ac9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tt2dz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:29:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-ndrd6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:29:37Z is after 2025-08-24T17:21:41Z" Jan 27 14:29:37 crc kubenswrapper[4698]: I0127 14:29:37.553746 4698 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-additional-cni-plugins-vg6nd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e045926d-2303-47ea-b25d-dc23982427e4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66snv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c935f517f9ca140d63d1637ef8c01a4a41b94fa72e1086eb6184ae78a6d8da8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c935f517f9ca140d63d1637ef8c01a4a41b94fa72e1086eb6184ae78a6d8da8c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:29:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:29:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66snv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa061cb18143e07dc72a6b75d78b5959b40a9969030ff21d402863b85eba6f91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a168
8df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa061cb18143e07dc72a6b75d78b5959b40a9969030ff21d402863b85eba6f91\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:29:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:29:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66snv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be40eff60b17b2c010a5e4394fb6843ceb8ee2752781ccbf0030afdd212b81e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be40eff60b17b2c010a5e4394fb6843ceb8ee2752781ccbf0030afdd212b81e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:29:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:29:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66snv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3018d87c9354e8b8d90c59a4a45e982d0b35b89a0b0a9fde703e9452cf5f0024\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3018d87c9354e8b8d90c59a4a45e982d0b35b89a0b0a9fde703e9452cf5f0024\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:29:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:29:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"
/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66snv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de493498bbac49791f6b7409916fbbb62f64bed0def214ea6a06f5182bf7ad3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://de493498bbac49791f6b7409916fbbb62f64bed0def214ea6a06f5182bf7ad3b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:29:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:29:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66snv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://14af7d5dd3526aef22f7e95409ff88c3bbbc29e6dd9dd7c41b9be5bab49bf074\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://14af7d5dd3526aef22f7e95409ff88c3bbbc29e6dd9dd7c41b9be5bab49bf074\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:29:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:29:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66snv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:29:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-vg6nd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:29:37Z is after 2025-08-24T17:21:41Z" Jan 27 14:29:37 crc kubenswrapper[4698]: I0127 14:29:37.575838 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xmpm6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c59a9d01-79ce-42d9-a41d-39d7d73cb03e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:29Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:29Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d81778ae40167de932debf7266f3c793d77edeace2116220c21cf89e15e4cc98\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r7gpg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea6d17f5861064198539f0b84b559f9d28cfa5decaa89a9059469af5a3e12e09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r7gpg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c98fd6e3005c979f95419686ec2143aa282fa56d7306ecaff766759e34b68b2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r7gpg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e38585fe9db52e6e7cc5c3a41e23a0bb761977a3b25fd281d86458f422e3791\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r7gpg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b80c023f92c9854f8c16b6b50c10dac58f968938615375019ba2875d9cd69e5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r7gpg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://87e7499ab247dff50eee2a5e113ed36ff00c467e8e1bcdfee45e8835815d4b81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r7gpg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5c9782c1425d3337bda4ba4d775ed4c8cf828f4a58294f9a917b443b66e04ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r7gpg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"D
isabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0356c9c90e9c8e65b9452361aa486ffe95757bd69803077b32fe7ba0223ae1da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r7gpg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3a80917737e61fdb510dabf332c3bd87fb858d27849c42b77703a8becb66b4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b3a80917737e61fdb510dabf332c3bd87fb858d27849c42b77703a8becb66b4c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:29:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:29:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r7gpg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:29:29Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-xmpm6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:29:37Z is after 2025-08-24T17:21:41Z" Jan 27 14:29:37 crc kubenswrapper[4698]: I0127 14:29:37.599713 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:29:37 crc kubenswrapper[4698]: I0127 14:29:37.599962 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:29:37 crc kubenswrapper[4698]: I0127 14:29:37.600113 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:29:37 crc kubenswrapper[4698]: I0127 14:29:37.600405 4698 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeNotReady" Jan 27 14:29:37 crc kubenswrapper[4698]: I0127 14:29:37.600600 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:29:37Z","lastTransitionTime":"2026-01-27T14:29:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:29:37 crc kubenswrapper[4698]: I0127 14:29:37.703247 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:29:37 crc kubenswrapper[4698]: I0127 14:29:37.703684 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:29:37 crc kubenswrapper[4698]: I0127 14:29:37.703828 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:29:37 crc kubenswrapper[4698]: I0127 14:29:37.703963 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:29:37 crc kubenswrapper[4698]: I0127 14:29:37.704089 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:29:37Z","lastTransitionTime":"2026-01-27T14:29:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:29:37 crc kubenswrapper[4698]: I0127 14:29:37.806651 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:29:37 crc kubenswrapper[4698]: I0127 14:29:37.806695 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:29:37 crc kubenswrapper[4698]: I0127 14:29:37.806706 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:29:37 crc kubenswrapper[4698]: I0127 14:29:37.806720 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:29:37 crc kubenswrapper[4698]: I0127 14:29:37.806729 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:29:37Z","lastTransitionTime":"2026-01-27T14:29:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:29:37 crc kubenswrapper[4698]: I0127 14:29:37.908770 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:29:37 crc kubenswrapper[4698]: I0127 14:29:37.909127 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:29:37 crc kubenswrapper[4698]: I0127 14:29:37.909299 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:29:37 crc kubenswrapper[4698]: I0127 14:29:37.909443 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:29:37 crc kubenswrapper[4698]: I0127 14:29:37.909581 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:29:37Z","lastTransitionTime":"2026-01-27T14:29:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:29:37 crc kubenswrapper[4698]: I0127 14:29:37.962942 4698 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-05 14:13:01.727560615 +0000 UTC Jan 27 14:29:37 crc kubenswrapper[4698]: I0127 14:29:37.991299 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 14:29:37 crc kubenswrapper[4698]: E0127 14:29:37.991698 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 14:29:38 crc kubenswrapper[4698]: I0127 14:29:38.013897 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:29:38 crc kubenswrapper[4698]: I0127 14:29:38.014250 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:29:38 crc kubenswrapper[4698]: I0127 14:29:38.014322 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:29:38 crc kubenswrapper[4698]: I0127 14:29:38.014419 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:29:38 crc kubenswrapper[4698]: I0127 14:29:38.014511 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:29:38Z","lastTransitionTime":"2026-01-27T14:29:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:29:38 crc kubenswrapper[4698]: I0127 14:29:38.116486 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:29:38 crc kubenswrapper[4698]: I0127 14:29:38.116523 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:29:38 crc kubenswrapper[4698]: I0127 14:29:38.116535 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:29:38 crc kubenswrapper[4698]: I0127 14:29:38.116550 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:29:38 crc kubenswrapper[4698]: I0127 14:29:38.116560 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:29:38Z","lastTransitionTime":"2026-01-27T14:29:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:29:38 crc kubenswrapper[4698]: I0127 14:29:38.178226 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-vg6nd" event={"ID":"e045926d-2303-47ea-b25d-dc23982427e4","Type":"ContainerStarted","Data":"2e1bf1729f089f253e8e1259e40ef976b85e433358642b6231f081c494851048"} Jan 27 14:29:38 crc kubenswrapper[4698]: I0127 14:29:38.178292 4698 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 27 14:29:38 crc kubenswrapper[4698]: I0127 14:29:38.193241 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae7402df1ba165eb747293c1d23e4ea89c50cf2342bf52db437c11c283f68755\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:29:38Z is after 2025-08-24T17:21:41Z" Jan 27 14:29:38 crc kubenswrapper[4698]: I0127 14:29:38.205621 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:24Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:29:38Z is after 2025-08-24T17:21:41Z" Jan 27 14:29:38 crc kubenswrapper[4698]: I0127 14:29:38.219163 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8dae57de05788973b988fee332168db475b66bb71b4a0570e7d95341dfe84a96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:29:38Z is after 2025-08-24T17:21:41Z" Jan 27 14:29:38 crc kubenswrapper[4698]: I0127 14:29:38.219320 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 27 14:29:38 crc kubenswrapper[4698]: I0127 14:29:38.220013 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:29:38 crc kubenswrapper[4698]: I0127 14:29:38.220042 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:29:38 crc kubenswrapper[4698]: I0127 14:29:38.220062 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:29:38 crc kubenswrapper[4698]: I0127 14:29:38.220107 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:29:38Z","lastTransitionTime":"2026-01-27T14:29:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:29:38 crc kubenswrapper[4698]: I0127 14:29:38.230151 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-flx9b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2c9fcf55-4a50-4a87-937b-975bc7e00bfa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0cb86e0c67991a076cebc33eb390f49c0f4c40332c423936dd8f3c59b9f7474b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-spqpw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:29:31Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-flx9b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": 
failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:29:38Z is after 2025-08-24T17:21:41Z" Jan 27 14:29:38 crc kubenswrapper[4698]: I0127 14:29:38.242800 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://625a1f8c29f9277903045e66df7a55a4d791239b141aba99572db228ea2735ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://314dcaad0f8949a4671b47667f654bdfacfb6b200438cfbd70401455f6e188b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:29:38Z is after 2025-08-24T17:21:41Z" Jan 27 14:29:38 crc 
kubenswrapper[4698]: I0127 14:29:38.253767 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:24Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:29:38Z is after 2025-08-24T17:21:41Z" Jan 27 14:29:38 crc kubenswrapper[4698]: I0127 14:29:38.266735 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-g2kkn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4e135f0c-0c36-44f4-afeb-06994affb352\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d17862aabd67b485023ca018499f7224e0bd91ec0aed6bbce5a187f319e3140\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97l66\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:29:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-g2kkn\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:29:38Z is after 2025-08-24T17:21:41Z" Jan 27 14:29:38 crc kubenswrapper[4698]: I0127 14:29:38.282533 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0ce1118e-e5ad-4adb-8d50-c758116b45ec\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dbbe49aa918c711f9ec638a3fb42ef3be38a9ed3bc0cdde639ba8bba11eab4de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9525d1a6372b19bc2f30eb4e8e522833badb6319bac25ecae77b3313acf1b1a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e384a95fdadc32733a7213ba468af64a1a33e7dd08722cbffde3ad0c461d8ba\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.i
o/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9ee5f5f6bb400359196a28fe605c173c38223c8982a991750c91601aeace150\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e780429dcca871712b55832b0cd0b3e78b7343569cfa71a87bbb2be1bd3f129\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T14:29:24Z\\\",\\\"message\\\":\\\"espace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0127 14:29:09.491666 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 14:29:09.493926 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-900123067/tls.crt::/tmp/serving-cert-900123067/tls.key\\\\\\\"\\\\nI0127 14:29:24.319898 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 14:29:24.322681 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 14:29:24.322759 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 14:29:24.322814 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 14:29:24.322847 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 14:29:24.331581 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 14:29:24.331608 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 14:29:24.331613 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 14:29:24.331618 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 14:29:24.331622 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 14:29:24.331626 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 14:29:24.331630 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 14:29:24.331748 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 14:29:24.333757 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T14:29:08Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a526cfa90f2ba7d074aa6007c909e68f72d5e7a4b91d893c955e7460ff208cfb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:08Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e1a3e5a71b3094750c736335cd70f45977566e58702231351d9d6b195a0cb4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5e1a3e5a71b3094750c736335cd70f45977566e58702231351d9d6b195a0cb4b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:29:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:29:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:29:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:29:38Z is after 2025-08-24T17:21:41Z" Jan 27 14:29:38 crc kubenswrapper[4698]: I0127 14:29:38.297884 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"44c70587-abb1-4e02-ab7e-223d57817925\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b1a209c31294456c43022bb9df5fd230415596fc43ecdb2b6349114989c3ad8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://66b4b527d7836ceed419f9b8a1d851047efc8c052c110ac857316a6a73444333\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://94a4d586611f4ca0864567739d1ed225b7c63ebd6e6be9e5c0e385379de2ad01\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://95357819c206b188c1fd6bb937b2a57b480a9a61e95143de444141b16e4963ea\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:29:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:29:38Z is after 2025-08-24T17:21:41Z" Jan 27 14:29:38 crc kubenswrapper[4698]: I0127 14:29:38.310186 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-ndrd6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e403fc5-7005-474c-8c75-b7906b481677\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6f9227503b5cf0962f3fb8ece18f54d84ff2cdfe5628ebf0fa0cef7fe5695c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tt2dz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c533539135e25b3105
41c9f0b12adcf526b4651a2f6272bce12ea9686dd39ac9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tt2dz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:29:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-ndrd6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:29:38Z is after 2025-08-24T17:21:41Z" Jan 27 14:29:38 crc kubenswrapper[4698]: I0127 14:29:38.322618 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:29:38 crc kubenswrapper[4698]: I0127 14:29:38.322689 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:29:38 crc kubenswrapper[4698]: I0127 14:29:38.322700 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:29:38 crc kubenswrapper[4698]: I0127 14:29:38.322718 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:29:38 crc kubenswrapper[4698]: I0127 14:29:38.322730 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:29:38Z","lastTransitionTime":"2026-01-27T14:29:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:29:38 crc kubenswrapper[4698]: I0127 14:29:38.326966 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-vg6nd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e045926d-2303-47ea-b25d-dc23982427e4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e1bf1729f089f253e8e1259e40ef976b85e433358642b6231f081c494851048\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66snv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c935f517f9ca140d63d1637ef8c01a4a41b94fa72e1086eb6184ae78a6d8da8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c935f517f9ca140d63d1637ef8c01a4a41b94fa72e1086eb6184ae78a6d8da8c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:29:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:29:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66snv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa061cb18143e07dc72a6b75d78b5959b40a9969030ff21d402863b85eba6f91\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa061cb18143e07dc72a6b75d78b5959b40a9969030ff21d402863b85eba6f91\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:29:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:29:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66snv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be40eff60b17b2c010a5e4394fb6843ceb8ee2752781ccbf0030afdd212b81e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be40eff60b17b2c010a5e4394fb6843ceb8ee2752781ccbf0030afdd212b81e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:29:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:29:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66snv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3018d87c9354e8b8d90c59a4a45e982d0b35b89a0b0a9fde703e9452cf5f0024\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3018d87c9354e8b8d90c59a4a45e982d0b35b89a0b0a9fde703e9452cf5f0024\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:29:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:29:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"
mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66snv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de493498bbac49791f6b7409916fbbb62f64bed0def214ea6a06f5182bf7ad3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://de493498bbac49791f6b7409916fbbb62f64bed0def214ea6a06f5182bf7ad3b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:29:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:29:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66snv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://14af7d5dd3526aef22f7e95409ff88c3bbbc29e6dd9dd7c41b9be5bab49bf074\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://14af7d5dd3526aef22f7e95409ff88c3bbbc29e6dd9dd7c41b9be5bab49bf074\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:29:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:29:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66snv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:29:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-vg6nd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:29:38Z is after 2025-08-24T17:21:41Z" Jan 27 14:29:38 crc kubenswrapper[4698]: I0127 14:29:38.344392 4698 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-ovn-kubernetes/ovnkube-node-xmpm6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c59a9d01-79ce-42d9-a41d-39d7d73cb03e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:29Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:29Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d81778ae40167de932debf7266f3c793d77edeace2116220c21cf89e15e4cc98\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r7gpg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea6d17f5861064198539f0b84b559f9d28cfa5decaa89a9059469af5a3e12e09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r7gpg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c98fd6e3005c979f95419686ec2143aa282fa56d7306ecaff766759e34b68b2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36c
dd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r7gpg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e38585fe9db52e6e7cc5c3a41e23a0bb761977a3b25fd281d86458f422e3791\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r7gpg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b80c023f92c9854f8c16b6b50c10dac58f968938615375019ba2875d9cd69e5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r7gpg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://87e7499ab247dff50eee2a5e113ed36ff00c467e8e1bcdfee45e8835815d4b81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-con
troller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r7gpg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5c9782c1425d3337bda4ba4d775ed4c8cf828f4a58294f9a917b443b66e04ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"k
ube-api-access-r7gpg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0356c9c90e9c8e65b9452361aa486ffe95757bd69803077b32fe7ba0223ae1da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r7gpg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3a80917737e61fdb510dabf332c3bd87fb858d27849c42b77703a8becb66b4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b3a80917737e61fdb510dabf332c3bd87fb858d27849c42b77703a8becb66b4c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:29:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:29:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r7gpg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:29:29Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-xmpm6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:29:38Z is after 2025-08-24T17:21:41Z" Jan 27 14:29:38 crc kubenswrapper[4698]: I0127 14:29:38.359283 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:29:38Z is after 2025-08-24T17:21:41Z" Jan 27 14:29:38 crc kubenswrapper[4698]: I0127 14:29:38.370023 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-g9vj8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2776dfc9-913b-42b0-9cf2-6fea98d83bc9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f471cea0dc934e7c77ac6ead7ec77f8feb7379b50a7bb5255d7100fbea635a3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bk7ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:29:28Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-g9vj8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:29:38Z is after 2025-08-24T17:21:41Z" Jan 27 14:29:38 crc kubenswrapper[4698]: I0127 14:29:38.425190 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:29:38 crc kubenswrapper[4698]: I0127 14:29:38.425252 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:29:38 crc kubenswrapper[4698]: I0127 14:29:38.425264 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:29:38 crc kubenswrapper[4698]: I0127 14:29:38.425280 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:29:38 crc kubenswrapper[4698]: I0127 14:29:38.425293 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:29:38Z","lastTransitionTime":"2026-01-27T14:29:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:29:38 crc kubenswrapper[4698]: I0127 14:29:38.527394 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:29:38 crc kubenswrapper[4698]: I0127 14:29:38.527438 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:29:38 crc kubenswrapper[4698]: I0127 14:29:38.527451 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:29:38 crc kubenswrapper[4698]: I0127 14:29:38.527467 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:29:38 crc kubenswrapper[4698]: I0127 14:29:38.527480 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:29:38Z","lastTransitionTime":"2026-01-27T14:29:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:29:38 crc kubenswrapper[4698]: I0127 14:29:38.629590 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:29:38 crc kubenswrapper[4698]: I0127 14:29:38.629674 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:29:38 crc kubenswrapper[4698]: I0127 14:29:38.629688 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:29:38 crc kubenswrapper[4698]: I0127 14:29:38.629705 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:29:38 crc kubenswrapper[4698]: I0127 14:29:38.629716 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:29:38Z","lastTransitionTime":"2026-01-27T14:29:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:29:38 crc kubenswrapper[4698]: I0127 14:29:38.732040 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:29:38 crc kubenswrapper[4698]: I0127 14:29:38.732078 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:29:38 crc kubenswrapper[4698]: I0127 14:29:38.732090 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:29:38 crc kubenswrapper[4698]: I0127 14:29:38.732104 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:29:38 crc kubenswrapper[4698]: I0127 14:29:38.732115 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:29:38Z","lastTransitionTime":"2026-01-27T14:29:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:29:38 crc kubenswrapper[4698]: I0127 14:29:38.834769 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:29:38 crc kubenswrapper[4698]: I0127 14:29:38.834814 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:29:38 crc kubenswrapper[4698]: I0127 14:29:38.834824 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:29:38 crc kubenswrapper[4698]: I0127 14:29:38.834843 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:29:38 crc kubenswrapper[4698]: I0127 14:29:38.834855 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:29:38Z","lastTransitionTime":"2026-01-27T14:29:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:29:38 crc kubenswrapper[4698]: I0127 14:29:38.938716 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:29:38 crc kubenswrapper[4698]: I0127 14:29:38.938761 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:29:38 crc kubenswrapper[4698]: I0127 14:29:38.938770 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:29:38 crc kubenswrapper[4698]: I0127 14:29:38.938784 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:29:38 crc kubenswrapper[4698]: I0127 14:29:38.938795 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:29:38Z","lastTransitionTime":"2026-01-27T14:29:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:29:38 crc kubenswrapper[4698]: I0127 14:29:38.963320 4698 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-07 13:15:24.900871903 +0000 UTC Jan 27 14:29:38 crc kubenswrapper[4698]: I0127 14:29:38.991832 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 14:29:38 crc kubenswrapper[4698]: I0127 14:29:38.991934 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 14:29:38 crc kubenswrapper[4698]: E0127 14:29:38.991982 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 14:29:38 crc kubenswrapper[4698]: E0127 14:29:38.992052 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 14:29:39 crc kubenswrapper[4698]: I0127 14:29:39.041782 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:29:39 crc kubenswrapper[4698]: I0127 14:29:39.041824 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:29:39 crc kubenswrapper[4698]: I0127 14:29:39.041832 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:29:39 crc kubenswrapper[4698]: I0127 14:29:39.041846 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:29:39 crc kubenswrapper[4698]: I0127 14:29:39.041856 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:29:39Z","lastTransitionTime":"2026-01-27T14:29:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:29:39 crc kubenswrapper[4698]: I0127 14:29:39.143629 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:29:39 crc kubenswrapper[4698]: I0127 14:29:39.143682 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:29:39 crc kubenswrapper[4698]: I0127 14:29:39.143691 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:29:39 crc kubenswrapper[4698]: I0127 14:29:39.143704 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:29:39 crc kubenswrapper[4698]: I0127 14:29:39.143715 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:29:39Z","lastTransitionTime":"2026-01-27T14:29:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:29:39 crc kubenswrapper[4698]: I0127 14:29:39.181035 4698 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 27 14:29:39 crc kubenswrapper[4698]: I0127 14:29:39.246527 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:29:39 crc kubenswrapper[4698]: I0127 14:29:39.246574 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:29:39 crc kubenswrapper[4698]: I0127 14:29:39.246583 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:29:39 crc kubenswrapper[4698]: I0127 14:29:39.246599 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:29:39 crc kubenswrapper[4698]: I0127 14:29:39.246610 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:29:39Z","lastTransitionTime":"2026-01-27T14:29:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:29:39 crc kubenswrapper[4698]: I0127 14:29:39.349143 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:29:39 crc kubenswrapper[4698]: I0127 14:29:39.349201 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:29:39 crc kubenswrapper[4698]: I0127 14:29:39.349213 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:29:39 crc kubenswrapper[4698]: I0127 14:29:39.349231 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:29:39 crc kubenswrapper[4698]: I0127 14:29:39.349241 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:29:39Z","lastTransitionTime":"2026-01-27T14:29:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:29:39 crc kubenswrapper[4698]: I0127 14:29:39.451408 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:29:39 crc kubenswrapper[4698]: I0127 14:29:39.451461 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:29:39 crc kubenswrapper[4698]: I0127 14:29:39.451471 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:29:39 crc kubenswrapper[4698]: I0127 14:29:39.451488 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:29:39 crc kubenswrapper[4698]: I0127 14:29:39.451498 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:29:39Z","lastTransitionTime":"2026-01-27T14:29:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:29:39 crc kubenswrapper[4698]: I0127 14:29:39.553936 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:29:39 crc kubenswrapper[4698]: I0127 14:29:39.553983 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:29:39 crc kubenswrapper[4698]: I0127 14:29:39.553996 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:29:39 crc kubenswrapper[4698]: I0127 14:29:39.554012 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:29:39 crc kubenswrapper[4698]: I0127 14:29:39.554023 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:29:39Z","lastTransitionTime":"2026-01-27T14:29:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:29:39 crc kubenswrapper[4698]: I0127 14:29:39.656240 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:29:39 crc kubenswrapper[4698]: I0127 14:29:39.656317 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:29:39 crc kubenswrapper[4698]: I0127 14:29:39.656342 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:29:39 crc kubenswrapper[4698]: I0127 14:29:39.656367 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:29:39 crc kubenswrapper[4698]: I0127 14:29:39.656381 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:29:39Z","lastTransitionTime":"2026-01-27T14:29:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:29:39 crc kubenswrapper[4698]: I0127 14:29:39.758462 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:29:39 crc kubenswrapper[4698]: I0127 14:29:39.758503 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:29:39 crc kubenswrapper[4698]: I0127 14:29:39.758515 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:29:39 crc kubenswrapper[4698]: I0127 14:29:39.758531 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:29:39 crc kubenswrapper[4698]: I0127 14:29:39.758542 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:29:39Z","lastTransitionTime":"2026-01-27T14:29:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:29:39 crc kubenswrapper[4698]: I0127 14:29:39.861083 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:29:39 crc kubenswrapper[4698]: I0127 14:29:39.861124 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:29:39 crc kubenswrapper[4698]: I0127 14:29:39.861135 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:29:39 crc kubenswrapper[4698]: I0127 14:29:39.861148 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:29:39 crc kubenswrapper[4698]: I0127 14:29:39.861160 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:29:39Z","lastTransitionTime":"2026-01-27T14:29:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:29:39 crc kubenswrapper[4698]: I0127 14:29:39.963072 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:29:39 crc kubenswrapper[4698]: I0127 14:29:39.963117 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:29:39 crc kubenswrapper[4698]: I0127 14:29:39.963126 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:29:39 crc kubenswrapper[4698]: I0127 14:29:39.963142 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:29:39 crc kubenswrapper[4698]: I0127 14:29:39.963154 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:29:39Z","lastTransitionTime":"2026-01-27T14:29:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:29:39 crc kubenswrapper[4698]: I0127 14:29:39.963692 4698 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-22 05:23:54.596324133 +0000 UTC Jan 27 14:29:39 crc kubenswrapper[4698]: I0127 14:29:39.991694 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 14:29:39 crc kubenswrapper[4698]: E0127 14:29:39.991840 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 14:29:40 crc kubenswrapper[4698]: I0127 14:29:40.065014 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:29:40 crc kubenswrapper[4698]: I0127 14:29:40.065061 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:29:40 crc kubenswrapper[4698]: I0127 14:29:40.065072 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:29:40 crc kubenswrapper[4698]: I0127 14:29:40.065089 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:29:40 crc kubenswrapper[4698]: I0127 14:29:40.065100 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:29:40Z","lastTransitionTime":"2026-01-27T14:29:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:29:40 crc kubenswrapper[4698]: I0127 14:29:40.167941 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:29:40 crc kubenswrapper[4698]: I0127 14:29:40.167995 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:29:40 crc kubenswrapper[4698]: I0127 14:29:40.168007 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:29:40 crc kubenswrapper[4698]: I0127 14:29:40.168028 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:29:40 crc kubenswrapper[4698]: I0127 14:29:40.168042 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:29:40Z","lastTransitionTime":"2026-01-27T14:29:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:29:40 crc kubenswrapper[4698]: I0127 14:29:40.271688 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:29:40 crc kubenswrapper[4698]: I0127 14:29:40.271768 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:29:40 crc kubenswrapper[4698]: I0127 14:29:40.271780 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:29:40 crc kubenswrapper[4698]: I0127 14:29:40.271798 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:29:40 crc kubenswrapper[4698]: I0127 14:29:40.271811 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:29:40Z","lastTransitionTime":"2026-01-27T14:29:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:29:40 crc kubenswrapper[4698]: I0127 14:29:40.373917 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:29:40 crc kubenswrapper[4698]: I0127 14:29:40.373957 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:29:40 crc kubenswrapper[4698]: I0127 14:29:40.373967 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:29:40 crc kubenswrapper[4698]: I0127 14:29:40.373983 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:29:40 crc kubenswrapper[4698]: I0127 14:29:40.373994 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:29:40Z","lastTransitionTime":"2026-01-27T14:29:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:29:40 crc kubenswrapper[4698]: I0127 14:29:40.476313 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:29:40 crc kubenswrapper[4698]: I0127 14:29:40.476349 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:29:40 crc kubenswrapper[4698]: I0127 14:29:40.476358 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:29:40 crc kubenswrapper[4698]: I0127 14:29:40.476372 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:29:40 crc kubenswrapper[4698]: I0127 14:29:40.476381 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:29:40Z","lastTransitionTime":"2026-01-27T14:29:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:29:40 crc kubenswrapper[4698]: I0127 14:29:40.582264 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:29:40 crc kubenswrapper[4698]: I0127 14:29:40.582305 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:29:40 crc kubenswrapper[4698]: I0127 14:29:40.582314 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:29:40 crc kubenswrapper[4698]: I0127 14:29:40.582328 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:29:40 crc kubenswrapper[4698]: I0127 14:29:40.582337 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:29:40Z","lastTransitionTime":"2026-01-27T14:29:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:29:40 crc kubenswrapper[4698]: I0127 14:29:40.681952 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 14:29:40 crc kubenswrapper[4698]: E0127 14:29:40.682094 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 14:29:56.682071943 +0000 UTC m=+52.358849408 (durationBeforeRetry 16s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 14:29:40 crc kubenswrapper[4698]: I0127 14:29:40.684962 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:29:40 crc kubenswrapper[4698]: I0127 14:29:40.685017 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:29:40 crc kubenswrapper[4698]: I0127 14:29:40.685029 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:29:40 crc kubenswrapper[4698]: I0127 14:29:40.685048 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:29:40 crc kubenswrapper[4698]: I0127 14:29:40.685064 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:29:40Z","lastTransitionTime":"2026-01-27T14:29:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:29:40 crc kubenswrapper[4698]: I0127 14:29:40.783006 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 14:29:40 crc kubenswrapper[4698]: I0127 14:29:40.783082 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 14:29:40 crc kubenswrapper[4698]: I0127 14:29:40.783110 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 14:29:40 crc kubenswrapper[4698]: I0127 14:29:40.783140 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 14:29:40 crc kubenswrapper[4698]: E0127 14:29:40.783281 4698 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object 
"openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 27 14:29:40 crc kubenswrapper[4698]: E0127 14:29:40.783286 4698 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 27 14:29:40 crc kubenswrapper[4698]: E0127 14:29:40.783343 4698 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 27 14:29:40 crc kubenswrapper[4698]: E0127 14:29:40.783359 4698 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 14:29:40 crc kubenswrapper[4698]: E0127 14:29:40.783299 4698 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 27 14:29:40 crc kubenswrapper[4698]: E0127 14:29:40.783429 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-27 14:29:56.783402807 +0000 UTC m=+52.460180342 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 14:29:40 crc kubenswrapper[4698]: E0127 14:29:40.783440 4698 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 14:29:40 crc kubenswrapper[4698]: E0127 14:29:40.783292 4698 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 27 14:29:40 crc kubenswrapper[4698]: E0127 14:29:40.783480 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-27 14:29:56.783471379 +0000 UTC m=+52.460248844 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 27 14:29:40 crc kubenswrapper[4698]: E0127 14:29:40.783497 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. 
No retries permitted until 2026-01-27 14:29:56.783489549 +0000 UTC m=+52.460267114 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 14:29:40 crc kubenswrapper[4698]: E0127 14:29:40.783553 4698 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 27 14:29:40 crc kubenswrapper[4698]: E0127 14:29:40.783599 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-27 14:29:56.783588832 +0000 UTC m=+52.460366377 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 27 14:29:40 crc kubenswrapper[4698]: I0127 14:29:40.788154 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:29:40 crc kubenswrapper[4698]: I0127 14:29:40.788192 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:29:40 crc kubenswrapper[4698]: I0127 14:29:40.788203 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:29:40 crc kubenswrapper[4698]: I0127 14:29:40.788219 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:29:40 crc kubenswrapper[4698]: I0127 14:29:40.788231 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:29:40Z","lastTransitionTime":"2026-01-27T14:29:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
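
The MountVolume.SetUp failures above are all for kube-api-access-* projected volumes whose sources are exactly the objects reported as "not registered": this early after restart the kubelet's object caches have not been populated (the reflector "Caches populated" lines further down show the same machinery catching up for other objects). Roughly what such a volume projects, sketched with k8s.io/api/core/v1 types; the token path and expiry are illustrative assumptions:

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	expiry := int64(3607) // illustrative token lifetime
	vol := corev1.Volume{
		Name: "kube-api-access-cqllr", // volume name from the errors above
		VolumeSource: corev1.VolumeSource{
			Projected: &corev1.ProjectedVolumeSource{
				Sources: []corev1.VolumeProjection{
					{ServiceAccountToken: &corev1.ServiceAccountTokenProjection{Path: "token", ExpirationSeconds: &expiry}},
					{ConfigMap: &corev1.ConfigMapProjection{LocalObjectReference: corev1.LocalObjectReference{Name: "kube-root-ca.crt"}}},
					{ConfigMap: &corev1.ConfigMapProjection{LocalObjectReference: corev1.LocalObjectReference{Name: "openshift-service-ca.crt"}}},
				},
			},
		},
	}
	fmt.Printf("%s projects %d sources\n", vol.Name, len(vol.Projected.Sources))
}
```
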
Has your network provider started?"} Jan 27 14:29:40 crc kubenswrapper[4698]: I0127 14:29:40.891495 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:29:40 crc kubenswrapper[4698]: I0127 14:29:40.891549 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:29:40 crc kubenswrapper[4698]: I0127 14:29:40.891568 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:29:40 crc kubenswrapper[4698]: I0127 14:29:40.891590 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:29:40 crc kubenswrapper[4698]: I0127 14:29:40.891602 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:29:40Z","lastTransitionTime":"2026-01-27T14:29:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:29:40 crc kubenswrapper[4698]: I0127 14:29:40.964300 4698 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-15 10:22:56.615434704 +0000 UTC Jan 27 14:29:40 crc kubenswrapper[4698]: I0127 14:29:40.992156 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 14:29:40 crc kubenswrapper[4698]: I0127 14:29:40.992212 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 14:29:40 crc kubenswrapper[4698]: E0127 14:29:40.992373 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 14:29:40 crc kubenswrapper[4698]: E0127 14:29:40.992548 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 14:29:40 crc kubenswrapper[4698]: I0127 14:29:40.993899 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:29:40 crc kubenswrapper[4698]: I0127 14:29:40.993930 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:29:40 crc kubenswrapper[4698]: I0127 14:29:40.993939 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:29:40 crc kubenswrapper[4698]: I0127 14:29:40.993951 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:29:40 crc kubenswrapper[4698]: I0127 14:29:40.993961 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:29:40Z","lastTransitionTime":"2026-01-27T14:29:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:29:40 crc kubenswrapper[4698]: I0127 14:29:40.996853 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-xmpm6" Jan 27 14:29:41 crc kubenswrapper[4698]: I0127 14:29:41.036303 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zpvcj"] Jan 27 14:29:41 crc kubenswrapper[4698]: I0127 14:29:41.036730 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zpvcj" Jan 27 14:29:41 crc kubenswrapper[4698]: I0127 14:29:41.039509 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Jan 27 14:29:41 crc kubenswrapper[4698]: I0127 14:29:41.039503 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Jan 27 14:29:41 crc kubenswrapper[4698]: I0127 14:29:41.055386 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-vg6nd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e045926d-2303-47ea-b25d-dc23982427e4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e1bf1729f089f253e8e1259e40ef976b85e433358642b6231f081c494851048\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66snv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c935f517f9ca140d63d1637ef8c01a4a41b94fa72e1086eb6184ae78a6d8da8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c935f517f9ca140d63d1637ef8c01a4a41b94fa72e1086eb6184ae78a6d8da8c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:29:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:29:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66snv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa061cb18143e07dc72a6b75d78b5959b40a9969030ff21d402863b85eba6f91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa061cb18143e07dc72a6b75d78b5959b40a9969030ff21d402863b85eba6f91\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:29:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:29:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66snv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be40eff60b17b2c010a5e4394fb6843ceb8ee2752781ccbf0030afdd212b81e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be40eff60b17b2c010a5e4394fb6843ceb8ee2752781ccbf0030afdd212b81e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:29:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:29:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66snv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3018d87c9354e8b8d90c59a4a45e982d0b35b89a0b0a9fde703e9452cf5f0024\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3018d87c9354e8b8d90c59a4a45e982d0b35b89a0b0a9fde703e9452cf5f0024\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:29:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:29:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66snv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de493498bbac49791f6b7409916fbbb62f64bed0def214ea6a06f5182bf7ad3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://de493498bbac49791f6b7409916fbbb62f64bed0def214ea6a06f5182bf7ad3b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:29:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:29:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66snv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://14af7d5dd3526aef22f7e95409ff88c3bbbc29e6dd9dd7c41b9be5bab49bf074\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://14af7d5dd3526aef22f7e95409ff88c3bbbc29e6dd9dd7c41b9be5bab49bf074\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:29:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:29:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66snv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:29:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-vg6nd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:29:41Z is after 2025-08-24T17:21:41Z" Jan 27 14:29:41 crc kubenswrapper[4698]: I0127 14:29:41.074831 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xmpm6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c59a9d01-79ce-42d9-a41d-39d7d73cb03e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:29Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:29Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d81778ae40167de932debf7266f3c793d77edeace2116220c21cf89e15e4cc98\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r7gpg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea6d17f5861064198539f0b84b559f9d28cfa5decaa89a9059469af5a3e12e09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r7gpg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c98fd6e3005c979f95419686ec2143aa282fa56d7306ecaff766759e34b68b2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r7gpg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e38585fe9db52e6e7cc5c3a41e23a0bb761977a3b25fd281d86458f422e3791\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r7gpg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b80c023f92c9854f8c16b6b50c10dac58f968938615375019ba2875d9cd69e5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r7gpg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://87e7499ab247dff50eee2a5e113ed36ff00c467e8e1bcdfee45e8835815d4b81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r7gpg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5c9782c1425d3337bda4ba4d775ed4c8cf828f4a58294f9a917b443b66e04ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r7gpg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"D
isabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0356c9c90e9c8e65b9452361aa486ffe95757bd69803077b32fe7ba0223ae1da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r7gpg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3a80917737e61fdb510dabf332c3bd87fb858d27849c42b77703a8becb66b4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b3a80917737e61fdb510dabf332c3bd87fb858d27849c42b77703a8becb66b4c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:29:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:29:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r7gpg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:29:29Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-xmpm6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:29:41Z is after 2025-08-24T17:21:41Z" Jan 27 14:29:41 crc kubenswrapper[4698]: I0127 14:29:41.087114 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/709dfdd7-f928-4f0b-8f5a-c356614219cb-env-overrides\") pod \"ovnkube-control-plane-749d76644c-zpvcj\" (UID: \"709dfdd7-f928-4f0b-8f5a-c356614219cb\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zpvcj" Jan 27 14:29:41 crc kubenswrapper[4698]: I0127 14:29:41.087175 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pftbv\" (UniqueName: 
\"kubernetes.io/projected/709dfdd7-f928-4f0b-8f5a-c356614219cb-kube-api-access-pftbv\") pod \"ovnkube-control-plane-749d76644c-zpvcj\" (UID: \"709dfdd7-f928-4f0b-8f5a-c356614219cb\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zpvcj" Jan 27 14:29:41 crc kubenswrapper[4698]: I0127 14:29:41.087217 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/709dfdd7-f928-4f0b-8f5a-c356614219cb-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-zpvcj\" (UID: \"709dfdd7-f928-4f0b-8f5a-c356614219cb\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zpvcj" Jan 27 14:29:41 crc kubenswrapper[4698]: I0127 14:29:41.087421 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/709dfdd7-f928-4f0b-8f5a-c356614219cb-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-zpvcj\" (UID: \"709dfdd7-f928-4f0b-8f5a-c356614219cb\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zpvcj" Jan 27 14:29:41 crc kubenswrapper[4698]: I0127 14:29:41.090149 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:29:41Z is after 2025-08-24T17:21:41Z" Jan 27 14:29:41 crc kubenswrapper[4698]: I0127 14:29:41.096576 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:29:41 crc kubenswrapper[4698]: I0127 14:29:41.096607 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:29:41 crc kubenswrapper[4698]: I0127 14:29:41.096617 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:29:41 crc kubenswrapper[4698]: I0127 14:29:41.096671 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:29:41 crc kubenswrapper[4698]: I0127 14:29:41.096693 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:29:41Z","lastTransitionTime":"2026-01-27T14:29:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:29:41 crc kubenswrapper[4698]: I0127 14:29:41.102411 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-g9vj8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2776dfc9-913b-42b0-9cf2-6fea98d83bc9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f471cea0dc934e7c77ac6ead7ec77f8feb7379b50a7bb5255d7100fbea635a3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bk7ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:29:28Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-g9vj8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:29:41Z is after 2025-08-24T17:21:41Z" Jan 27 14:29:41 crc kubenswrapper[4698]: I0127 14:29:41.114970 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-ndrd6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e403fc5-7005-474c-8c75-b7906b481677\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6f9227503b5cf0962f3fb8ece18f54d84ff2cdfe5628ebf0fa0cef7fe5695c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tt2dz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c533539135e25b310541c9f0b12adcf526b4651a2f6272bce12ea9686dd39ac9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tt2dz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:29:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-ndrd6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:29:41Z is after 2025-08-24T17:21:41Z" Jan 27 14:29:41 crc kubenswrapper[4698]: I0127 14:29:41.130150 4698 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:24Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:29:41Z is after 2025-08-24T17:21:41Z" Jan 27 14:29:41 crc kubenswrapper[4698]: I0127 14:29:41.144357 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8dae57de05788973b988fee332168db475b66bb71b4a0570e7d95341dfe84a96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:29:41Z is after 2025-08-24T17:21:41Z" Jan 27 14:29:41 crc kubenswrapper[4698]: I0127 14:29:41.145560 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:29:41 crc kubenswrapper[4698]: I0127 14:29:41.145591 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:29:41 crc kubenswrapper[4698]: I0127 14:29:41.145604 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:29:41 crc kubenswrapper[4698]: I0127 14:29:41.145622 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:29:41 crc kubenswrapper[4698]: I0127 14:29:41.145657 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:29:41Z","lastTransitionTime":"2026-01-27T14:29:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:29:41 crc kubenswrapper[4698]: I0127 14:29:41.157570 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-flx9b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2c9fcf55-4a50-4a87-937b-975bc7e00bfa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0cb86e0c67991a076cebc33eb390f49c0f4c40332c423936dd8f3c59b9f7474b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-spqpw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:29:31Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-flx9b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:29:41Z is after 2025-08-24T17:21:41Z" Jan 27 14:29:41 crc kubenswrapper[4698]: E0127 14:29:41.159202 4698 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T14:29:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient 
memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T14:29:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:41Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T14:29:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T14:29:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:41Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\
\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\
":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9d78a2be-22ac-47e6-a326-83038cc10e0c\\\",\\\"systemUUID\\\":\\\"3b71cf61-a3fa-4076-a23c-5d695e40fc0d\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:29:41Z is after 2025-08-24T17:21:41Z" Jan 27 14:29:41 crc kubenswrapper[4698]: I0127 14:29:41.164752 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:29:41 crc kubenswrapper[4698]: I0127 14:29:41.164815 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:29:41 crc kubenswrapper[4698]: I0127 14:29:41.164826 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:29:41 crc kubenswrapper[4698]: I0127 14:29:41.164845 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:29:41 crc kubenswrapper[4698]: I0127 14:29:41.164857 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:29:41Z","lastTransitionTime":"2026-01-27T14:29:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:29:41 crc kubenswrapper[4698]: I0127 14:29:41.176428 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae7402df1ba165eb747293c1d23e4ea89c50cf2342bf52db437c11c283f68755\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:29:41Z is after 2025-08-24T17:21:41Z" Jan 27 14:29:41 crc kubenswrapper[4698]: E0127 14:29:41.178163 4698 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T14:29:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T14:29:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:41Z\\\",\\\"message\\\":\\\"kubelet has no disk 
pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T14:29:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T14:29:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:41Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeByt
es\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9d78a2be-22ac-47e6-a326-83038cc10e0c\\\",\\\"systemUUID\\\":\\\"3
b71cf61-a3fa-4076-a23c-5d695e40fc0d\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:29:41Z is after 2025-08-24T17:21:41Z" Jan 27 14:29:41 crc kubenswrapper[4698]: I0127 14:29:41.182792 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:29:41 crc kubenswrapper[4698]: I0127 14:29:41.182831 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:29:41 crc kubenswrapper[4698]: I0127 14:29:41.182847 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:29:41 crc kubenswrapper[4698]: I0127 14:29:41.182868 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:29:41 crc kubenswrapper[4698]: I0127 14:29:41.182882 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:29:41Z","lastTransitionTime":"2026-01-27T14:29:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:29:41 crc kubenswrapper[4698]: I0127 14:29:41.188768 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/709dfdd7-f928-4f0b-8f5a-c356614219cb-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-zpvcj\" (UID: \"709dfdd7-f928-4f0b-8f5a-c356614219cb\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zpvcj" Jan 27 14:29:41 crc kubenswrapper[4698]: I0127 14:29:41.188830 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/709dfdd7-f928-4f0b-8f5a-c356614219cb-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-zpvcj\" (UID: \"709dfdd7-f928-4f0b-8f5a-c356614219cb\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zpvcj" Jan 27 14:29:41 crc kubenswrapper[4698]: I0127 14:29:41.188882 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/709dfdd7-f928-4f0b-8f5a-c356614219cb-env-overrides\") pod \"ovnkube-control-plane-749d76644c-zpvcj\" (UID: \"709dfdd7-f928-4f0b-8f5a-c356614219cb\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zpvcj" Jan 27 14:29:41 crc kubenswrapper[4698]: I0127 14:29:41.188906 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pftbv\" (UniqueName: \"kubernetes.io/projected/709dfdd7-f928-4f0b-8f5a-c356614219cb-kube-api-access-pftbv\") pod \"ovnkube-control-plane-749d76644c-zpvcj\" (UID: \"709dfdd7-f928-4f0b-8f5a-c356614219cb\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zpvcj" Jan 27 14:29:41 crc kubenswrapper[4698]: I0127 14:29:41.189628 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/709dfdd7-f928-4f0b-8f5a-c356614219cb-env-overrides\") pod 
\"ovnkube-control-plane-749d76644c-zpvcj\" (UID: \"709dfdd7-f928-4f0b-8f5a-c356614219cb\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zpvcj" Jan 27 14:29:41 crc kubenswrapper[4698]: I0127 14:29:41.189925 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/709dfdd7-f928-4f0b-8f5a-c356614219cb-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-zpvcj\" (UID: \"709dfdd7-f928-4f0b-8f5a-c356614219cb\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zpvcj" Jan 27 14:29:41 crc kubenswrapper[4698]: I0127 14:29:41.190220 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:24Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:29:41Z is after 2025-08-24T17:21:41Z" Jan 27 14:29:41 crc kubenswrapper[4698]: I0127 14:29:41.199709 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/709dfdd7-f928-4f0b-8f5a-c356614219cb-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-zpvcj\" (UID: \"709dfdd7-f928-4f0b-8f5a-c356614219cb\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zpvcj" Jan 27 14:29:41 crc kubenswrapper[4698]: E0127 14:29:41.204491 4698 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T14:29:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T14:29:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:41Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T14:29:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T14:29:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:41Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9d78a2be-22ac-47e6-a326-83038cc10e0c\\\",\\\"systemUUID\\\":\\\"3b71cf61-a3fa-4076-a23c-5d695e40fc0d\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:29:41Z is after 2025-08-24T17:21:41Z" Jan 27 14:29:41 crc kubenswrapper[4698]: I0127 14:29:41.207919 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://625a1f8c29f9277903045e66df7a55a4d791239b141aba99572db228ea2735ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://314dcaad0f8949a4671b47667f654bdfacfb6b200438cfbd70401455f6e188b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:29:41Z is after 2025-08-24T17:21:41Z" Jan 27 14:29:41 crc kubenswrapper[4698]: I0127 14:29:41.210227 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:29:41 crc kubenswrapper[4698]: I0127 14:29:41.210295 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:29:41 crc kubenswrapper[4698]: I0127 14:29:41.210320 4698 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Jan 27 14:29:41 crc kubenswrapper[4698]: I0127 14:29:41.210350 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:29:41 crc kubenswrapper[4698]: I0127 14:29:41.210376 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:29:41Z","lastTransitionTime":"2026-01-27T14:29:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:29:41 crc kubenswrapper[4698]: I0127 14:29:41.213241 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pftbv\" (UniqueName: \"kubernetes.io/projected/709dfdd7-f928-4f0b-8f5a-c356614219cb-kube-api-access-pftbv\") pod \"ovnkube-control-plane-749d76644c-zpvcj\" (UID: \"709dfdd7-f928-4f0b-8f5a-c356614219cb\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zpvcj" Jan 27 14:29:41 crc kubenswrapper[4698]: I0127 14:29:41.224674 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zpvcj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"709dfdd7-f928-4f0b-8f5a-c356614219cb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:41Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:41Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pftbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pftbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:29:41Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-zpvcj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:29:41Z is after 2025-08-24T17:21:41Z" Jan 27 14:29:41 crc kubenswrapper[4698]: E0127 14:29:41.225206 4698 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T14:29:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T14:29:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:41Z\\\",\\\"message\\\":\\\"kubelet has no disk 
pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T14:29:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T14:29:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:41Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeByt
es\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9d78a2be-22ac-47e6-a326-83038cc10e0c\\\",\\\"systemUUID\\\":\\\"3
b71cf61-a3fa-4076-a23c-5d695e40fc0d\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:29:41Z is after 2025-08-24T17:21:41Z" Jan 27 14:29:41 crc kubenswrapper[4698]: I0127 14:29:41.230121 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:29:41 crc kubenswrapper[4698]: I0127 14:29:41.230199 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:29:41 crc kubenswrapper[4698]: I0127 14:29:41.230215 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:29:41 crc kubenswrapper[4698]: I0127 14:29:41.230243 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:29:41 crc kubenswrapper[4698]: I0127 14:29:41.230258 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:29:41Z","lastTransitionTime":"2026-01-27T14:29:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:29:41 crc kubenswrapper[4698]: I0127 14:29:41.240972 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0ce1118e-e5ad-4adb-8d50-c758116b45ec\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dbbe49aa918c711f9ec638a3fb42ef3be38a9ed3bc0cdde639ba8bba11eab4de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9525d1a6372b19bc2f30eb4e8e522833badb6319bac25ecae77b3313acf1b1a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e384a95fdadc32733a7213ba468af64a1a33e7dd08722cbffde3ad0c461d8ba\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9ee5f5f6bb400359196a28fe605c173c38223c8982a991750c91601aeace150\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e780429dcca871712b55832b0cd0b3e78b7343569cfa71a87bbb2be1bd3f129\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T14:29:24Z\\\",\\\"message\\\":\\\"espace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0127 14:29:09.491666 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 14:29:09.493926 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-900123067/tls.crt::/tmp/serving-cert-900123067/tls.key\\\\\\\"\\\\nI0127 14:29:24.319898 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 14:29:24.322681 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 14:29:24.322759 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 14:29:24.322814 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 14:29:24.322847 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 14:29:24.331581 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 14:29:24.331608 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 14:29:24.331613 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 14:29:24.331618 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 14:29:24.331622 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 14:29:24.331626 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 14:29:24.331630 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 14:29:24.331748 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 14:29:24.333757 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T14:29:08Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a526cfa90f2ba7d074aa6007c909e68f72d5e7a4b91d893c955e7460ff208cfb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:08Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e1a3e5a71b3094750c736335cd70f45977566e58702231351d9d6b195a0cb4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5e1a3e5a71b3094750c736335cd70f45977566e58702231351d9d6b195a0cb4b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:29:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:29:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:29:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:29:41Z is after 2025-08-24T17:21:41Z" Jan 27 14:29:41 crc kubenswrapper[4698]: E0127 14:29:41.241952 4698 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T14:29:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T14:29:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:41Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T14:29:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T14:29:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:41Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\
"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":45063
7738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9d78a2be-22ac-47e6-a326-83038cc10e0c\\\",\\\"systemUUID\\\":\\\"3b71cf61-a3fa-4076-a23c-5d695e40fc0d\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:29:41Z is after 2025-08-24T17:21:41Z" Jan 27 14:29:41 crc kubenswrapper[4698]: E0127 14:29:41.242117 4698 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 27 14:29:41 crc kubenswrapper[4698]: I0127 14:29:41.244529 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:29:41 crc kubenswrapper[4698]: I0127 14:29:41.244579 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:29:41 crc kubenswrapper[4698]: I0127 14:29:41.244593 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:29:41 crc kubenswrapper[4698]: I0127 14:29:41.244615 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:29:41 crc kubenswrapper[4698]: I0127 14:29:41.244630 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:29:41Z","lastTransitionTime":"2026-01-27T14:29:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:29:41 crc kubenswrapper[4698]: I0127 14:29:41.257787 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"44c70587-abb1-4e02-ab7e-223d57817925\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b1a209c31294456c43022bb9df5fd230415596fc43ecdb2b6349114989c3ad8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://66b4b527d7836ceed419f9b8a1d851047efc8c052c110ac857316a6a73444333\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://94a4d586611f4ca0864567739d1ed225b7c63ebd6e6be9e5c0e385379de2ad01\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://95357819c206b188c1fd6bb937b2a57b480a9a61e95143de444141b16e4963ea\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:29:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:29:41Z is after 2025-08-24T17:21:41Z" Jan 27 14:29:41 crc kubenswrapper[4698]: I0127 14:29:41.270618 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-g2kkn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4e135f0c-0c36-44f4-afeb-06994affb352\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d17862aabd67b485023ca018499f7224e0bd91ec0aed6bbce5a187f319e3140\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97l66\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:29:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-g2kkn\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:29:41Z is after 2025-08-24T17:21:41Z" Jan 27 14:29:41 crc kubenswrapper[4698]: I0127 14:29:41.348255 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:29:41 crc kubenswrapper[4698]: I0127 14:29:41.348297 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:29:41 crc kubenswrapper[4698]: I0127 14:29:41.348306 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:29:41 crc kubenswrapper[4698]: I0127 14:29:41.348320 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:29:41 crc kubenswrapper[4698]: I0127 14:29:41.348329 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:29:41Z","lastTransitionTime":"2026-01-27T14:29:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:29:41 crc kubenswrapper[4698]: I0127 14:29:41.351213 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zpvcj" Jan 27 14:29:41 crc kubenswrapper[4698]: W0127 14:29:41.365129 4698 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod709dfdd7_f928_4f0b_8f5a_c356614219cb.slice/crio-c21f246d3c7346141ac8b248b0ffe52b2bd350112547e55d885dfd182a1359e9 WatchSource:0}: Error finding container c21f246d3c7346141ac8b248b0ffe52b2bd350112547e55d885dfd182a1359e9: Status 404 returned error can't find the container with id c21f246d3c7346141ac8b248b0ffe52b2bd350112547e55d885dfd182a1359e9 Jan 27 14:29:41 crc kubenswrapper[4698]: I0127 14:29:41.450987 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:29:41 crc kubenswrapper[4698]: I0127 14:29:41.451031 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:29:41 crc kubenswrapper[4698]: I0127 14:29:41.451043 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:29:41 crc kubenswrapper[4698]: I0127 14:29:41.451061 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:29:41 crc kubenswrapper[4698]: I0127 14:29:41.451076 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:29:41Z","lastTransitionTime":"2026-01-27T14:29:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:29:41 crc kubenswrapper[4698]: I0127 14:29:41.557047 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:29:41 crc kubenswrapper[4698]: I0127 14:29:41.557090 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:29:41 crc kubenswrapper[4698]: I0127 14:29:41.557101 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:29:41 crc kubenswrapper[4698]: I0127 14:29:41.557116 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:29:41 crc kubenswrapper[4698]: I0127 14:29:41.557127 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:29:41Z","lastTransitionTime":"2026-01-27T14:29:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:29:41 crc kubenswrapper[4698]: I0127 14:29:41.660824 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:29:41 crc kubenswrapper[4698]: I0127 14:29:41.660894 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:29:41 crc kubenswrapper[4698]: I0127 14:29:41.660913 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:29:41 crc kubenswrapper[4698]: I0127 14:29:41.660938 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:29:41 crc kubenswrapper[4698]: I0127 14:29:41.660954 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:29:41Z","lastTransitionTime":"2026-01-27T14:29:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:29:41 crc kubenswrapper[4698]: I0127 14:29:41.764607 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:29:41 crc kubenswrapper[4698]: I0127 14:29:41.764677 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:29:41 crc kubenswrapper[4698]: I0127 14:29:41.764686 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:29:41 crc kubenswrapper[4698]: I0127 14:29:41.764703 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:29:41 crc kubenswrapper[4698]: I0127 14:29:41.764713 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:29:41Z","lastTransitionTime":"2026-01-27T14:29:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:29:41 crc kubenswrapper[4698]: I0127 14:29:41.867114 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:29:41 crc kubenswrapper[4698]: I0127 14:29:41.867154 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:29:41 crc kubenswrapper[4698]: I0127 14:29:41.867164 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:29:41 crc kubenswrapper[4698]: I0127 14:29:41.867178 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:29:41 crc kubenswrapper[4698]: I0127 14:29:41.867190 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:29:41Z","lastTransitionTime":"2026-01-27T14:29:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:29:41 crc kubenswrapper[4698]: I0127 14:29:41.964518 4698 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-28 06:08:13.337703127 +0000 UTC Jan 27 14:29:41 crc kubenswrapper[4698]: I0127 14:29:41.970557 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:29:41 crc kubenswrapper[4698]: I0127 14:29:41.970611 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:29:41 crc kubenswrapper[4698]: I0127 14:29:41.970629 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:29:41 crc kubenswrapper[4698]: I0127 14:29:41.970668 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:29:41 crc kubenswrapper[4698]: I0127 14:29:41.970684 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:29:41Z","lastTransitionTime":"2026-01-27T14:29:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:29:41 crc kubenswrapper[4698]: I0127 14:29:41.992218 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 14:29:41 crc kubenswrapper[4698]: E0127 14:29:41.992375 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 14:29:42 crc kubenswrapper[4698]: I0127 14:29:42.073054 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:29:42 crc kubenswrapper[4698]: I0127 14:29:42.073104 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:29:42 crc kubenswrapper[4698]: I0127 14:29:42.073114 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:29:42 crc kubenswrapper[4698]: I0127 14:29:42.073133 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:29:42 crc kubenswrapper[4698]: I0127 14:29:42.073150 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:29:42Z","lastTransitionTime":"2026-01-27T14:29:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:29:42 crc kubenswrapper[4698]: I0127 14:29:42.177127 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:29:42 crc kubenswrapper[4698]: I0127 14:29:42.177184 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:29:42 crc kubenswrapper[4698]: I0127 14:29:42.177195 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:29:42 crc kubenswrapper[4698]: I0127 14:29:42.177213 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:29:42 crc kubenswrapper[4698]: I0127 14:29:42.177225 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:29:42Z","lastTransitionTime":"2026-01-27T14:29:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:29:42 crc kubenswrapper[4698]: I0127 14:29:42.190009 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-xmpm6_c59a9d01-79ce-42d9-a41d-39d7d73cb03e/ovnkube-controller/0.log" Jan 27 14:29:42 crc kubenswrapper[4698]: I0127 14:29:42.192521 4698 generic.go:334] "Generic (PLEG): container finished" podID="c59a9d01-79ce-42d9-a41d-39d7d73cb03e" containerID="d5c9782c1425d3337bda4ba4d775ed4c8cf828f4a58294f9a917b443b66e04ab" exitCode=1 Jan 27 14:29:42 crc kubenswrapper[4698]: I0127 14:29:42.192577 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xmpm6" event={"ID":"c59a9d01-79ce-42d9-a41d-39d7d73cb03e","Type":"ContainerDied","Data":"d5c9782c1425d3337bda4ba4d775ed4c8cf828f4a58294f9a917b443b66e04ab"} Jan 27 14:29:42 crc kubenswrapper[4698]: I0127 14:29:42.193281 4698 scope.go:117] "RemoveContainer" containerID="d5c9782c1425d3337bda4ba4d775ed4c8cf828f4a58294f9a917b443b66e04ab" Jan 27 14:29:42 crc kubenswrapper[4698]: I0127 14:29:42.196256 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zpvcj" event={"ID":"709dfdd7-f928-4f0b-8f5a-c356614219cb","Type":"ContainerStarted","Data":"6bec8d64d7f0b7a45b57b810dcf1de846a388902b480230742da0a40da4e67b8"} Jan 27 14:29:42 crc kubenswrapper[4698]: I0127 14:29:42.196325 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zpvcj" event={"ID":"709dfdd7-f928-4f0b-8f5a-c356614219cb","Type":"ContainerStarted","Data":"c21f246d3c7346141ac8b248b0ffe52b2bd350112547e55d885dfd182a1359e9"} Jan 27 14:29:42 crc kubenswrapper[4698]: I0127 14:29:42.218988 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xmpm6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c59a9d01-79ce-42d9-a41d-39d7d73cb03e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:29Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d81778ae40167de932debf7266f3c793d77edeace2116220c21cf89e15e4cc98\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r7gpg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea6d17f5861064198539f0b84b559f9d28cfa5decaa89a9059469af5a3e12e09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r7gpg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c98fd6e3005c979f95419686ec2143aa282fa56d7306ecaff766759e34b68b2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r7gpg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e38585fe9db52e6e7cc5c3a41e23a0bb761977a3b25fd281d86458f422e3791\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r7gpg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b80c023f92c9854f8c16b6b50c10dac58f968938615375019ba2875d9cd69e5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r7gpg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://87e7499ab247dff50eee2a5e113ed36ff00c467e8e1bcdfee45e8835815d4b81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r7gpg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5c9782c1425d3337bda4ba4d775ed4c8cf828f4
a58294f9a917b443b66e04ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d5c9782c1425d3337bda4ba4d775ed4c8cf828f4a58294f9a917b443b66e04ab\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T14:29:41Z\\\",\\\"message\\\":\\\"ions/factory.go:140\\\\nI0127 14:29:40.787593 5952 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0127 14:29:40.787788 5952 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0127 14:29:40.788035 5952 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0127 14:29:40.788309 5952 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0127 14:29:40.788538 5952 reflector.go:311] Stopping reflector *v1.ClusterUserDefinedNetwork (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/userdefinednetwork/v1/apis/informers/externalversions/factory.go:140\\\\nI0127 14:29:40.788960 5952 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0127 14:29:40.789010 5952 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0127 14:29:40.789019 5952 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0127 14:29:40.789094 5952 factory.go:656] Stopping watch factory\\\\nI0127 14:29:40.789105 5952 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0127 14:29:40.789160 5952 ovnkube.go:599] Stopped ovnkube\\\\nI0127 14:29:40.789182 5952 handler.go:208] Removed *v1.Node event handler 
7\\\\nI01\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T14:29:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r7gpg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0356c9c90e9c8e65b9452361aa486ffe95757bd69803077b32fe7ba0223ae1da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r7gpg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3a80917737e61fdb510dabf332c3bd87fb858d27849c42b77703a8becb66b4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47e
f0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b3a80917737e61fdb510dabf332c3bd87fb858d27849c42b77703a8becb66b4c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:29:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:29:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r7gpg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:29:29Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-xmpm6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:29:42Z is after 2025-08-24T17:21:41Z" Jan 27 14:29:42 crc kubenswrapper[4698]: I0127 14:29:42.238913 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:29:42Z is after 2025-08-24T17:21:41Z" Jan 27 14:29:42 crc kubenswrapper[4698]: I0127 14:29:42.252901 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-g9vj8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2776dfc9-913b-42b0-9cf2-6fea98d83bc9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f471cea0dc934e7c77ac6ead7ec77f8feb7379b50a7bb5255d7100fbea635a3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bk7ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:29:28Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-g9vj8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-01-27T14:29:42Z is after 2025-08-24T17:21:41Z" Jan 27 14:29:42 crc kubenswrapper[4698]: I0127 14:29:42.267461 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-ndrd6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e403fc5-7005-474c-8c75-b7906b481677\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6f9227503b5cf0962f3fb8ece18f54d84ff2cdfe5628ebf0fa0cef7fe5695c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tt2dz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c533539135e25b310541c9f0b12adcf526b4651a2f6272bce12ea9686dd39ac9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tt2dz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:29:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-ndrd6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:29:42Z is after 2025-08-24T17:21:41Z" Jan 27 14:29:42 crc kubenswrapper[4698]: I0127 14:29:42.280111 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:29:42 crc kubenswrapper[4698]: I0127 14:29:42.280152 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:29:42 crc kubenswrapper[4698]: I0127 14:29:42.280162 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:29:42 crc kubenswrapper[4698]: I0127 14:29:42.280176 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:29:42 crc kubenswrapper[4698]: I0127 14:29:42.280186 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:29:42Z","lastTransitionTime":"2026-01-27T14:29:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:29:42 crc kubenswrapper[4698]: I0127 14:29:42.284063 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-vg6nd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e045926d-2303-47ea-b25d-dc23982427e4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e1bf1729f089f253e8e1259e40ef976b85e433358642b6231f081c494851048\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66snv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c935f517f9ca140d63d1637ef8c01
a4a41b94fa72e1086eb6184ae78a6d8da8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c935f517f9ca140d63d1637ef8c01a4a41b94fa72e1086eb6184ae78a6d8da8c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:29:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:29:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66snv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa061cb18143e07dc72a6b75d78b5959b40a9969030ff21d402863b85eba6f91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa061cb18143e07dc72a6b75d78b5959b40a9969030ff21d402863b85eba6f91\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:29:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:29:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66snv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be40eff60b17b2c010a5e4394fb6843ceb8ee2752781ccbf0030afdd212b81e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be40eff60b17b2c010a5e4394fb6843ceb8ee2752781ccbf0030afdd212b81e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:29:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:29:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\"
,\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66snv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3018d87c9354e8b8d90c59a4a45e982d0b35b89a0b0a9fde703e9452cf5f0024\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3018d87c9354e8b8d90c59a4a45e982d0b35b89a0b0a9fde703e9452cf5f0024\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:29:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:29:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66snv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de493498bbac49791f6b7409916fbbb62f64bed0def214ea6a06f5182bf7ad3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://de493498bbac49791f6b7409916fbbb62f64bed0def214ea6a06f5182bf7ad3b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:29:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:29:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66snv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://14af7d5dd3526aef22f7e95409ff88c3bbbc29e6dd9dd7c41b9be5bab49bf074\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\
\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://14af7d5dd3526aef22f7e95409ff88c3bbbc29e6dd9dd7c41b9be5bab49bf074\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:29:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:29:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66snv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:29:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-vg6nd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:29:42Z is after 2025-08-24T17:21:41Z" Jan 27 14:29:42 crc kubenswrapper[4698]: I0127 14:29:42.297375 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8dae57de05788973b988fee332168db475b66bb71b4a0570e7d95341dfe84a96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:29:42Z is after 2025-08-24T17:21:41Z" Jan 27 14:29:42 crc kubenswrapper[4698]: I0127 
14:29:42.313341 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-flx9b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2c9fcf55-4a50-4a87-937b-975bc7e00bfa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0cb86e0c67991a076cebc33eb390f49c0f4c40332c423936dd8f3c59b9f7474b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-spqpw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:29:31Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-flx9b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:29:42Z is after 2025-08-24T17:21:41Z" Jan 27 14:29:42 crc kubenswrapper[4698]: I0127 14:29:42.329798 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae7402df1ba165eb747293c1d23e4ea89c50cf2342bf52db437c11c283f68755\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:29:42Z is after 2025-08-24T17:21:41Z" Jan 27 14:29:42 crc kubenswrapper[4698]: I0127 14:29:42.343300 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:24Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:29:42Z is after 2025-08-24T17:21:41Z" Jan 27 14:29:42 crc kubenswrapper[4698]: I0127 14:29:42.358399 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:24Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:29:42Z is after 2025-08-24T17:21:41Z" Jan 27 14:29:42 crc kubenswrapper[4698]: I0127 14:29:42.373241 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://625a1f8c29f9277903045e66df7a55a4d791239b141aba99572db228ea2735ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://314dcaad0f8949a4671b47667f654bdfacfb6b200438cfbd70401455f6e188b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mount
Path\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:29:42Z is after 2025-08-24T17:21:41Z" Jan 27 14:29:42 crc kubenswrapper[4698]: I0127 14:29:42.383504 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:29:42 crc kubenswrapper[4698]: I0127 14:29:42.383790 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:29:42 crc kubenswrapper[4698]: I0127 14:29:42.383896 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:29:42 crc kubenswrapper[4698]: I0127 14:29:42.383981 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:29:42 crc kubenswrapper[4698]: I0127 14:29:42.384096 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:29:42Z","lastTransitionTime":"2026-01-27T14:29:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:29:42 crc kubenswrapper[4698]: I0127 14:29:42.387760 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0ce1118e-e5ad-4adb-8d50-c758116b45ec\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dbbe49aa918c711f9ec638a3fb42ef3be38a9ed3bc0cdde639ba8bba11eab4de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9525d1a6372b19bc2f30eb4e8e522833badb6319bac25ecae77b3313acf1b1a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e384a95fdadc32733a7213ba468af64a1a33e7dd08722cbffde3ad0c461d8ba\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartC
ount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9ee5f5f6bb400359196a28fe605c173c38223c8982a991750c91601aeace150\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e780429dcca871712b55832b0cd0b3e78b7343569cfa71a87bbb2be1bd3f129\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T14:29:24Z\\\",\\\"message\\\":\\\"espace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0127 14:29:09.491666 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 14:29:09.493926 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-900123067/tls.crt::/tmp/serving-cert-900123067/tls.key\\\\\\\"\\\\nI0127 14:29:24.319898 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 14:29:24.322681 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 14:29:24.322759 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 14:29:24.322814 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 14:29:24.322847 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 14:29:24.331581 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 14:29:24.331608 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 14:29:24.331613 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 14:29:24.331618 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 14:29:24.331622 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 14:29:24.331626 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 14:29:24.331630 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 14:29:24.331748 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 14:29:24.333757 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T14:29:08Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a526cfa90f2ba7d074aa6007c909e68f72d5e7a4b91d893c955e7460ff208cfb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:08Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e1a3e5a71b3094750c736335cd70f45977566e58702231351d9d6b195a0cb4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5e1a3e5a71b3094750c736335cd70f45977566e58702231351d9d6b195a0cb4b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:29:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:29:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:29:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:29:42Z is after 2025-08-24T17:21:41Z" Jan 27 14:29:42 crc kubenswrapper[4698]: I0127 14:29:42.402999 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"44c70587-abb1-4e02-ab7e-223d57817925\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b1a209c31294456c43022bb9df5fd230415596fc43ecdb2b6349114989c3ad8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://66b4b527d7836ceed419f9b8a1d851047efc8c052c110ac857316a6a73444333\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://94a4d586611f4ca0864567739d1ed225b7c63ebd6e6be9e5c0e385379de2ad01\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://95357819c206b188c1fd6bb937b2a57b480a9a61e95143de444141b16e4963ea\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:29:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:29:42Z is after 2025-08-24T17:21:41Z" Jan 27 14:29:42 crc kubenswrapper[4698]: I0127 14:29:42.418079 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-g2kkn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4e135f0c-0c36-44f4-afeb-06994affb352\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d17862aabd67b485023ca018499f7224e0bd91ec0aed6bbce5a187f319e3140\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run
/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97l66\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:29:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-g2kkn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:29:42Z is after 2025-08-24T17:21:41Z" Jan 27 14:29:42 crc kubenswrapper[4698]: I0127 14:29:42.429916 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zpvcj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"709dfdd7-f928-4f0b-8f5a-c356614219cb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:41Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:41Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pftbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pftbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:29:41Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-zpvcj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:29:42Z is after 2025-08-24T17:21:41Z" Jan 27 14:29:42 crc kubenswrapper[4698]: I0127 14:29:42.486890 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:29:42 crc kubenswrapper[4698]: I0127 14:29:42.486923 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:29:42 crc kubenswrapper[4698]: I0127 14:29:42.486933 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:29:42 crc kubenswrapper[4698]: I0127 14:29:42.486949 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:29:42 crc kubenswrapper[4698]: I0127 14:29:42.486962 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:29:42Z","lastTransitionTime":"2026-01-27T14:29:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:29:42 crc kubenswrapper[4698]: I0127 14:29:42.488415 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/network-metrics-daemon-lpvsw"] Jan 27 14:29:42 crc kubenswrapper[4698]: I0127 14:29:42.488836 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lpvsw" Jan 27 14:29:42 crc kubenswrapper[4698]: E0127 14:29:42.488932 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lpvsw" podUID="621bb20d-2ffa-4e89-b522-d04b4764fcc3" Jan 27 14:29:42 crc kubenswrapper[4698]: I0127 14:29:42.500768 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-g9vj8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2776dfc9-913b-42b0-9cf2-6fea98d83bc9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f471cea0dc934e7c77ac6ead7ec77f8feb7379b50a7bb5255d7100fbea635a3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bk7ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:29:28Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-g9vj8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:29:42Z is after 2025-08-24T17:21:41Z" Jan 27 14:29:42 crc kubenswrapper[4698]: 
I0127 14:29:42.514614 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-ndrd6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e403fc5-7005-474c-8c75-b7906b481677\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6f9227503b5cf0962f3fb8ece18f54d84ff2cdfe5628ebf0fa0cef7fe5695c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tt2dz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c533539135e25b310541c9f0b12adcf526b4651a2f6272bce12ea9686dd39ac9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tt2dz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:29:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-ndrd6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not 
yet valid: current time 2026-01-27T14:29:42Z is after 2025-08-24T17:21:41Z" Jan 27 14:29:42 crc kubenswrapper[4698]: I0127 14:29:42.533116 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-vg6nd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e045926d-2303-47ea-b25d-dc23982427e4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e1bf1729f089f253e8e1259e40ef976b85e433358642b6231f081c494851048\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66snv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c935f517f9ca140d63d1637ef8c01a4a41b94fa72e1086eb6184ae78a6d8da8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c935f517f9ca140d63d1637ef8c01a4a41b94fa72e1086eb6184ae78a6d8da8c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:29:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:29:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66snv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa061cb18143e07dc72a6b75d7
8b5959b40a9969030ff21d402863b85eba6f91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa061cb18143e07dc72a6b75d78b5959b40a9969030ff21d402863b85eba6f91\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:29:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:29:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66snv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be40eff60b17b2c010a5e4394fb6843ceb8ee2752781ccbf0030afdd212b81e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be40eff60b17b2c010a5e4394fb6843ceb8ee2752781ccbf0030afdd212b81e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:29:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:29:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66snv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3018d87c9354e8b8d90c59a4a45e982d0b35b89a0b0a9fde703e9452cf5f0024\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3018d87c9354e8b8d90c59a4a45e982d0b35b89a0b0a9fde703e9452cf5f0024\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:29:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:29:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"
name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66snv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de493498bbac49791f6b7409916fbbb62f64bed0def214ea6a06f5182bf7ad3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://de493498bbac49791f6b7409916fbbb62f64bed0def214ea6a06f5182bf7ad3b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:29:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:29:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66snv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://14af7d5dd3526aef22f7e95409ff88c3bbbc29e6dd9dd7c41b9be5bab49bf074\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://14af7d5dd3526aef22f7e95409ff88c3bbbc29e6dd9dd7c41b9be5bab49bf074\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:29:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:29:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66snv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:29:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-vg6nd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:29:42Z is after 2025-08-24T17:21:41Z" Jan 27 14:29:42 crc kubenswrapper[4698]: I0127 14:29:42.555053 4698 status_manager.go:875] 
"Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xmpm6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c59a9d01-79ce-42d9-a41d-39d7d73cb03e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:29Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:29Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d81778ae40167de932debf7266f3c793d77edeace2116220c21cf89e15e4cc98\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r7gpg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea6d17f5861064198539f0b84b559f9d28cfa5decaa89a9059469af5a3e12e09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r7gpg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c98fd6e3005c979f95419686ec2143aa282fa56d7306ecaff766759e34b68b2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-d
ev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r7gpg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e38585fe9db52e6e7cc5c3a41e23a0bb761977a3b25fd281d86458f422e3791\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r7gpg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b80c023f92c9854f8c16b6b50c10dac58f968938615375019ba2875d9cd69e5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r7gpg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://87e7499ab247dff50eee2a5e113ed36ff00c467e8e1bcdfee45e8835815d4b81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastSta
te\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r7gpg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5c9782c1425d3337bda4ba4d775ed4c8cf828f4a58294f9a917b443b66e04ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d5c9782c1425d3337bda4ba4d775ed4c8cf828f4a58294f9a917b443b66e04ab\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T14:29:41Z\\\",\\\"message\\\":\\\"ions/factory.go:140\\\\nI0127 14:29:40.787593 5952 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0127 14:29:40.787788 5952 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0127 14:29:40.788035 5952 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0127 14:29:40.788309 5952 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0127 14:29:40.788538 5952 reflector.go:311] Stopping reflector *v1.ClusterUserDefinedNetwork (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/userdefinednetwork/v1/apis/informers/externalversions/factory.go:140\\\\nI0127 14:29:40.788960 5952 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0127 14:29:40.789010 5952 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0127 14:29:40.789019 5952 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0127 14:29:40.789094 5952 factory.go:656] Stopping watch factory\\\\nI0127 14:29:40.789105 5952 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0127 14:29:40.789160 5952 ovnkube.go:599] Stopped ovnkube\\\\nI0127 14:29:40.789182 5952 handler.go:208] Removed *v1.Node event handler 
7\\\\nI01\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T14:29:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r7gpg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0356c9c90e9c8e65b9452361aa486ffe95757bd69803077b32fe7ba0223ae1da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r7gpg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3a80917737e61fdb510dabf332c3bd87fb858d27849c42b77703a8becb66b4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47e
f0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b3a80917737e61fdb510dabf332c3bd87fb858d27849c42b77703a8becb66b4c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:29:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:29:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r7gpg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:29:29Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-xmpm6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:29:42Z is after 2025-08-24T17:21:41Z" Jan 27 14:29:42 crc kubenswrapper[4698]: I0127 14:29:42.567014 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-lpvsw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"621bb20d-2ffa-4e89-b522-d04b4764fcc3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:42Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:42Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gz87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gz87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:29:42Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-lpvsw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:29:42Z is after 2025-08-24T17:21:41Z" Jan 27 14:29:42 crc kubenswrapper[4698]: I0127 14:29:42.585189 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:29:42Z is after 2025-08-24T17:21:41Z" Jan 27 14:29:42 crc kubenswrapper[4698]: I0127 14:29:42.588970 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:29:42 crc kubenswrapper[4698]: I0127 14:29:42.589004 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:29:42 crc kubenswrapper[4698]: I0127 14:29:42.589012 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:29:42 crc kubenswrapper[4698]: I0127 14:29:42.589026 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:29:42 crc kubenswrapper[4698]: I0127 14:29:42.589034 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:29:42Z","lastTransitionTime":"2026-01-27T14:29:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:29:42 crc kubenswrapper[4698]: I0127 14:29:42.600061 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae7402df1ba165eb747293c1d23e4ea89c50cf2342bf52db437c11c283f68755\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:29:42Z is after 2025-08-24T17:21:41Z" Jan 27 14:29:42 crc kubenswrapper[4698]: I0127 14:29:42.603820 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5gz87\" (UniqueName: \"kubernetes.io/projected/621bb20d-2ffa-4e89-b522-d04b4764fcc3-kube-api-access-5gz87\") pod \"network-metrics-daemon-lpvsw\" (UID: \"621bb20d-2ffa-4e89-b522-d04b4764fcc3\") " pod="openshift-multus/network-metrics-daemon-lpvsw" Jan 27 14:29:42 crc kubenswrapper[4698]: I0127 14:29:42.603871 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/621bb20d-2ffa-4e89-b522-d04b4764fcc3-metrics-certs\") pod \"network-metrics-daemon-lpvsw\" (UID: \"621bb20d-2ffa-4e89-b522-d04b4764fcc3\") " pod="openshift-multus/network-metrics-daemon-lpvsw" Jan 27 14:29:42 crc kubenswrapper[4698]: I0127 14:29:42.613352 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:24Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:29:42Z is after 2025-08-24T17:21:41Z" Jan 27 14:29:42 crc kubenswrapper[4698]: I0127 14:29:42.627057 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8dae57de05788973b988fee332168db475b66bb71b4a0570e7d95341dfe84a96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:29:42Z is after 2025-08-24T17:21:41Z" Jan 27 14:29:42 crc kubenswrapper[4698]: I0127 14:29:42.637570 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-flx9b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2c9fcf55-4a50-4a87-937b-975bc7e00bfa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0cb86e0c67991a076cebc33eb390f49c0f4c40332c423936dd8f3c59b9f7474b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-spqpw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:29:31Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-flx9b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:29:42Z is after 2025-08-24T17:21:41Z" Jan 27 14:29:42 crc kubenswrapper[4698]: I0127 14:29:42.651107 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:24Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:29:42Z is after 2025-08-24T17:21:41Z" Jan 27 14:29:42 crc kubenswrapper[4698]: I0127 14:29:42.664554 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://625a1f8c29f9277903045e66df7a55a4d791239b141aba99572db228ea2735ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://314dcaad0f8949a4671b47667f654bdfacfb6b200438cfbd70401455f6e188b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io
/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:29:42Z is after 2025-08-24T17:21:41Z" Jan 27 14:29:42 crc kubenswrapper[4698]: I0127 14:29:42.679113 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"44c70587-abb1-4e02-ab7e-223d57817925\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b1a209c31294456c43022bb9df5fd230415596fc43ecdb2b6349114989c3ad8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://66b4b527d7836ceed419f9b8a1d851047efc8c052c110ac857316a6a73444333\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,
\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://94a4d586611f4ca0864567739d1ed225b7c63ebd6e6be9e5c0e385379de2ad01\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://95357819c206b188c1fd6bb937b2a57b480a9a61e95143de444141b16e4963ea\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:29:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:29:42Z is after 2025-08-24T17:21:41Z" Jan 27 14:29:42 crc kubenswrapper[4698]: I0127 14:29:42.691833 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:29:42 crc kubenswrapper[4698]: I0127 14:29:42.691948 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:29:42 crc kubenswrapper[4698]: I0127 14:29:42.691966 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:29:42 crc kubenswrapper[4698]: I0127 14:29:42.691989 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:29:42 crc kubenswrapper[4698]: I0127 14:29:42.692000 4698 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:29:42Z","lastTransitionTime":"2026-01-27T14:29:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:29:42 crc kubenswrapper[4698]: I0127 14:29:42.693366 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-g2kkn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4e135f0c-0c36-44f4-afeb-06994affb352\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d17862aabd67b485023ca018499f7224e0bd91ec0aed6bbce5a187f319e3140\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube
rnetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97l66\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:29:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-g2kkn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:29:42Z is after 2025-08-24T17:21:41Z" Jan 27 14:29:42 crc kubenswrapper[4698]: I0127 14:29:42.704520 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5gz87\" (UniqueName: \"kubernetes.io/projected/621bb20d-2ffa-4e89-b522-d04b4764fcc3-kube-api-access-5gz87\") pod \"network-metrics-daemon-lpvsw\" (UID: \"621bb20d-2ffa-4e89-b522-d04b4764fcc3\") " pod="openshift-multus/network-metrics-daemon-lpvsw" Jan 27 14:29:42 crc kubenswrapper[4698]: I0127 14:29:42.704569 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/621bb20d-2ffa-4e89-b522-d04b4764fcc3-metrics-certs\") pod \"network-metrics-daemon-lpvsw\" (UID: \"621bb20d-2ffa-4e89-b522-d04b4764fcc3\") " pod="openshift-multus/network-metrics-daemon-lpvsw" Jan 27 14:29:42 crc kubenswrapper[4698]: E0127 14:29:42.704726 4698 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 27 14:29:42 crc kubenswrapper[4698]: E0127 14:29:42.704807 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/621bb20d-2ffa-4e89-b522-d04b4764fcc3-metrics-certs podName:621bb20d-2ffa-4e89-b522-d04b4764fcc3 nodeName:}" failed. No retries permitted until 2026-01-27 14:29:43.20478843 +0000 UTC m=+38.881565895 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/621bb20d-2ffa-4e89-b522-d04b4764fcc3-metrics-certs") pod "network-metrics-daemon-lpvsw" (UID: "621bb20d-2ffa-4e89-b522-d04b4764fcc3") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 27 14:29:42 crc kubenswrapper[4698]: I0127 14:29:42.706836 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zpvcj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"709dfdd7-f928-4f0b-8f5a-c356614219cb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:41Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:41Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pftbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pftbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:29:41Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-zpvcj\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:29:42Z is after 2025-08-24T17:21:41Z" Jan 27 14:29:42 crc kubenswrapper[4698]: I0127 14:29:42.721014 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0ce1118e-e5ad-4adb-8d50-c758116b45ec\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dbbe49aa918c711f9ec638a3fb42ef3be38a9ed3bc0cdde639ba8bba11eab4de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9525d1a6372b19bc2f30eb4e8e522833badb6319bac25ecae77b3313acf1b1a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e384a95fdadc32733a7213ba468af64a1a33e7dd08722cbffde3ad0c461d8ba\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1
ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9ee5f5f6bb400359196a28fe605c173c38223c8982a991750c91601aeace150\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e780429dcca871712b55832b0cd0b3e78b7343569cfa71a87bbb2be1bd3f129\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T14:29:24Z\\\",\\\"message\\\":\\\"espace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0127 14:29:09.491666 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 14:29:09.493926 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-900123067/tls.crt::/tmp/serving-cert-900123067/tls.key\\\\\\\"\\\\nI0127 14:29:24.319898 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 14:29:24.322681 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 14:29:24.322759 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 14:29:24.322814 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 14:29:24.322847 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 14:29:24.331581 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 14:29:24.331608 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 14:29:24.331613 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 14:29:24.331618 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 14:29:24.331622 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 14:29:24.331626 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 14:29:24.331630 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 14:29:24.331748 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 14:29:24.333757 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T14:29:08Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a526cfa90f2ba7d074aa6007c909e68f72d5e7a4b91d893c955e7460ff208cfb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:08Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e1a3e5a71b3094750c736335cd70f45977566e58702231351d9d6b195a0cb4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5e1a3e5a71b3094750c736335cd70f45977566e58702231351d9d6b195a0cb4b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:29:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:29:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:29:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:29:42Z is after 2025-08-24T17:21:41Z" Jan 27 14:29:42 crc kubenswrapper[4698]: I0127 14:29:42.725682 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5gz87\" (UniqueName: \"kubernetes.io/projected/621bb20d-2ffa-4e89-b522-d04b4764fcc3-kube-api-access-5gz87\") pod \"network-metrics-daemon-lpvsw\" (UID: \"621bb20d-2ffa-4e89-b522-d04b4764fcc3\") " pod="openshift-multus/network-metrics-daemon-lpvsw" Jan 27 14:29:42 crc kubenswrapper[4698]: I0127 14:29:42.794071 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:29:42 crc kubenswrapper[4698]: I0127 14:29:42.794188 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:29:42 crc kubenswrapper[4698]: I0127 14:29:42.794209 4698 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:29:42 crc kubenswrapper[4698]: I0127 14:29:42.794233 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:29:42 crc kubenswrapper[4698]: I0127 14:29:42.794349 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:29:42Z","lastTransitionTime":"2026-01-27T14:29:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:29:42 crc kubenswrapper[4698]: I0127 14:29:42.897479 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:29:42 crc kubenswrapper[4698]: I0127 14:29:42.897526 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:29:42 crc kubenswrapper[4698]: I0127 14:29:42.897538 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:29:42 crc kubenswrapper[4698]: I0127 14:29:42.897555 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:29:42 crc kubenswrapper[4698]: I0127 14:29:42.897566 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:29:42Z","lastTransitionTime":"2026-01-27T14:29:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:29:42 crc kubenswrapper[4698]: I0127 14:29:42.964727 4698 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-21 07:43:33.602690287 +0000 UTC Jan 27 14:29:42 crc kubenswrapper[4698]: I0127 14:29:42.991518 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 14:29:42 crc kubenswrapper[4698]: E0127 14:29:42.991691 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 14:29:42 crc kubenswrapper[4698]: I0127 14:29:42.991762 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 14:29:42 crc kubenswrapper[4698]: E0127 14:29:42.991917 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 14:29:42 crc kubenswrapper[4698]: I0127 14:29:42.999447 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:29:42 crc kubenswrapper[4698]: I0127 14:29:42.999491 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:29:42 crc kubenswrapper[4698]: I0127 14:29:42.999503 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:29:42 crc kubenswrapper[4698]: I0127 14:29:42.999519 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:29:42 crc kubenswrapper[4698]: I0127 14:29:42.999533 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:29:42Z","lastTransitionTime":"2026-01-27T14:29:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:29:43 crc kubenswrapper[4698]: I0127 14:29:43.102479 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:29:43 crc kubenswrapper[4698]: I0127 14:29:43.102541 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:29:43 crc kubenswrapper[4698]: I0127 14:29:43.102553 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:29:43 crc kubenswrapper[4698]: I0127 14:29:43.102569 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:29:43 crc kubenswrapper[4698]: I0127 14:29:43.102588 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:29:43Z","lastTransitionTime":"2026-01-27T14:29:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:29:43 crc kubenswrapper[4698]: I0127 14:29:43.202688 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zpvcj" event={"ID":"709dfdd7-f928-4f0b-8f5a-c356614219cb","Type":"ContainerStarted","Data":"5fb54d7728b2438444c924295722867c673c8b82768ed17b6dd2104d404e198e"} Jan 27 14:29:43 crc kubenswrapper[4698]: I0127 14:29:43.203954 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:29:43 crc kubenswrapper[4698]: I0127 14:29:43.203995 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:29:43 crc kubenswrapper[4698]: I0127 14:29:43.204005 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:29:43 crc kubenswrapper[4698]: I0127 14:29:43.204019 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:29:43 crc kubenswrapper[4698]: I0127 14:29:43.204030 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:29:43Z","lastTransitionTime":"2026-01-27T14:29:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:29:43 crc kubenswrapper[4698]: I0127 14:29:43.205391 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-xmpm6_c59a9d01-79ce-42d9-a41d-39d7d73cb03e/ovnkube-controller/0.log" Jan 27 14:29:43 crc kubenswrapper[4698]: I0127 14:29:43.209401 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xmpm6" event={"ID":"c59a9d01-79ce-42d9-a41d-39d7d73cb03e","Type":"ContainerStarted","Data":"acf945703822364baf4ee29480af06c0936afa4feddc6f7cba2a901e15a56fec"} Jan 27 14:29:43 crc kubenswrapper[4698]: I0127 14:29:43.209961 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-xmpm6" Jan 27 14:29:43 crc kubenswrapper[4698]: I0127 14:29:43.215335 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/621bb20d-2ffa-4e89-b522-d04b4764fcc3-metrics-certs\") pod \"network-metrics-daemon-lpvsw\" (UID: \"621bb20d-2ffa-4e89-b522-d04b4764fcc3\") " pod="openshift-multus/network-metrics-daemon-lpvsw" Jan 27 14:29:43 crc kubenswrapper[4698]: E0127 14:29:43.215489 4698 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 27 14:29:43 crc kubenswrapper[4698]: E0127 14:29:43.215583 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/621bb20d-2ffa-4e89-b522-d04b4764fcc3-metrics-certs podName:621bb20d-2ffa-4e89-b522-d04b4764fcc3 nodeName:}" failed. No retries permitted until 2026-01-27 14:29:44.215560201 +0000 UTC m=+39.892337736 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/621bb20d-2ffa-4e89-b522-d04b4764fcc3-metrics-certs") pod "network-metrics-daemon-lpvsw" (UID: "621bb20d-2ffa-4e89-b522-d04b4764fcc3") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 27 14:29:43 crc kubenswrapper[4698]: I0127 14:29:43.219414 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0ce1118e-e5ad-4adb-8d50-c758116b45ec\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dbbe49aa918c711f9ec638a3fb42ef3be38a9ed3bc0cdde639ba8bba11eab4de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9525d1a6372b19bc2f30eb4e8e522833badb6319bac25ecae77b3313acf1b1a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e384a95fdadc32733a7213ba468af64a1a33e7dd08722cbffde3ad0c461d8ba\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"
,\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9ee5f5f6bb400359196a28fe605c173c38223c8982a991750c91601aeace150\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e780429dcca871712b55832b0cd0b3e78b7343569cfa71a87bbb2be1bd3f129\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T14:29:24Z\\\",\\\"message\\\":\\\"espace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0127 14:29:09.491666 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 14:29:09.493926 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-900123067/tls.crt::/tmp/serving-cert-900123067/tls.key\\\\\\\"\\\\nI0127 14:29:24.319898 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 14:29:24.322681 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 14:29:24.322759 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 14:29:24.322814 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 14:29:24.322847 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 14:29:24.331581 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 14:29:24.331608 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 14:29:24.331613 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 14:29:24.331618 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 14:29:24.331622 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 14:29:24.331626 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 14:29:24.331630 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 14:29:24.331748 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 14:29:24.333757 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T14:29:08Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a526cfa90f2ba7d074aa6007c909e68f72d5e7a4b91d893c955e7460ff208cfb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:08Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e1a3e5a71b3094750c736335cd70f45977566e58702231351d9d6b195a0cb4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5e1a3e5a71b3094750c736335cd70f45977566e58702231351d9d6b195a0cb4b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:29:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:29:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:29:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:29:43Z is after 2025-08-24T17:21:41Z" Jan 27 14:29:43 crc kubenswrapper[4698]: I0127 14:29:43.235346 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"44c70587-abb1-4e02-ab7e-223d57817925\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b1a209c31294456c43022bb9df5fd230415596fc43ecdb2b6349114989c3ad8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://66b4b527d7836ceed419f9b8a1d851047efc8c052c110ac857316a6a73444333\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://94a4d586611f4ca0864567739d1ed225b7c63ebd6e6be9e5c0e385379de2ad01\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://95357819c206b188c1fd6bb937b2a57b480a9a61e95143de444141b16e4963ea\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:29:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:29:43Z is after 2025-08-24T17:21:41Z" Jan 27 14:29:43 crc kubenswrapper[4698]: I0127 14:29:43.250339 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-g2kkn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4e135f0c-0c36-44f4-afeb-06994affb352\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d17862aabd67b485023ca018499f7224e0bd91ec0aed6bbce5a187f319e3140\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run
/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97l66\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:29:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-g2kkn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:29:43Z is after 2025-08-24T17:21:41Z" Jan 27 14:29:43 crc kubenswrapper[4698]: I0127 14:29:43.266872 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zpvcj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"709dfdd7-f928-4f0b-8f5a-c356614219cb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6bec8d64d7f0b7a45b57b810dcf1de846a388902b480230742da0a40da4e67b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pftbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5fb54d7728b2438444c924295722867c673c8b82768ed17b6dd2104d404e198e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pftbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:29:41Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-zpvcj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:29:43Z is after 2025-08-24T17:21:41Z" Jan 27 
14:29:43 crc kubenswrapper[4698]: I0127 14:29:43.285273 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:29:43Z is after 2025-08-24T17:21:41Z" Jan 27 14:29:43 crc kubenswrapper[4698]: I0127 14:29:43.299908 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-g9vj8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2776dfc9-913b-42b0-9cf2-6fea98d83bc9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f471cea0dc934e7c77ac6ead7ec77f8feb7379b50a7bb5255d7100fbea635a3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bk7ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:29:28Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-g9vj8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:29:43Z is after 2025-08-24T17:21:41Z" Jan 27 14:29:43 crc kubenswrapper[4698]: I0127 14:29:43.306584 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:29:43 crc kubenswrapper[4698]: I0127 14:29:43.306625 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:29:43 crc kubenswrapper[4698]: I0127 14:29:43.306688 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:29:43 crc kubenswrapper[4698]: I0127 14:29:43.306711 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:29:43 crc kubenswrapper[4698]: I0127 14:29:43.306719 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:29:43Z","lastTransitionTime":"2026-01-27T14:29:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:29:43 crc kubenswrapper[4698]: I0127 14:29:43.313724 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-ndrd6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e403fc5-7005-474c-8c75-b7906b481677\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6f9227503b5cf0962f3fb8ece18f54d84ff2cdfe5628ebf0fa0cef7fe5695c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tt2dz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c533539135e25b310541c9f0b12adcf526b4651a2f6272bce12ea9686dd39ac9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tt2dz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:29:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-ndrd6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to 
call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:29:43Z is after 2025-08-24T17:21:41Z" Jan 27 14:29:43 crc kubenswrapper[4698]: I0127 14:29:43.328860 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-vg6nd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e045926d-2303-47ea-b25d-dc23982427e4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e1bf1729f089f253e8e1259e40ef976b85e433358642b6231f081c494851048\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66snv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c935f517f9ca140d63d1637ef8c01a4a41b94fa72e1086eb6184ae78a6d8da8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c935f517f9ca140d63d1637ef8c01a4a41b94fa72e1086eb6184ae78a6d8da8c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:29:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:29:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access
-66snv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa061cb18143e07dc72a6b75d78b5959b40a9969030ff21d402863b85eba6f91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa061cb18143e07dc72a6b75d78b5959b40a9969030ff21d402863b85eba6f91\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:29:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:29:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66snv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be40eff60b17b2c010a5e4394fb6843ceb8ee2752781ccbf0030afdd212b81e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be40eff60b17b2c010a5e4394fb6843ceb8ee2752781ccbf0030afdd212b81e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:29:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:29:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66snv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3018d87c9354e8b8d90c59a4a45e982d0b35b89a0b0a9fde703e9452cf5f0024\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3018d87c9354e8b8d90c59a4a45e982d0b35b89a0b0a9fde703e9452cf5f0024\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:29:32Z\\\",\\\"reaso
n\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:29:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66snv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de493498bbac49791f6b7409916fbbb62f64bed0def214ea6a06f5182bf7ad3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://de493498bbac49791f6b7409916fbbb62f64bed0def214ea6a06f5182bf7ad3b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:29:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:29:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66snv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://14af7d5dd3526aef22f7e95409ff88c3bbbc29e6dd9dd7c41b9be5bab49bf074\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://14af7d5dd3526aef22f7e95409ff88c3bbbc29e6dd9dd7c41b9be5bab49bf074\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:29:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:29:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66snv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:29:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-vg6nd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-01-27T14:29:43Z is after 2025-08-24T17:21:41Z" Jan 27 14:29:43 crc kubenswrapper[4698]: I0127 14:29:43.348966 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xmpm6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c59a9d01-79ce-42d9-a41d-39d7d73cb03e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:29Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:29Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d81778ae40167de932debf7266f3c793d77edeace2116220c21cf89e15e4cc98\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r7gpg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea6d17f5861064198539f0b84b559f9d28cfa5decaa89a9059469af5a3e12e09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r7gpg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\
"containerID\\\":\\\"cri-o://c98fd6e3005c979f95419686ec2143aa282fa56d7306ecaff766759e34b68b2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r7gpg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e38585fe9db52e6e7cc5c3a41e23a0bb761977a3b25fd281d86458f422e3791\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r7gpg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b80c023f92c9854f8c16b6b50c10dac58f968938615375019ba2875d9cd69e5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r7gpg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://87e7499ab247dff50eee2a5e113ed36ff00c467e8e1bcdfee45e8835815d4b81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID
\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r7gpg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5c9782c1425d3337bda4ba4d775ed4c8cf828f4a58294f9a917b443b66e04ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d5c9782c1425d3337bda4ba4d775ed4c8cf828f4a58294f9a917b443b66e04ab\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T14:29:41Z\\\",\\\"message\\\":\\\"ions/factory.go:140\\\\nI0127 14:29:40.787593 5952 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0127 14:29:40.787788 5952 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0127 14:29:40.788035 5952 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0127 14:29:40.788309 5952 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0127 14:29:40.788538 5952 reflector.go:311] Stopping reflector *v1.ClusterUserDefinedNetwork (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/userdefinednetwork/v1/apis/informers/externalversions/factory.go:140\\\\nI0127 14:29:40.788960 5952 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0127 14:29:40.789010 5952 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0127 14:29:40.789019 5952 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0127 14:29:40.789094 5952 factory.go:656] Stopping watch factory\\\\nI0127 14:29:40.789105 5952 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0127 14:29:40.789160 5952 ovnkube.go:599] Stopped ovnkube\\\\nI0127 14:29:40.789182 5952 handler.go:208] Removed *v1.Node event handler 
7\\\\nI01\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T14:29:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r7gpg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0356c9c90e9c8e65b9452361aa486ffe95757bd69803077b32fe7ba0223ae1da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r7gpg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3a80917737e61fdb510dabf332c3bd87fb858d27849c42b77703a8becb66b4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47e
f0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b3a80917737e61fdb510dabf332c3bd87fb858d27849c42b77703a8becb66b4c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:29:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:29:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r7gpg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:29:29Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-xmpm6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:29:43Z is after 2025-08-24T17:21:41Z" Jan 27 14:29:43 crc kubenswrapper[4698]: I0127 14:29:43.361228 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-lpvsw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"621bb20d-2ffa-4e89-b522-d04b4764fcc3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:42Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:42Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gz87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gz87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:29:42Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-lpvsw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:29:43Z is after 2025-08-24T17:21:41Z" Jan 27 14:29:43 crc kubenswrapper[4698]: I0127 14:29:43.375908 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae7402df1ba165eb747293c1d23e4ea89c50cf2342bf52db437c11c283f68755\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:29:43Z is after 2025-08-24T17:21:41Z" Jan 27 14:29:43 crc kubenswrapper[4698]: I0127 14:29:43.391039 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:24Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:29:43Z is after 2025-08-24T17:21:41Z" Jan 27 14:29:43 crc kubenswrapper[4698]: I0127 14:29:43.406685 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8dae57de05788973b988fee332168db475b66bb71b4a0570e7d95341dfe84a96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:29:43Z is after 2025-08-24T17:21:41Z" Jan 27 14:29:43 crc kubenswrapper[4698]: I0127 14:29:43.408976 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 27 14:29:43 crc kubenswrapper[4698]: I0127 14:29:43.409020 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:29:43 crc kubenswrapper[4698]: I0127 14:29:43.409033 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:29:43 crc kubenswrapper[4698]: I0127 14:29:43.409052 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:29:43 crc kubenswrapper[4698]: I0127 14:29:43.409066 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:29:43Z","lastTransitionTime":"2026-01-27T14:29:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:29:43 crc kubenswrapper[4698]: I0127 14:29:43.419099 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-flx9b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2c9fcf55-4a50-4a87-937b-975bc7e00bfa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0cb86e0c67991a076cebc33eb390f49c0f4c40332c423936dd8f3c59b9f7474b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-spqpw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:29:31Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-flx9b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": 
failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:29:43Z is after 2025-08-24T17:21:41Z" Jan 27 14:29:43 crc kubenswrapper[4698]: I0127 14:29:43.436905 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:24Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:29:43Z is after 2025-08-24T17:21:41Z" Jan 27 14:29:43 crc kubenswrapper[4698]: I0127 14:29:43.450129 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://625a1f8c29f9277903045e66df7a55a4d791239b141aba99572db228ea2735ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://314dcaad0f8949a4671b47667f654bdfacfb6b200438cfbd70401455f6e188b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:29:43Z is after 2025-08-24T17:21:41Z" Jan 27 14:29:43 crc kubenswrapper[4698]: I0127 14:29:43.462895 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-g9vj8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2776dfc9-913b-42b0-9cf2-6fea98d83bc9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f471cea0dc934e7c77ac6ead7ec77f8feb7379b50a7bb5255d7100fbea635a3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bk7ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:29:28Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-g9vj8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:29:43Z is after 2025-08-24T17:21:41Z" Jan 27 14:29:43 crc kubenswrapper[4698]: I0127 14:29:43.475195 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-ndrd6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e403fc5-7005-474c-8c75-b7906b481677\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6f9227503b5cf0962f3fb8ece18f54d84ff2cdfe5628ebf0fa0cef7fe5695c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tt2dz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c533539135e25b310541c9f0b12adcf526b4651a2f6272bce12ea9686dd39ac9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tt2dz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:29:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-ndrd6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:29:43Z is after 2025-08-24T17:21:41Z" Jan 27 14:29:43 crc kubenswrapper[4698]: I0127 14:29:43.488488 4698 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-additional-cni-plugins-vg6nd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e045926d-2303-47ea-b25d-dc23982427e4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e1bf1729f089f253e8e1259e40ef976b85e433358642b6231f081c494851048\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66snv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c935f517f9ca140d63d1637ef8c01a4a41b94fa72e1086eb6184ae78a6d8da8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c935f517f9ca140d63d1637ef8c01a4a41b94fa72e1086eb6184ae78a6d8da8c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:29:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:29:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66snv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa061cb18143e07dc72a6b75d78b5959b40a9969030ff21d402863b85eba6f91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2c
c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa061cb18143e07dc72a6b75d78b5959b40a9969030ff21d402863b85eba6f91\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:29:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:29:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66snv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be40eff60b17b2c010a5e4394fb6843ceb8ee2752781ccbf0030afdd212b81e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be40eff60b17b2c010a5e4394fb6843ceb8ee2752781ccbf0030afdd212b81e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:29:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:29:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66snv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3018d87c9354e8b8d90c59a4a45e982d0b35b89a0b0a9fde703e9452cf5f0024\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3018d87c9354e8b8d90c59a4a45e982d0b35b89a0b0a9fde703e9452cf5f0024\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:29:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:29:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-re
lease\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66snv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de493498bbac49791f6b7409916fbbb62f64bed0def214ea6a06f5182bf7ad3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://de493498bbac49791f6b7409916fbbb62f64bed0def214ea6a06f5182bf7ad3b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:29:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:29:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66snv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://14af7d5dd3526aef22f7e95409ff88c3bbbc29e6dd9dd7c41b9be5bab49bf074\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://14af7d5dd3526aef22f7e95409ff88c3bbbc29e6dd9dd7c41b9be5bab49bf074\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:29:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:29:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66snv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:29:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-vg6nd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:29:43Z is after 2025-08-24T17:21:41Z" Jan 27 14:29:43 crc kubenswrapper[4698]: I0127 14:29:43.508596 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xmpm6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c59a9d01-79ce-42d9-a41d-39d7d73cb03e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:29Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:29Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d81778ae40167de932debf7266f3c793d77edeace2116220c21cf89e15e4cc98\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r7gpg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea6d17f5861064198539f0b84b559f9d28cfa5decaa89a9059469af5a3e12e09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r7gpg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c98fd6e3005c979f95419686ec2143aa282fa56d7306ecaff766759e34b68b2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r7gpg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e38585fe9db52e6e7cc5c3a41e23a0bb761977a3b25fd281d86458f422e3791\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r7gpg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b80c023f92c9854f8c16b6b50c10dac58f968938615375019ba2875d9cd69e5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r7gpg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://87e7499ab247dff50eee2a5e113ed36ff00c467e8e1bcdfee45e8835815d4b81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r7gpg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://acf945703822364baf4ee29480af06c0936afa4feddc6f7cba2a901e15a56fec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d5c9782c1425d3337bda4ba4d775ed4c8cf828f4a58294f9a917b443b66e04ab\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T14:29:41Z\\\",\\\"message\\\":\\\"ions/factory.go:140\\\\nI0127 14:29:40.787593 5952 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0127 14:29:40.787788 5952 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0127 14:29:40.788035 5952 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0127 14:29:40.788309 5952 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0127 14:29:40.788538 5952 reflector.go:311] Stopping reflector *v1.ClusterUserDefinedNetwork (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/userdefinednetwork/v1/apis/informers/externalversions/factory.go:140\\\\nI0127 14:29:40.788960 5952 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0127 14:29:40.789010 5952 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0127 14:29:40.789019 5952 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0127 14:29:40.789094 5952 factory.go:656] Stopping watch factory\\\\nI0127 14:29:40.789105 5952 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0127 14:29:40.789160 5952 ovnkube.go:599] Stopped ovnkube\\\\nI0127 14:29:40.789182 5952 handler.go:208] Removed *v1.Node event handler 
7\\\\nI01\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T14:29:36Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r7gpg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0356c9c90e9c8e65b9452361aa486ffe95757bd69803077b32fe7ba0223ae1da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r7gpg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{
\\\"containerID\\\":\\\"cri-o://b3a80917737e61fdb510dabf332c3bd87fb858d27849c42b77703a8becb66b4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b3a80917737e61fdb510dabf332c3bd87fb858d27849c42b77703a8becb66b4c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:29:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:29:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r7gpg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:29:29Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-xmpm6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:29:43Z is after 2025-08-24T17:21:41Z" Jan 27 14:29:43 crc kubenswrapper[4698]: I0127 14:29:43.511457 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:29:43 crc kubenswrapper[4698]: I0127 14:29:43.511522 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:29:43 crc kubenswrapper[4698]: I0127 14:29:43.511534 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:29:43 crc kubenswrapper[4698]: I0127 14:29:43.511549 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:29:43 crc kubenswrapper[4698]: I0127 14:29:43.511561 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:29:43Z","lastTransitionTime":"2026-01-27T14:29:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:29:43 crc kubenswrapper[4698]: I0127 14:29:43.521776 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-lpvsw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"621bb20d-2ffa-4e89-b522-d04b4764fcc3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:42Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:42Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gz87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gz87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:29:42Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-lpvsw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:29:43Z is after 2025-08-24T17:21:41Z" Jan 27 14:29:43 crc kubenswrapper[4698]: I0127 14:29:43.538329 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:29:43Z is after 2025-08-24T17:21:41Z" Jan 27 14:29:43 crc kubenswrapper[4698]: I0127 14:29:43.553914 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae7402df1ba165eb747293c1d23e4ea89c50cf2342bf52db437c11c283f68755\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:29:43Z is after 2025-08-24T17:21:41Z" Jan 27 14:29:43 crc kubenswrapper[4698]: I0127 14:29:43.568627 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:24Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:29:43Z is after 2025-08-24T17:21:41Z" Jan 27 14:29:43 crc kubenswrapper[4698]: I0127 14:29:43.581304 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8dae57de05788973b988fee332168db475b66bb71b4a0570e7d95341dfe84a96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:29:43Z is after 2025-08-24T17:21:41Z" Jan 27 14:29:43 crc kubenswrapper[4698]: I0127 14:29:43.592083 4698 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-image-registry/node-ca-flx9b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2c9fcf55-4a50-4a87-937b-975bc7e00bfa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0cb86e0c67991a076cebc33eb390f49c0f4c40332c423936dd8f3c59b9f7474b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-spqpw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:29:31Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-flx9b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:29:43Z is after 2025-08-24T17:21:41Z" Jan 27 14:29:43 crc kubenswrapper[4698]: I0127 14:29:43.608437 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:24Z\\\",\\\"message\\\":\\\"containers with 
unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:29:43Z is after 2025-08-24T17:21:41Z" Jan 27 14:29:43 crc kubenswrapper[4698]: I0127 14:29:43.614072 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:29:43 crc kubenswrapper[4698]: I0127 14:29:43.614108 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:29:43 crc kubenswrapper[4698]: I0127 14:29:43.614156 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:29:43 crc kubenswrapper[4698]: I0127 14:29:43.614175 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:29:43 crc kubenswrapper[4698]: I0127 14:29:43.614186 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:29:43Z","lastTransitionTime":"2026-01-27T14:29:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:29:43 crc kubenswrapper[4698]: I0127 14:29:43.624601 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://625a1f8c29f9277903045e66df7a55a4d791239b141aba99572db228ea2735ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://314dcaad0f8949a4671b47667f654bdfacfb6b200438cfbd70401455f6e188b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:29:43Z is after 2025-08-24T17:21:41Z" Jan 27 14:29:43 crc kubenswrapper[4698]: I0127 14:29:43.638772 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"44c70587-abb1-4e02-ab7e-223d57817925\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b1a209c31294456c43022bb9df5fd230415596fc43ecdb2b6349114989c3ad8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://66b4b527d7836ceed419f9b8a1d851047efc8c052c110ac857316a6a73444333\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://94a4d586611f4ca0864567739d1ed225b7c63ebd6e6be9e5c0e385379de2ad01\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://95357819c206b188c1fd6bb937b2a57b480a9a61e95143de444141b16e4963ea\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:29:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:29:43Z is after 2025-08-24T17:21:41Z" Jan 27 14:29:43 crc kubenswrapper[4698]: I0127 14:29:43.651215 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-g2kkn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4e135f0c-0c36-44f4-afeb-06994affb352\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d17862aabd67b485023ca018499f7224e0bd91ec0aed6bbce5a187f319e3140\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run
/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97l66\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:29:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-g2kkn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:29:43Z is after 2025-08-24T17:21:41Z" Jan 27 14:29:43 crc kubenswrapper[4698]: I0127 14:29:43.663220 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zpvcj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"709dfdd7-f928-4f0b-8f5a-c356614219cb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6bec8d64d7f0b7a45b57b810dcf1de846a388902b480230742da0a40da4e67b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pftbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5fb54d7728b2438444c924295722867c673c8b82768ed17b6dd2104d404e198e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pftbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:29:41Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-zpvcj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:29:43Z is after 2025-08-24T17:21:41Z" Jan 27 
14:29:43 crc kubenswrapper[4698]: I0127 14:29:43.679528 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0ce1118e-e5ad-4adb-8d50-c758116b45ec\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dbbe49aa918c711f9ec638a3fb42ef3be38a9ed3bc0cdde639ba8bba11eab4de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9525d1a6372b19bc2f30eb4e8e522833badb6319bac25ecae77b3313acf1b1a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e384a95fdadc32733a7213ba468af64a1a33e7dd08722cbffde3ad0c461d8ba\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\
":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9ee5f5f6bb400359196a28fe605c173c38223c8982a991750c91601aeace150\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e780429dcca871712b55832b0cd0b3e78b7343569cfa71a87bbb2be1bd3f129\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T14:29:24Z\\\",\\\"message\\\":\\\"espace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0127 14:29:09.491666 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 14:29:09.493926 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-900123067/tls.crt::/tmp/serving-cert-900123067/tls.key\\\\\\\"\\\\nI0127 14:29:24.319898 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 14:29:24.322681 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 14:29:24.322759 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 14:29:24.322814 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 14:29:24.322847 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 14:29:24.331581 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 14:29:24.331608 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 14:29:24.331613 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 14:29:24.331618 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 14:29:24.331622 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 14:29:24.331626 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 14:29:24.331630 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 14:29:24.331748 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 14:29:24.333757 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T14:29:08Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a526cfa90f2ba7d074aa6007c909e68f72d5e7a4b91d893c955e7460ff208cfb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:08Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e1a3e5a71b3094750c736335cd70f45977566e58702231351d9d6b195a0cb4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5e1a3e5a71b3094750c736335cd70f45977566e58702231351d9d6b195a0cb4b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:29:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:29:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:29:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:29:43Z is after 2025-08-24T17:21:41Z" Jan 27 14:29:43 crc kubenswrapper[4698]: I0127 14:29:43.716258 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:29:43 crc kubenswrapper[4698]: I0127 14:29:43.716327 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:29:43 crc kubenswrapper[4698]: I0127 14:29:43.716340 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:29:43 crc kubenswrapper[4698]: I0127 14:29:43.716356 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:29:43 crc kubenswrapper[4698]: I0127 14:29:43.716367 4698 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:29:43Z","lastTransitionTime":"2026-01-27T14:29:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:29:43 crc kubenswrapper[4698]: I0127 14:29:43.818759 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:29:43 crc kubenswrapper[4698]: I0127 14:29:43.818821 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:29:43 crc kubenswrapper[4698]: I0127 14:29:43.818833 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:29:43 crc kubenswrapper[4698]: I0127 14:29:43.818858 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:29:43 crc kubenswrapper[4698]: I0127 14:29:43.818876 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:29:43Z","lastTransitionTime":"2026-01-27T14:29:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:29:43 crc kubenswrapper[4698]: I0127 14:29:43.921145 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:29:43 crc kubenswrapper[4698]: I0127 14:29:43.921203 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:29:43 crc kubenswrapper[4698]: I0127 14:29:43.921214 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:29:43 crc kubenswrapper[4698]: I0127 14:29:43.921233 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:29:43 crc kubenswrapper[4698]: I0127 14:29:43.921249 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:29:43Z","lastTransitionTime":"2026-01-27T14:29:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:29:43 crc kubenswrapper[4698]: I0127 14:29:43.957539 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 14:29:43 crc kubenswrapper[4698]: I0127 14:29:43.965397 4698 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-27 00:41:39.38789548 +0000 UTC Jan 27 14:29:43 crc kubenswrapper[4698]: I0127 14:29:43.972712 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-ndrd6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e403fc5-7005-474c-8c75-b7906b481677\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6f9227503b5cf0962f3fb8ece18f54d84ff2cdfe5628ebf0fa0cef7fe5695c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tt2dz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c533539135e25b310541c9f0b12adcf526b4651a2f6272bce12ea9686dd39ac9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tt2dz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192
.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:29:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-ndrd6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:29:43Z is after 2025-08-24T17:21:41Z" Jan 27 14:29:43 crc kubenswrapper[4698]: I0127 14:29:43.987699 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-vg6nd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e045926d-2303-47ea-b25d-dc23982427e4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e1bf1729f089f253e8e1259e40ef976b85e433358642b6231f081c494851048\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66snv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c935f517f9ca140d63d1637ef8c01a4a41b94fa72e1086eb6184ae78a6d8da8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c935f517f9ca140d63d1637ef8c01a4a41b94fa72e1086eb6184ae78a6d8da8c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:29:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:29:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoi
nt\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66snv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa061cb18143e07dc72a6b75d78b5959b40a9969030ff21d402863b85eba6f91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa061cb18143e07dc72a6b75d78b5959b40a9969030ff21d402863b85eba6f91\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:29:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:29:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66snv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be40eff60b17b2c010a5e4394fb6843ceb8ee2752781ccbf0030afdd212b81e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be40eff60b17b2c010a5e4394fb6843ceb8ee2752781ccbf0030afdd212b81e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:29:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:29:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66snv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3018d87c9354e8b8d90c59a4a45e982d0b35b89a0b0a9fde703e9452cf5f0024\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62
e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3018d87c9354e8b8d90c59a4a45e982d0b35b89a0b0a9fde703e9452cf5f0024\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:29:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:29:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66snv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de493498bbac49791f6b7409916fbbb62f64bed0def214ea6a06f5182bf7ad3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://de493498bbac49791f6b7409916fbbb62f64bed0def214ea6a06f5182bf7ad3b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:29:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:29:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66snv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://14af7d5dd3526aef22f7e95409ff88c3bbbc29e6dd9dd7c41b9be5bab49bf074\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://14af7d5dd3526aef22f7e95409ff88c3bbbc29e6dd9dd7c41b9be5bab49bf074\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:29:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:29:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66snv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-
01-27T14:29:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-vg6nd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:29:43Z is after 2025-08-24T17:21:41Z" Jan 27 14:29:43 crc kubenswrapper[4698]: I0127 14:29:43.991415 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lpvsw" Jan 27 14:29:43 crc kubenswrapper[4698]: I0127 14:29:43.991430 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 14:29:43 crc kubenswrapper[4698]: E0127 14:29:43.991599 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 14:29:43 crc kubenswrapper[4698]: E0127 14:29:43.991525 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lpvsw" podUID="621bb20d-2ffa-4e89-b522-d04b4764fcc3" Jan 27 14:29:44 crc kubenswrapper[4698]: I0127 14:29:44.007744 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xmpm6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c59a9d01-79ce-42d9-a41d-39d7d73cb03e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:29Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d81778ae40167de932debf7266f3c793d77edeace2116220c21cf89e15e4cc98\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r7gpg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea6d17f5861064198539f0b84b559f9d28cfa5decaa89a9059469af5a3e12e09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r7gpg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c98fd6e3005c979f95419686ec2143aa282fa56d7306ecaff766759e34b68b2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r7gpg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e38585fe9db52e6e7cc5c3a41e23a0bb761977a3b25fd281d86458f422e3791\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r7gpg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b80c023f92c9854f8c16b6b50c10dac58f968938615375019ba2875d9cd69e5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r7gpg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://87e7499ab247dff50eee2a5e113ed36ff00c467e8e1bcdfee45e8835815d4b81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r7gpg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://acf945703822364baf4ee29480af06c0936afa4f
eddc6f7cba2a901e15a56fec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d5c9782c1425d3337bda4ba4d775ed4c8cf828f4a58294f9a917b443b66e04ab\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T14:29:41Z\\\",\\\"message\\\":\\\"ions/factory.go:140\\\\nI0127 14:29:40.787593 5952 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0127 14:29:40.787788 5952 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0127 14:29:40.788035 5952 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0127 14:29:40.788309 5952 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0127 14:29:40.788538 5952 reflector.go:311] Stopping reflector *v1.ClusterUserDefinedNetwork (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/userdefinednetwork/v1/apis/informers/externalversions/factory.go:140\\\\nI0127 14:29:40.788960 5952 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0127 14:29:40.789010 5952 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0127 14:29:40.789019 5952 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0127 14:29:40.789094 5952 factory.go:656] Stopping watch factory\\\\nI0127 14:29:40.789105 5952 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0127 14:29:40.789160 5952 ovnkube.go:599] Stopped ovnkube\\\\nI0127 14:29:40.789182 5952 handler.go:208] Removed *v1.Node event handler 
7\\\\nI01\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T14:29:36Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r7gpg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0356c9c90e9c8e65b9452361aa486ffe95757bd69803077b32fe7ba0223ae1da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r7gpg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{
\\\"containerID\\\":\\\"cri-o://b3a80917737e61fdb510dabf332c3bd87fb858d27849c42b77703a8becb66b4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b3a80917737e61fdb510dabf332c3bd87fb858d27849c42b77703a8becb66b4c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:29:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:29:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r7gpg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:29:29Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-xmpm6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:29:44Z is after 2025-08-24T17:21:41Z" Jan 27 14:29:44 crc kubenswrapper[4698]: I0127 14:29:44.018338 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-lpvsw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"621bb20d-2ffa-4e89-b522-d04b4764fcc3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:42Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:42Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gz87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gz87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:29:42Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-lpvsw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:29:44Z is after 2025-08-24T17:21:41Z" Jan 27 14:29:44 crc kubenswrapper[4698]: I0127 14:29:44.023481 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:29:44 crc kubenswrapper[4698]: I0127 14:29:44.023518 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:29:44 crc kubenswrapper[4698]: I0127 14:29:44.023528 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:29:44 crc kubenswrapper[4698]: I0127 14:29:44.023543 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:29:44 crc kubenswrapper[4698]: I0127 14:29:44.023554 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:29:44Z","lastTransitionTime":"2026-01-27T14:29:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:29:44 crc kubenswrapper[4698]: I0127 14:29:44.031979 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:29:44Z is after 2025-08-24T17:21:41Z" Jan 27 14:29:44 crc kubenswrapper[4698]: I0127 14:29:44.042510 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-g9vj8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2776dfc9-913b-42b0-9cf2-6fea98d83bc9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f471cea0dc934e7c77ac6ead7ec77f8feb7379b50a7bb5255d7100fbea635a3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bk7ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:29:28Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-g9vj8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:29:44Z is after 2025-08-24T17:21:41Z" Jan 27 14:29:44 crc kubenswrapper[4698]: I0127 14:29:44.055383 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae7402df1ba165eb747293c1d23e4ea89c50cf2342bf52db437c11c283f68755\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:29:44Z is after 2025-08-24T17:21:41Z" Jan 27 14:29:44 crc kubenswrapper[4698]: I0127 14:29:44.068140 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:24Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:29:44Z is after 2025-08-24T17:21:41Z" Jan 27 14:29:44 crc kubenswrapper[4698]: I0127 14:29:44.080584 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8dae57de05788973b988fee332168db475b66bb71b4a0570e7d95341dfe84a96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:29:44Z is after 2025-08-24T17:21:41Z" Jan 27 14:29:44 crc kubenswrapper[4698]: I0127 14:29:44.090933 4698 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-image-registry/node-ca-flx9b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2c9fcf55-4a50-4a87-937b-975bc7e00bfa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0cb86e0c67991a076cebc33eb390f49c0f4c40332c423936dd8f3c59b9f7474b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-spqpw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:29:31Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-flx9b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:29:44Z is after 2025-08-24T17:21:41Z" Jan 27 14:29:44 crc kubenswrapper[4698]: I0127 14:29:44.103270 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://625a1f8c29f9277903045e66df7a55a4d791239b141aba99572db228ea2735ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://314dcaad0f8949a4671b47667f654bdfacfb6b200438cfbd70401455f6e188b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:29:44Z is after 2025-08-24T17:21:41Z" Jan 27 14:29:44 crc kubenswrapper[4698]: I0127 14:29:44.116033 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:24Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:29:44Z is after 2025-08-24T17:21:41Z" Jan 27 14:29:44 crc kubenswrapper[4698]: I0127 14:29:44.126961 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:29:44 crc kubenswrapper[4698]: I0127 14:29:44.127251 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:29:44 crc kubenswrapper[4698]: I0127 14:29:44.127266 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:29:44 crc kubenswrapper[4698]: I0127 14:29:44.127285 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:29:44 crc kubenswrapper[4698]: I0127 14:29:44.127297 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:29:44Z","lastTransitionTime":"2026-01-27T14:29:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:29:44 crc kubenswrapper[4698]: I0127 14:29:44.131298 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-g2kkn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4e135f0c-0c36-44f4-afeb-06994affb352\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d17862aabd67b485023ca018499f7224e0bd91ec0aed6bbce5a187f319e3140\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97l66\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126
.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:29:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-g2kkn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:29:44Z is after 2025-08-24T17:21:41Z" Jan 27 14:29:44 crc kubenswrapper[4698]: I0127 14:29:44.142694 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zpvcj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"709dfdd7-f928-4f0b-8f5a-c356614219cb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6bec8d64d7f0b7a45b57b810dcf1de846a388902b480230742da0a40da4e67b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pftbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5fb54d7728b2438444c924295722867c673c8b82768ed17b6dd2104d404e198e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pftbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadO
nly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:29:41Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-zpvcj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:29:44Z is after 2025-08-24T17:21:41Z" Jan 27 14:29:44 crc kubenswrapper[4698]: I0127 14:29:44.156108 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0ce1118e-e5ad-4adb-8d50-c758116b45ec\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dbbe49aa918c711f9ec638a3fb42ef3be38a9ed3bc0cdde639ba8bba11eab4de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9525d1a6372b19bc2f30eb4e8e522833badb6319bac25ecae77b3313acf1b1a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e384a95fdadc32733a7213ba468af64a1a33e7dd08722cbffde3ad0c461d8ba\\\",\\\"image\\\":\\\"quay.i
o/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9ee5f5f6bb400359196a28fe605c173c38223c8982a991750c91601aeace150\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e780429dcca871712b55832b0cd0b3e78b7343569cfa71a87bbb2be1bd3f129\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T14:29:24Z\\\",\\\"message\\\":\\\"espace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0127 14:29:09.491666 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 14:29:09.493926 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-900123067/tls.crt::/tmp/serving-cert-900123067/tls.key\\\\\\\"\\\\nI0127 14:29:24.319898 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 14:29:24.322681 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 14:29:24.322759 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 14:29:24.322814 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 14:29:24.322847 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 14:29:24.331581 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 14:29:24.331608 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 14:29:24.331613 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 14:29:24.331618 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 14:29:24.331622 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 14:29:24.331626 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 14:29:24.331630 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 14:29:24.331748 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 14:29:24.333757 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T14:29:08Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a526cfa90f2ba7d074aa6007c909e68f72d5e7a4b91d893c955e7460ff208cfb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:08Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e1a3e5a71b3094750c736335cd70f45977566e58702231351d9d6b195a0cb4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5e1a3e5a71b3094750c736335cd70f45977566e58702231351d9d6b195a0cb4b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:29:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:29:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:29:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:29:44Z is after 2025-08-24T17:21:41Z" Jan 27 14:29:44 crc kubenswrapper[4698]: I0127 14:29:44.166514 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"44c70587-abb1-4e02-ab7e-223d57817925\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b1a209c31294456c43022bb9df5fd230415596fc43ecdb2b6349114989c3ad8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://66b4b527d7836ceed419f9b8a1d851047efc8c052c110ac857316a6a73444333\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://94a4d586611f4ca0864567739d1ed225b7c63ebd6e6be9e5c0e385379de2ad01\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://95357819c206b188c1fd6bb937b2a57b480a9a61e95143de444141b16e4963ea\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:29:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:29:44Z is after 2025-08-24T17:21:41Z" Jan 27 14:29:44 crc kubenswrapper[4698]: I0127 14:29:44.215876 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-xmpm6_c59a9d01-79ce-42d9-a41d-39d7d73cb03e/ovnkube-controller/1.log" Jan 27 14:29:44 crc kubenswrapper[4698]: I0127 14:29:44.216576 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-xmpm6_c59a9d01-79ce-42d9-a41d-39d7d73cb03e/ovnkube-controller/0.log" Jan 27 14:29:44 crc kubenswrapper[4698]: I0127 14:29:44.219444 4698 generic.go:334] "Generic (PLEG): container finished" podID="c59a9d01-79ce-42d9-a41d-39d7d73cb03e" containerID="acf945703822364baf4ee29480af06c0936afa4feddc6f7cba2a901e15a56fec" exitCode=1 Jan 27 14:29:44 crc kubenswrapper[4698]: I0127 14:29:44.219546 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xmpm6" event={"ID":"c59a9d01-79ce-42d9-a41d-39d7d73cb03e","Type":"ContainerDied","Data":"acf945703822364baf4ee29480af06c0936afa4feddc6f7cba2a901e15a56fec"} Jan 27 14:29:44 crc kubenswrapper[4698]: I0127 14:29:44.219618 4698 scope.go:117] "RemoveContainer" containerID="d5c9782c1425d3337bda4ba4d775ed4c8cf828f4a58294f9a917b443b66e04ab" Jan 27 14:29:44 crc kubenswrapper[4698]: I0127 14:29:44.220182 4698 scope.go:117] "RemoveContainer" containerID="acf945703822364baf4ee29480af06c0936afa4feddc6f7cba2a901e15a56fec" Jan 27 14:29:44 crc kubenswrapper[4698]: E0127 14:29:44.220345 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-xmpm6_openshift-ovn-kubernetes(c59a9d01-79ce-42d9-a41d-39d7d73cb03e)\"" pod="openshift-ovn-kubernetes/ovnkube-node-xmpm6" podUID="c59a9d01-79ce-42d9-a41d-39d7d73cb03e" Jan 27 14:29:44 crc kubenswrapper[4698]: I0127 14:29:44.227283 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/621bb20d-2ffa-4e89-b522-d04b4764fcc3-metrics-certs\") pod \"network-metrics-daemon-lpvsw\" (UID: \"621bb20d-2ffa-4e89-b522-d04b4764fcc3\") " 
pod="openshift-multus/network-metrics-daemon-lpvsw" Jan 27 14:29:44 crc kubenswrapper[4698]: E0127 14:29:44.227487 4698 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 27 14:29:44 crc kubenswrapper[4698]: E0127 14:29:44.227568 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/621bb20d-2ffa-4e89-b522-d04b4764fcc3-metrics-certs podName:621bb20d-2ffa-4e89-b522-d04b4764fcc3 nodeName:}" failed. No retries permitted until 2026-01-27 14:29:46.22754856 +0000 UTC m=+41.904326195 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/621bb20d-2ffa-4e89-b522-d04b4764fcc3-metrics-certs") pod "network-metrics-daemon-lpvsw" (UID: "621bb20d-2ffa-4e89-b522-d04b4764fcc3") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 27 14:29:44 crc kubenswrapper[4698]: I0127 14:29:44.229131 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:29:44 crc kubenswrapper[4698]: I0127 14:29:44.229171 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:29:44 crc kubenswrapper[4698]: I0127 14:29:44.229185 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:29:44 crc kubenswrapper[4698]: I0127 14:29:44.229204 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:29:44 crc kubenswrapper[4698]: I0127 14:29:44.229219 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:29:44Z","lastTransitionTime":"2026-01-27T14:29:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:29:44 crc kubenswrapper[4698]: I0127 14:29:44.234763 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0ce1118e-e5ad-4adb-8d50-c758116b45ec\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dbbe49aa918c711f9ec638a3fb42ef3be38a9ed3bc0cdde639ba8bba11eab4de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9525d1a6372b19bc2f30eb4e8e522833badb6319bac25ecae77b3313acf1b1a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e384a95fdadc32733a7213ba468af64a1a33e7dd08722cbffde3ad0c461d8ba\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/ku
bernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9ee5f5f6bb400359196a28fe605c173c38223c8982a991750c91601aeace150\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e780429dcca871712b55832b0cd0b3e78b7343569cfa71a87bbb2be1bd3f129\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T14:29:24Z\\\",\\\"message\\\":\\\"espace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0127 14:29:09.491666 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 14:29:09.493926 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-900123067/tls.crt::/tmp/serving-cert-900123067/tls.key\\\\\\\"\\\\nI0127 14:29:24.319898 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 14:29:24.322681 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 14:29:24.322759 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 14:29:24.322814 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 14:29:24.322847 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 14:29:24.331581 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 14:29:24.331608 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 14:29:24.331613 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 14:29:24.331618 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 14:29:24.331622 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 14:29:24.331626 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 14:29:24.331630 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 14:29:24.331748 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 14:29:24.333757 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T14:29:08Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a526cfa90f2ba7d074aa6007c909e68f72d5e7a4b91d893c955e7460ff208cfb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:08Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e1a3e5a71b3094750c736335cd70f45977566e58702231351d9d6b195a0cb4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5e1a3e5a71b3094750c736335cd70f45977566e58702231351d9d6b195a0cb4b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:29:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:29:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:29:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:29:44Z is after 2025-08-24T17:21:41Z" Jan 27 14:29:44 crc kubenswrapper[4698]: I0127 14:29:44.247947 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"44c70587-abb1-4e02-ab7e-223d57817925\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b1a209c31294456c43022bb9df5fd230415596fc43ecdb2b6349114989c3ad8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://66b4b527d7836ceed419f9b8a1d851047efc8c052c110ac857316a6a73444333\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://94a4d586611f4ca0864567739d1ed225b7c63ebd6e6be9e5c0e385379de2ad01\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://95357819c206b188c1fd6bb937b2a57b480a9a61e95143de444141b16e4963ea\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:29:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:29:44Z is after 2025-08-24T17:21:41Z" Jan 27 14:29:44 crc kubenswrapper[4698]: I0127 14:29:44.262009 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-g2kkn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4e135f0c-0c36-44f4-afeb-06994affb352\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d17862aabd67b485023ca018499f7224e0bd91ec0aed6bbce5a187f319e3140\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run
/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97l66\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:29:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-g2kkn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:29:44Z is after 2025-08-24T17:21:41Z" Jan 27 14:29:44 crc kubenswrapper[4698]: I0127 14:29:44.274178 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zpvcj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"709dfdd7-f928-4f0b-8f5a-c356614219cb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6bec8d64d7f0b7a45b57b810dcf1de846a388902b480230742da0a40da4e67b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pftbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5fb54d7728b2438444c924295722867c673c8b82768ed17b6dd2104d404e198e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pftbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:29:41Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-zpvcj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:29:44Z is after 2025-08-24T17:21:41Z" Jan 27 
14:29:44 crc kubenswrapper[4698]: I0127 14:29:44.293318 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xmpm6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c59a9d01-79ce-42d9-a41d-39d7d73cb03e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:29Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:29Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d81778ae40167de932debf7266f3c793d77edeace2116220c21cf89e15e4cc98\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r7gpg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea6d17f5861064198539f0b84b559f9d28cfa5decaa89a9059469af5a3e12e09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r7gpg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c98fd6e3005c979f95419686ec2143
aa282fa56d7306ecaff766759e34b68b2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r7gpg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e38585fe9db52e6e7cc5c3a41e23a0bb761977a3b25fd281d86458f422e3791\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r7gpg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b80c023f92c9854f8c16b6b50c10dac58f968938615375019ba2875d9cd69e5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r7gpg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://87e7499ab247dff50eee2a5e113ed36ff00c467e8e1bcdfee45e8835815d4b81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r7gpg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://acf945703822364baf4ee29480af06c0936afa4feddc6f7cba2a901e15a56fec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d5c9782c1425d3337bda4ba4d775ed4c8cf828f4a58294f9a917b443b66e04ab\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T14:29:41Z\\\",\\\"message\\\":\\\"ions/factory.go:140\\\\nI0127 14:29:40.787593 5952 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0127 14:29:40.787788 5952 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0127 14:29:40.788035 5952 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0127 14:29:40.788309 5952 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0127 14:29:40.788538 5952 reflector.go:311] Stopping reflector *v1.ClusterUserDefinedNetwork (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/userdefinednetwork/v1/apis/informers/externalversions/factory.go:140\\\\nI0127 14:29:40.788960 5952 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0127 14:29:40.789010 5952 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0127 14:29:40.789019 5952 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0127 14:29:40.789094 5952 factory.go:656] Stopping watch factory\\\\nI0127 14:29:40.789105 5952 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0127 14:29:40.789160 5952 ovnkube.go:599] Stopped ovnkube\\\\nI0127 14:29:40.789182 5952 handler.go:208] Removed *v1.Node event handler 
7\\\\nI01\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T14:29:36Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://acf945703822364baf4ee29480af06c0936afa4feddc6f7cba2a901e15a56fec\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T14:29:43Z\\\",\\\"message\\\":\\\"ol-plane-machine-set-operator]} name:Service_openshift-machine-api/control-plane-machine-set-operator_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.41:9443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {589f95f7-f3e2-4140-80ed-9a0717201481}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0127 14:29:43.511136 6165 obj_retry.go:303] Retry object setup: *v1.Pod openshift-multus/multus-additional-cni-plugins-vg6nd\\\\nI0127 14:29:43.511153 6165 obj_retry.go:365] Adding new object: *v1.Pod openshift-multus/multus-additional-cni-plugins-vg6nd\\\\nI0127 14:29:43.511162 6165 ovn.go:134] Ensuring zone local for Pod openshift-multus/multus-additional-cni-plugins-vg6nd in node crc\\\\nI0127 14:29:43.511169 6165 obj_retry.go:386] Retry successful for *v1.Pod openshift-multus/multus-additional-cni-plugins-vg6nd after 0 failed attempt(s)\\\\nI0127 14:29:43.511175 6165 default_network_controller.go:776] Recording success event on pod openshift-multus/multus-additional-cni-plugins-vg6nd\\\\nF0127 14:29:43.511046 6165 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to 
create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T14:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r7gpg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0356c9c90e9c8e65b9452361aa486ffe95757bd69803077b32fe7ba0223ae1da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r7gpg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3a80917737e61fdb510dabf332c3bd87fb858d27849c42b77703a8becb66b4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d
1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b3a80917737e61fdb510dabf332c3bd87fb858d27849c42b77703a8becb66b4c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:29:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:29:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r7gpg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:29:29Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-xmpm6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:29:44Z is after 2025-08-24T17:21:41Z" Jan 27 14:29:44 crc kubenswrapper[4698]: I0127 14:29:44.306374 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-lpvsw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"621bb20d-2ffa-4e89-b522-d04b4764fcc3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:42Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:42Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gz87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gz87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:29:42Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-lpvsw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:29:44Z is after 2025-08-24T17:21:41Z" Jan 27 14:29:44 crc kubenswrapper[4698]: I0127 14:29:44.321893 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:29:44Z is after 2025-08-24T17:21:41Z" Jan 27 14:29:44 crc kubenswrapper[4698]: I0127 14:29:44.333408 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:29:44 crc kubenswrapper[4698]: I0127 14:29:44.333667 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:29:44 crc kubenswrapper[4698]: I0127 14:29:44.333764 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:29:44 crc kubenswrapper[4698]: I0127 14:29:44.333851 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:29:44 crc kubenswrapper[4698]: I0127 14:29:44.333991 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:29:44Z","lastTransitionTime":"2026-01-27T14:29:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:29:44 crc kubenswrapper[4698]: I0127 14:29:44.337206 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-g9vj8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2776dfc9-913b-42b0-9cf2-6fea98d83bc9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f471cea0dc934e7c77ac6ead7ec77f8feb7379b50a7bb5255d7100fbea635a3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bk7ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:29:28Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-g9vj8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:29:44Z is after 2025-08-24T17:21:41Z" Jan 27 14:29:44 crc kubenswrapper[4698]: I0127 14:29:44.349853 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-ndrd6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e403fc5-7005-474c-8c75-b7906b481677\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6f9227503b5cf0962f3fb8ece18f54d84ff2cdfe5628ebf0fa0cef7fe5695c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tt2dz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c533539135e25b310541c9f0b12adcf526b4651a2f6272bce12ea9686dd39ac9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tt2dz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:29:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-ndrd6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:29:44Z is after 2025-08-24T17:21:41Z" Jan 27 14:29:44 crc kubenswrapper[4698]: I0127 14:29:44.363952 4698 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-additional-cni-plugins-vg6nd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e045926d-2303-47ea-b25d-dc23982427e4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e1bf1729f089f253e8e1259e40ef976b85e433358642b6231f081c494851048\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66snv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c935f517f9ca140d63d1637ef8c01a4a41b94fa72e1086eb6184ae78a6d8da8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c935f517f9ca140d63d1637ef8c01a4a41b94fa72e1086eb6184ae78a6d8da8c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:29:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:29:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66snv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa061cb18143e07dc72a6b75d78b5959b40a9969030ff21d402863b85eba6f91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa061cb18143e07dc72a6b75d78b5959b40a9969030ff21d402863b85eba6f91\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:29:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:29:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66snv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be40eff60b17b2c010a5e4394fb6843ceb8ee2752781ccbf0030afdd212b81e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be40eff60b17b2c010a5e4394fb6843ceb8ee2752781ccbf0030afdd212b81e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:29:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:29:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66snv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3018d87c9354e8b8d90c59a4a45e982d0b35b89a0b0a9fde703e9452cf5f0024\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3018d87c9354e8b8d90c59a4a45e982d0b35b89a0b0a9fde703e9452cf5f0024\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:29:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:29:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66snv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de493498bbac49791f6b7409916fbbb62f64bed0def214ea6a06f5182bf7ad3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://de493498bbac49791f6b7409916fbbb62f64bed0def214ea6a06f5182bf7ad3b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:29:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:29:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66snv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://14af7d5dd3526aef22f7e95409ff88c3bbbc29e6dd9dd7c41b9be5bab49bf074\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://14af7d5dd3526aef22f7e95409ff88c3bbbc29e6dd9dd7c41b9be5bab49bf074\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:29:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:29:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66snv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:29:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-vg6nd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:29:44Z is after 2025-08-24T17:21:41Z" Jan 27 14:29:44 crc kubenswrapper[4698]: I0127 14:29:44.376262 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8dae57de05788973b988fee332168db475b66bb71b4a0570e7d95341dfe84a96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:29:44Z is after 2025-08-24T17:21:41Z" Jan 27 14:29:44 crc kubenswrapper[4698]: I0127 14:29:44.386470 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-flx9b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2c9fcf55-4a50-4a87-937b-975bc7e00bfa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0cb86e0c67991a076cebc33eb390f49c0f4c40332c423936dd8f3c59b9f7474b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-spqpw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:29:31Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-flx9b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:29:44Z is after 2025-08-24T17:21:41Z" Jan 27 14:29:44 crc kubenswrapper[4698]: I0127 14:29:44.399891 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae7402df1ba165eb747293c1d23e4ea89c50cf2342bf52db437c11c283f68755\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:29:44Z is after 2025-08-24T17:21:41Z" Jan 27 14:29:44 crc kubenswrapper[4698]: I0127 14:29:44.413328 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:24Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:29:44Z is after 2025-08-24T17:21:41Z" Jan 27 14:29:44 crc kubenswrapper[4698]: I0127 14:29:44.426785 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:24Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:29:44Z is after 2025-08-24T17:21:41Z" Jan 27 14:29:44 crc kubenswrapper[4698]: I0127 14:29:44.436396 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:29:44 crc kubenswrapper[4698]: I0127 14:29:44.436445 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:29:44 crc kubenswrapper[4698]: I0127 14:29:44.436458 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:29:44 crc kubenswrapper[4698]: I0127 14:29:44.436473 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:29:44 crc kubenswrapper[4698]: I0127 14:29:44.436486 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:29:44Z","lastTransitionTime":"2026-01-27T14:29:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:29:44 crc kubenswrapper[4698]: I0127 14:29:44.441034 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://625a1f8c29f9277903045e66df7a55a4d791239b141aba99572db228ea2735ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://314dcaad0f8949a4671b47667f654bdfacfb6b200438cfbd70401455f6e188b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:29:44Z is after 2025-08-24T17:21:41Z" Jan 27 14:29:44 crc kubenswrapper[4698]: I0127 14:29:44.538911 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:29:44 crc kubenswrapper[4698]: I0127 
14:29:44.538977 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:29:44 crc kubenswrapper[4698]: I0127 14:29:44.538988 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:29:44 crc kubenswrapper[4698]: I0127 14:29:44.539005 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:29:44 crc kubenswrapper[4698]: I0127 14:29:44.539018 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:29:44Z","lastTransitionTime":"2026-01-27T14:29:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:29:44 crc kubenswrapper[4698]: I0127 14:29:44.641879 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:29:44 crc kubenswrapper[4698]: I0127 14:29:44.642114 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:29:44 crc kubenswrapper[4698]: I0127 14:29:44.642229 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:29:44 crc kubenswrapper[4698]: I0127 14:29:44.642320 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:29:44 crc kubenswrapper[4698]: I0127 14:29:44.642400 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:29:44Z","lastTransitionTime":"2026-01-27T14:29:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:29:44 crc kubenswrapper[4698]: I0127 14:29:44.745905 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:29:44 crc kubenswrapper[4698]: I0127 14:29:44.745964 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:29:44 crc kubenswrapper[4698]: I0127 14:29:44.745973 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:29:44 crc kubenswrapper[4698]: I0127 14:29:44.745990 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:29:44 crc kubenswrapper[4698]: I0127 14:29:44.745999 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:29:44Z","lastTransitionTime":"2026-01-27T14:29:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:29:44 crc kubenswrapper[4698]: I0127 14:29:44.849228 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:29:44 crc kubenswrapper[4698]: I0127 14:29:44.849365 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:29:44 crc kubenswrapper[4698]: I0127 14:29:44.849385 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:29:44 crc kubenswrapper[4698]: I0127 14:29:44.849404 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:29:44 crc kubenswrapper[4698]: I0127 14:29:44.849418 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:29:44Z","lastTransitionTime":"2026-01-27T14:29:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:29:44 crc kubenswrapper[4698]: I0127 14:29:44.951902 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:29:44 crc kubenswrapper[4698]: I0127 14:29:44.952150 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:29:44 crc kubenswrapper[4698]: I0127 14:29:44.952224 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:29:44 crc kubenswrapper[4698]: I0127 14:29:44.952300 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:29:44 crc kubenswrapper[4698]: I0127 14:29:44.952387 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:29:44Z","lastTransitionTime":"2026-01-27T14:29:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:29:44 crc kubenswrapper[4698]: I0127 14:29:44.966390 4698 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-11 13:50:17.380481538 +0000 UTC Jan 27 14:29:44 crc kubenswrapper[4698]: I0127 14:29:44.991834 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 14:29:44 crc kubenswrapper[4698]: I0127 14:29:44.991864 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 14:29:44 crc kubenswrapper[4698]: E0127 14:29:44.991991 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 14:29:44 crc kubenswrapper[4698]: E0127 14:29:44.992110 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 14:29:45 crc kubenswrapper[4698]: I0127 14:29:45.006878 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://625a1f8c29f9277903045e66df7a55a4d791239b141aba99572db228ea2735ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://314dcaad0f8949a4671b47667f654bdfacfb6b200438cfbd70401455f6e188b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:29:45Z is after 2025-08-24T17:21:41Z" Jan 27 14:29:45 crc kubenswrapper[4698]: I0127 14:29:45.021337 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:24Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:29:45Z is after 2025-08-24T17:21:41Z" Jan 27 14:29:45 crc kubenswrapper[4698]: I0127 14:29:45.034801 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-g2kkn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4e135f0c-0c36-44f4-afeb-06994affb352\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d17862aabd67b485023ca018499f7224e0bd91ec0aed6bbce5a187f319e3140\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97l66\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:29:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-g2kkn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:29:45Z is after 2025-08-24T17:21:41Z" Jan 27 14:29:45 crc kubenswrapper[4698]: I0127 14:29:45.047302 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zpvcj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"709dfdd7-f928-4f0b-8f5a-c356614219cb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6bec8d64d7f0b7a45b57b810dcf1de846a388902b480230742da0a40da4e67b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pftbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5fb54d7728b2438444c924295722867c673c8b82768ed17b6dd2104d404e198e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pftbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:29:41Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-zpvcj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:29:45Z is after 2025-08-24T17:21:41Z" Jan 27 14:29:45 crc kubenswrapper[4698]: I0127 14:29:45.055464 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:29:45 crc kubenswrapper[4698]: I0127 14:29:45.055749 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:29:45 crc kubenswrapper[4698]: I0127 14:29:45.055839 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:29:45 crc kubenswrapper[4698]: I0127 14:29:45.055912 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:29:45 crc kubenswrapper[4698]: I0127 14:29:45.055976 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:29:45Z","lastTransitionTime":"2026-01-27T14:29:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"} Jan 27 14:29:45 crc kubenswrapper[4698]: I0127 14:29:45.061670 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0ce1118e-e5ad-4adb-8d50-c758116b45ec\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dbbe49aa918c711f9ec638a3fb42ef3be38a9ed3bc0cdde639ba8bba11eab4de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9525d1a6372b19bc2f30eb4e8e522833badb6319bac25ecae77b3313acf1b1a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e384a95fdadc32733a7213ba468af64a1a33e7dd08722cbffde3ad0c461d8ba\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/ku
bernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9ee5f5f6bb400359196a28fe605c173c38223c8982a991750c91601aeace150\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e780429dcca871712b55832b0cd0b3e78b7343569cfa71a87bbb2be1bd3f129\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T14:29:24Z\\\",\\\"message\\\":\\\"espace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0127 14:29:09.491666 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 14:29:09.493926 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-900123067/tls.crt::/tmp/serving-cert-900123067/tls.key\\\\\\\"\\\\nI0127 14:29:24.319898 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 14:29:24.322681 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 14:29:24.322759 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 14:29:24.322814 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 14:29:24.322847 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 14:29:24.331581 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 14:29:24.331608 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 14:29:24.331613 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 14:29:24.331618 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 14:29:24.331622 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 14:29:24.331626 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 14:29:24.331630 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 14:29:24.331748 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 14:29:24.333757 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T14:29:08Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a526cfa90f2ba7d074aa6007c909e68f72d5e7a4b91d893c955e7460ff208cfb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:08Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e1a3e5a71b3094750c736335cd70f45977566e58702231351d9d6b195a0cb4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5e1a3e5a71b3094750c736335cd70f45977566e58702231351d9d6b195a0cb4b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:29:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:29:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:29:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:29:45Z is after 2025-08-24T17:21:41Z" Jan 27 14:29:45 crc kubenswrapper[4698]: I0127 14:29:45.081098 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"44c70587-abb1-4e02-ab7e-223d57817925\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b1a209c31294456c43022bb9df5fd230415596fc43ecdb2b6349114989c3ad8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://66b4b527d7836ceed419f9b8a1d851047efc8c052c110ac857316a6a73444333\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://94a4d586611f4ca0864567739d1ed225b7c63ebd6e6be9e5c0e385379de2ad01\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://95357819c206b188c1fd6bb937b2a57b480a9a61e95143de444141b16e4963ea\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:29:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:29:45Z is after 2025-08-24T17:21:41Z" Jan 27 14:29:45 crc kubenswrapper[4698]: I0127 14:29:45.093356 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-ndrd6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e403fc5-7005-474c-8c75-b7906b481677\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6f9227503b5cf0962f3fb8ece18f54d84ff2cdfe5628ebf0fa0cef7fe5695c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tt2dz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c533539135e25b3105
41c9f0b12adcf526b4651a2f6272bce12ea9686dd39ac9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tt2dz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:29:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-ndrd6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:29:45Z is after 2025-08-24T17:21:41Z" Jan 27 14:29:45 crc kubenswrapper[4698]: I0127 14:29:45.109154 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-vg6nd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e045926d-2303-47ea-b25d-dc23982427e4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e1bf1729f089f253e8e1259e40ef976b85e433358642b6231f081c494851048\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66snv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],
\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c935f517f9ca140d63d1637ef8c01a4a41b94fa72e1086eb6184ae78a6d8da8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c935f517f9ca140d63d1637ef8c01a4a41b94fa72e1086eb6184ae78a6d8da8c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:29:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:29:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66snv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa061cb18143e07dc72a6b75d78b5959b40a9969030ff21d402863b85eba6f91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa061cb18143e07dc72a6b75d78b5959b40a9969030ff21d402863b85eba6f91\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:29:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:29:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66snv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be40eff60b17b2c010a5e4394fb6843ceb8ee2752781ccbf0030afdd212b81e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be40eff60b17b2c010a5e4394fb6843ceb8ee2752781ccbf0030afdd212b81e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:29:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\
":\\\"2026-01-27T14:29:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66snv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3018d87c9354e8b8d90c59a4a45e982d0b35b89a0b0a9fde703e9452cf5f0024\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3018d87c9354e8b8d90c59a4a45e982d0b35b89a0b0a9fde703e9452cf5f0024\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:29:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:29:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66snv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de493498bbac49791f6b7409916fbbb62f64bed0def214ea6a06f5182bf7ad3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://de493498bbac49791f6b7409916fbbb62f64bed0def214ea6a06f5182bf7ad3b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:29:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:29:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66snv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://14af7d5dd3526aef22f7e95409ff88c3bbbc29e6dd9dd7c41b9be5bab49bf074\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://14af7d5dd3526aef22f7e95409ff88c3bbbc29e6dd9dd7c41b9be5bab49bf074\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:29:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:29:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66snv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:29:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-vg6nd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:29:45Z is after 2025-08-24T17:21:41Z" Jan 27 14:29:45 crc kubenswrapper[4698]: I0127 14:29:45.126960 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xmpm6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c59a9d01-79ce-42d9-a41d-39d7d73cb03e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:29Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d81778ae40167de932debf7266f3c793d77edeace2116220c21cf89e15e4cc98\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r7gpg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea6d17f5861064198539f0b84b559f9d28cfa5decaa89a9059469af5a3e12e09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r7gpg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c98fd6e3005c979f95419686ec2143aa282fa56d7306ecaff766759e34b68b2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r7gpg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e38585fe9db52e6e7cc5c3a41e23a0bb761977a3b25fd281d86458f422e3791\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r7gpg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b80c023f92c9854f8c16b6b50c10dac58f968938615375019ba2875d9cd69e5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r7gpg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://87e7499ab247dff50eee2a5e113ed36ff00c467e8e1bcdfee45e8835815d4b81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r7gpg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://acf945703822364baf4ee29480af06c0936afa4f
eddc6f7cba2a901e15a56fec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d5c9782c1425d3337bda4ba4d775ed4c8cf828f4a58294f9a917b443b66e04ab\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T14:29:41Z\\\",\\\"message\\\":\\\"ions/factory.go:140\\\\nI0127 14:29:40.787593 5952 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0127 14:29:40.787788 5952 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0127 14:29:40.788035 5952 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0127 14:29:40.788309 5952 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0127 14:29:40.788538 5952 reflector.go:311] Stopping reflector *v1.ClusterUserDefinedNetwork (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/userdefinednetwork/v1/apis/informers/externalversions/factory.go:140\\\\nI0127 14:29:40.788960 5952 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0127 14:29:40.789010 5952 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0127 14:29:40.789019 5952 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0127 14:29:40.789094 5952 factory.go:656] Stopping watch factory\\\\nI0127 14:29:40.789105 5952 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0127 14:29:40.789160 5952 ovnkube.go:599] Stopped ovnkube\\\\nI0127 14:29:40.789182 5952 handler.go:208] Removed *v1.Node event handler 7\\\\nI01\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T14:29:36Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://acf945703822364baf4ee29480af06c0936afa4feddc6f7cba2a901e15a56fec\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T14:29:43Z\\\",\\\"message\\\":\\\"ol-plane-machine-set-operator]} name:Service_openshift-machine-api/control-plane-machine-set-operator_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.41:9443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {589f95f7-f3e2-4140-80ed-9a0717201481}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0127 14:29:43.511136 6165 obj_retry.go:303] Retry object setup: *v1.Pod openshift-multus/multus-additional-cni-plugins-vg6nd\\\\nI0127 14:29:43.511153 6165 obj_retry.go:365] Adding new object: *v1.Pod openshift-multus/multus-additional-cni-plugins-vg6nd\\\\nI0127 14:29:43.511162 6165 ovn.go:134] Ensuring zone local for Pod openshift-multus/multus-additional-cni-plugins-vg6nd in node crc\\\\nI0127 14:29:43.511169 6165 obj_retry.go:386] Retry successful for *v1.Pod openshift-multus/multus-additional-cni-plugins-vg6nd after 0 failed attempt(s)\\\\nI0127 14:29:43.511175 6165 default_network_controller.go:776] Recording success event on pod 
openshift-multus/multus-additional-cni-plugins-vg6nd\\\\nF0127 14:29:43.511046 6165 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T14:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r7gpg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0356c9c90e9c8e65b9452361aa486ffe95757bd69803077b32fe7ba0223ae1da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r7gpg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"in
itContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3a80917737e61fdb510dabf332c3bd87fb858d27849c42b77703a8becb66b4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b3a80917737e61fdb510dabf332c3bd87fb858d27849c42b77703a8becb66b4c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:29:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:29:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r7gpg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:29:29Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-xmpm6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:29:45Z is after 2025-08-24T17:21:41Z" Jan 27 14:29:45 crc kubenswrapper[4698]: I0127 14:29:45.152364 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-lpvsw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"621bb20d-2ffa-4e89-b522-d04b4764fcc3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:42Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:42Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gz87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gz87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:29:42Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-lpvsw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:29:45Z is after 2025-08-24T17:21:41Z" Jan 27 14:29:45 crc kubenswrapper[4698]: I0127 14:29:45.158555 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:29:45 crc kubenswrapper[4698]: I0127 14:29:45.158590 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:29:45 crc kubenswrapper[4698]: I0127 14:29:45.158600 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:29:45 crc kubenswrapper[4698]: I0127 14:29:45.158618 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:29:45 crc kubenswrapper[4698]: I0127 14:29:45.158646 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:29:45Z","lastTransitionTime":"2026-01-27T14:29:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:29:45 crc kubenswrapper[4698]: I0127 14:29:45.177879 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:29:45Z is after 2025-08-24T17:21:41Z" Jan 27 14:29:45 crc kubenswrapper[4698]: I0127 14:29:45.190253 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-g9vj8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2776dfc9-913b-42b0-9cf2-6fea98d83bc9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f471cea0dc934e7c77ac6ead7ec77f8feb7379b50a7bb5255d7100fbea635a3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bk7ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:29:28Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-g9vj8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:29:45Z is after 2025-08-24T17:21:41Z" Jan 27 14:29:45 crc kubenswrapper[4698]: I0127 14:29:45.205703 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae7402df1ba165eb747293c1d23e4ea89c50cf2342bf52db437c11c283f68755\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:29:45Z is after 2025-08-24T17:21:41Z" Jan 27 14:29:45 crc kubenswrapper[4698]: I0127 14:29:45.220365 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:24Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:29:45Z is after 2025-08-24T17:21:41Z" Jan 27 14:29:45 crc kubenswrapper[4698]: I0127 14:29:45.225929 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-xmpm6_c59a9d01-79ce-42d9-a41d-39d7d73cb03e/ovnkube-controller/1.log" Jan 27 14:29:45 crc kubenswrapper[4698]: I0127 14:29:45.231151 4698 scope.go:117] "RemoveContainer" containerID="acf945703822364baf4ee29480af06c0936afa4feddc6f7cba2a901e15a56fec" Jan 27 14:29:45 crc kubenswrapper[4698]: E0127 14:29:45.231301 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-xmpm6_openshift-ovn-kubernetes(c59a9d01-79ce-42d9-a41d-39d7d73cb03e)\"" pod="openshift-ovn-kubernetes/ovnkube-node-xmpm6" podUID="c59a9d01-79ce-42d9-a41d-39d7d73cb03e" Jan 27 14:29:45 crc kubenswrapper[4698]: I0127 14:29:45.236693 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8dae57de05788973b988fee332168db475b66bb71b4a0570e7d95341dfe84a96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:29:45Z is after 2025-08-24T17:21:41Z" Jan 27 14:29:45 crc kubenswrapper[4698]: I0127 14:29:45.248916 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-flx9b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2c9fcf55-4a50-4a87-937b-975bc7e00bfa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0cb86e0c67991a076cebc33eb390f49c0f4c40332c423936dd8f3c59b9f7474b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-spqpw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:29:31Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-flx9b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:29:45Z is after 2025-08-24T17:21:41Z" Jan 27 14:29:45 crc kubenswrapper[4698]: I0127 14:29:45.260796 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:29:45 crc kubenswrapper[4698]: I0127 14:29:45.260849 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:29:45 crc kubenswrapper[4698]: I0127 14:29:45.260858 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:29:45 crc kubenswrapper[4698]: I0127 14:29:45.260873 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:29:45 crc kubenswrapper[4698]: I0127 14:29:45.260883 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:29:45Z","lastTransitionTime":"2026-01-27T14:29:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:29:45 crc kubenswrapper[4698]: I0127 14:29:45.264522 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:29:45Z is after 2025-08-24T17:21:41Z" Jan 27 14:29:45 crc kubenswrapper[4698]: I0127 14:29:45.276998 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-g9vj8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2776dfc9-913b-42b0-9cf2-6fea98d83bc9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f471cea0dc934e7c77ac6ead7ec77f8feb7379b50a7bb5255d7100fbea635a3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bk7ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:29:28Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-g9vj8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:29:45Z is after 2025-08-24T17:21:41Z" Jan 27 14:29:45 crc kubenswrapper[4698]: I0127 14:29:45.288537 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-ndrd6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e403fc5-7005-474c-8c75-b7906b481677\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6f9227503b5cf0962f3fb8ece18f54d84ff2cdfe5628ebf0fa0cef7fe5695c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tt2dz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c533539135e25b310541c9f0b12adcf526b4651a2f6272bce12ea9686dd39ac9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tt2dz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:29:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-ndrd6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:29:45Z is after 2025-08-24T17:21:41Z" Jan 27 14:29:45 crc kubenswrapper[4698]: I0127 14:29:45.304847 4698 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-additional-cni-plugins-vg6nd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e045926d-2303-47ea-b25d-dc23982427e4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e1bf1729f089f253e8e1259e40ef976b85e433358642b6231f081c494851048\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66snv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c935f517f9ca140d63d1637ef8c01a4a41b94fa72e1086eb6184ae78a6d8da8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c935f517f9ca140d63d1637ef8c01a4a41b94fa72e1086eb6184ae78a6d8da8c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:29:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:29:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66snv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa061cb18143e07dc72a6b75d78b5959b40a9969030ff21d402863b85eba6f91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2c
c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa061cb18143e07dc72a6b75d78b5959b40a9969030ff21d402863b85eba6f91\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:29:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:29:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66snv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be40eff60b17b2c010a5e4394fb6843ceb8ee2752781ccbf0030afdd212b81e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be40eff60b17b2c010a5e4394fb6843ceb8ee2752781ccbf0030afdd212b81e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:29:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:29:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66snv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3018d87c9354e8b8d90c59a4a45e982d0b35b89a0b0a9fde703e9452cf5f0024\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3018d87c9354e8b8d90c59a4a45e982d0b35b89a0b0a9fde703e9452cf5f0024\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:29:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:29:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-re
lease\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66snv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de493498bbac49791f6b7409916fbbb62f64bed0def214ea6a06f5182bf7ad3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://de493498bbac49791f6b7409916fbbb62f64bed0def214ea6a06f5182bf7ad3b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:29:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:29:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66snv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://14af7d5dd3526aef22f7e95409ff88c3bbbc29e6dd9dd7c41b9be5bab49bf074\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://14af7d5dd3526aef22f7e95409ff88c3bbbc29e6dd9dd7c41b9be5bab49bf074\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:29:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:29:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66snv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:29:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-vg6nd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:29:45Z is after 2025-08-24T17:21:41Z" Jan 27 14:29:45 crc kubenswrapper[4698]: I0127 14:29:45.325616 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xmpm6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c59a9d01-79ce-42d9-a41d-39d7d73cb03e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:29Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:29Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d81778ae40167de932debf7266f3c793d77edeace2116220c21cf89e15e4cc98\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r7gpg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea6d17f5861064198539f0b84b559f9d28cfa5decaa89a9059469af5a3e12e09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r7gpg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c98fd6e3005c979f95419686ec2143aa282fa56d7306ecaff766759e34b68b2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r7gpg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e38585fe9db52e6e7cc5c3a41e23a0bb761977a3b25fd281d86458f422e3791\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r7gpg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b80c023f92c9854f8c16b6b50c10dac58f968938615375019ba2875d9cd69e5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r7gpg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://87e7499ab247dff50eee2a5e113ed36ff00c467e8e1bcdfee45e8835815d4b81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r7gpg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://acf945703822364baf4ee29480af06c0936afa4feddc6f7cba2a901e15a56fec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://acf945703822364baf4ee29480af06c0936afa4feddc6f7cba2a901e15a56fec\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T14:29:43Z\\\",\\\"message\\\":\\\"ol-plane-machine-set-operator]} name:Service_openshift-machine-api/control-plane-machine-set-operator_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.41:9443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {589f95f7-f3e2-4140-80ed-9a0717201481}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0127 14:29:43.511136 6165 obj_retry.go:303] Retry object setup: *v1.Pod openshift-multus/multus-additional-cni-plugins-vg6nd\\\\nI0127 14:29:43.511153 6165 obj_retry.go:365] Adding new object: *v1.Pod openshift-multus/multus-additional-cni-plugins-vg6nd\\\\nI0127 14:29:43.511162 6165 ovn.go:134] Ensuring zone local for Pod openshift-multus/multus-additional-cni-plugins-vg6nd in node crc\\\\nI0127 14:29:43.511169 6165 obj_retry.go:386] Retry successful for *v1.Pod openshift-multus/multus-additional-cni-plugins-vg6nd after 0 failed attempt(s)\\\\nI0127 14:29:43.511175 6165 default_network_controller.go:776] Recording success event on pod openshift-multus/multus-additional-cni-plugins-vg6nd\\\\nF0127 14:29:43.511046 6165 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T14:29:42Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-xmpm6_openshift-ovn-kubernetes(c59a9d01-79ce-42d9-a41d-39d7d73cb03e)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r7gpg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0356c9c90e9c8e65b9452361aa486ffe95757bd69803077b32fe7ba0223ae1da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r7gpg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3a80917737e61fdb510dabf332c3bd87fb858d27849c42b77703a8becb66b4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b3a80917737e61fdb510dabf332c3bd87fb858d27849c42b77703a8becb66b4c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:29:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:29:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r7gpg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:29:29Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-xmpm6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:29:45Z is after 2025-08-24T17:21:41Z" Jan 27 14:29:45 crc kubenswrapper[4698]: I0127 14:29:45.338300 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-lpvsw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"621bb20d-2ffa-4e89-b522-d04b4764fcc3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:42Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:42Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gz87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gz87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:29:42Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-lpvsw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:29:45Z is after 2025-08-24T17:21:41Z" Jan 27 14:29:45 crc kubenswrapper[4698]: I0127 14:29:45.352096 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae7402df1ba165eb747293c1d23e4ea89c50cf2342bf52db437c11c283f68755\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:29:45Z is after 2025-08-24T17:21:41Z" Jan 27 14:29:45 crc kubenswrapper[4698]: I0127 14:29:45.363023 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:29:45 crc kubenswrapper[4698]: I0127 14:29:45.363064 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:29:45 crc kubenswrapper[4698]: I0127 14:29:45.363075 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:29:45 crc kubenswrapper[4698]: I0127 14:29:45.363089 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:29:45 crc kubenswrapper[4698]: I0127 14:29:45.363100 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:29:45Z","lastTransitionTime":"2026-01-27T14:29:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:29:45 crc kubenswrapper[4698]: I0127 14:29:45.367668 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:24Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:29:45Z is after 2025-08-24T17:21:41Z" Jan 27 14:29:45 crc kubenswrapper[4698]: I0127 14:29:45.378801 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8dae57de05788973b988fee332168db475b66bb71b4a0570e7d95341dfe84a96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:29:45Z is after 2025-08-24T17:21:41Z" Jan 27 14:29:45 crc kubenswrapper[4698]: I0127 14:29:45.389082 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-flx9b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2c9fcf55-4a50-4a87-937b-975bc7e00bfa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0cb86e0c67991a076cebc33eb390f49c0f4c40332c423936dd8f3c59b9f7474b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-spqpw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:29:31Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-flx9b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:29:45Z is after 2025-08-24T17:21:41Z" Jan 27 14:29:45 crc kubenswrapper[4698]: I0127 14:29:45.401498 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:24Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:29:45Z is after 2025-08-24T17:21:41Z" Jan 27 14:29:45 crc kubenswrapper[4698]: I0127 14:29:45.414766 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://625a1f8c29f9277903045e66df7a55a4d791239b141aba99572db228ea2735ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://314dcaad0f8949a4671b47667f654bdfacfb6b200438cfbd70401455f6e188b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io
/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:29:45Z is after 2025-08-24T17:21:41Z" Jan 27 14:29:45 crc kubenswrapper[4698]: I0127 14:29:45.428740 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0ce1118e-e5ad-4adb-8d50-c758116b45ec\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dbbe49aa918c711f9ec638a3fb42ef3be38a9ed3bc0cdde639ba8bba11eab4de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9525d1a6372b19bc2f30eb4e8e522833badb6319bac25ecae77b3313acf1b1a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\
"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e384a95fdadc32733a7213ba468af64a1a33e7dd08722cbffde3ad0c461d8ba\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9ee5f5f6bb400359196a28fe605c173c38223c8982a991750c91601aeace150\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e780429dcca871712b55832b0cd0b3e78b7343569cfa71a87bbb2be1bd3f129\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T14:29:24Z\\\",\\\"message\\\":\\\"espace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0127 14:29:09.491666 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 14:29:09.493926 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-900123067/tls.crt::/tmp/serving-cert-900123067/tls.key\\\\\\\"\\\\nI0127 14:29:24.319898 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 14:29:24.322681 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 14:29:24.322759 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 14:29:24.322814 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 14:29:24.322847 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 14:29:24.331581 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 14:29:24.331608 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 14:29:24.331613 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 14:29:24.331618 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 14:29:24.331622 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 14:29:24.331626 1 secure_serving.go:69] Use 
of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 14:29:24.331630 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 14:29:24.331748 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 14:29:24.333757 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T14:29:08Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a526cfa90f2ba7d074aa6007c909e68f72d5e7a4b91d893c955e7460ff208cfb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:08Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e1a3e5a71b3094750c736335cd70f45977566e58702231351d9d6b195a0cb4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5e1a3e5a71b3094750c736335cd70f45977566e58702231351d9d6b195a0cb4b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:29:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:29:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:29:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:29:45Z is after 2025-08-24T17:21:41Z" Jan 27 14:29:45 crc kubenswrapper[4698]: I0127 14:29:45.441947 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"44c70587-abb1-4e02-ab7e-223d57817925\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b1a209c31294456c43022bb9df5fd230415596fc43ecdb2b6349114989c3ad8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://66b4b527d7836ceed419f9b8a1d851047efc8c052c110ac857316a6a73444333\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://94a4d586611f4ca0864567739d1ed225b7c63ebd6e6be9e5c0e385379de2ad01\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://95357819c206b188c1fd6bb937b2a57b480a9a61e95143de444141b16e4963ea\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:29:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:29:45Z is after 2025-08-24T17:21:41Z" Jan 27 14:29:45 crc kubenswrapper[4698]: I0127 14:29:45.455841 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-g2kkn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4e135f0c-0c36-44f4-afeb-06994affb352\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d17862aabd67b485023ca018499f7224e0bd91ec0aed6bbce5a187f319e3140\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run
/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97l66\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:29:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-g2kkn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:29:45Z is after 2025-08-24T17:21:41Z" Jan 27 14:29:45 crc kubenswrapper[4698]: I0127 14:29:45.465873 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:29:45 crc kubenswrapper[4698]: I0127 14:29:45.465909 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:29:45 crc kubenswrapper[4698]: I0127 14:29:45.465919 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:29:45 crc kubenswrapper[4698]: I0127 14:29:45.465935 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:29:45 crc kubenswrapper[4698]: I0127 14:29:45.465946 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:29:45Z","lastTransitionTime":"2026-01-27T14:29:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:29:45 crc kubenswrapper[4698]: I0127 14:29:45.467864 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zpvcj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"709dfdd7-f928-4f0b-8f5a-c356614219cb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6bec8d64d7f0b7a45b57b810dcf1de846a388902b480230742da0a40da4e67b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pftbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5fb54d7728b2438444c924295722867c673c8b82768ed17b6dd2104d404e198e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pftbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:29:41Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-zpvcj\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:29:45Z is after 2025-08-24T17:21:41Z"
Jan 27 14:29:45 crc kubenswrapper[4698]: I0127 14:29:45.568535 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 14:29:45 crc kubenswrapper[4698]: I0127 14:29:45.568571 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 14:29:45 crc kubenswrapper[4698]: I0127 14:29:45.568617 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 14:29:45 crc kubenswrapper[4698]: I0127 14:29:45.568631 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 14:29:45 crc kubenswrapper[4698]: I0127 14:29:45.568667 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:29:45Z","lastTransitionTime":"2026-01-27T14:29:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 14:29:45 crc kubenswrapper[4698]: I0127 14:29:45.671101 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 14:29:45 crc kubenswrapper[4698]: I0127 14:29:45.671139 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 14:29:45 crc kubenswrapper[4698]: I0127 14:29:45.671152 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 14:29:45 crc kubenswrapper[4698]: I0127 14:29:45.671167 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 14:29:45 crc kubenswrapper[4698]: I0127 14:29:45.671178 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:29:45Z","lastTransitionTime":"2026-01-27T14:29:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 14:29:45 crc kubenswrapper[4698]: I0127 14:29:45.774089 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 14:29:45 crc kubenswrapper[4698]: I0127 14:29:45.774130 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 14:29:45 crc kubenswrapper[4698]: I0127 14:29:45.774142 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 14:29:45 crc kubenswrapper[4698]: I0127 14:29:45.774160 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 14:29:45 crc kubenswrapper[4698]: I0127 14:29:45.774172 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:29:45Z","lastTransitionTime":"2026-01-27T14:29:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 14:29:45 crc kubenswrapper[4698]: I0127 14:29:45.876391 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 14:29:45 crc kubenswrapper[4698]: I0127 14:29:45.876434 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 14:29:45 crc kubenswrapper[4698]: I0127 14:29:45.876445 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 14:29:45 crc kubenswrapper[4698]: I0127 14:29:45.876460 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 14:29:45 crc kubenswrapper[4698]: I0127 14:29:45.876470 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:29:45Z","lastTransitionTime":"2026-01-27T14:29:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
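Every "Failed to update status for pod" entry above fails the same way: the kubelet's status patch is rejected because the pod.network-node-identity.openshift.io admission webhook at 127.0.0.1:9743 serves a certificate that expired 2025-08-24T17:21:41Z, while the node clock reads 2026-01-27. A minimal Go sketch (not part of the log; the address comes from the error lines, everything else is illustrative) of how one might confirm the expiry from the node:

// probe_webhook_cert.go - dials the webhook endpoint named in the errors
// above and prints each served certificate's validity window.
package main

import (
	"crypto/tls"
	"fmt"
	"log"
	"time"
)

func main() {
	addr := "127.0.0.1:9743" // endpoint from the failed Post in the log
	conn, err := tls.Dial("tcp", addr, &tls.Config{
		InsecureSkipVerify: true, // we only inspect the cert, we do not trust it
	})
	if err != nil {
		log.Fatalf("dial %s: %v", addr, err)
	}
	defer conn.Close()

	now := time.Now()
	for _, cert := range conn.ConnectionState().PeerCertificates {
		fmt.Printf("subject=%s notBefore=%s notAfter=%s expired=%t\n",
			cert.Subject,
			cert.NotBefore.Format(time.RFC3339),
			cert.NotAfter.Format(time.RFC3339),
			now.After(cert.NotAfter))
	}
}

Until that serving certificate is rotated (or the node clock is corrected), every status patch the kubelet sends will keep failing with the same x509 error.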
Has your network provider started?"}
Jan 27 14:29:45 crc kubenswrapper[4698]: I0127 14:29:45.966991 4698 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-17 03:25:45.150299299 +0000 UTC
Jan 27 14:29:45 crc kubenswrapper[4698]: I0127 14:29:45.978933 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 14:29:45 crc kubenswrapper[4698]: I0127 14:29:45.978974 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 14:29:45 crc kubenswrapper[4698]: I0127 14:29:45.978995 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 14:29:45 crc kubenswrapper[4698]: I0127 14:29:45.979012 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 14:29:45 crc kubenswrapper[4698]: I0127 14:29:45.979022 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:29:45Z","lastTransitionTime":"2026-01-27T14:29:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 14:29:45 crc kubenswrapper[4698]: I0127 14:29:45.991652 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lpvsw"
Jan 27 14:29:45 crc kubenswrapper[4698]: I0127 14:29:45.991710 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 27 14:29:45 crc kubenswrapper[4698]: E0127 14:29:45.991806 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lpvsw" podUID="621bb20d-2ffa-4e89-b522-d04b4764fcc3"
Jan 27 14:29:45 crc kubenswrapper[4698]: E0127 14:29:45.991871 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 27 14:29:46 crc kubenswrapper[4698]: I0127 14:29:46.080829 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 14:29:46 crc kubenswrapper[4698]: I0127 14:29:46.081151 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 14:29:46 crc kubenswrapper[4698]: I0127 14:29:46.081163 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 14:29:46 crc kubenswrapper[4698]: I0127 14:29:46.081178 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 14:29:46 crc kubenswrapper[4698]: I0127 14:29:46.081190 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:29:46Z","lastTransitionTime":"2026-01-27T14:29:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 14:29:46 crc kubenswrapper[4698]: I0127 14:29:46.183252 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 14:29:46 crc kubenswrapper[4698]: I0127 14:29:46.183291 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 14:29:46 crc kubenswrapper[4698]: I0127 14:29:46.183302 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 14:29:46 crc kubenswrapper[4698]: I0127 14:29:46.183317 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 14:29:46 crc kubenswrapper[4698]: I0127 14:29:46.183328 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:29:46Z","lastTransitionTime":"2026-01-27T14:29:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 14:29:46 crc kubenswrapper[4698]: I0127 14:29:46.249881 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/621bb20d-2ffa-4e89-b522-d04b4764fcc3-metrics-certs\") pod \"network-metrics-daemon-lpvsw\" (UID: \"621bb20d-2ffa-4e89-b522-d04b4764fcc3\") " pod="openshift-multus/network-metrics-daemon-lpvsw"
Jan 27 14:29:46 crc kubenswrapper[4698]: E0127 14:29:46.250114 4698 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered
Jan 27 14:29:46 crc kubenswrapper[4698]: E0127 14:29:46.250242 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/621bb20d-2ffa-4e89-b522-d04b4764fcc3-metrics-certs podName:621bb20d-2ffa-4e89-b522-d04b4764fcc3 nodeName:}" failed. No retries permitted until 2026-01-27 14:29:50.250219365 +0000 UTC m=+45.926996880 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/621bb20d-2ffa-4e89-b522-d04b4764fcc3-metrics-certs") pod "network-metrics-daemon-lpvsw" (UID: "621bb20d-2ffa-4e89-b522-d04b4764fcc3") : object "openshift-multus"/"metrics-daemon-secret" not registered
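The three volume entries above show the kubelet's per-volume exponential backoff: the metrics-certs mount fails because the secret is not yet registered with the kubelet, so the next MountVolume attempt is pushed out ("durationBeforeRetry 4s"). A rough Go sketch of that retry pattern (the durations are illustrative and mountSecretVolume is a hypothetical stand-in, not kubelet code):

package main

import (
	"errors"
	"fmt"
	"time"
)

// mountSecretVolume always fails the way the log above does.
func mountSecretVolume() error {
	return errors.New(`object "openshift-multus"/"metrics-daemon-secret" not registered`)
}

func main() {
	backoff := 2 * time.Second
	const maxBackoff = 2 * time.Minute
	for attempt := 1; attempt <= 4; attempt++ {
		if err := mountSecretVolume(); err != nil {
			fmt.Printf("attempt %d failed: %v; no retries permitted for %s\n",
				attempt, err, backoff)
			time.Sleep(backoff)
			// Each failure doubles the wait, up to a cap, so a persistently
			// missing secret cannot hot-loop the volume manager.
			backoff *= 2
			if backoff > maxBackoff {
				backoff = maxBackoff
			}
			continue
		}
		fmt.Println("mounted")
		return
	}
}

Once the secret exists and is registered, the next scheduled attempt succeeds and the backoff resets.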
Jan 27 14:29:46 crc kubenswrapper[4698]: I0127 14:29:46.286514 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 14:29:46 crc kubenswrapper[4698]: I0127 14:29:46.286558 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 14:29:46 crc kubenswrapper[4698]: I0127 14:29:46.286570 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 14:29:46 crc kubenswrapper[4698]: I0127 14:29:46.286587 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 14:29:46 crc kubenswrapper[4698]: I0127 14:29:46.286599 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:29:46Z","lastTransitionTime":"2026-01-27T14:29:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 14:29:46 crc kubenswrapper[4698]: I0127 14:29:46.390426 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 14:29:46 crc kubenswrapper[4698]: I0127 14:29:46.390474 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 14:29:46 crc kubenswrapper[4698]: I0127 14:29:46.390485 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 14:29:46 crc kubenswrapper[4698]: I0127 14:29:46.390502 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 14:29:46 crc kubenswrapper[4698]: I0127 14:29:46.390519 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:29:46Z","lastTransitionTime":"2026-01-27T14:29:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 14:29:46 crc kubenswrapper[4698]: I0127 14:29:46.493554 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 14:29:46 crc kubenswrapper[4698]: I0127 14:29:46.493617 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 14:29:46 crc kubenswrapper[4698]: I0127 14:29:46.493629 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 14:29:46 crc kubenswrapper[4698]: I0127 14:29:46.493677 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 14:29:46 crc kubenswrapper[4698]: I0127 14:29:46.493692 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:29:46Z","lastTransitionTime":"2026-01-27T14:29:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 14:29:46 crc kubenswrapper[4698]: I0127 14:29:46.596987 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 14:29:46 crc kubenswrapper[4698]: I0127 14:29:46.597029 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 14:29:46 crc kubenswrapper[4698]: I0127 14:29:46.597039 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 14:29:46 crc kubenswrapper[4698]: I0127 14:29:46.597054 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 14:29:46 crc kubenswrapper[4698]: I0127 14:29:46.597063 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:29:46Z","lastTransitionTime":"2026-01-27T14:29:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 14:29:46 crc kubenswrapper[4698]: I0127 14:29:46.700235 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 14:29:46 crc kubenswrapper[4698]: I0127 14:29:46.700305 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 14:29:46 crc kubenswrapper[4698]: I0127 14:29:46.700320 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 14:29:46 crc kubenswrapper[4698]: I0127 14:29:46.700340 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 14:29:46 crc kubenswrapper[4698]: I0127 14:29:46.700355 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:29:46Z","lastTransitionTime":"2026-01-27T14:29:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 14:29:46 crc kubenswrapper[4698]: I0127 14:29:46.803722 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 14:29:46 crc kubenswrapper[4698]: I0127 14:29:46.803787 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 14:29:46 crc kubenswrapper[4698]: I0127 14:29:46.803799 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 14:29:46 crc kubenswrapper[4698]: I0127 14:29:46.803818 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 14:29:46 crc kubenswrapper[4698]: I0127 14:29:46.803833 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:29:46Z","lastTransitionTime":"2026-01-27T14:29:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 14:29:46 crc kubenswrapper[4698]: I0127 14:29:46.905893 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 14:29:46 crc kubenswrapper[4698]: I0127 14:29:46.905930 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 14:29:46 crc kubenswrapper[4698]: I0127 14:29:46.905941 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 14:29:46 crc kubenswrapper[4698]: I0127 14:29:46.905958 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 14:29:46 crc kubenswrapper[4698]: I0127 14:29:46.905970 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:29:46Z","lastTransitionTime":"2026-01-27T14:29:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 14:29:46 crc kubenswrapper[4698]: I0127 14:29:46.967439 4698 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-12 16:57:46.179044234 +0000 UTC
Jan 27 14:29:46 crc kubenswrapper[4698]: I0127 14:29:46.991704 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 27 14:29:46 crc kubenswrapper[4698]: I0127 14:29:46.991801 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 27 14:29:46 crc kubenswrapper[4698]: E0127 14:29:46.991934 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
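The NotReady churn and the "Error syncing pod" entries above all reduce to one condition: the container runtime finds no CNI configuration, so the node's Ready condition stays False and pod-network sandboxes cannot be created. A small Go sketch (illustrative only, not the CRI-O implementation) of the check the message implies, using the directory named in the log:

package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	dir := "/etc/kubernetes/cni/net.d" // directory from the log message
	entries, err := os.ReadDir(dir)
	if err != nil {
		fmt.Printf("cannot read %s: %v\n", dir, err)
		return
	}
	found := false
	for _, e := range entries {
		// CNI accepts .conf, .conflist, and .json network configs.
		switch filepath.Ext(e.Name()) {
		case ".conf", ".conflist", ".json":
			fmt.Println("CNI config present:", e.Name())
			found = true
		}
	}
	if !found {
		fmt.Println("no CNI configuration file; node stays NotReady until the network provider writes one")
	}
}

Once the network provider (here OVN-Kubernetes with Multus) writes its config into that directory, NetworkReady flips to true and the pending sandbox creations retry successfully.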
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 14:29:46 crc kubenswrapper[4698]: E0127 14:29:46.992058 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 14:29:47 crc kubenswrapper[4698]: I0127 14:29:47.009365 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:29:47 crc kubenswrapper[4698]: I0127 14:29:47.009416 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:29:47 crc kubenswrapper[4698]: I0127 14:29:47.009430 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:29:47 crc kubenswrapper[4698]: I0127 14:29:47.009469 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:29:47 crc kubenswrapper[4698]: I0127 14:29:47.009484 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:29:47Z","lastTransitionTime":"2026-01-27T14:29:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:29:47 crc kubenswrapper[4698]: I0127 14:29:47.113295 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:29:47 crc kubenswrapper[4698]: I0127 14:29:47.113416 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:29:47 crc kubenswrapper[4698]: I0127 14:29:47.113430 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:29:47 crc kubenswrapper[4698]: I0127 14:29:47.113452 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:29:47 crc kubenswrapper[4698]: I0127 14:29:47.113465 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:29:47Z","lastTransitionTime":"2026-01-27T14:29:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:29:47 crc kubenswrapper[4698]: I0127 14:29:47.215325 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:29:47 crc kubenswrapper[4698]: I0127 14:29:47.215357 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:29:47 crc kubenswrapper[4698]: I0127 14:29:47.215365 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:29:47 crc kubenswrapper[4698]: I0127 14:29:47.215378 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:29:47 crc kubenswrapper[4698]: I0127 14:29:47.215390 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:29:47Z","lastTransitionTime":"2026-01-27T14:29:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:29:47 crc kubenswrapper[4698]: I0127 14:29:47.318355 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:29:47 crc kubenswrapper[4698]: I0127 14:29:47.318408 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:29:47 crc kubenswrapper[4698]: I0127 14:29:47.318423 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:29:47 crc kubenswrapper[4698]: I0127 14:29:47.318444 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:29:47 crc kubenswrapper[4698]: I0127 14:29:47.318455 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:29:47Z","lastTransitionTime":"2026-01-27T14:29:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:29:47 crc kubenswrapper[4698]: I0127 14:29:47.421350 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:29:47 crc kubenswrapper[4698]: I0127 14:29:47.421401 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:29:47 crc kubenswrapper[4698]: I0127 14:29:47.421415 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:29:47 crc kubenswrapper[4698]: I0127 14:29:47.421431 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:29:47 crc kubenswrapper[4698]: I0127 14:29:47.421444 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:29:47Z","lastTransitionTime":"2026-01-27T14:29:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:29:47 crc kubenswrapper[4698]: I0127 14:29:47.523955 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:29:47 crc kubenswrapper[4698]: I0127 14:29:47.524007 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:29:47 crc kubenswrapper[4698]: I0127 14:29:47.524020 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:29:47 crc kubenswrapper[4698]: I0127 14:29:47.524037 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:29:47 crc kubenswrapper[4698]: I0127 14:29:47.524050 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:29:47Z","lastTransitionTime":"2026-01-27T14:29:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:29:47 crc kubenswrapper[4698]: I0127 14:29:47.626682 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:29:47 crc kubenswrapper[4698]: I0127 14:29:47.626733 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:29:47 crc kubenswrapper[4698]: I0127 14:29:47.626768 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:29:47 crc kubenswrapper[4698]: I0127 14:29:47.626786 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:29:47 crc kubenswrapper[4698]: I0127 14:29:47.626796 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:29:47Z","lastTransitionTime":"2026-01-27T14:29:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:29:47 crc kubenswrapper[4698]: I0127 14:29:47.729256 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:29:47 crc kubenswrapper[4698]: I0127 14:29:47.729318 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:29:47 crc kubenswrapper[4698]: I0127 14:29:47.729335 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:29:47 crc kubenswrapper[4698]: I0127 14:29:47.729359 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:29:47 crc kubenswrapper[4698]: I0127 14:29:47.729376 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:29:47Z","lastTransitionTime":"2026-01-27T14:29:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Jan 27 14:29:47 crc kubenswrapper[4698]: I0127 14:29:47.831350 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 14:29:47 crc kubenswrapper[4698]: I0127 14:29:47.831377 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 14:29:47 crc kubenswrapper[4698]: I0127 14:29:47.831384 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 14:29:47 crc kubenswrapper[4698]: I0127 14:29:47.831401 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 14:29:47 crc kubenswrapper[4698]: I0127 14:29:47.831409 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:29:47Z","lastTransitionTime":"2026-01-27T14:29:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 14:29:47 crc kubenswrapper[4698]: I0127 14:29:47.933776 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 14:29:47 crc kubenswrapper[4698]: I0127 14:29:47.933814 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 14:29:47 crc kubenswrapper[4698]: I0127 14:29:47.933825 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 14:29:47 crc kubenswrapper[4698]: I0127 14:29:47.933839 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 14:29:47 crc kubenswrapper[4698]: I0127 14:29:47.933853 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:29:47Z","lastTransitionTime":"2026-01-27T14:29:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 14:29:47 crc kubenswrapper[4698]: I0127 14:29:47.968480 4698 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-18 17:50:40.885768433 +0000 UTC
Jan 27 14:29:47 crc kubenswrapper[4698]: I0127 14:29:47.991916 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lpvsw"
Jan 27 14:29:47 crc kubenswrapper[4698]: I0127 14:29:47.991976 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 27 14:29:47 crc kubenswrapper[4698]: E0127 14:29:47.992047 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lpvsw" podUID="621bb20d-2ffa-4e89-b522-d04b4764fcc3"
Jan 27 14:29:47 crc kubenswrapper[4698]: E0127 14:29:47.992118 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 27 14:29:48 crc kubenswrapper[4698]: I0127 14:29:48.042367 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 14:29:48 crc kubenswrapper[4698]: I0127 14:29:48.042417 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 14:29:48 crc kubenswrapper[4698]: I0127 14:29:48.042428 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 14:29:48 crc kubenswrapper[4698]: I0127 14:29:48.042444 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 14:29:48 crc kubenswrapper[4698]: I0127 14:29:48.042455 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:29:48Z","lastTransitionTime":"2026-01-27T14:29:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 14:29:48 crc kubenswrapper[4698]: I0127 14:29:48.146016 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 14:29:48 crc kubenswrapper[4698]: I0127 14:29:48.146052 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 14:29:48 crc kubenswrapper[4698]: I0127 14:29:48.146062 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 14:29:48 crc kubenswrapper[4698]: I0127 14:29:48.146077 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 14:29:48 crc kubenswrapper[4698]: I0127 14:29:48.146091 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:29:48Z","lastTransitionTime":"2026-01-27T14:29:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 14:29:48 crc kubenswrapper[4698]: I0127 14:29:48.249032 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 14:29:48 crc kubenswrapper[4698]: I0127 14:29:48.249090 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 14:29:48 crc kubenswrapper[4698]: I0127 14:29:48.249099 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 14:29:48 crc kubenswrapper[4698]: I0127 14:29:48.249113 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 14:29:48 crc kubenswrapper[4698]: I0127 14:29:48.249122 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:29:48Z","lastTransitionTime":"2026-01-27T14:29:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 14:29:48 crc kubenswrapper[4698]: I0127 14:29:48.351233 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 14:29:48 crc kubenswrapper[4698]: I0127 14:29:48.351286 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 14:29:48 crc kubenswrapper[4698]: I0127 14:29:48.351299 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 14:29:48 crc kubenswrapper[4698]: I0127 14:29:48.351318 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 14:29:48 crc kubenswrapper[4698]: I0127 14:29:48.351330 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:29:48Z","lastTransitionTime":"2026-01-27T14:29:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 14:29:48 crc kubenswrapper[4698]: I0127 14:29:48.454888 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 14:29:48 crc kubenswrapper[4698]: I0127 14:29:48.454940 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 14:29:48 crc kubenswrapper[4698]: I0127 14:29:48.454952 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 14:29:48 crc kubenswrapper[4698]: I0127 14:29:48.454967 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 14:29:48 crc kubenswrapper[4698]: I0127 14:29:48.454977 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:29:48Z","lastTransitionTime":"2026-01-27T14:29:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:29:48 crc kubenswrapper[4698]: I0127 14:29:48.558016 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:29:48 crc kubenswrapper[4698]: I0127 14:29:48.558060 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:29:48 crc kubenswrapper[4698]: I0127 14:29:48.558072 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:29:48 crc kubenswrapper[4698]: I0127 14:29:48.558087 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:29:48 crc kubenswrapper[4698]: I0127 14:29:48.558099 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:29:48Z","lastTransitionTime":"2026-01-27T14:29:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:29:48 crc kubenswrapper[4698]: I0127 14:29:48.661021 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:29:48 crc kubenswrapper[4698]: I0127 14:29:48.661090 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:29:48 crc kubenswrapper[4698]: I0127 14:29:48.661100 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:29:48 crc kubenswrapper[4698]: I0127 14:29:48.661120 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:29:48 crc kubenswrapper[4698]: I0127 14:29:48.661133 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:29:48Z","lastTransitionTime":"2026-01-27T14:29:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:29:48 crc kubenswrapper[4698]: I0127 14:29:48.763246 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:29:48 crc kubenswrapper[4698]: I0127 14:29:48.763318 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:29:48 crc kubenswrapper[4698]: I0127 14:29:48.763335 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:29:48 crc kubenswrapper[4698]: I0127 14:29:48.763358 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:29:48 crc kubenswrapper[4698]: I0127 14:29:48.763376 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:29:48Z","lastTransitionTime":"2026-01-27T14:29:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:29:48 crc kubenswrapper[4698]: I0127 14:29:48.865945 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:29:48 crc kubenswrapper[4698]: I0127 14:29:48.865999 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:29:48 crc kubenswrapper[4698]: I0127 14:29:48.866011 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:29:48 crc kubenswrapper[4698]: I0127 14:29:48.866028 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:29:48 crc kubenswrapper[4698]: I0127 14:29:48.866040 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:29:48Z","lastTransitionTime":"2026-01-27T14:29:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:29:48 crc kubenswrapper[4698]: I0127 14:29:48.968210 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:29:48 crc kubenswrapper[4698]: I0127 14:29:48.968246 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:29:48 crc kubenswrapper[4698]: I0127 14:29:48.968254 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:29:48 crc kubenswrapper[4698]: I0127 14:29:48.968268 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:29:48 crc kubenswrapper[4698]: I0127 14:29:48.968277 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:29:48Z","lastTransitionTime":"2026-01-27T14:29:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:29:48 crc kubenswrapper[4698]: I0127 14:29:48.968784 4698 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-21 04:25:20.667186035 +0000 UTC Jan 27 14:29:48 crc kubenswrapper[4698]: I0127 14:29:48.991956 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 14:29:48 crc kubenswrapper[4698]: I0127 14:29:48.991990 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 14:29:48 crc kubenswrapper[4698]: E0127 14:29:48.992116 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 14:29:48 crc kubenswrapper[4698]: E0127 14:29:48.992225 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 14:29:49 crc kubenswrapper[4698]: I0127 14:29:49.071252 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:29:49 crc kubenswrapper[4698]: I0127 14:29:49.071325 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:29:49 crc kubenswrapper[4698]: I0127 14:29:49.071340 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:29:49 crc kubenswrapper[4698]: I0127 14:29:49.071363 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:29:49 crc kubenswrapper[4698]: I0127 14:29:49.071378 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:29:49Z","lastTransitionTime":"2026-01-27T14:29:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:29:49 crc kubenswrapper[4698]: I0127 14:29:49.174596 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:29:49 crc kubenswrapper[4698]: I0127 14:29:49.174664 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:29:49 crc kubenswrapper[4698]: I0127 14:29:49.174698 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:29:49 crc kubenswrapper[4698]: I0127 14:29:49.174716 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:29:49 crc kubenswrapper[4698]: I0127 14:29:49.174729 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:29:49Z","lastTransitionTime":"2026-01-27T14:29:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:29:49 crc kubenswrapper[4698]: I0127 14:29:49.277651 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:29:49 crc kubenswrapper[4698]: I0127 14:29:49.277699 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:29:49 crc kubenswrapper[4698]: I0127 14:29:49.277716 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:29:49 crc kubenswrapper[4698]: I0127 14:29:49.277732 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:29:49 crc kubenswrapper[4698]: I0127 14:29:49.277743 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:29:49Z","lastTransitionTime":"2026-01-27T14:29:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:29:49 crc kubenswrapper[4698]: I0127 14:29:49.379933 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:29:49 crc kubenswrapper[4698]: I0127 14:29:49.379974 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:29:49 crc kubenswrapper[4698]: I0127 14:29:49.379985 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:29:49 crc kubenswrapper[4698]: I0127 14:29:49.380000 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:29:49 crc kubenswrapper[4698]: I0127 14:29:49.380013 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:29:49Z","lastTransitionTime":"2026-01-27T14:29:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:29:49 crc kubenswrapper[4698]: I0127 14:29:49.482679 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:29:49 crc kubenswrapper[4698]: I0127 14:29:49.482726 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:29:49 crc kubenswrapper[4698]: I0127 14:29:49.482740 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:29:49 crc kubenswrapper[4698]: I0127 14:29:49.482757 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:29:49 crc kubenswrapper[4698]: I0127 14:29:49.482771 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:29:49Z","lastTransitionTime":"2026-01-27T14:29:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:29:49 crc kubenswrapper[4698]: I0127 14:29:49.585528 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:29:49 crc kubenswrapper[4698]: I0127 14:29:49.585573 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:29:49 crc kubenswrapper[4698]: I0127 14:29:49.585585 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:29:49 crc kubenswrapper[4698]: I0127 14:29:49.585605 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:29:49 crc kubenswrapper[4698]: I0127 14:29:49.585617 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:29:49Z","lastTransitionTime":"2026-01-27T14:29:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:29:49 crc kubenswrapper[4698]: I0127 14:29:49.689391 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:29:49 crc kubenswrapper[4698]: I0127 14:29:49.689461 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:29:49 crc kubenswrapper[4698]: I0127 14:29:49.689475 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:29:49 crc kubenswrapper[4698]: I0127 14:29:49.689517 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:29:49 crc kubenswrapper[4698]: I0127 14:29:49.689536 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:29:49Z","lastTransitionTime":"2026-01-27T14:29:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:29:49 crc kubenswrapper[4698]: I0127 14:29:49.791986 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:29:49 crc kubenswrapper[4698]: I0127 14:29:49.792024 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:29:49 crc kubenswrapper[4698]: I0127 14:29:49.792034 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:29:49 crc kubenswrapper[4698]: I0127 14:29:49.792049 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:29:49 crc kubenswrapper[4698]: I0127 14:29:49.792061 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:29:49Z","lastTransitionTime":"2026-01-27T14:29:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 27 14:29:49 crc kubenswrapper[4698]: I0127 14:29:49.895003 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 14:29:49 crc kubenswrapper[4698]: I0127 14:29:49.895032 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 14:29:49 crc kubenswrapper[4698]: I0127 14:29:49.895040 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 14:29:49 crc kubenswrapper[4698]: I0127 14:29:49.895052 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 14:29:49 crc kubenswrapper[4698]: I0127 14:29:49.895060 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:29:49Z","lastTransitionTime":"2026-01-27T14:29:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 14:29:49 crc kubenswrapper[4698]: I0127 14:29:49.969347 4698 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-19 13:04:47.762302133 +0000 UTC
Jan 27 14:29:49 crc kubenswrapper[4698]: I0127 14:29:49.991236 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lpvsw"
Jan 27 14:29:49 crc kubenswrapper[4698]: I0127 14:29:49.991281 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 27 14:29:49 crc kubenswrapper[4698]: E0127 14:29:49.991376 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lpvsw" podUID="621bb20d-2ffa-4e89-b522-d04b4764fcc3"
Jan 27 14:29:49 crc kubenswrapper[4698]: E0127 14:29:49.991472 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 27 14:29:49 crc kubenswrapper[4698]: I0127 14:29:49.997059 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 14:29:49 crc kubenswrapper[4698]: I0127 14:29:49.997098 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 14:29:49 crc kubenswrapper[4698]: I0127 14:29:49.997107 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 14:29:49 crc kubenswrapper[4698]: I0127 14:29:49.997123 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 14:29:49 crc kubenswrapper[4698]: I0127 14:29:49.997132 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:29:49Z","lastTransitionTime":"2026-01-27T14:29:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 14:29:50 crc kubenswrapper[4698]: I0127 14:29:50.100273 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 14:29:50 crc kubenswrapper[4698]: I0127 14:29:50.100408 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 14:29:50 crc kubenswrapper[4698]: I0127 14:29:50.100432 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 14:29:50 crc kubenswrapper[4698]: I0127 14:29:50.100463 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 14:29:50 crc kubenswrapper[4698]: I0127 14:29:50.100484 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:29:50Z","lastTransitionTime":"2026-01-27T14:29:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 14:29:50 crc kubenswrapper[4698]: I0127 14:29:50.203153 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 14:29:50 crc kubenswrapper[4698]: I0127 14:29:50.203197 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 14:29:50 crc kubenswrapper[4698]: I0127 14:29:50.203207 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 14:29:50 crc kubenswrapper[4698]: I0127 14:29:50.203226 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 14:29:50 crc kubenswrapper[4698]: I0127 14:29:50.203237 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:29:50Z","lastTransitionTime":"2026-01-27T14:29:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 14:29:50 crc kubenswrapper[4698]: I0127 14:29:50.294499 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/621bb20d-2ffa-4e89-b522-d04b4764fcc3-metrics-certs\") pod \"network-metrics-daemon-lpvsw\" (UID: \"621bb20d-2ffa-4e89-b522-d04b4764fcc3\") " pod="openshift-multus/network-metrics-daemon-lpvsw"
Jan 27 14:29:50 crc kubenswrapper[4698]: E0127 14:29:50.294713 4698 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered
Jan 27 14:29:50 crc kubenswrapper[4698]: E0127 14:29:50.294783 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/621bb20d-2ffa-4e89-b522-d04b4764fcc3-metrics-certs podName:621bb20d-2ffa-4e89-b522-d04b4764fcc3 nodeName:}" failed. No retries permitted until 2026-01-27 14:29:58.294764477 +0000 UTC m=+53.971541942 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/621bb20d-2ffa-4e89-b522-d04b4764fcc3-metrics-certs") pod "network-metrics-daemon-lpvsw" (UID: "621bb20d-2ffa-4e89-b522-d04b4764fcc3") : object "openshift-multus"/"metrics-daemon-secret" not registered
Jan 27 14:29:50 crc kubenswrapper[4698]: I0127 14:29:50.307051 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 14:29:50 crc kubenswrapper[4698]: I0127 14:29:50.307102 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 14:29:50 crc kubenswrapper[4698]: I0127 14:29:50.307111 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 14:29:50 crc kubenswrapper[4698]: I0127 14:29:50.307129 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 14:29:50 crc kubenswrapper[4698]: I0127 14:29:50.307141 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:29:50Z","lastTransitionTime":"2026-01-27T14:29:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 14:29:50 crc kubenswrapper[4698]: I0127 14:29:50.409599 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 14:29:50 crc kubenswrapper[4698]: I0127 14:29:50.409667 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 14:29:50 crc kubenswrapper[4698]: I0127 14:29:50.409681 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 14:29:50 crc kubenswrapper[4698]: I0127 14:29:50.409696 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 14:29:50 crc kubenswrapper[4698]: I0127 14:29:50.409709 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:29:50Z","lastTransitionTime":"2026-01-27T14:29:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
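
The nestedpendingoperations.go:348 entry above shows the volume manager's retry backoff at work: the mount of metrics-certs fails because the metrics-daemon-secret object is not yet registered with the kubelet, so the next attempt is pushed out by "durationBeforeRetry 8s". A sketch of an exponential policy of that general shape; the base and cap are assumptions chosen so that a few consecutive failures land on the visible 8s, not the kubelet's exact constants:

```go
package main

import (
	"fmt"
	"time"
)

// durationBeforeRetry doubles the wait after each consecutive failure,
// up to a cap. Constants are illustrative assumptions.
func durationBeforeRetry(failures int) time.Duration {
	const (
		base    = 500 * time.Millisecond
		maxWait = 2 * time.Minute
	)
	d := base
	for i := 0; i < failures; i++ {
		d *= 2
		if d > maxWait {
			return maxWait
		}
	}
	return d
}

func main() {
	for f := 0; f <= 5; f++ {
		fmt.Printf("failure %d -> wait %v\n", f, durationBeforeRetry(f))
	}
	// failure 4 -> wait 8s, matching the durationBeforeRetry in the log
}
```

The backoff typically resets once an attempt succeeds, so these waits stay short in a healthy cluster; here they will keep growing until the secret becomes available.
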
Jan 27 14:29:50 crc kubenswrapper[4698]: I0127 14:29:50.512375 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 14:29:50 crc kubenswrapper[4698]: I0127 14:29:50.512418 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 14:29:50 crc kubenswrapper[4698]: I0127 14:29:50.512429 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 14:29:50 crc kubenswrapper[4698]: I0127 14:29:50.512447 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 14:29:50 crc kubenswrapper[4698]: I0127 14:29:50.512458 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:29:50Z","lastTransitionTime":"2026-01-27T14:29:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 14:29:50 crc kubenswrapper[4698]: I0127 14:29:50.614813 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 14:29:50 crc kubenswrapper[4698]: I0127 14:29:50.614850 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 14:29:50 crc kubenswrapper[4698]: I0127 14:29:50.614857 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 14:29:50 crc kubenswrapper[4698]: I0127 14:29:50.614870 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 14:29:50 crc kubenswrapper[4698]: I0127 14:29:50.614879 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:29:50Z","lastTransitionTime":"2026-01-27T14:29:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 14:29:50 crc kubenswrapper[4698]: I0127 14:29:50.716989 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 14:29:50 crc kubenswrapper[4698]: I0127 14:29:50.717035 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 14:29:50 crc kubenswrapper[4698]: I0127 14:29:50.717047 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 14:29:50 crc kubenswrapper[4698]: I0127 14:29:50.717062 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 14:29:50 crc kubenswrapper[4698]: I0127 14:29:50.717073 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:29:50Z","lastTransitionTime":"2026-01-27T14:29:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:29:50 crc kubenswrapper[4698]: I0127 14:29:50.819912 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:29:50 crc kubenswrapper[4698]: I0127 14:29:50.819953 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:29:50 crc kubenswrapper[4698]: I0127 14:29:50.819963 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:29:50 crc kubenswrapper[4698]: I0127 14:29:50.819977 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:29:50 crc kubenswrapper[4698]: I0127 14:29:50.819988 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:29:50Z","lastTransitionTime":"2026-01-27T14:29:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:29:50 crc kubenswrapper[4698]: I0127 14:29:50.922119 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:29:50 crc kubenswrapper[4698]: I0127 14:29:50.922174 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:29:50 crc kubenswrapper[4698]: I0127 14:29:50.922186 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:29:50 crc kubenswrapper[4698]: I0127 14:29:50.922202 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:29:50 crc kubenswrapper[4698]: I0127 14:29:50.922216 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:29:50Z","lastTransitionTime":"2026-01-27T14:29:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:29:50 crc kubenswrapper[4698]: I0127 14:29:50.969862 4698 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-02 15:38:04.9524298 +0000 UTC Jan 27 14:29:50 crc kubenswrapper[4698]: I0127 14:29:50.991160 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 14:29:50 crc kubenswrapper[4698]: I0127 14:29:50.991234 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 14:29:50 crc kubenswrapper[4698]: E0127 14:29:50.991288 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 14:29:50 crc kubenswrapper[4698]: E0127 14:29:50.991347 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 14:29:51 crc kubenswrapper[4698]: I0127 14:29:51.024904 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:29:51 crc kubenswrapper[4698]: I0127 14:29:51.024950 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:29:51 crc kubenswrapper[4698]: I0127 14:29:51.024966 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:29:51 crc kubenswrapper[4698]: I0127 14:29:51.024986 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:29:51 crc kubenswrapper[4698]: I0127 14:29:51.024999 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:29:51Z","lastTransitionTime":"2026-01-27T14:29:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:29:51 crc kubenswrapper[4698]: I0127 14:29:51.127496 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:29:51 crc kubenswrapper[4698]: I0127 14:29:51.127551 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:29:51 crc kubenswrapper[4698]: I0127 14:29:51.127565 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:29:51 crc kubenswrapper[4698]: I0127 14:29:51.127586 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:29:51 crc kubenswrapper[4698]: I0127 14:29:51.127602 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:29:51Z","lastTransitionTime":"2026-01-27T14:29:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:29:51 crc kubenswrapper[4698]: I0127 14:29:51.229939 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:29:51 crc kubenswrapper[4698]: I0127 14:29:51.230022 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:29:51 crc kubenswrapper[4698]: I0127 14:29:51.230047 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:29:51 crc kubenswrapper[4698]: I0127 14:29:51.230072 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:29:51 crc kubenswrapper[4698]: I0127 14:29:51.230088 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:29:51Z","lastTransitionTime":"2026-01-27T14:29:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:29:51 crc kubenswrapper[4698]: I0127 14:29:51.333488 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:29:51 crc kubenswrapper[4698]: I0127 14:29:51.333559 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:29:51 crc kubenswrapper[4698]: I0127 14:29:51.333571 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:29:51 crc kubenswrapper[4698]: I0127 14:29:51.333602 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:29:51 crc kubenswrapper[4698]: I0127 14:29:51.333615 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:29:51Z","lastTransitionTime":"2026-01-27T14:29:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:29:51 crc kubenswrapper[4698]: I0127 14:29:51.436698 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:29:51 crc kubenswrapper[4698]: I0127 14:29:51.436753 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:29:51 crc kubenswrapper[4698]: I0127 14:29:51.436762 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:29:51 crc kubenswrapper[4698]: I0127 14:29:51.436776 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:29:51 crc kubenswrapper[4698]: I0127 14:29:51.436786 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:29:51Z","lastTransitionTime":"2026-01-27T14:29:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:29:51 crc kubenswrapper[4698]: I0127 14:29:51.484919 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:29:51 crc kubenswrapper[4698]: I0127 14:29:51.484976 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:29:51 crc kubenswrapper[4698]: I0127 14:29:51.484985 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:29:51 crc kubenswrapper[4698]: I0127 14:29:51.485002 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:29:51 crc kubenswrapper[4698]: I0127 14:29:51.485014 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:29:51Z","lastTransitionTime":"2026-01-27T14:29:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:29:51 crc kubenswrapper[4698]: E0127 14:29:51.500797 4698 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T14:29:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T14:29:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:51Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T14:29:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T14:29:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:51Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9d78a2be-22ac-47e6-a326-83038cc10e0c\\\",\\\"systemUUID\\\":\\\"3b71cf61-a3fa-4076-a23c-5d695e40fc0d\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:29:51Z is after 2025-08-24T17:21:41Z" Jan 27 14:29:51 crc kubenswrapper[4698]: I0127 14:29:51.512424 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:29:51 crc kubenswrapper[4698]: I0127 14:29:51.512504 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 27 14:29:51 crc kubenswrapper[4698]: I0127 14:29:51.512517 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:29:51 crc kubenswrapper[4698]: I0127 14:29:51.512540 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:29:51 crc kubenswrapper[4698]: I0127 14:29:51.512555 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:29:51Z","lastTransitionTime":"2026-01-27T14:29:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:29:51 crc kubenswrapper[4698]: E0127 14:29:51.525629 4698 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T14:29:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T14:29:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:51Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T14:29:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T14:29:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:51Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9d78a2be-22ac-47e6-a326-83038cc10e0c\\\",\\\"systemUUID\\\":\\\"3b71cf61-a3fa-4076-a23c-5d695e40fc0d\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:29:51Z is after 2025-08-24T17:21:41Z" Jan 27 14:29:51 crc kubenswrapper[4698]: I0127 14:29:51.529820 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:29:51 crc kubenswrapper[4698]: I0127 14:29:51.529854 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 27 14:29:51 crc kubenswrapper[4698]: I0127 14:29:51.529863 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:29:51 crc kubenswrapper[4698]: I0127 14:29:51.529880 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:29:51 crc kubenswrapper[4698]: I0127 14:29:51.529892 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:29:51Z","lastTransitionTime":"2026-01-27T14:29:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:29:51 crc kubenswrapper[4698]: E0127 14:29:51.542408 4698 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T14:29:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T14:29:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:51Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T14:29:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T14:29:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:51Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9d78a2be-22ac-47e6-a326-83038cc10e0c\\\",\\\"systemUUID\\\":\\\"3b71cf61-a3fa-4076-a23c-5d695e40fc0d\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:29:51Z is after 2025-08-24T17:21:41Z" Jan 27 14:29:51 crc kubenswrapper[4698]: I0127 14:29:51.547713 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:29:51 crc kubenswrapper[4698]: I0127 14:29:51.547774 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 27 14:29:51 crc kubenswrapper[4698]: I0127 14:29:51.547788 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:29:51 crc kubenswrapper[4698]: I0127 14:29:51.547809 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:29:51 crc kubenswrapper[4698]: I0127 14:29:51.547825 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:29:51Z","lastTransitionTime":"2026-01-27T14:29:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:29:51 crc kubenswrapper[4698]: E0127 14:29:51.561047 4698 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T14:29:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T14:29:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:51Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T14:29:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T14:29:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:51Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9d78a2be-22ac-47e6-a326-83038cc10e0c\\\",\\\"systemUUID\\\":\\\"3b71cf61-a3fa-4076-a23c-5d695e40fc0d\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:29:51Z is after 2025-08-24T17:21:41Z" Jan 27 14:29:51 crc kubenswrapper[4698]: I0127 14:29:51.564552 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:29:51 crc kubenswrapper[4698]: I0127 14:29:51.564589 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 27 14:29:51 crc kubenswrapper[4698]: I0127 14:29:51.564600 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:29:51 crc kubenswrapper[4698]: I0127 14:29:51.564615 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:29:51 crc kubenswrapper[4698]: I0127 14:29:51.564626 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:29:51Z","lastTransitionTime":"2026-01-27T14:29:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:29:51 crc kubenswrapper[4698]: E0127 14:29:51.576593 4698 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T14:29:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T14:29:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:51Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T14:29:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T14:29:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:51Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9d78a2be-22ac-47e6-a326-83038cc10e0c\\\",\\\"systemUUID\\\":\\\"3b71cf61-a3fa-4076-a23c-5d695e40fc0d\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:29:51Z is after 2025-08-24T17:21:41Z" Jan 27 14:29:51 crc kubenswrapper[4698]: E0127 14:29:51.576723 4698 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 27 14:29:51 crc kubenswrapper[4698]: I0127 14:29:51.578507 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
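Each attempt fails before the patch is evaluated at all: the API server must admit it through the node.network-node-identity.openshift.io validating webhook at https://127.0.0.1:9743, and that TLS handshake cannot be verified because the webhook's serving certificate expired at 2025-08-24T17:21:41Z while the node clock reads 2026-01-27T14:29:51Z. One way to confirm this from the node is to pull the presented leaf certificate and compare its validity window to the local clock; a minimal sketch, assuming the webhook endpoint is still listening (the address comes from the error above; nothing else is from the log):

```go
package main

import (
	"crypto/tls"
	"fmt"
	"time"
)

func main() {
	// InsecureSkipVerify lets us inspect the expired certificate instead of
	// failing the handshake the way the API server's webhook client does.
	conn, err := tls.Dial("tcp", "127.0.0.1:9743", &tls.Config{InsecureSkipVerify: true})
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	leaf := conn.ConnectionState().PeerCertificates[0]
	now := time.Now()
	fmt.Printf("subject=%q notBefore=%s notAfter=%s expired=%v\n",
		leaf.Subject.String(),
		leaf.NotBefore.Format(time.RFC3339),
		leaf.NotAfter.Format(time.RFC3339),
		now.After(leaf.NotAfter))
}
```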
event="NodeHasSufficientMemory" Jan 27 14:29:51 crc kubenswrapper[4698]: I0127 14:29:51.578536 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:29:51 crc kubenswrapper[4698]: I0127 14:29:51.578545 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:29:51 crc kubenswrapper[4698]: I0127 14:29:51.578560 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:29:51 crc kubenswrapper[4698]: I0127 14:29:51.578570 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:29:51Z","lastTransitionTime":"2026-01-27T14:29:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:29:51 crc kubenswrapper[4698]: I0127 14:29:51.680950 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:29:51 crc kubenswrapper[4698]: I0127 14:29:51.680993 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:29:51 crc kubenswrapper[4698]: I0127 14:29:51.681002 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:29:51 crc kubenswrapper[4698]: I0127 14:29:51.681021 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:29:51 crc kubenswrapper[4698]: I0127 14:29:51.681036 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:29:51Z","lastTransitionTime":"2026-01-27T14:29:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:29:51 crc kubenswrapper[4698]: I0127 14:29:51.783564 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:29:51 crc kubenswrapper[4698]: I0127 14:29:51.783591 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:29:51 crc kubenswrapper[4698]: I0127 14:29:51.783599 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:29:51 crc kubenswrapper[4698]: I0127 14:29:51.783612 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:29:51 crc kubenswrapper[4698]: I0127 14:29:51.783621 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:29:51Z","lastTransitionTime":"2026-01-27T14:29:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:29:51 crc kubenswrapper[4698]: I0127 14:29:51.885542 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:29:51 crc kubenswrapper[4698]: I0127 14:29:51.885584 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:29:51 crc kubenswrapper[4698]: I0127 14:29:51.885592 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:29:51 crc kubenswrapper[4698]: I0127 14:29:51.885608 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:29:51 crc kubenswrapper[4698]: I0127 14:29:51.885618 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:29:51Z","lastTransitionTime":"2026-01-27T14:29:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:29:51 crc kubenswrapper[4698]: I0127 14:29:51.970499 4698 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-29 13:41:54.848179375 +0000 UTC Jan 27 14:29:51 crc kubenswrapper[4698]: I0127 14:29:51.987412 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:29:51 crc kubenswrapper[4698]: I0127 14:29:51.987468 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:29:51 crc kubenswrapper[4698]: I0127 14:29:51.987483 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:29:51 crc kubenswrapper[4698]: I0127 14:29:51.987505 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:29:51 crc kubenswrapper[4698]: I0127 14:29:51.987522 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:29:51Z","lastTransitionTime":"2026-01-27T14:29:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:29:51 crc kubenswrapper[4698]: I0127 14:29:51.991789 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lpvsw" Jan 27 14:29:51 crc kubenswrapper[4698]: I0127 14:29:51.991806 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 14:29:51 crc kubenswrapper[4698]: E0127 14:29:51.992013 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-lpvsw" podUID="621bb20d-2ffa-4e89-b522-d04b4764fcc3" Jan 27 14:29:51 crc kubenswrapper[4698]: E0127 14:29:51.992181 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 14:29:52 crc kubenswrapper[4698]: I0127 14:29:52.090611 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:29:52 crc kubenswrapper[4698]: I0127 14:29:52.090681 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:29:52 crc kubenswrapper[4698]: I0127 14:29:52.090697 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:29:52 crc kubenswrapper[4698]: I0127 14:29:52.090726 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:29:52 crc kubenswrapper[4698]: I0127 14:29:52.090739 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:29:52Z","lastTransitionTime":"2026-01-27T14:29:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:29:52 crc kubenswrapper[4698]: I0127 14:29:52.193881 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:29:52 crc kubenswrapper[4698]: I0127 14:29:52.193950 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:29:52 crc kubenswrapper[4698]: I0127 14:29:52.193966 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:29:52 crc kubenswrapper[4698]: I0127 14:29:52.193996 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:29:52 crc kubenswrapper[4698]: I0127 14:29:52.194008 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:29:52Z","lastTransitionTime":"2026-01-27T14:29:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:29:52 crc kubenswrapper[4698]: I0127 14:29:52.296389 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:29:52 crc kubenswrapper[4698]: I0127 14:29:52.296445 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:29:52 crc kubenswrapper[4698]: I0127 14:29:52.296459 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:29:52 crc kubenswrapper[4698]: I0127 14:29:52.296480 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:29:52 crc kubenswrapper[4698]: I0127 14:29:52.296494 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:29:52Z","lastTransitionTime":"2026-01-27T14:29:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:29:52 crc kubenswrapper[4698]: I0127 14:29:52.398294 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:29:52 crc kubenswrapper[4698]: I0127 14:29:52.398357 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:29:52 crc kubenswrapper[4698]: I0127 14:29:52.398377 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:29:52 crc kubenswrapper[4698]: I0127 14:29:52.398393 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:29:52 crc kubenswrapper[4698]: I0127 14:29:52.398406 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:29:52Z","lastTransitionTime":"2026-01-27T14:29:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:29:52 crc kubenswrapper[4698]: I0127 14:29:52.501252 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:29:52 crc kubenswrapper[4698]: I0127 14:29:52.501291 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:29:52 crc kubenswrapper[4698]: I0127 14:29:52.501303 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:29:52 crc kubenswrapper[4698]: I0127 14:29:52.501318 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:29:52 crc kubenswrapper[4698]: I0127 14:29:52.501331 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:29:52Z","lastTransitionTime":"2026-01-27T14:29:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:29:52 crc kubenswrapper[4698]: I0127 14:29:52.604615 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:29:52 crc kubenswrapper[4698]: I0127 14:29:52.604697 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:29:52 crc kubenswrapper[4698]: I0127 14:29:52.604954 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:29:52 crc kubenswrapper[4698]: I0127 14:29:52.604984 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:29:52 crc kubenswrapper[4698]: I0127 14:29:52.605031 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:29:52Z","lastTransitionTime":"2026-01-27T14:29:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:29:52 crc kubenswrapper[4698]: I0127 14:29:52.707910 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:29:52 crc kubenswrapper[4698]: I0127 14:29:52.707962 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:29:52 crc kubenswrapper[4698]: I0127 14:29:52.707974 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:29:52 crc kubenswrapper[4698]: I0127 14:29:52.707992 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:29:52 crc kubenswrapper[4698]: I0127 14:29:52.708007 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:29:52Z","lastTransitionTime":"2026-01-27T14:29:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:29:52 crc kubenswrapper[4698]: I0127 14:29:52.810871 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:29:52 crc kubenswrapper[4698]: I0127 14:29:52.810929 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:29:52 crc kubenswrapper[4698]: I0127 14:29:52.810946 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:29:52 crc kubenswrapper[4698]: I0127 14:29:52.810970 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:29:52 crc kubenswrapper[4698]: I0127 14:29:52.810987 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:29:52Z","lastTransitionTime":"2026-01-27T14:29:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:29:52 crc kubenswrapper[4698]: I0127 14:29:52.913111 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:29:52 crc kubenswrapper[4698]: I0127 14:29:52.913197 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:29:52 crc kubenswrapper[4698]: I0127 14:29:52.913219 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:29:52 crc kubenswrapper[4698]: I0127 14:29:52.913242 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:29:52 crc kubenswrapper[4698]: I0127 14:29:52.913259 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:29:52Z","lastTransitionTime":"2026-01-27T14:29:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:29:52 crc kubenswrapper[4698]: I0127 14:29:52.971354 4698 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-18 06:49:19.062609768 +0000 UTC Jan 27 14:29:52 crc kubenswrapper[4698]: I0127 14:29:52.991857 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 14:29:52 crc kubenswrapper[4698]: I0127 14:29:52.991943 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 14:29:52 crc kubenswrapper[4698]: E0127 14:29:52.992005 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 14:29:52 crc kubenswrapper[4698]: E0127 14:29:52.992091 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 14:29:53 crc kubenswrapper[4698]: I0127 14:29:53.015744 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:29:53 crc kubenswrapper[4698]: I0127 14:29:53.015783 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:29:53 crc kubenswrapper[4698]: I0127 14:29:53.015795 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:29:53 crc kubenswrapper[4698]: I0127 14:29:53.015811 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:29:53 crc kubenswrapper[4698]: I0127 14:29:53.015822 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:29:53Z","lastTransitionTime":"2026-01-27T14:29:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:29:53 crc kubenswrapper[4698]: I0127 14:29:53.118218 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:29:53 crc kubenswrapper[4698]: I0127 14:29:53.118250 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:29:53 crc kubenswrapper[4698]: I0127 14:29:53.118259 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:29:53 crc kubenswrapper[4698]: I0127 14:29:53.118274 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:29:53 crc kubenswrapper[4698]: I0127 14:29:53.118287 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:29:53Z","lastTransitionTime":"2026-01-27T14:29:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:29:53 crc kubenswrapper[4698]: I0127 14:29:53.222578 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:29:53 crc kubenswrapper[4698]: I0127 14:29:53.222667 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:29:53 crc kubenswrapper[4698]: I0127 14:29:53.222679 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:29:53 crc kubenswrapper[4698]: I0127 14:29:53.222695 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:29:53 crc kubenswrapper[4698]: I0127 14:29:53.222722 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:29:53Z","lastTransitionTime":"2026-01-27T14:29:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:29:53 crc kubenswrapper[4698]: I0127 14:29:53.326378 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:29:53 crc kubenswrapper[4698]: I0127 14:29:53.326439 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:29:53 crc kubenswrapper[4698]: I0127 14:29:53.326457 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:29:53 crc kubenswrapper[4698]: I0127 14:29:53.326478 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:29:53 crc kubenswrapper[4698]: I0127 14:29:53.326490 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:29:53Z","lastTransitionTime":"2026-01-27T14:29:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:29:53 crc kubenswrapper[4698]: I0127 14:29:53.429550 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:29:53 crc kubenswrapper[4698]: I0127 14:29:53.429606 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:29:53 crc kubenswrapper[4698]: I0127 14:29:53.429618 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:29:53 crc kubenswrapper[4698]: I0127 14:29:53.429676 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:29:53 crc kubenswrapper[4698]: I0127 14:29:53.429690 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:29:53Z","lastTransitionTime":"2026-01-27T14:29:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:29:53 crc kubenswrapper[4698]: I0127 14:29:53.532258 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:29:53 crc kubenswrapper[4698]: I0127 14:29:53.532328 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:29:53 crc kubenswrapper[4698]: I0127 14:29:53.532341 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:29:53 crc kubenswrapper[4698]: I0127 14:29:53.532362 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:29:53 crc kubenswrapper[4698]: I0127 14:29:53.532374 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:29:53Z","lastTransitionTime":"2026-01-27T14:29:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:29:53 crc kubenswrapper[4698]: I0127 14:29:53.634681 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:29:53 crc kubenswrapper[4698]: I0127 14:29:53.634720 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:29:53 crc kubenswrapper[4698]: I0127 14:29:53.634731 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:29:53 crc kubenswrapper[4698]: I0127 14:29:53.634745 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:29:53 crc kubenswrapper[4698]: I0127 14:29:53.634757 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:29:53Z","lastTransitionTime":"2026-01-27T14:29:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:29:53 crc kubenswrapper[4698]: I0127 14:29:53.736856 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:29:53 crc kubenswrapper[4698]: I0127 14:29:53.736887 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:29:53 crc kubenswrapper[4698]: I0127 14:29:53.736895 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:29:53 crc kubenswrapper[4698]: I0127 14:29:53.736907 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:29:53 crc kubenswrapper[4698]: I0127 14:29:53.736916 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:29:53Z","lastTransitionTime":"2026-01-27T14:29:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:29:53 crc kubenswrapper[4698]: I0127 14:29:53.839386 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:29:53 crc kubenswrapper[4698]: I0127 14:29:53.839421 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:29:53 crc kubenswrapper[4698]: I0127 14:29:53.839432 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:29:53 crc kubenswrapper[4698]: I0127 14:29:53.839448 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:29:53 crc kubenswrapper[4698]: I0127 14:29:53.839459 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:29:53Z","lastTransitionTime":"2026-01-27T14:29:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:29:53 crc kubenswrapper[4698]: I0127 14:29:53.942151 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:29:53 crc kubenswrapper[4698]: I0127 14:29:53.942202 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:29:53 crc kubenswrapper[4698]: I0127 14:29:53.942214 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:29:53 crc kubenswrapper[4698]: I0127 14:29:53.942228 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:29:53 crc kubenswrapper[4698]: I0127 14:29:53.942238 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:29:53Z","lastTransitionTime":"2026-01-27T14:29:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:29:53 crc kubenswrapper[4698]: I0127 14:29:53.971736 4698 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-18 08:21:37.089728945 +0000 UTC Jan 27 14:29:53 crc kubenswrapper[4698]: I0127 14:29:53.992063 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lpvsw" Jan 27 14:29:53 crc kubenswrapper[4698]: I0127 14:29:53.992131 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 14:29:53 crc kubenswrapper[4698]: E0127 14:29:53.992226 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-lpvsw" podUID="621bb20d-2ffa-4e89-b522-d04b4764fcc3" Jan 27 14:29:53 crc kubenswrapper[4698]: E0127 14:29:53.992366 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 14:29:54 crc kubenswrapper[4698]: I0127 14:29:54.044690 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:29:54 crc kubenswrapper[4698]: I0127 14:29:54.044762 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:29:54 crc kubenswrapper[4698]: I0127 14:29:54.044775 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:29:54 crc kubenswrapper[4698]: I0127 14:29:54.044795 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:29:54 crc kubenswrapper[4698]: I0127 14:29:54.044811 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:29:54Z","lastTransitionTime":"2026-01-27T14:29:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:29:54 crc kubenswrapper[4698]: I0127 14:29:54.147543 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:29:54 crc kubenswrapper[4698]: I0127 14:29:54.147598 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:29:54 crc kubenswrapper[4698]: I0127 14:29:54.147606 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:29:54 crc kubenswrapper[4698]: I0127 14:29:54.147623 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:29:54 crc kubenswrapper[4698]: I0127 14:29:54.147655 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:29:54Z","lastTransitionTime":"2026-01-27T14:29:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:29:54 crc kubenswrapper[4698]: I0127 14:29:54.249832 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:29:54 crc kubenswrapper[4698]: I0127 14:29:54.249877 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:29:54 crc kubenswrapper[4698]: I0127 14:29:54.249886 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:29:54 crc kubenswrapper[4698]: I0127 14:29:54.249901 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:29:54 crc kubenswrapper[4698]: I0127 14:29:54.249911 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:29:54Z","lastTransitionTime":"2026-01-27T14:29:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:29:54 crc kubenswrapper[4698]: I0127 14:29:54.351896 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:29:54 crc kubenswrapper[4698]: I0127 14:29:54.351943 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:29:54 crc kubenswrapper[4698]: I0127 14:29:54.351954 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:29:54 crc kubenswrapper[4698]: I0127 14:29:54.351970 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:29:54 crc kubenswrapper[4698]: I0127 14:29:54.351982 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:29:54Z","lastTransitionTime":"2026-01-27T14:29:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:29:54 crc kubenswrapper[4698]: I0127 14:29:54.454128 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:29:54 crc kubenswrapper[4698]: I0127 14:29:54.454186 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:29:54 crc kubenswrapper[4698]: I0127 14:29:54.454205 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:29:54 crc kubenswrapper[4698]: I0127 14:29:54.454226 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:29:54 crc kubenswrapper[4698]: I0127 14:29:54.454240 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:29:54Z","lastTransitionTime":"2026-01-27T14:29:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:29:54 crc kubenswrapper[4698]: I0127 14:29:54.557207 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:29:54 crc kubenswrapper[4698]: I0127 14:29:54.557264 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:29:54 crc kubenswrapper[4698]: I0127 14:29:54.557276 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:29:54 crc kubenswrapper[4698]: I0127 14:29:54.557300 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:29:54 crc kubenswrapper[4698]: I0127 14:29:54.557312 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:29:54Z","lastTransitionTime":"2026-01-27T14:29:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:29:54 crc kubenswrapper[4698]: I0127 14:29:54.660681 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:29:54 crc kubenswrapper[4698]: I0127 14:29:54.660742 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:29:54 crc kubenswrapper[4698]: I0127 14:29:54.660754 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:29:54 crc kubenswrapper[4698]: I0127 14:29:54.660773 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:29:54 crc kubenswrapper[4698]: I0127 14:29:54.660787 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:29:54Z","lastTransitionTime":"2026-01-27T14:29:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:29:54 crc kubenswrapper[4698]: I0127 14:29:54.763864 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:29:54 crc kubenswrapper[4698]: I0127 14:29:54.763923 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:29:54 crc kubenswrapper[4698]: I0127 14:29:54.763932 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:29:54 crc kubenswrapper[4698]: I0127 14:29:54.763949 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:29:54 crc kubenswrapper[4698]: I0127 14:29:54.763958 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:29:54Z","lastTransitionTime":"2026-01-27T14:29:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:29:54 crc kubenswrapper[4698]: I0127 14:29:54.866566 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:29:54 crc kubenswrapper[4698]: I0127 14:29:54.866650 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:29:54 crc kubenswrapper[4698]: I0127 14:29:54.866667 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:29:54 crc kubenswrapper[4698]: I0127 14:29:54.866688 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:29:54 crc kubenswrapper[4698]: I0127 14:29:54.866700 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:29:54Z","lastTransitionTime":"2026-01-27T14:29:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:29:54 crc kubenswrapper[4698]: I0127 14:29:54.970302 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:29:54 crc kubenswrapper[4698]: I0127 14:29:54.970368 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:29:54 crc kubenswrapper[4698]: I0127 14:29:54.970381 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:29:54 crc kubenswrapper[4698]: I0127 14:29:54.970403 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:29:54 crc kubenswrapper[4698]: I0127 14:29:54.970416 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:29:54Z","lastTransitionTime":"2026-01-27T14:29:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:29:54 crc kubenswrapper[4698]: I0127 14:29:54.972397 4698 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-15 03:35:25.174823262 +0000 UTC Jan 27 14:29:54 crc kubenswrapper[4698]: I0127 14:29:54.991822 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 14:29:54 crc kubenswrapper[4698]: I0127 14:29:54.991880 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 14:29:54 crc kubenswrapper[4698]: E0127 14:29:54.991968 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 14:29:54 crc kubenswrapper[4698]: E0127 14:29:54.992083 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 14:29:55 crc kubenswrapper[4698]: I0127 14:29:55.007378 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:29:55Z is after 2025-08-24T17:21:41Z" Jan 27 14:29:55 crc kubenswrapper[4698]: I0127 14:29:55.020579 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-g9vj8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2776dfc9-913b-42b0-9cf2-6fea98d83bc9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f471cea0dc934e7c77ac6ead7ec77f8feb7379b50a7bb5255d7100fbea635a3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bk7ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:29:28Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-g9vj8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-01-27T14:29:55Z is after 2025-08-24T17:21:41Z" Jan 27 14:29:55 crc kubenswrapper[4698]: I0127 14:29:55.035239 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-ndrd6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e403fc5-7005-474c-8c75-b7906b481677\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6f9227503b5cf0962f3fb8ece18f54d84ff2cdfe5628ebf0fa0cef7fe5695c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tt2dz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c533539135e25b310541c9f0b12adcf526b4651a2f6272bce12ea9686dd39ac9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tt2dz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:29:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-ndrd6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:29:55Z is after 2025-08-24T17:21:41Z" Jan 27 14:29:55 crc kubenswrapper[4698]: I0127 14:29:55.054302 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-vg6nd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e045926d-2303-47ea-b25d-dc23982427e4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e1bf1729f089f253e8e1259e40ef976b85e433358642b6231f081c494851048\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66snv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c935f517f9ca140d63d1637ef8c01a4a41b94fa72e1086eb6184ae78a6d8da8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c935f517f9ca140d63d1637ef8c01a4a41b94fa72e1086eb6184ae78a6d8da8c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:29:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:29:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66snv\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa061cb18143e07dc72a6b75d78b5959b40a9969030ff21d402863b85eba6f91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa061cb18143e07dc72a6b75d78b5959b40a9969030ff21d402863b85eba6f91\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:29:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:29:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66snv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be40eff60b17b2c010a5e4394fb6843ceb8ee2752781ccbf0030afdd212b81e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be40eff60b17b2c010a5e4394fb6843ceb8ee2752781ccbf0030afdd212b81e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:29:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:29:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66snv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3018d87c9354e8b8d90c59a4a45e982d0b35b89a0b0a9fde703e9452cf5f0024\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3018d87c9354e8b8d90c59a4a45e982d0b35b89a0b0a9fde703e9452cf5f0024\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:29:32Z\\\",\\\"reason\\\":\\\"Completed
\\\",\\\"startedAt\\\":\\\"2026-01-27T14:29:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66snv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de493498bbac49791f6b7409916fbbb62f64bed0def214ea6a06f5182bf7ad3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://de493498bbac49791f6b7409916fbbb62f64bed0def214ea6a06f5182bf7ad3b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:29:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:29:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66snv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://14af7d5dd3526aef22f7e95409ff88c3bbbc29e6dd9dd7c41b9be5bab49bf074\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://14af7d5dd3526aef22f7e95409ff88c3bbbc29e6dd9dd7c41b9be5bab49bf074\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:29:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:29:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66snv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:29:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-vg6nd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:29:55Z is after 
2025-08-24T17:21:41Z" Jan 27 14:29:55 crc kubenswrapper[4698]: I0127 14:29:55.072230 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:29:55 crc kubenswrapper[4698]: I0127 14:29:55.072275 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:29:55 crc kubenswrapper[4698]: I0127 14:29:55.072320 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:29:55 crc kubenswrapper[4698]: I0127 14:29:55.072347 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:29:55 crc kubenswrapper[4698]: I0127 14:29:55.072363 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:29:55Z","lastTransitionTime":"2026-01-27T14:29:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:29:55 crc kubenswrapper[4698]: I0127 14:29:55.080675 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xmpm6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c59a9d01-79ce-42d9-a41d-39d7d73cb03e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:29Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d81778ae40167de932debf7266f3c793d77edeace2116220c21cf89e15e4cc98\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r7gpg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea6d17f5861064198539f0b84b559f9d28cfa5decaa89a9059469af5a3e12e09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r7gpg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c98fd6e3005c979f95419686ec2143aa282fa56d7306ecaff766759e34b68b2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r7gpg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e38585fe9db52e6e7cc5c3a41e23a0bb761977a3b25fd281d86458f422e3791\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r7gpg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b80c023f92c9854f8c16b6b50c10dac58f968938615375019ba2875d9cd69e5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r7gpg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://87e7499ab247dff50eee2a5e113ed36ff00c467e8e1bcdfee45e8835815d4b81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r7gpg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://acf945703822364baf4ee29480af06c0936afa4f
eddc6f7cba2a901e15a56fec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://acf945703822364baf4ee29480af06c0936afa4feddc6f7cba2a901e15a56fec\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T14:29:43Z\\\",\\\"message\\\":\\\"ol-plane-machine-set-operator]} name:Service_openshift-machine-api/control-plane-machine-set-operator_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.41:9443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {589f95f7-f3e2-4140-80ed-9a0717201481}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0127 14:29:43.511136 6165 obj_retry.go:303] Retry object setup: *v1.Pod openshift-multus/multus-additional-cni-plugins-vg6nd\\\\nI0127 14:29:43.511153 6165 obj_retry.go:365] Adding new object: *v1.Pod openshift-multus/multus-additional-cni-plugins-vg6nd\\\\nI0127 14:29:43.511162 6165 ovn.go:134] Ensuring zone local for Pod openshift-multus/multus-additional-cni-plugins-vg6nd in node crc\\\\nI0127 14:29:43.511169 6165 obj_retry.go:386] Retry successful for *v1.Pod openshift-multus/multus-additional-cni-plugins-vg6nd after 0 failed attempt(s)\\\\nI0127 14:29:43.511175 6165 default_network_controller.go:776] Recording success event on pod openshift-multus/multus-additional-cni-plugins-vg6nd\\\\nF0127 14:29:43.511046 6165 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T14:29:42Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-xmpm6_openshift-ovn-kubernetes(c59a9d01-79ce-42d9-a41d-39d7d73cb03e)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r7gpg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0356c9c90e9c8e65b9452361aa486ffe95757bd69803077b32fe7ba0223ae1da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r7gpg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3a80917737e61fdb510dabf332c3bd87fb858d27849c42b77703a8becb66b4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b3a80917737e61fdb510dabf332c3bd87fb858d27849c42b77703a8becb66b4c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:29:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:29:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r7gpg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:29:29Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-xmpm6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:29:55Z is after 2025-08-24T17:21:41Z" Jan 27 14:29:55 crc kubenswrapper[4698]: I0127 14:29:55.092009 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-lpvsw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"621bb20d-2ffa-4e89-b522-d04b4764fcc3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:42Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:42Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gz87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gz87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:29:42Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-lpvsw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:29:55Z is after 2025-08-24T17:21:41Z" Jan 27 14:29:55 crc kubenswrapper[4698]: I0127 14:29:55.103933 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae7402df1ba165eb747293c1d23e4ea89c50cf2342bf52db437c11c283f68755\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:29:55Z is after 2025-08-24T17:21:41Z" Jan 27 14:29:55 crc kubenswrapper[4698]: I0127 14:29:55.117082 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:24Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:29:55Z is after 2025-08-24T17:21:41Z" Jan 27 14:29:55 crc kubenswrapper[4698]: I0127 14:29:55.131256 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8dae57de05788973b988fee332168db475b66bb71b4a0570e7d95341dfe84a96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:29:55Z is after 2025-08-24T17:21:41Z" Jan 27 14:29:55 crc kubenswrapper[4698]: I0127 14:29:55.145773 4698 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-image-registry/node-ca-flx9b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2c9fcf55-4a50-4a87-937b-975bc7e00bfa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0cb86e0c67991a076cebc33eb390f49c0f4c40332c423936dd8f3c59b9f7474b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-spqpw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:29:31Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-flx9b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:29:55Z is after 2025-08-24T17:21:41Z" Jan 27 14:29:55 crc kubenswrapper[4698]: I0127 14:29:55.160257 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:24Z\\\",\\\"message\\\":\\\"containers with 
unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:29:55Z is after 2025-08-24T17:21:41Z" Jan 27 14:29:55 crc kubenswrapper[4698]: I0127 14:29:55.174521 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://625a1f8c29f9277903045e66df7a55a4d791239b141aba99572db228ea2735ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://314dcaad0f8949a4671b47667f654bdfacfb6b200438cfbd70401455f6e188b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID
\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:29:55Z is after 2025-08-24T17:21:41Z" Jan 27 14:29:55 crc kubenswrapper[4698]: I0127 14:29:55.175299 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:29:55 crc kubenswrapper[4698]: I0127 14:29:55.175351 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:29:55 crc kubenswrapper[4698]: I0127 14:29:55.175365 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:29:55 crc kubenswrapper[4698]: I0127 14:29:55.175385 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:29:55 crc kubenswrapper[4698]: I0127 14:29:55.175398 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:29:55Z","lastTransitionTime":"2026-01-27T14:29:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:29:55 crc kubenswrapper[4698]: I0127 14:29:55.191943 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0ce1118e-e5ad-4adb-8d50-c758116b45ec\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dbbe49aa918c711f9ec638a3fb42ef3be38a9ed3bc0cdde639ba8bba11eab4de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9525d1a6372b19bc2f30eb4e8e522833badb6319bac25ecae77b3313acf1b1a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e384a95fdadc32733a7213ba468af64a1a33e7dd08722cbffde3ad0c461d8ba\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/ku
bernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9ee5f5f6bb400359196a28fe605c173c38223c8982a991750c91601aeace150\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e780429dcca871712b55832b0cd0b3e78b7343569cfa71a87bbb2be1bd3f129\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T14:29:24Z\\\",\\\"message\\\":\\\"espace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0127 14:29:09.491666 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 14:29:09.493926 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-900123067/tls.crt::/tmp/serving-cert-900123067/tls.key\\\\\\\"\\\\nI0127 14:29:24.319898 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 14:29:24.322681 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 14:29:24.322759 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 14:29:24.322814 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 14:29:24.322847 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 14:29:24.331581 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 14:29:24.331608 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 14:29:24.331613 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 14:29:24.331618 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 14:29:24.331622 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 14:29:24.331626 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 14:29:24.331630 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 14:29:24.331748 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 14:29:24.333757 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T14:29:08Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a526cfa90f2ba7d074aa6007c909e68f72d5e7a4b91d893c955e7460ff208cfb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:08Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e1a3e5a71b3094750c736335cd70f45977566e58702231351d9d6b195a0cb4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5e1a3e5a71b3094750c736335cd70f45977566e58702231351d9d6b195a0cb4b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:29:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:29:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:29:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:29:55Z is after 2025-08-24T17:21:41Z" Jan 27 14:29:55 crc kubenswrapper[4698]: I0127 14:29:55.209044 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"44c70587-abb1-4e02-ab7e-223d57817925\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b1a209c31294456c43022bb9df5fd230415596fc43ecdb2b6349114989c3ad8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://66b4b527d7836ceed419f9b8a1d851047efc8c052c110ac857316a6a73444333\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://94a4d586611f4ca0864567739d1ed225b7c63ebd6e6be9e5c0e385379de2ad01\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://95357819c206b188c1fd6bb937b2a57b480a9a61e95143de444141b16e4963ea\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:29:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:29:55Z is after 2025-08-24T17:21:41Z"
Jan 27 14:29:55 crc kubenswrapper[4698]: I0127 14:29:55.222818 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-g2kkn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4e135f0c-0c36-44f4-afeb-06994affb352\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d17862aabd67b485023ca018499f7224e0bd91ec0aed6bbce5a187f319e3140\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97l66\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:29:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-g2kkn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:29:55Z is after 2025-08-24T17:21:41Z"
Jan 27 14:29:55 crc kubenswrapper[4698]: I0127 14:29:55.235876 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zpvcj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"709dfdd7-f928-4f0b-8f5a-c356614219cb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6bec8d64d7f0b7a45b57b810dcf1de846a388902b480230742da0a40da4e67b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pftbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5fb54d7728b2438444c924295722867c673c8b82768ed17b6dd2104d404e198e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pftbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:29:41Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-zpvcj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:29:55Z is after 2025-08-24T17:21:41Z"
Jan 27 14:29:55 crc kubenswrapper[4698]: I0127 14:29:55.277838 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 14:29:55 crc kubenswrapper[4698]: I0127 14:29:55.277876 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 14:29:55 crc kubenswrapper[4698]: I0127 14:29:55.277886 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 14:29:55 crc kubenswrapper[4698]: I0127 14:29:55.277901 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 14:29:55 crc kubenswrapper[4698]: I0127 14:29:55.277911 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:29:55Z","lastTransitionTime":"2026-01-27T14:29:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 14:29:55 crc kubenswrapper[4698]: I0127 14:29:55.379694 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 14:29:55 crc kubenswrapper[4698]: I0127 14:29:55.379740 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 14:29:55 crc kubenswrapper[4698]: I0127 14:29:55.379751 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 14:29:55 crc kubenswrapper[4698]: I0127 14:29:55.379768 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 14:29:55 crc kubenswrapper[4698]: I0127 14:29:55.379779 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:29:55Z","lastTransitionTime":"2026-01-27T14:29:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 14:29:55 crc kubenswrapper[4698]: I0127 14:29:55.482503 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 14:29:55 crc kubenswrapper[4698]: I0127 14:29:55.482544 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 14:29:55 crc kubenswrapper[4698]: I0127 14:29:55.482563 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 14:29:55 crc kubenswrapper[4698]: I0127 14:29:55.482579 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 14:29:55 crc kubenswrapper[4698]: I0127 14:29:55.482591 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:29:55Z","lastTransitionTime":"2026-01-27T14:29:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 14:29:55 crc kubenswrapper[4698]: I0127 14:29:55.585551 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 14:29:55 crc kubenswrapper[4698]: I0127 14:29:55.585589 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 14:29:55 crc kubenswrapper[4698]: I0127 14:29:55.585599 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 14:29:55 crc kubenswrapper[4698]: I0127 14:29:55.585615 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 14:29:55 crc kubenswrapper[4698]: I0127 14:29:55.585625 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:29:55Z","lastTransitionTime":"2026-01-27T14:29:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 14:29:55 crc kubenswrapper[4698]: I0127 14:29:55.687974 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 14:29:55 crc kubenswrapper[4698]: I0127 14:29:55.688031 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 14:29:55 crc kubenswrapper[4698]: I0127 14:29:55.688047 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 14:29:55 crc kubenswrapper[4698]: I0127 14:29:55.688068 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 14:29:55 crc kubenswrapper[4698]: I0127 14:29:55.688085 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:29:55Z","lastTransitionTime":"2026-01-27T14:29:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 14:29:55 crc kubenswrapper[4698]: I0127 14:29:55.790771 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 14:29:55 crc kubenswrapper[4698]: I0127 14:29:55.790821 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 14:29:55 crc kubenswrapper[4698]: I0127 14:29:55.790834 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 14:29:55 crc kubenswrapper[4698]: I0127 14:29:55.790851 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 14:29:55 crc kubenswrapper[4698]: I0127 14:29:55.790864 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:29:55Z","lastTransitionTime":"2026-01-27T14:29:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 14:29:55 crc kubenswrapper[4698]: I0127 14:29:55.893855 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 14:29:55 crc kubenswrapper[4698]: I0127 14:29:55.893899 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 14:29:55 crc kubenswrapper[4698]: I0127 14:29:55.893907 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 14:29:55 crc kubenswrapper[4698]: I0127 14:29:55.893921 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 14:29:55 crc kubenswrapper[4698]: I0127 14:29:55.893930 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:29:55Z","lastTransitionTime":"2026-01-27T14:29:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 14:29:55 crc kubenswrapper[4698]: I0127 14:29:55.973486 4698 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-13 16:11:53.326162531 +0000 UTC
Jan 27 14:29:55 crc kubenswrapper[4698]: I0127 14:29:55.991866 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lpvsw"
Jan 27 14:29:55 crc kubenswrapper[4698]: I0127 14:29:55.991951 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 27 14:29:55 crc kubenswrapper[4698]: E0127 14:29:55.992027 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lpvsw" podUID="621bb20d-2ffa-4e89-b522-d04b4764fcc3"
Jan 27 14:29:55 crc kubenswrapper[4698]: E0127 14:29:55.992247 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 27 14:29:55 crc kubenswrapper[4698]: I0127 14:29:55.992905 4698 scope.go:117] "RemoveContainer" containerID="acf945703822364baf4ee29480af06c0936afa4feddc6f7cba2a901e15a56fec"
Jan 27 14:29:55 crc kubenswrapper[4698]: I0127 14:29:55.995716 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 14:29:55 crc kubenswrapper[4698]: I0127 14:29:55.995744 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 14:29:55 crc kubenswrapper[4698]: I0127 14:29:55.995753 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 14:29:55 crc kubenswrapper[4698]: I0127 14:29:55.995768 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 14:29:55 crc kubenswrapper[4698]: I0127 14:29:55.995780 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:29:55Z","lastTransitionTime":"2026-01-27T14:29:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 14:29:56 crc kubenswrapper[4698]: I0127 14:29:56.098983 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 14:29:56 crc kubenswrapper[4698]: I0127 14:29:56.099058 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 14:29:56 crc kubenswrapper[4698]: I0127 14:29:56.099082 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 14:29:56 crc kubenswrapper[4698]: I0127 14:29:56.099112 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 14:29:56 crc kubenswrapper[4698]: I0127 14:29:56.099136 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:29:56Z","lastTransitionTime":"2026-01-27T14:29:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 14:29:56 crc kubenswrapper[4698]: I0127 14:29:56.202073 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 14:29:56 crc kubenswrapper[4698]: I0127 14:29:56.202128 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 14:29:56 crc kubenswrapper[4698]: I0127 14:29:56.202139 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 14:29:56 crc kubenswrapper[4698]: I0127 14:29:56.202157 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 14:29:56 crc kubenswrapper[4698]: I0127 14:29:56.202176 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:29:56Z","lastTransitionTime":"2026-01-27T14:29:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 14:29:56 crc kubenswrapper[4698]: I0127 14:29:56.304212 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 14:29:56 crc kubenswrapper[4698]: I0127 14:29:56.304492 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 14:29:56 crc kubenswrapper[4698]: I0127 14:29:56.304569 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 14:29:56 crc kubenswrapper[4698]: I0127 14:29:56.304665 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 14:29:56 crc kubenswrapper[4698]: I0127 14:29:56.304770 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:29:56Z","lastTransitionTime":"2026-01-27T14:29:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 14:29:56 crc kubenswrapper[4698]: I0127 14:29:56.407310 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 14:29:56 crc kubenswrapper[4698]: I0127 14:29:56.407359 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 14:29:56 crc kubenswrapper[4698]: I0127 14:29:56.407370 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 14:29:56 crc kubenswrapper[4698]: I0127 14:29:56.407386 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 14:29:56 crc kubenswrapper[4698]: I0127 14:29:56.407401 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:29:56Z","lastTransitionTime":"2026-01-27T14:29:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 14:29:56 crc kubenswrapper[4698]: I0127 14:29:56.509798 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 14:29:56 crc kubenswrapper[4698]: I0127 14:29:56.509835 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 14:29:56 crc kubenswrapper[4698]: I0127 14:29:56.509861 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 14:29:56 crc kubenswrapper[4698]: I0127 14:29:56.509876 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 14:29:56 crc kubenswrapper[4698]: I0127 14:29:56.509886 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:29:56Z","lastTransitionTime":"2026-01-27T14:29:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 14:29:56 crc kubenswrapper[4698]: I0127 14:29:56.612295 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 14:29:56 crc kubenswrapper[4698]: I0127 14:29:56.612389 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 14:29:56 crc kubenswrapper[4698]: I0127 14:29:56.612402 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 14:29:56 crc kubenswrapper[4698]: I0127 14:29:56.612417 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 14:29:56 crc kubenswrapper[4698]: I0127 14:29:56.612429 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:29:56Z","lastTransitionTime":"2026-01-27T14:29:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 14:29:56 crc kubenswrapper[4698]: I0127 14:29:56.714653 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 14:29:56 crc kubenswrapper[4698]: I0127 14:29:56.714689 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 14:29:56 crc kubenswrapper[4698]: I0127 14:29:56.714698 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 14:29:56 crc kubenswrapper[4698]: I0127 14:29:56.714711 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 14:29:56 crc kubenswrapper[4698]: I0127 14:29:56.714720 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:29:56Z","lastTransitionTime":"2026-01-27T14:29:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 14:29:56 crc kubenswrapper[4698]: I0127 14:29:56.766393 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 27 14:29:56 crc kubenswrapper[4698]: E0127 14:29:56.766574 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 14:30:28.766546291 +0000 UTC m=+84.443323756 (durationBeforeRetry 32s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 14:29:56 crc kubenswrapper[4698]: I0127 14:29:56.817464 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 14:29:56 crc kubenswrapper[4698]: I0127 14:29:56.817570 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 14:29:56 crc kubenswrapper[4698]: I0127 14:29:56.817585 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 14:29:56 crc kubenswrapper[4698]: I0127 14:29:56.817611 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 14:29:56 crc kubenswrapper[4698]: I0127 14:29:56.817626 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:29:56Z","lastTransitionTime":"2026-01-27T14:29:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 14:29:56 crc kubenswrapper[4698]: I0127 14:29:56.867296 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 27 14:29:56 crc kubenswrapper[4698]: I0127 14:29:56.867336 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 27 14:29:56 crc kubenswrapper[4698]: I0127 14:29:56.867355 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 27 14:29:56 crc kubenswrapper[4698]: I0127 14:29:56.867383 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 27 14:29:56 crc kubenswrapper[4698]: E0127 14:29:56.867474 4698 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered
Jan 27 14:29:56 crc kubenswrapper[4698]: E0127 14:29:56.867520 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-27 14:30:28.867505196 +0000 UTC m=+84.544282661 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered
Jan 27 14:29:56 crc kubenswrapper[4698]: E0127 14:29:56.867518 4698 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered
Jan 27 14:29:56 crc kubenswrapper[4698]: E0127 14:29:56.867594 4698 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Jan 27 14:29:56 crc kubenswrapper[4698]: E0127 14:29:56.867618 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-27 14:30:28.867593678 +0000 UTC m=+84.544371223 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered
Jan 27 14:29:56 crc kubenswrapper[4698]: E0127 14:29:56.867661 4698 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Jan 27 14:29:56 crc kubenswrapper[4698]: E0127 14:29:56.867679 4698 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 27 14:29:56 crc kubenswrapper[4698]: E0127 14:29:56.867594 4698 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Jan 27 14:29:56 crc kubenswrapper[4698]: E0127 14:29:56.867734 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-27 14:30:28.867718121 +0000 UTC m=+84.544495586 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 27 14:29:56 crc kubenswrapper[4698]: E0127 14:29:56.867896 4698 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Jan 27 14:29:56 crc kubenswrapper[4698]: E0127 14:29:56.867939 4698 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 27 14:29:56 crc kubenswrapper[4698]: E0127 14:29:56.868065 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-27 14:30:28.86803752 +0000 UTC m=+84.544814985 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 27 14:29:56 crc kubenswrapper[4698]: I0127 14:29:56.920073 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 14:29:56 crc kubenswrapper[4698]: I0127 14:29:56.920107 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 14:29:56 crc kubenswrapper[4698]: I0127 14:29:56.920117 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 14:29:56 crc kubenswrapper[4698]: I0127 14:29:56.920132 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 14:29:56 crc kubenswrapper[4698]: I0127 14:29:56.920142 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:29:56Z","lastTransitionTime":"2026-01-27T14:29:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 14:29:56 crc kubenswrapper[4698]: I0127 14:29:56.974454 4698 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-09 16:30:16.032838615 +0000 UTC
Jan 27 14:29:56 crc kubenswrapper[4698]: I0127 14:29:56.991329 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 27 14:29:56 crc kubenswrapper[4698]: I0127 14:29:56.991409 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 27 14:29:56 crc kubenswrapper[4698]: E0127 14:29:56.991476 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 27 14:29:56 crc kubenswrapper[4698]: E0127 14:29:56.991608 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 27 14:29:57 crc kubenswrapper[4698]: I0127 14:29:57.021834 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 14:29:57 crc kubenswrapper[4698]: I0127 14:29:57.021876 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 14:29:57 crc kubenswrapper[4698]: I0127 14:29:57.021885 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 14:29:57 crc kubenswrapper[4698]: I0127 14:29:57.021900 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 14:29:57 crc kubenswrapper[4698]: I0127 14:29:57.021910 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:29:57Z","lastTransitionTime":"2026-01-27T14:29:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 14:29:57 crc kubenswrapper[4698]: I0127 14:29:57.124016 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 14:29:57 crc kubenswrapper[4698]: I0127 14:29:57.124061 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 14:29:57 crc kubenswrapper[4698]: I0127 14:29:57.124072 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 14:29:57 crc kubenswrapper[4698]: I0127 14:29:57.124088 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 14:29:57 crc kubenswrapper[4698]: I0127 14:29:57.124099 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:29:57Z","lastTransitionTime":"2026-01-27T14:29:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 14:29:57 crc kubenswrapper[4698]: I0127 14:29:57.226524 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 14:29:57 crc kubenswrapper[4698]: I0127 14:29:57.226581 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 14:29:57 crc kubenswrapper[4698]: I0127 14:29:57.226593 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 14:29:57 crc kubenswrapper[4698]: I0127 14:29:57.226607 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 14:29:57 crc kubenswrapper[4698]: I0127 14:29:57.226617 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:29:57Z","lastTransitionTime":"2026-01-27T14:29:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 14:29:57 crc kubenswrapper[4698]: I0127 14:29:57.268155 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-xmpm6_c59a9d01-79ce-42d9-a41d-39d7d73cb03e/ovnkube-controller/1.log"
Jan 27 14:29:57 crc kubenswrapper[4698]: I0127 14:29:57.271101 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xmpm6" event={"ID":"c59a9d01-79ce-42d9-a41d-39d7d73cb03e","Type":"ContainerStarted","Data":"0a99e5f8df86301415e9b49b4f197383e67e6a1b00cbf7e4e7f543002a0f1db4"}
Jan 27 14:29:57 crc kubenswrapper[4698]: I0127 14:29:57.329404 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 14:29:57 crc kubenswrapper[4698]: I0127 14:29:57.329449 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 14:29:57 crc kubenswrapper[4698]: I0127 14:29:57.329458 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 14:29:57 crc kubenswrapper[4698]: I0127 14:29:57.329472 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 14:29:57 crc kubenswrapper[4698]: I0127 14:29:57.329482 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:29:57Z","lastTransitionTime":"2026-01-27T14:29:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 14:29:57 crc kubenswrapper[4698]: I0127 14:29:57.431524 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 14:29:57 crc kubenswrapper[4698]: I0127 14:29:57.431561 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 14:29:57 crc kubenswrapper[4698]: I0127 14:29:57.431569 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 14:29:57 crc kubenswrapper[4698]: I0127 14:29:57.431583 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 14:29:57 crc kubenswrapper[4698]: I0127 14:29:57.431593 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:29:57Z","lastTransitionTime":"2026-01-27T14:29:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 14:29:57 crc kubenswrapper[4698]: I0127 14:29:57.533882 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 14:29:57 crc kubenswrapper[4698]: I0127 14:29:57.533921 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 14:29:57 crc kubenswrapper[4698]: I0127 14:29:57.533931 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 14:29:57 crc kubenswrapper[4698]: I0127 14:29:57.533946 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 14:29:57 crc kubenswrapper[4698]: I0127 14:29:57.533957 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:29:57Z","lastTransitionTime":"2026-01-27T14:29:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 14:29:57 crc kubenswrapper[4698]: I0127 14:29:57.635940 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 14:29:57 crc kubenswrapper[4698]: I0127 14:29:57.635980 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 14:29:57 crc kubenswrapper[4698]: I0127 14:29:57.635990 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 14:29:57 crc kubenswrapper[4698]: I0127 14:29:57.636004 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 14:29:57 crc kubenswrapper[4698]: I0127 14:29:57.636012 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:29:57Z","lastTransitionTime":"2026-01-27T14:29:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 14:29:57 crc kubenswrapper[4698]: I0127 14:29:57.738485 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 14:29:57 crc kubenswrapper[4698]: I0127 14:29:57.738525 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 14:29:57 crc kubenswrapper[4698]: I0127 14:29:57.738534 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 14:29:57 crc kubenswrapper[4698]: I0127 14:29:57.738551 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 14:29:57 crc kubenswrapper[4698]: I0127 14:29:57.738564 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:29:57Z","lastTransitionTime":"2026-01-27T14:29:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 14:29:57 crc kubenswrapper[4698]: I0127 14:29:57.841070 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 14:29:57 crc kubenswrapper[4698]: I0127 14:29:57.841103 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 14:29:57 crc kubenswrapper[4698]: I0127 14:29:57.841112 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 14:29:57 crc kubenswrapper[4698]: I0127 14:29:57.841125 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 14:29:57 crc kubenswrapper[4698]: I0127 14:29:57.841134 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:29:57Z","lastTransitionTime":"2026-01-27T14:29:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 14:29:57 crc kubenswrapper[4698]: I0127 14:29:57.943389 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 14:29:57 crc kubenswrapper[4698]: I0127 14:29:57.943421 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 14:29:57 crc kubenswrapper[4698]: I0127 14:29:57.943449 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 14:29:57 crc kubenswrapper[4698]: I0127 14:29:57.943463 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 14:29:57 crc kubenswrapper[4698]: I0127 14:29:57.943472 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:29:57Z","lastTransitionTime":"2026-01-27T14:29:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 14:29:57 crc kubenswrapper[4698]: I0127 14:29:57.977068 4698 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-08 20:12:36.516285514 +0000 UTC
Jan 27 14:29:57 crc kubenswrapper[4698]: I0127 14:29:57.992106 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lpvsw"
Jan 27 14:29:57 crc kubenswrapper[4698]: I0127 14:29:57.992184 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 27 14:29:57 crc kubenswrapper[4698]: E0127 14:29:57.992256 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lpvsw" podUID="621bb20d-2ffa-4e89-b522-d04b4764fcc3"
Jan 27 14:29:57 crc kubenswrapper[4698]: E0127 14:29:57.992321 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 27 14:29:58 crc kubenswrapper[4698]: I0127 14:29:58.046598 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 14:29:58 crc kubenswrapper[4698]: I0127 14:29:58.046663 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 14:29:58 crc kubenswrapper[4698]: I0127 14:29:58.046675 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 14:29:58 crc kubenswrapper[4698]: I0127 14:29:58.046695 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 14:29:58 crc kubenswrapper[4698]: I0127 14:29:58.046709 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:29:58Z","lastTransitionTime":"2026-01-27T14:29:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 14:29:58 crc kubenswrapper[4698]: I0127 14:29:58.149032 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 14:29:58 crc kubenswrapper[4698]: I0127 14:29:58.149065 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 14:29:58 crc kubenswrapper[4698]: I0127 14:29:58.149074 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 14:29:58 crc kubenswrapper[4698]: I0127 14:29:58.149088 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 14:29:58 crc kubenswrapper[4698]: I0127 14:29:58.149105 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:29:58Z","lastTransitionTime":"2026-01-27T14:29:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 14:29:58 crc kubenswrapper[4698]: I0127 14:29:58.251420 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 14:29:58 crc kubenswrapper[4698]: I0127 14:29:58.251462 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 14:29:58 crc kubenswrapper[4698]: I0127 14:29:58.251471 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 14:29:58 crc kubenswrapper[4698]: I0127 14:29:58.251485 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 14:29:58 crc kubenswrapper[4698]: I0127 14:29:58.251493 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:29:58Z","lastTransitionTime":"2026-01-27T14:29:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 14:29:58 crc kubenswrapper[4698]: I0127 14:29:58.275879 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-xmpm6"
Jan 27 14:29:58 crc kubenswrapper[4698]: I0127 14:29:58.289124 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:24Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:29:58Z is after 2025-08-24T17:21:41Z"
Jan 27 14:29:58 crc kubenswrapper[4698]: I0127 14:29:58.302283 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://625a1f8c29f9277903045e66df7a55a4d791239b141aba99572db228ea2735ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://314dcaad0f8949a4671b47667f654bdfacfb6b200438cfbd70401455f6e188b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:29:58Z is after 2025-08-24T17:21:41Z"
Jan 27 14:29:58 crc kubenswrapper[4698]: I0127 14:29:58.317316 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0ce1118e-e5ad-4adb-8d50-c758116b45ec\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dbbe49aa918c711f9ec638a3fb42ef3be38a9ed3bc0cdde639ba8bba11eab4de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9525d1a6372b19bc2f30eb4e8e522833badb6319bac25ecae77b3313acf1b1a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e384a95fdadc32733a7213ba468af64a1a33e7dd08722cbffde3ad0c461d8ba\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9ee5f5f6bb400359196a28fe605c173c38223c8982a991750c91601aeace150\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e780429dcca871712b55832b0cd0b3e78b7343569cfa71a87bbb2be1bd3f129\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T14:29:24Z\\\",\\\"message\\\":\\\"espace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0127 14:29:09.491666 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 14:29:09.493926 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-900123067/tls.crt::/tmp/serving-cert-900123067/tls.key\\\\\\\"\\\\nI0127 14:29:24.319898 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 14:29:24.322681 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 14:29:24.322759 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 14:29:24.322814 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 14:29:24.322847 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 14:29:24.331581 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 14:29:24.331608 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 14:29:24.331613 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 14:29:24.331618 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 14:29:24.331622 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 14:29:24.331626 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 14:29:24.331630 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 14:29:24.331748 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 14:29:24.333757 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T14:29:08Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a526cfa90f2ba7d074aa6007c909e68f72d5e7a4b91d893c955e7460ff208cfb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:08Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e1a3e5a71b3094750c736335cd70f45977566e58702231351d9d6b195a0cb4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5e1a3e5a71b3094750c736335cd70f45977566e58702231351d9d6b195a0cb4b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:29:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:29:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:29:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:29:58Z is after 2025-08-24T17:21:41Z" Jan 27 14:29:58 crc kubenswrapper[4698]: I0127 14:29:58.330719 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"44c70587-abb1-4e02-ab7e-223d57817925\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b1a209c31294456c43022bb9df5fd230415596fc43ecdb2b6349114989c3ad8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://66b4b527d7836ceed419f9b8a1d851047efc8c052c110ac857316a6a73444333\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://94a4d586611f4ca0864567739d1ed225b7c63ebd6e6be9e5c0e385379de2ad01\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://95357819c206b188c1fd6bb937b2a57b480a9a61e95143de444141b16e4963ea\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:29:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:29:58Z is after 2025-08-24T17:21:41Z" Jan 27 14:29:58 crc kubenswrapper[4698]: I0127 14:29:58.349417 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-g2kkn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4e135f0c-0c36-44f4-afeb-06994affb352\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d17862aabd67b485023ca018499f7224e0bd91ec0aed6bbce5a187f319e3140\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run
/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97l66\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:29:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-g2kkn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:29:58Z is after 2025-08-24T17:21:41Z" Jan 27 14:29:58 crc kubenswrapper[4698]: I0127 14:29:58.353971 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:29:58 crc kubenswrapper[4698]: I0127 14:29:58.354010 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:29:58 crc kubenswrapper[4698]: I0127 14:29:58.354022 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:29:58 crc kubenswrapper[4698]: I0127 14:29:58.354037 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:29:58 crc kubenswrapper[4698]: I0127 14:29:58.354048 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:29:58Z","lastTransitionTime":"2026-01-27T14:29:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:29:58 crc kubenswrapper[4698]: I0127 14:29:58.368937 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zpvcj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"709dfdd7-f928-4f0b-8f5a-c356614219cb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6bec8d64d7f0b7a45b57b810dcf1de846a388902b480230742da0a40da4e67b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pftbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5fb54d7728b2438444c924295722867c673c8b82768ed17b6dd2104d404e198e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pftbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:29:41Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-zpvcj\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:29:58Z is after 2025-08-24T17:21:41Z" Jan 27 14:29:58 crc kubenswrapper[4698]: I0127 14:29:58.379191 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/621bb20d-2ffa-4e89-b522-d04b4764fcc3-metrics-certs\") pod \"network-metrics-daemon-lpvsw\" (UID: \"621bb20d-2ffa-4e89-b522-d04b4764fcc3\") " pod="openshift-multus/network-metrics-daemon-lpvsw" Jan 27 14:29:58 crc kubenswrapper[4698]: E0127 14:29:58.379387 4698 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 27 14:29:58 crc kubenswrapper[4698]: E0127 14:29:58.379473 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/621bb20d-2ffa-4e89-b522-d04b4764fcc3-metrics-certs podName:621bb20d-2ffa-4e89-b522-d04b4764fcc3 nodeName:}" failed. No retries permitted until 2026-01-27 14:30:14.379456372 +0000 UTC m=+70.056233837 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/621bb20d-2ffa-4e89-b522-d04b4764fcc3-metrics-certs") pod "network-metrics-daemon-lpvsw" (UID: "621bb20d-2ffa-4e89-b522-d04b4764fcc3") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 27 14:29:58 crc kubenswrapper[4698]: I0127 14:29:58.393235 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xmpm6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c59a9d01-79ce-42d9-a41d-39d7d73cb03e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:29Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d81778ae40167de932debf7266f3c793d77edeace2116220c21cf89e15e4cc98\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r7gpg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea6d17f5861064198539f0b84b559f9d28cfa5decaa89a9059469af5a3e12e09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r7gpg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c98fd6e3005c979f95419686ec2143aa282fa56d7306ecaff766759e34b68b2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r7gpg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e38585fe9db52e6e7cc5c3a41e23a0bb761977a3b25fd281d86458f422e3791\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r7gpg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b80c023f92c9854f8c16b6b50c10dac58f968938615375019ba2875d9cd69e5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r7gpg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://87e7499ab247dff50eee2a5e113ed36ff00c467e8e1bcdfee45e8835815d4b81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r7gpg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a99e5f8df86301415e9b49b4f197383e67e6a1b
00cbf7e4e7f543002a0f1db4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://acf945703822364baf4ee29480af06c0936afa4feddc6f7cba2a901e15a56fec\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T14:29:43Z\\\",\\\"message\\\":\\\"ol-plane-machine-set-operator]} name:Service_openshift-machine-api/control-plane-machine-set-operator_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.41:9443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {589f95f7-f3e2-4140-80ed-9a0717201481}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0127 14:29:43.511136 6165 obj_retry.go:303] Retry object setup: *v1.Pod openshift-multus/multus-additional-cni-plugins-vg6nd\\\\nI0127 14:29:43.511153 6165 obj_retry.go:365] Adding new object: *v1.Pod openshift-multus/multus-additional-cni-plugins-vg6nd\\\\nI0127 14:29:43.511162 6165 ovn.go:134] Ensuring zone local for Pod openshift-multus/multus-additional-cni-plugins-vg6nd in node crc\\\\nI0127 14:29:43.511169 6165 obj_retry.go:386] Retry successful for *v1.Pod openshift-multus/multus-additional-cni-plugins-vg6nd after 0 failed attempt(s)\\\\nI0127 14:29:43.511175 6165 default_network_controller.go:776] Recording success event on pod openshift-multus/multus-additional-cni-plugins-vg6nd\\\\nF0127 14:29:43.511046 6165 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to 
create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T14:29:42Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r7gpg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0356c9c90e9c8e65b9452361aa486ffe95757bd69803077b32fe7ba0223ae1da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r7gpg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\
"containerID\\\":\\\"cri-o://b3a80917737e61fdb510dabf332c3bd87fb858d27849c42b77703a8becb66b4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b3a80917737e61fdb510dabf332c3bd87fb858d27849c42b77703a8becb66b4c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:29:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:29:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r7gpg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:29:29Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-xmpm6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:29:58Z is after 2025-08-24T17:21:41Z" Jan 27 14:29:58 crc kubenswrapper[4698]: I0127 14:29:58.405372 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-lpvsw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"621bb20d-2ffa-4e89-b522-d04b4764fcc3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:42Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:42Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gz87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gz87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:29:42Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-lpvsw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:29:58Z is after 2025-08-24T17:21:41Z" Jan 27 14:29:58 crc kubenswrapper[4698]: I0127 14:29:58.420595 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:29:58Z is after 2025-08-24T17:21:41Z" Jan 27 14:29:58 crc kubenswrapper[4698]: I0127 14:29:58.431785 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-g9vj8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2776dfc9-913b-42b0-9cf2-6fea98d83bc9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f471cea0dc934e7c77ac6ead7ec77f8feb7379b50a7bb5255d7100fbea635a3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bk7ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:29:28Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-g9vj8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-01-27T14:29:58Z is after 2025-08-24T17:21:41Z" Jan 27 14:29:58 crc kubenswrapper[4698]: I0127 14:29:58.445291 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-ndrd6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e403fc5-7005-474c-8c75-b7906b481677\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6f9227503b5cf0962f3fb8ece18f54d84ff2cdfe5628ebf0fa0cef7fe5695c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tt2dz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c533539135e25b310541c9f0b12adcf526b4651a2f6272bce12ea9686dd39ac9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tt2dz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:29:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-ndrd6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:29:58Z is after 2025-08-24T17:21:41Z" Jan 27 14:29:58 crc kubenswrapper[4698]: I0127 14:29:58.456684 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:29:58 crc kubenswrapper[4698]: I0127 14:29:58.456738 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:29:58 crc kubenswrapper[4698]: I0127 14:29:58.456750 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:29:58 crc kubenswrapper[4698]: I0127 14:29:58.456769 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:29:58 crc kubenswrapper[4698]: I0127 14:29:58.456781 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:29:58Z","lastTransitionTime":"2026-01-27T14:29:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:29:58 crc kubenswrapper[4698]: I0127 14:29:58.460960 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-vg6nd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e045926d-2303-47ea-b25d-dc23982427e4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e1bf1729f089f253e8e1259e40ef976b85e433358642b6231f081c494851048\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66snv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c935f517f9ca140d63d1637ef8c01
a4a41b94fa72e1086eb6184ae78a6d8da8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c935f517f9ca140d63d1637ef8c01a4a41b94fa72e1086eb6184ae78a6d8da8c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:29:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:29:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66snv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa061cb18143e07dc72a6b75d78b5959b40a9969030ff21d402863b85eba6f91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa061cb18143e07dc72a6b75d78b5959b40a9969030ff21d402863b85eba6f91\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:29:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:29:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66snv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be40eff60b17b2c010a5e4394fb6843ceb8ee2752781ccbf0030afdd212b81e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be40eff60b17b2c010a5e4394fb6843ceb8ee2752781ccbf0030afdd212b81e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:29:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:29:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\"
,\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66snv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3018d87c9354e8b8d90c59a4a45e982d0b35b89a0b0a9fde703e9452cf5f0024\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3018d87c9354e8b8d90c59a4a45e982d0b35b89a0b0a9fde703e9452cf5f0024\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:29:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:29:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66snv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de493498bbac49791f6b7409916fbbb62f64bed0def214ea6a06f5182bf7ad3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://de493498bbac49791f6b7409916fbbb62f64bed0def214ea6a06f5182bf7ad3b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:29:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:29:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66snv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://14af7d5dd3526aef22f7e95409ff88c3bbbc29e6dd9dd7c41b9be5bab49bf074\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\
\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://14af7d5dd3526aef22f7e95409ff88c3bbbc29e6dd9dd7c41b9be5bab49bf074\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:29:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:29:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66snv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:29:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-vg6nd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:29:58Z is after 2025-08-24T17:21:41Z" Jan 27 14:29:58 crc kubenswrapper[4698]: I0127 14:29:58.471694 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8dae57de05788973b988fee332168db475b66bb71b4a0570e7d95341dfe84a96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:29:58Z is after 2025-08-24T17:21:41Z" Jan 27 14:29:58 crc kubenswrapper[4698]: I0127 
14:29:58.480803 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-flx9b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2c9fcf55-4a50-4a87-937b-975bc7e00bfa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0cb86e0c67991a076cebc33eb390f49c0f4c40332c423936dd8f3c59b9f7474b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-spqpw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:29:31Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-flx9b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:29:58Z is after 2025-08-24T17:21:41Z" Jan 27 14:29:58 crc kubenswrapper[4698]: I0127 14:29:58.494291 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae7402df1ba165eb747293c1d23e4ea89c50cf2342bf52db437c11c283f68755\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:29:58Z is after 2025-08-24T17:21:41Z" Jan 27 14:29:58 crc kubenswrapper[4698]: I0127 14:29:58.505280 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:24Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:29:58Z is after 2025-08-24T17:21:41Z" Jan 27 14:29:58 crc kubenswrapper[4698]: I0127 14:29:58.559140 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:29:58 crc kubenswrapper[4698]: I0127 14:29:58.559180 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:29:58 crc kubenswrapper[4698]: I0127 14:29:58.559191 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:29:58 crc kubenswrapper[4698]: I0127 14:29:58.559207 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:29:58 crc kubenswrapper[4698]: I0127 14:29:58.559218 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:29:58Z","lastTransitionTime":"2026-01-27T14:29:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:29:58 crc kubenswrapper[4698]: I0127 14:29:58.661972 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:29:58 crc kubenswrapper[4698]: I0127 14:29:58.662012 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:29:58 crc kubenswrapper[4698]: I0127 14:29:58.662021 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:29:58 crc kubenswrapper[4698]: I0127 14:29:58.662035 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:29:58 crc kubenswrapper[4698]: I0127 14:29:58.662044 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:29:58Z","lastTransitionTime":"2026-01-27T14:29:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:29:58 crc kubenswrapper[4698]: I0127 14:29:58.765130 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:29:58 crc kubenswrapper[4698]: I0127 14:29:58.765189 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:29:58 crc kubenswrapper[4698]: I0127 14:29:58.765203 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:29:58 crc kubenswrapper[4698]: I0127 14:29:58.765267 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:29:58 crc kubenswrapper[4698]: I0127 14:29:58.765282 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:29:58Z","lastTransitionTime":"2026-01-27T14:29:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:29:58 crc kubenswrapper[4698]: I0127 14:29:58.867294 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:29:58 crc kubenswrapper[4698]: I0127 14:29:58.867369 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:29:58 crc kubenswrapper[4698]: I0127 14:29:58.867387 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:29:58 crc kubenswrapper[4698]: I0127 14:29:58.867413 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:29:58 crc kubenswrapper[4698]: I0127 14:29:58.867437 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:29:58Z","lastTransitionTime":"2026-01-27T14:29:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:29:58 crc kubenswrapper[4698]: I0127 14:29:58.943808 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 27 14:29:58 crc kubenswrapper[4698]: I0127 14:29:58.953864 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc"] Jan 27 14:29:58 crc kubenswrapper[4698]: I0127 14:29:58.960966 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-flx9b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2c9fcf55-4a50-4a87-937b-975bc7e00bfa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0cb86e0c67991a076cebc33eb390f49c0f4c40332c423936dd8f3c59b9f7474b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-spqpw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:29:31Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-flx9b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:29:58Z is after 2025-08-24T17:21:41Z" Jan 27 14:29:58 crc kubenswrapper[4698]: I0127 14:29:58.969523 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:29:58 crc kubenswrapper[4698]: I0127 14:29:58.969560 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:29:58 crc kubenswrapper[4698]: I0127 14:29:58.969568 
4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:29:58 crc kubenswrapper[4698]: I0127 14:29:58.969614 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:29:58 crc kubenswrapper[4698]: I0127 14:29:58.969627 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:29:58Z","lastTransitionTime":"2026-01-27T14:29:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:29:58 crc kubenswrapper[4698]: I0127 14:29:58.977338 4698 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-27 04:59:55.029204587 +0000 UTC Jan 27 14:29:58 crc kubenswrapper[4698]: I0127 14:29:58.977540 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae7402df1ba165eb747293c1d23e4ea89c50cf2342bf52db437c11c283f68755\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:29:58Z is after 2025-08-24T17:21:41Z" Jan 27 14:29:58 crc kubenswrapper[4698]: I0127 14:29:58.991596 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 14:29:58 crc kubenswrapper[4698]: E0127 14:29:58.991747 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 14:29:58 crc kubenswrapper[4698]: I0127 14:29:58.992162 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 14:29:58 crc kubenswrapper[4698]: E0127 14:29:58.992232 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 14:29:58 crc kubenswrapper[4698]: I0127 14:29:58.993144 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:24Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:29:58Z is after 2025-08-24T17:21:41Z" Jan 27 14:29:59 crc kubenswrapper[4698]: I0127 14:29:59.006005 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8dae57de05788973b988fee332168db475b66bb71b4a0570e7d95341dfe84a96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:29:59Z is after 2025-08-24T17:21:41Z" Jan 27 14:29:59 crc kubenswrapper[4698]: I0127 14:29:59.017381 4698 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:24Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:29:59Z is after 2025-08-24T17:21:41Z" Jan 27 14:29:59 crc kubenswrapper[4698]: I0127 14:29:59.032694 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://625a1f8c29f9277903045e66df7a55a4d791239b141aba99572db228ea2735ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://314dcaad0f8949a4671b47667f654bdfacfb6b200438cfbd70401455f6e188b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:29:59Z is after 2025-08-24T17:21:41Z" Jan 27 14:29:59 crc kubenswrapper[4698]: I0127 14:29:59.048622 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0ce1118e-e5ad-4adb-8d50-c758116b45ec\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dbbe49aa918c711f9ec638a3fb42ef3be38a9ed3bc0cdde639ba8bba11eab4de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9525d1a6372b19bc2f30eb4e8e522833badb6319bac25ecae77b3313acf1b1a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e384a95fdadc32733a7213ba468af64a1a33e7dd08722cbffde3ad0c461d8ba\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9ee5f5f6bb400359196a28fe605c173c38223c8982a991750c91601aeace150\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e780429dcca871712b55832b0cd0b3e78b7343569cfa71a87bbb2be1bd3f129\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T14:29:24Z\\\",\\\"message\\\":\\\"espace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0127 14:29:09.491666 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 14:29:09.493926 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-900123067/tls.crt::/tmp/serving-cert-900123067/tls.key\\\\\\\"\\\\nI0127 14:29:24.319898 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 14:29:24.322681 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 14:29:24.322759 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 14:29:24.322814 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 14:29:24.322847 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 14:29:24.331581 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 14:29:24.331608 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 14:29:24.331613 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 14:29:24.331618 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 14:29:24.331622 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 14:29:24.331626 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 14:29:24.331630 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 14:29:24.331748 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 14:29:24.333757 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T14:29:08Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a526cfa90f2ba7d074aa6007c909e68f72d5e7a4b91d893c955e7460ff208cfb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:08Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e1a3e5a71b3094750c736335cd70f45977566e58702231351d9d6b195a0cb4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5e1a3e5a71b3094750c736335cd70f45977566e58702231351d9d6b195a0cb4b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:29:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:29:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:29:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:29:59Z is after 2025-08-24T17:21:41Z" Jan 27 14:29:59 crc kubenswrapper[4698]: I0127 14:29:59.067257 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"44c70587-abb1-4e02-ab7e-223d57817925\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b1a209c31294456c43022bb9df5fd230415596fc43ecdb2b6349114989c3ad8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://66b4b527d7836ceed419f9b8a1d851047efc8c052c110ac857316a6a73444333\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://94a4d586611f4ca0864567739d1ed225b7c63ebd6e6be9e5c0e385379de2ad01\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://95357819c206b188c1fd6bb937b2a57b480a9a61e95143de444141b16e4963ea\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:29:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:29:59Z is after 2025-08-24T17:21:41Z" Jan 27 14:29:59 crc kubenswrapper[4698]: I0127 14:29:59.071735 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:29:59 crc kubenswrapper[4698]: I0127 14:29:59.071771 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:29:59 crc kubenswrapper[4698]: I0127 14:29:59.071780 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:29:59 crc kubenswrapper[4698]: I0127 14:29:59.071796 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:29:59 crc kubenswrapper[4698]: I0127 14:29:59.071807 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:29:59Z","lastTransitionTime":"2026-01-27T14:29:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:29:59 crc kubenswrapper[4698]: I0127 14:29:59.083667 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-g2kkn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4e135f0c-0c36-44f4-afeb-06994affb352\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d17862aabd67b485023ca018499f7224e0bd91ec0aed6bbce5a187f319e3140\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97l66\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126
.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:29:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-g2kkn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:29:59Z is after 2025-08-24T17:21:41Z" Jan 27 14:29:59 crc kubenswrapper[4698]: I0127 14:29:59.095338 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zpvcj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"709dfdd7-f928-4f0b-8f5a-c356614219cb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6bec8d64d7f0b7a45b57b810dcf1de846a388902b480230742da0a40da4e67b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pftbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5fb54d7728b2438444c924295722867c673c8b82768ed17b6dd2104d404e198e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pftbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadO
nly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:29:41Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-zpvcj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:29:59Z is after 2025-08-24T17:21:41Z" Jan 27 14:29:59 crc kubenswrapper[4698]: I0127 14:29:59.106556 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-lpvsw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"621bb20d-2ffa-4e89-b522-d04b4764fcc3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:42Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:42Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gz87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gz87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:29:42Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-lpvsw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:29:59Z is after 2025-08-24T17:21:41Z" Jan 27 14:29:59 crc kubenswrapper[4698]: I0127 14:29:59.118468 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:29:59Z is after 2025-08-24T17:21:41Z" Jan 27 14:29:59 crc kubenswrapper[4698]: I0127 14:29:59.128756 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-g9vj8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2776dfc9-913b-42b0-9cf2-6fea98d83bc9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f471cea0dc934e7c77ac6ead7ec77f8feb7379b50a7bb5255d7100fbea635a3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bk7ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:29:28Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-g9vj8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-01-27T14:29:59Z is after 2025-08-24T17:21:41Z" Jan 27 14:29:59 crc kubenswrapper[4698]: I0127 14:29:59.140042 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-ndrd6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e403fc5-7005-474c-8c75-b7906b481677\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6f9227503b5cf0962f3fb8ece18f54d84ff2cdfe5628ebf0fa0cef7fe5695c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tt2dz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c533539135e25b310541c9f0b12adcf526b4651a2f6272bce12ea9686dd39ac9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tt2dz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:29:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-ndrd6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:29:59Z is after 2025-08-24T17:21:41Z" Jan 27 14:29:59 crc kubenswrapper[4698]: I0127 14:29:59.156177 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-vg6nd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e045926d-2303-47ea-b25d-dc23982427e4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e1bf1729f089f253e8e1259e40ef976b85e433358642b6231f081c494851048\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66snv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c935f517f9ca140d63d1637ef8c01a4a41b94fa72e1086eb6184ae78a6d8da8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c935f517f9ca140d63d1637ef8c01a4a41b94fa72e1086eb6184ae78a6d8da8c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:29:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:29:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66snv\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa061cb18143e07dc72a6b75d78b5959b40a9969030ff21d402863b85eba6f91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa061cb18143e07dc72a6b75d78b5959b40a9969030ff21d402863b85eba6f91\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:29:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:29:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66snv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be40eff60b17b2c010a5e4394fb6843ceb8ee2752781ccbf0030afdd212b81e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be40eff60b17b2c010a5e4394fb6843ceb8ee2752781ccbf0030afdd212b81e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:29:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:29:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66snv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3018d87c9354e8b8d90c59a4a45e982d0b35b89a0b0a9fde703e9452cf5f0024\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3018d87c9354e8b8d90c59a4a45e982d0b35b89a0b0a9fde703e9452cf5f0024\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:29:32Z\\\",\\\"reason\\\":\\\"Completed
\\\",\\\"startedAt\\\":\\\"2026-01-27T14:29:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66snv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de493498bbac49791f6b7409916fbbb62f64bed0def214ea6a06f5182bf7ad3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://de493498bbac49791f6b7409916fbbb62f64bed0def214ea6a06f5182bf7ad3b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:29:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:29:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66snv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://14af7d5dd3526aef22f7e95409ff88c3bbbc29e6dd9dd7c41b9be5bab49bf074\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://14af7d5dd3526aef22f7e95409ff88c3bbbc29e6dd9dd7c41b9be5bab49bf074\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:29:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:29:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66snv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:29:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-vg6nd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:29:59Z is after 
2025-08-24T17:21:41Z" Jan 27 14:29:59 crc kubenswrapper[4698]: I0127 14:29:59.173615 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:29:59 crc kubenswrapper[4698]: I0127 14:29:59.173665 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:29:59 crc kubenswrapper[4698]: I0127 14:29:59.173675 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:29:59 crc kubenswrapper[4698]: I0127 14:29:59.173687 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:29:59 crc kubenswrapper[4698]: I0127 14:29:59.173696 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:29:59Z","lastTransitionTime":"2026-01-27T14:29:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:29:59 crc kubenswrapper[4698]: I0127 14:29:59.174949 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xmpm6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c59a9d01-79ce-42d9-a41d-39d7d73cb03e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:29Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d81778ae40167de932debf7266f3c793d77edeace2116220c21cf89e15e4cc98\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r7gpg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea6d17f5861064198539f0b84b559f9d28cfa5decaa89a9059469af5a3e12e09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r7gpg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c98fd6e3005c979f95419686ec2143aa282fa56d7306ecaff766759e34b68b2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r7gpg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e38585fe9db52e6e7cc5c3a41e23a0bb761977a3b25fd281d86458f422e3791\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r7gpg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b80c023f92c9854f8c16b6b50c10dac58f968938615375019ba2875d9cd69e5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r7gpg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://87e7499ab247dff50eee2a5e113ed36ff00c467e8e1bcdfee45e8835815d4b81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r7gpg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a99e5f8df86301415e9b49b4f197383e67e6a1b
00cbf7e4e7f543002a0f1db4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://acf945703822364baf4ee29480af06c0936afa4feddc6f7cba2a901e15a56fec\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T14:29:43Z\\\",\\\"message\\\":\\\"ol-plane-machine-set-operator]} name:Service_openshift-machine-api/control-plane-machine-set-operator_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.41:9443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {589f95f7-f3e2-4140-80ed-9a0717201481}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0127 14:29:43.511136 6165 obj_retry.go:303] Retry object setup: *v1.Pod openshift-multus/multus-additional-cni-plugins-vg6nd\\\\nI0127 14:29:43.511153 6165 obj_retry.go:365] Adding new object: *v1.Pod openshift-multus/multus-additional-cni-plugins-vg6nd\\\\nI0127 14:29:43.511162 6165 ovn.go:134] Ensuring zone local for Pod openshift-multus/multus-additional-cni-plugins-vg6nd in node crc\\\\nI0127 14:29:43.511169 6165 obj_retry.go:386] Retry successful for *v1.Pod openshift-multus/multus-additional-cni-plugins-vg6nd after 0 failed attempt(s)\\\\nI0127 14:29:43.511175 6165 default_network_controller.go:776] Recording success event on pod openshift-multus/multus-additional-cni-plugins-vg6nd\\\\nF0127 14:29:43.511046 6165 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to 
create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T14:29:42Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r7gpg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0356c9c90e9c8e65b9452361aa486ffe95757bd69803077b32fe7ba0223ae1da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r7gpg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\
"containerID\\\":\\\"cri-o://b3a80917737e61fdb510dabf332c3bd87fb858d27849c42b77703a8becb66b4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b3a80917737e61fdb510dabf332c3bd87fb858d27849c42b77703a8becb66b4c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:29:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:29:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r7gpg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:29:29Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-xmpm6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:29:59Z is after 2025-08-24T17:21:41Z" Jan 27 14:29:59 crc kubenswrapper[4698]: I0127 14:29:59.276278 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:29:59 crc kubenswrapper[4698]: I0127 14:29:59.276333 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:29:59 crc kubenswrapper[4698]: I0127 14:29:59.276351 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:29:59 crc kubenswrapper[4698]: I0127 14:29:59.276374 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:29:59 crc kubenswrapper[4698]: I0127 14:29:59.276441 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:29:59Z","lastTransitionTime":"2026-01-27T14:29:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:29:59 crc kubenswrapper[4698]: I0127 14:29:59.279559 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-xmpm6_c59a9d01-79ce-42d9-a41d-39d7d73cb03e/ovnkube-controller/2.log" Jan 27 14:29:59 crc kubenswrapper[4698]: I0127 14:29:59.280315 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-xmpm6_c59a9d01-79ce-42d9-a41d-39d7d73cb03e/ovnkube-controller/1.log" Jan 27 14:29:59 crc kubenswrapper[4698]: I0127 14:29:59.283191 4698 generic.go:334] "Generic (PLEG): container finished" podID="c59a9d01-79ce-42d9-a41d-39d7d73cb03e" containerID="0a99e5f8df86301415e9b49b4f197383e67e6a1b00cbf7e4e7f543002a0f1db4" exitCode=1 Jan 27 14:29:59 crc kubenswrapper[4698]: I0127 14:29:59.283257 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xmpm6" event={"ID":"c59a9d01-79ce-42d9-a41d-39d7d73cb03e","Type":"ContainerDied","Data":"0a99e5f8df86301415e9b49b4f197383e67e6a1b00cbf7e4e7f543002a0f1db4"} Jan 27 14:29:59 crc kubenswrapper[4698]: I0127 14:29:59.283333 4698 scope.go:117] "RemoveContainer" containerID="acf945703822364baf4ee29480af06c0936afa4feddc6f7cba2a901e15a56fec" Jan 27 14:29:59 crc kubenswrapper[4698]: I0127 14:29:59.283754 4698 scope.go:117] "RemoveContainer" containerID="0a99e5f8df86301415e9b49b4f197383e67e6a1b00cbf7e4e7f543002a0f1db4" Jan 27 14:29:59 crc kubenswrapper[4698]: E0127 14:29:59.283902 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-xmpm6_openshift-ovn-kubernetes(c59a9d01-79ce-42d9-a41d-39d7d73cb03e)\"" pod="openshift-ovn-kubernetes/ovnkube-node-xmpm6" podUID="c59a9d01-79ce-42d9-a41d-39d7d73cb03e" Jan 27 14:29:59 crc kubenswrapper[4698]: I0127 14:29:59.301106 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"44c70587-abb1-4e02-ab7e-223d57817925\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b1a209c31294456c43022bb9df5fd230415596fc43ecdb2b6349114989c3ad8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://66b4b527d7836ceed419f9b8a1d851047efc8c052c110ac857316a6a73444333\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://94a4d586611f4ca0864567739d1ed225b7c63ebd6e6be9e5c0e385379de2ad01\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://95357819c206b188c1fd6bb937b2a57b480a9a61e95143de444141b16e4963ea\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:29:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:29:59Z is after 2025-08-24T17:21:41Z" Jan 27 14:29:59 crc kubenswrapper[4698]: I0127 14:29:59.314556 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-g2kkn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4e135f0c-0c36-44f4-afeb-06994affb352\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d17862aabd67b485023ca018499f7224e0bd91ec0aed6bbce5a187f319e3140\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run
/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97l66\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:29:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-g2kkn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:29:59Z is after 2025-08-24T17:21:41Z" Jan 27 14:29:59 crc kubenswrapper[4698]: I0127 14:29:59.327430 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zpvcj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"709dfdd7-f928-4f0b-8f5a-c356614219cb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6bec8d64d7f0b7a45b57b810dcf1de846a388902b480230742da0a40da4e67b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pftbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5fb54d7728b2438444c924295722867c673c8b82768ed17b6dd2104d404e198e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pftbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:29:41Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-zpvcj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:29:59Z is after 2025-08-24T17:21:41Z" Jan 27 
14:29:59 crc kubenswrapper[4698]: I0127 14:29:59.340778 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0ce1118e-e5ad-4adb-8d50-c758116b45ec\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dbbe49aa918c711f9ec638a3fb42ef3be38a9ed3bc0cdde639ba8bba11eab4de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9525d1a6372b19bc2f30eb4e8e522833badb6319bac25ecae77b3313acf1b1a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e384a95fdadc32733a7213ba468af64a1a33e7dd08722cbffde3ad0c461d8ba\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\
\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9ee5f5f6bb400359196a28fe605c173c38223c8982a991750c91601aeace150\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e780429dcca871712b55832b0cd0b3e78b7343569cfa71a87bbb2be1bd3f129\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T14:29:24Z\\\",\\\"message\\\":\\\"espace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0127 14:29:09.491666 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 14:29:09.493926 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-900123067/tls.crt::/tmp/serving-cert-900123067/tls.key\\\\\\\"\\\\nI0127 14:29:24.319898 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 14:29:24.322681 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 14:29:24.322759 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 14:29:24.322814 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 14:29:24.322847 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 14:29:24.331581 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 14:29:24.331608 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 14:29:24.331613 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 14:29:24.331618 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 14:29:24.331622 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 14:29:24.331626 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 14:29:24.331630 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 14:29:24.331748 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 14:29:24.333757 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T14:29:08Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a526cfa90f2ba7d074aa6007c909e68f72d5e7a4b91d893c955e7460ff208cfb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:08Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e1a3e5a71b3094750c736335cd70f45977566e58702231351d9d6b195a0cb4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5e1a3e5a71b3094750c736335cd70f45977566e58702231351d9d6b195a0cb4b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:29:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:29:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:29:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:29:59Z is after 2025-08-24T17:21:41Z" Jan 27 14:29:59 crc kubenswrapper[4698]: I0127 14:29:59.351325 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-g9vj8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2776dfc9-913b-42b0-9cf2-6fea98d83bc9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f471cea0dc934e7c77ac6ead7ec77f8feb7379b50a7bb5255d7100fbea635a3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bk7ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:29:28Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-g9vj8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:29:59Z is after 2025-08-24T17:21:41Z" Jan 27 14:29:59 crc kubenswrapper[4698]: I0127 14:29:59.362901 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-ndrd6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e403fc5-7005-474c-8c75-b7906b481677\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6f9227503b5cf0962f3fb8ece18f54d84ff2cdfe5628ebf0fa0cef7fe5695c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tt2dz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c533539135e25b310541c9f0b12adcf526b4651a2f6272bce12ea9686dd39ac9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tt2dz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:29:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-ndrd6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:29:59Z is after 2025-08-24T17:21:41Z" Jan 27 14:29:59 crc kubenswrapper[4698]: I0127 14:29:59.379112 4698 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:29:59 crc kubenswrapper[4698]: I0127 14:29:59.379426 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:29:59 crc kubenswrapper[4698]: I0127 14:29:59.379439 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:29:59 crc kubenswrapper[4698]: I0127 14:29:59.379457 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:29:59 crc kubenswrapper[4698]: I0127 14:29:59.379469 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:29:59Z","lastTransitionTime":"2026-01-27T14:29:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:29:59 crc kubenswrapper[4698]: I0127 14:29:59.381672 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-vg6nd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e045926d-2303-47ea-b25d-dc23982427e4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e1bf1729f089f253e8e1259e40ef976b85e433358642b6231f081c494851048\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66snv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c935f517f9ca140d63d1637ef8c01a4a41b94fa72e1086eb6184ae78a6d8da8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2
c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c935f517f9ca140d63d1637ef8c01a4a41b94fa72e1086eb6184ae78a6d8da8c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:29:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:29:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66snv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa061cb18143e07dc72a6b75d78b5959b40a9969030ff21d402863b85eba6f91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa061cb18143e07dc72a6b75d78b5959b40a9969030ff21d402863b85eba6f91\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:29:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:29:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66snv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be40eff60b17b2c010a5e4394fb6843ceb8ee2752781ccbf0030afdd212b81e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be40eff60b17b2c010a5e4394fb6843ceb8ee2752781ccbf0030afdd212b81e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:29:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:29:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/
secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66snv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3018d87c9354e8b8d90c59a4a45e982d0b35b89a0b0a9fde703e9452cf5f0024\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3018d87c9354e8b8d90c59a4a45e982d0b35b89a0b0a9fde703e9452cf5f0024\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:29:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:29:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66snv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de493498bbac49791f6b7409916fbbb62f64bed0def214ea6a06f5182bf7ad3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://de493498bbac49791f6b7409916fbbb62f64bed0def214ea6a06f5182bf7ad3b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:29:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:29:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66snv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://14af7d5dd3526aef22f7e95409ff88c3bbbc29e6dd9dd7c41b9be5bab49bf074\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://14af7d5dd3526aef22f7e95409ff88c3bbbc29e6dd9dd7c41b9be5bab49bf074\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:29:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:29:36Z\\\"}},\\\"volu
meMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66snv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:29:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-vg6nd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:29:59Z is after 2025-08-24T17:21:41Z" Jan 27 14:29:59 crc kubenswrapper[4698]: I0127 14:29:59.399611 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xmpm6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c59a9d01-79ce-42d9-a41d-39d7d73cb03e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:29Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d81778ae40167de932debf7266f3c793d77edeace2116220c21cf89e15e4cc98\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r7gpg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea6d17f5861064198539f0b84b559f9d28cfa5decaa89a9059469af5a3e12e09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r7gpg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c98fd6e3005c979f95419686ec2143aa282fa56d7306ecaff766759e34b68b2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r7gpg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e38585fe9db52e6e7cc5c3a41e23a0bb761977a3b25fd281d86458f422e3791\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r7gpg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b80c023f92c9854f8c16b6b50c10dac58f968938615375019ba2875d9cd69e5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r7gpg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://87e7499ab247dff50eee2a5e113ed36ff00c467e8e1bcdfee45e8835815d4b81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r7gpg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a99e5f8df86301415e9b49b4f197383e67e6a1b
00cbf7e4e7f543002a0f1db4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://acf945703822364baf4ee29480af06c0936afa4feddc6f7cba2a901e15a56fec\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T14:29:43Z\\\",\\\"message\\\":\\\"ol-plane-machine-set-operator]} name:Service_openshift-machine-api/control-plane-machine-set-operator_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.41:9443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {589f95f7-f3e2-4140-80ed-9a0717201481}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0127 14:29:43.511136 6165 obj_retry.go:303] Retry object setup: *v1.Pod openshift-multus/multus-additional-cni-plugins-vg6nd\\\\nI0127 14:29:43.511153 6165 obj_retry.go:365] Adding new object: *v1.Pod openshift-multus/multus-additional-cni-plugins-vg6nd\\\\nI0127 14:29:43.511162 6165 ovn.go:134] Ensuring zone local for Pod openshift-multus/multus-additional-cni-plugins-vg6nd in node crc\\\\nI0127 14:29:43.511169 6165 obj_retry.go:386] Retry successful for *v1.Pod openshift-multus/multus-additional-cni-plugins-vg6nd after 0 failed attempt(s)\\\\nI0127 14:29:43.511175 6165 default_network_controller.go:776] Recording success event on pod openshift-multus/multus-additional-cni-plugins-vg6nd\\\\nF0127 14:29:43.511046 6165 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T14:29:42Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0a99e5f8df86301415e9b49b4f197383e67e6a1b00cbf7e4e7f543002a0f1db4\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T14:29:58Z\\\",\\\"message\\\":\\\"il\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0127 14:29:58.516254 6338 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-diagnostics/network-check-source-55646444c4-trplf\\\\nI0127 14:29:58.516264 6338 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-diagnostics/network-check-source-55646444c4-trplf\\\\nI0127 14:29:58.516271 6338 ovn.go:134] Ensuring zone local for Pod openshift-network-diagnostics/network-check-source-55646444c4-trplf in node crc\\\\nF0127 14:29:58.516287 6338 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post 
\\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:29:58Z is after 2025-08-24T17:21:41Z]\\\\nI0127 14:29:58.516302 6338 obj_retry.go:303\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T14:29:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r7gpg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0356c9c90e9c8e65b9452361aa486ffe95757bd69803077b32fe7ba0223ae1da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r7gpg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\
\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3a80917737e61fdb510dabf332c3bd87fb858d27849c42b77703a8becb66b4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b3a80917737e61fdb510dabf332c3bd87fb858d27849c42b77703a8becb66b4c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:29:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:29:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r7gpg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:29:29Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-xmpm6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:29:59Z is after 2025-08-24T17:21:41Z" Jan 27 14:29:59 crc kubenswrapper[4698]: I0127 14:29:59.410941 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-lpvsw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"621bb20d-2ffa-4e89-b522-d04b4764fcc3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:42Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:42Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gz87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gz87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:29:42Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-lpvsw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:29:59Z is after 2025-08-24T17:21:41Z" Jan 27 14:29:59 crc kubenswrapper[4698]: I0127 14:29:59.422324 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:29:59Z is after 2025-08-24T17:21:41Z" Jan 27 14:29:59 crc kubenswrapper[4698]: I0127 14:29:59.434298 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae7402df1ba165eb747293c1d23e4ea89c50cf2342bf52db437c11c283f68755\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:29:59Z is after 2025-08-24T17:21:41Z" Jan 27 14:29:59 crc kubenswrapper[4698]: I0127 14:29:59.444933 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:24Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:29:59Z is after 2025-08-24T17:21:41Z" Jan 27 14:29:59 crc kubenswrapper[4698]: I0127 14:29:59.457009 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8dae57de05788973b988fee332168db475b66bb71b4a0570e7d95341dfe84a96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:29:59Z is after 2025-08-24T17:21:41Z" Jan 27 14:29:59 crc kubenswrapper[4698]: I0127 14:29:59.467455 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-flx9b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2c9fcf55-4a50-4a87-937b-975bc7e00bfa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0cb86e0c67991a076cebc33eb390f49c0f4c40332c423936dd8f3c59b9f7474b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-spqpw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:29:31Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-flx9b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:29:59Z is after 2025-08-24T17:21:41Z" Jan 27 14:29:59 crc kubenswrapper[4698]: I0127 14:29:59.479330 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a67da6e9-5dc3-469a-8a1a-a2b287e96281\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b99ac884289d1dcab871d1db10e9992389170de25aeb71d84aaad1348eafd4fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://21e39904f887483a435cadd506be29c1513b2c9dbc144a61549f74f2c93fa6a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8cd43402ec8ba658de9fc9d84d14600829a8ae019aceb606fc2bf781dbe13ddb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c9cbb07f8bc0c4d5171813dd406a52fc3c99985102f65c43e8e04bf8acc21f1b\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c9cbb07f8bc0c4d5171813dd406a52fc3c99985102f65c43e8e04bf8acc21f1b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:29:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:29:06Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:29:05Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:29:59Z is after 2025-08-24T17:21:41Z" Jan 27 14:29:59 crc kubenswrapper[4698]: I0127 14:29:59.481982 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:29:59 crc kubenswrapper[4698]: I0127 14:29:59.482018 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:29:59 crc kubenswrapper[4698]: I0127 14:29:59.482031 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:29:59 crc kubenswrapper[4698]: I0127 14:29:59.482049 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:29:59 crc kubenswrapper[4698]: I0127 14:29:59.482061 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:29:59Z","lastTransitionTime":"2026-01-27T14:29:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:29:59 crc kubenswrapper[4698]: I0127 14:29:59.512515 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:24Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:29:59Z is after 2025-08-24T17:21:41Z" Jan 27 14:29:59 crc kubenswrapper[4698]: I0127 14:29:59.531973 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://625a1f8c29f9277903045e66df7a55a4d791239b141aba99572db228ea2735ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://314dcaad0f8949a4671b47667f654bdfacfb6b200438cfbd70401455f6e188b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:29:59Z is after 2025-08-24T17:21:41Z" Jan 27 14:29:59 crc kubenswrapper[4698]: I0127 14:29:59.584347 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:29:59 crc kubenswrapper[4698]: I0127 14:29:59.584383 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:29:59 crc kubenswrapper[4698]: I0127 14:29:59.584391 4698 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Jan 27 14:29:59 crc kubenswrapper[4698]: I0127 14:29:59.584440 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:29:59 crc kubenswrapper[4698]: I0127 14:29:59.584452 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:29:59Z","lastTransitionTime":"2026-01-27T14:29:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:29:59 crc kubenswrapper[4698]: I0127 14:29:59.686812 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:29:59 crc kubenswrapper[4698]: I0127 14:29:59.686861 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:29:59 crc kubenswrapper[4698]: I0127 14:29:59.686871 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:29:59 crc kubenswrapper[4698]: I0127 14:29:59.686889 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:29:59 crc kubenswrapper[4698]: I0127 14:29:59.686912 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:29:59Z","lastTransitionTime":"2026-01-27T14:29:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:29:59 crc kubenswrapper[4698]: I0127 14:29:59.789431 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:29:59 crc kubenswrapper[4698]: I0127 14:29:59.789500 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:29:59 crc kubenswrapper[4698]: I0127 14:29:59.789519 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:29:59 crc kubenswrapper[4698]: I0127 14:29:59.789543 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:29:59 crc kubenswrapper[4698]: I0127 14:29:59.789564 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:29:59Z","lastTransitionTime":"2026-01-27T14:29:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:29:59 crc kubenswrapper[4698]: I0127 14:29:59.892165 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:29:59 crc kubenswrapper[4698]: I0127 14:29:59.892288 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:29:59 crc kubenswrapper[4698]: I0127 14:29:59.892311 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:29:59 crc kubenswrapper[4698]: I0127 14:29:59.892336 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:29:59 crc kubenswrapper[4698]: I0127 14:29:59.892355 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:29:59Z","lastTransitionTime":"2026-01-27T14:29:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:29:59 crc kubenswrapper[4698]: I0127 14:29:59.977993 4698 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-14 00:26:05.855905773 +0000 UTC Jan 27 14:29:59 crc kubenswrapper[4698]: I0127 14:29:59.991414 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lpvsw" Jan 27 14:29:59 crc kubenswrapper[4698]: E0127 14:29:59.991568 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lpvsw" podUID="621bb20d-2ffa-4e89-b522-d04b4764fcc3" Jan 27 14:29:59 crc kubenswrapper[4698]: I0127 14:29:59.991618 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 14:29:59 crc kubenswrapper[4698]: E0127 14:29:59.991832 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 14:29:59 crc kubenswrapper[4698]: I0127 14:29:59.994420 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:29:59 crc kubenswrapper[4698]: I0127 14:29:59.994471 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:29:59 crc kubenswrapper[4698]: I0127 14:29:59.994483 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:29:59 crc kubenswrapper[4698]: I0127 14:29:59.994495 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:29:59 crc kubenswrapper[4698]: I0127 14:29:59.994582 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:29:59Z","lastTransitionTime":"2026-01-27T14:29:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:30:00 crc kubenswrapper[4698]: I0127 14:30:00.097033 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:00 crc kubenswrapper[4698]: I0127 14:30:00.097069 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:00 crc kubenswrapper[4698]: I0127 14:30:00.097085 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:00 crc kubenswrapper[4698]: I0127 14:30:00.097102 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:00 crc kubenswrapper[4698]: I0127 14:30:00.097112 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:00Z","lastTransitionTime":"2026-01-27T14:30:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:30:00 crc kubenswrapper[4698]: I0127 14:30:00.199856 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:00 crc kubenswrapper[4698]: I0127 14:30:00.199896 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:00 crc kubenswrapper[4698]: I0127 14:30:00.199906 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:00 crc kubenswrapper[4698]: I0127 14:30:00.199920 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:00 crc kubenswrapper[4698]: I0127 14:30:00.199930 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:00Z","lastTransitionTime":"2026-01-27T14:30:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:30:00 crc kubenswrapper[4698]: I0127 14:30:00.287410 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-xmpm6_c59a9d01-79ce-42d9-a41d-39d7d73cb03e/ovnkube-controller/2.log" Jan 27 14:30:00 crc kubenswrapper[4698]: I0127 14:30:00.292757 4698 scope.go:117] "RemoveContainer" containerID="0a99e5f8df86301415e9b49b4f197383e67e6a1b00cbf7e4e7f543002a0f1db4" Jan 27 14:30:00 crc kubenswrapper[4698]: E0127 14:30:00.293118 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-xmpm6_openshift-ovn-kubernetes(c59a9d01-79ce-42d9-a41d-39d7d73cb03e)\"" pod="openshift-ovn-kubernetes/ovnkube-node-xmpm6" podUID="c59a9d01-79ce-42d9-a41d-39d7d73cb03e" Jan 27 14:30:00 crc kubenswrapper[4698]: I0127 14:30:00.302256 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:00 crc kubenswrapper[4698]: I0127 14:30:00.302292 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:00 crc kubenswrapper[4698]: I0127 14:30:00.302303 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:00 crc kubenswrapper[4698]: I0127 14:30:00.302318 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:00 crc kubenswrapper[4698]: I0127 14:30:00.302330 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:00Z","lastTransitionTime":"2026-01-27T14:30:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:30:00 crc kubenswrapper[4698]: I0127 14:30:00.306435 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:30:00Z is after 2025-08-24T17:21:41Z" Jan 27 14:30:00 crc kubenswrapper[4698]: I0127 14:30:00.315982 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-g9vj8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2776dfc9-913b-42b0-9cf2-6fea98d83bc9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f471cea0dc934e7c77ac6ead7ec77f8feb7379b50a7bb5255d7100fbea635a3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bk7ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:29:28Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-g9vj8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:30:00Z is after 2025-08-24T17:21:41Z" Jan 27 14:30:00 crc kubenswrapper[4698]: I0127 14:30:00.327921 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-ndrd6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e403fc5-7005-474c-8c75-b7906b481677\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6f9227503b5cf0962f3fb8ece18f54d84ff2cdfe5628ebf0fa0cef7fe5695c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tt2dz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c533539135e25b310541c9f0b12adcf526b4651a2f6272bce12ea9686dd39ac9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tt2dz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:29:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-ndrd6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:30:00Z is after 2025-08-24T17:21:41Z" Jan 27 14:30:00 crc kubenswrapper[4698]: I0127 14:30:00.344100 4698 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-additional-cni-plugins-vg6nd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e045926d-2303-47ea-b25d-dc23982427e4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e1bf1729f089f253e8e1259e40ef976b85e433358642b6231f081c494851048\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66snv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c935f517f9ca140d63d1637ef8c01a4a41b94fa72e1086eb6184ae78a6d8da8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c935f517f9ca140d63d1637ef8c01a4a41b94fa72e1086eb6184ae78a6d8da8c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:29:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:29:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66snv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa061cb18143e07dc72a6b75d78b5959b40a9969030ff21d402863b85eba6f91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2c
c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa061cb18143e07dc72a6b75d78b5959b40a9969030ff21d402863b85eba6f91\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:29:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:29:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66snv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be40eff60b17b2c010a5e4394fb6843ceb8ee2752781ccbf0030afdd212b81e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be40eff60b17b2c010a5e4394fb6843ceb8ee2752781ccbf0030afdd212b81e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:29:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:29:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66snv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3018d87c9354e8b8d90c59a4a45e982d0b35b89a0b0a9fde703e9452cf5f0024\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3018d87c9354e8b8d90c59a4a45e982d0b35b89a0b0a9fde703e9452cf5f0024\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:29:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:29:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-re
lease\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66snv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de493498bbac49791f6b7409916fbbb62f64bed0def214ea6a06f5182bf7ad3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://de493498bbac49791f6b7409916fbbb62f64bed0def214ea6a06f5182bf7ad3b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:29:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:29:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66snv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://14af7d5dd3526aef22f7e95409ff88c3bbbc29e6dd9dd7c41b9be5bab49bf074\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://14af7d5dd3526aef22f7e95409ff88c3bbbc29e6dd9dd7c41b9be5bab49bf074\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:29:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:29:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66snv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:29:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-vg6nd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:30:00Z is after 2025-08-24T17:21:41Z" Jan 27 14:30:00 crc kubenswrapper[4698]: I0127 14:30:00.362381 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xmpm6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c59a9d01-79ce-42d9-a41d-39d7d73cb03e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:29Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:29Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d81778ae40167de932debf7266f3c793d77edeace2116220c21cf89e15e4cc98\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r7gpg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea6d17f5861064198539f0b84b559f9d28cfa5decaa89a9059469af5a3e12e09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r7gpg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c98fd6e3005c979f95419686ec2143aa282fa56d7306ecaff766759e34b68b2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r7gpg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e38585fe9db52e6e7cc5c3a41e23a0bb761977a3b25fd281d86458f422e3791\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r7gpg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b80c023f92c9854f8c16b6b50c10dac58f968938615375019ba2875d9cd69e5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r7gpg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://87e7499ab247dff50eee2a5e113ed36ff00c467e8e1bcdfee45e8835815d4b81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r7gpg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a99e5f8df86301415e9b49b4f197383e67e6a1b00cbf7e4e7f543002a0f1db4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0a99e5f8df86301415e9b49b4f197383e67e6a1b00cbf7e4e7f543002a0f1db4\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T14:29:58Z\\\",\\\"message\\\":\\\"il\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0127 14:29:58.516254 6338 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-diagnostics/network-check-source-55646444c4-trplf\\\\nI0127 14:29:58.516264 6338 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-diagnostics/network-check-source-55646444c4-trplf\\\\nI0127 14:29:58.516271 6338 ovn.go:134] Ensuring zone local for Pod openshift-network-diagnostics/network-check-source-55646444c4-trplf in node crc\\\\nF0127 14:29:58.516287 6338 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:29:58Z is after 2025-08-24T17:21:41Z]\\\\nI0127 14:29:58.516302 6338 obj_retry.go:303\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T14:29:56Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-xmpm6_openshift-ovn-kubernetes(c59a9d01-79ce-42d9-a41d-39d7d73cb03e)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r7gpg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0356c9c90e9c8e65b9452361aa486ffe95757bd69803077b32fe7ba0223ae1da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r7gpg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3a80917737e61fdb510dabf332c3bd87fb858d27849c42b77703a8becb66b4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b3a80917737e61fdb510dabf332c3bd87fb858d27849c42b77703a8becb66b4c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:29:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:29:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r7gpg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:29:29Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-xmpm6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:30:00Z is after 2025-08-24T17:21:41Z" Jan 27 14:30:00 crc kubenswrapper[4698]: I0127 14:30:00.374910 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-lpvsw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"621bb20d-2ffa-4e89-b522-d04b4764fcc3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:42Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:42Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gz87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gz87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:29:42Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-lpvsw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:30:00Z is after 2025-08-24T17:21:41Z" Jan 27 14:30:00 crc kubenswrapper[4698]: I0127 14:30:00.387406 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a67da6e9-5dc3-469a-8a1a-a2b287e96281\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b99ac884289d1dcab871d1db10e9992389170de25aeb71d84aaad1348eafd4fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://21e39904f887483a435cadd506be29c1513b2c9dbc144a61549f74f2c93fa6a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8cd43402ec8ba658de9fc9d84d14600829a8ae019aceb606fc2bf781dbe13ddb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c9cbb07f8bc0c4d5171813dd406a52fc3c99985102f65c43e8e04bf8acc21f1b\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c9cbb07f8bc0c4d5171813dd406a52fc3c99985102f65c43e8e04bf8acc21f1b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:29:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:29:06Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:29:05Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:30:00Z is after 2025-08-24T17:21:41Z" Jan 27 14:30:00 crc kubenswrapper[4698]: I0127 14:30:00.399546 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae7402df1ba165eb747293c1d23e4ea89c50cf2342bf52db437c11c283f68755\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:30:00Z is after 
2025-08-24T17:21:41Z" Jan 27 14:30:00 crc kubenswrapper[4698]: I0127 14:30:00.404266 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:00 crc kubenswrapper[4698]: I0127 14:30:00.404298 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:00 crc kubenswrapper[4698]: I0127 14:30:00.404306 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:00 crc kubenswrapper[4698]: I0127 14:30:00.404321 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:00 crc kubenswrapper[4698]: I0127 14:30:00.404329 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:00Z","lastTransitionTime":"2026-01-27T14:30:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:30:00 crc kubenswrapper[4698]: I0127 14:30:00.411477 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:24Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:30:00Z is after 2025-08-24T17:21:41Z" Jan 27 14:30:00 crc kubenswrapper[4698]: I0127 14:30:00.422086 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8dae57de05788973b988fee332168db475b66bb71b4a0570e7d95341dfe84a96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:30:00Z is after 2025-08-24T17:21:41Z" Jan 27 14:30:00 crc kubenswrapper[4698]: I0127 14:30:00.430961 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-flx9b" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2c9fcf55-4a50-4a87-937b-975bc7e00bfa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0cb86e0c67991a076cebc33eb390f49c0f4c40332c423936dd8f3c59b9f7474b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-spqpw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:29:31Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-flx9b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:30:00Z is after 2025-08-24T17:21:41Z" Jan 27 14:30:00 crc kubenswrapper[4698]: I0127 14:30:00.447026 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:24Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:30:00Z is after 2025-08-24T17:21:41Z" Jan 27 14:30:00 crc kubenswrapper[4698]: I0127 14:30:00.463617 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://625a1f8c29f9277903045e66df7a55a4d791239b141aba99572db228ea2735ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://314dcaad0f8949a4671b47667f654bdfacfb6b200438cfbd70401455f6e188b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io
/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:30:00Z is after 2025-08-24T17:21:41Z" Jan 27 14:30:00 crc kubenswrapper[4698]: I0127 14:30:00.475323 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0ce1118e-e5ad-4adb-8d50-c758116b45ec\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dbbe49aa918c711f9ec638a3fb42ef3be38a9ed3bc0cdde639ba8bba11eab4de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9525d1a6372b19bc2f30eb4e8e522833badb6319bac25ecae77b3313acf1b1a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\
"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e384a95fdadc32733a7213ba468af64a1a33e7dd08722cbffde3ad0c461d8ba\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9ee5f5f6bb400359196a28fe605c173c38223c8982a991750c91601aeace150\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e780429dcca871712b55832b0cd0b3e78b7343569cfa71a87bbb2be1bd3f129\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T14:29:24Z\\\",\\\"message\\\":\\\"espace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0127 14:29:09.491666 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 14:29:09.493926 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-900123067/tls.crt::/tmp/serving-cert-900123067/tls.key\\\\\\\"\\\\nI0127 14:29:24.319898 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 14:29:24.322681 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 14:29:24.322759 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 14:29:24.322814 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 14:29:24.322847 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 14:29:24.331581 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 14:29:24.331608 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 14:29:24.331613 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 14:29:24.331618 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 14:29:24.331622 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 14:29:24.331626 1 secure_serving.go:69] Use 
of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 14:29:24.331630 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 14:29:24.331748 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 14:29:24.333757 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T14:29:08Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a526cfa90f2ba7d074aa6007c909e68f72d5e7a4b91d893c955e7460ff208cfb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:08Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e1a3e5a71b3094750c736335cd70f45977566e58702231351d9d6b195a0cb4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5e1a3e5a71b3094750c736335cd70f45977566e58702231351d9d6b195a0cb4b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:29:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:29:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:29:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:30:00Z is after 2025-08-24T17:21:41Z" Jan 27 14:30:00 crc kubenswrapper[4698]: I0127 14:30:00.486149 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"44c70587-abb1-4e02-ab7e-223d57817925\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b1a209c31294456c43022bb9df5fd230415596fc43ecdb2b6349114989c3ad8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://66b4b527d7836ceed419f9b8a1d851047efc8c052c110ac857316a6a73444333\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://94a4d586611f4ca0864567739d1ed225b7c63ebd6e6be9e5c0e385379de2ad01\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://95357819c206b188c1fd6bb937b2a57b480a9a61e95143de444141b16e4963ea\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:29:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:30:00Z is after 2025-08-24T17:21:41Z" Jan 27 14:30:00 crc kubenswrapper[4698]: I0127 14:30:00.497624 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-g2kkn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4e135f0c-0c36-44f4-afeb-06994affb352\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d17862aabd67b485023ca018499f7224e0bd91ec0aed6bbce5a187f319e3140\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run
/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97l66\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:29:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-g2kkn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:30:00Z is after 2025-08-24T17:21:41Z" Jan 27 14:30:00 crc kubenswrapper[4698]: I0127 14:30:00.506509 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:00 crc kubenswrapper[4698]: I0127 14:30:00.506569 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:00 crc kubenswrapper[4698]: I0127 14:30:00.506580 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:00 crc kubenswrapper[4698]: I0127 14:30:00.506594 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:00 crc kubenswrapper[4698]: I0127 14:30:00.506605 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:00Z","lastTransitionTime":"2026-01-27T14:30:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 27 14:30:00 crc kubenswrapper[4698]: I0127 14:30:00.509752 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zpvcj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"709dfdd7-f928-4f0b-8f5a-c356614219cb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6bec8d64d7f0b7a45b57b810dcf1de846a388902b480230742da0a40da4e67b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pftbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5fb54d7728b2438444c924295722867c673c8b82768ed17b6dd2104d404e198e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pftbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:29:41Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-zpvcj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:30:00Z is after 2025-08-24T17:21:41Z"
Jan 27 14:30:00 crc kubenswrapper[4698]: I0127 14:30:00.608745 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 14:30:00 crc kubenswrapper[4698]: I0127 14:30:00.608789 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 14:30:00 crc kubenswrapper[4698]: I0127 14:30:00.608800 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 14:30:00 crc kubenswrapper[4698]: I0127 14:30:00.608815 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 14:30:00 crc kubenswrapper[4698]: I0127 14:30:00.608824 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:00Z","lastTransitionTime":"2026-01-27T14:30:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 14:30:00 crc kubenswrapper[4698]: I0127 14:30:00.711385 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 14:30:00 crc kubenswrapper[4698]: I0127 14:30:00.711424 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 14:30:00 crc kubenswrapper[4698]: I0127 14:30:00.711435 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 14:30:00 crc kubenswrapper[4698]: I0127 14:30:00.711450 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 14:30:00 crc kubenswrapper[4698]: I0127 14:30:00.711462 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:00Z","lastTransitionTime":"2026-01-27T14:30:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 14:30:00 crc kubenswrapper[4698]: I0127 14:30:00.814314 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 14:30:00 crc kubenswrapper[4698]: I0127 14:30:00.814358 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 14:30:00 crc kubenswrapper[4698]: I0127 14:30:00.814370 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 14:30:00 crc kubenswrapper[4698]: I0127 14:30:00.814387 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 14:30:00 crc kubenswrapper[4698]: I0127 14:30:00.814399 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:00Z","lastTransitionTime":"2026-01-27T14:30:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 14:30:00 crc kubenswrapper[4698]: I0127 14:30:00.916913 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 14:30:00 crc kubenswrapper[4698]: I0127 14:30:00.916955 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 14:30:00 crc kubenswrapper[4698]: I0127 14:30:00.916966 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 14:30:00 crc kubenswrapper[4698]: I0127 14:30:00.916982 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 14:30:00 crc kubenswrapper[4698]: I0127 14:30:00.916993 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:00Z","lastTransitionTime":"2026-01-27T14:30:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 14:30:00 crc kubenswrapper[4698]: I0127 14:30:00.979071 4698 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-08 21:53:03.191924722 +0000 UTC
Jan 27 14:30:00 crc kubenswrapper[4698]: I0127 14:30:00.991681 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 27 14:30:00 crc kubenswrapper[4698]: I0127 14:30:00.991756 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 27 14:30:00 crc kubenswrapper[4698]: E0127 14:30:00.991822 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 27 14:30:00 crc kubenswrapper[4698]: E0127 14:30:00.991959 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 27 14:30:01 crc kubenswrapper[4698]: I0127 14:30:01.019619 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 14:30:01 crc kubenswrapper[4698]: I0127 14:30:01.019686 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 14:30:01 crc kubenswrapper[4698]: I0127 14:30:01.019699 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 14:30:01 crc kubenswrapper[4698]: I0127 14:30:01.019718 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 14:30:01 crc kubenswrapper[4698]: I0127 14:30:01.019729 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:01Z","lastTransitionTime":"2026-01-27T14:30:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 14:30:01 crc kubenswrapper[4698]: I0127 14:30:01.122449 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 14:30:01 crc kubenswrapper[4698]: I0127 14:30:01.122505 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 14:30:01 crc kubenswrapper[4698]: I0127 14:30:01.122517 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 14:30:01 crc kubenswrapper[4698]: I0127 14:30:01.122541 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 14:30:01 crc kubenswrapper[4698]: I0127 14:30:01.122557 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:01Z","lastTransitionTime":"2026-01-27T14:30:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 14:30:01 crc kubenswrapper[4698]: I0127 14:30:01.224973 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 14:30:01 crc kubenswrapper[4698]: I0127 14:30:01.225023 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 14:30:01 crc kubenswrapper[4698]: I0127 14:30:01.225035 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 14:30:01 crc kubenswrapper[4698]: I0127 14:30:01.225052 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 14:30:01 crc kubenswrapper[4698]: I0127 14:30:01.225065 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:01Z","lastTransitionTime":"2026-01-27T14:30:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 14:30:01 crc kubenswrapper[4698]: I0127 14:30:01.328323 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 14:30:01 crc kubenswrapper[4698]: I0127 14:30:01.328356 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 14:30:01 crc kubenswrapper[4698]: I0127 14:30:01.328365 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 14:30:01 crc kubenswrapper[4698]: I0127 14:30:01.328380 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 14:30:01 crc kubenswrapper[4698]: I0127 14:30:01.328388 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:01Z","lastTransitionTime":"2026-01-27T14:30:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 14:30:01 crc kubenswrapper[4698]: I0127 14:30:01.430984 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 14:30:01 crc kubenswrapper[4698]: I0127 14:30:01.431024 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 14:30:01 crc kubenswrapper[4698]: I0127 14:30:01.431033 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 14:30:01 crc kubenswrapper[4698]: I0127 14:30:01.431047 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 14:30:01 crc kubenswrapper[4698]: I0127 14:30:01.431056 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:01Z","lastTransitionTime":"2026-01-27T14:30:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 14:30:01 crc kubenswrapper[4698]: I0127 14:30:01.533745 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 14:30:01 crc kubenswrapper[4698]: I0127 14:30:01.533791 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 14:30:01 crc kubenswrapper[4698]: I0127 14:30:01.533800 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 14:30:01 crc kubenswrapper[4698]: I0127 14:30:01.533815 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 14:30:01 crc kubenswrapper[4698]: I0127 14:30:01.533826 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:01Z","lastTransitionTime":"2026-01-27T14:30:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 14:30:01 crc kubenswrapper[4698]: I0127 14:30:01.636066 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 14:30:01 crc kubenswrapper[4698]: I0127 14:30:01.636109 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 14:30:01 crc kubenswrapper[4698]: I0127 14:30:01.636122 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 14:30:01 crc kubenswrapper[4698]: I0127 14:30:01.636139 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 14:30:01 crc kubenswrapper[4698]: I0127 14:30:01.636149 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:01Z","lastTransitionTime":"2026-01-27T14:30:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 14:30:01 crc kubenswrapper[4698]: I0127 14:30:01.727886 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 14:30:01 crc kubenswrapper[4698]: I0127 14:30:01.727994 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 14:30:01 crc kubenswrapper[4698]: I0127 14:30:01.728019 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 14:30:01 crc kubenswrapper[4698]: I0127 14:30:01.728049 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 14:30:01 crc kubenswrapper[4698]: I0127 14:30:01.728070 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:01Z","lastTransitionTime":"2026-01-27T14:30:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
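The x509 failure above is the one that recurs through this boot: the serving certificate presented by the network-node-identity webhook on 127.0.0.1:9743 expired 2025-08-24T17:21:41Z, while the node clock reads 2026-01-27. A minimal Go sketch (stdlib only; the address is the one from the failed Post in the log, everything else is illustrative) that dials the endpoint and prints the validity window of the certificate it presents, making the same now-versus-NotAfter comparison the verifier reports:

    // check_webhook_cert.go: inspect the cert the webhook endpoint presents.
    package main

    import (
        "crypto/tls"
        "fmt"
        "time"
    )

    func main() {
        addr := "127.0.0.1:9743" // webhook endpoint from the failed Post above

        // Skip chain verification on purpose: a normal handshake would fail
        // exactly as kubelet's webhook call did, before we could read the cert.
        conn, err := tls.Dial("tcp", addr, &tls.Config{InsecureSkipVerify: true})
        if err != nil {
            fmt.Println("dial failed:", err)
            return
        }
        defer conn.Close()

        cert := conn.ConnectionState().PeerCertificates[0]
        fmt.Printf("subject:   %s\n", cert.Subject)
        fmt.Printf("notBefore: %s\n", cert.NotBefore.Format(time.RFC3339))
        fmt.Printf("notAfter:  %s\n", cert.NotAfter.Format(time.RFC3339))
        // Same check behind "current time ... is after ..." in the log:
        if time.Now().After(cert.NotAfter) {
            fmt.Println("certificate is expired")
        }
    }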
Jan 27 14:30:01 crc kubenswrapper[4698]: E0127 14:30:01.741865 4698 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T14:30:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T14:30:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T14:30:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T14:30:01Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T14:30:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T14:30:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T14:30:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T14:30:01Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9d78a2be-22ac-47e6-a326-83038cc10e0c\\\",\\\"systemUUID\\\":\\\"3b71cf61-a3fa-4076-a23c-5d695e40fc0d\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:30:01Z is after 2025-08-24T17:21:41Z" Jan 27 14:30:01 crc kubenswrapper[4698]: I0127 14:30:01.745697 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:01 crc kubenswrapper[4698]: I0127 14:30:01.745734 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 27 14:30:01 crc kubenswrapper[4698]: I0127 14:30:01.745743 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:01 crc kubenswrapper[4698]: I0127 14:30:01.745758 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:01 crc kubenswrapper[4698]: I0127 14:30:01.745768 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:01Z","lastTransitionTime":"2026-01-27T14:30:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:30:01 crc kubenswrapper[4698]: E0127 14:30:01.760851 4698 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T14:30:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T14:30:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T14:30:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T14:30:01Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T14:30:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T14:30:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T14:30:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T14:30:01Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9d78a2be-22ac-47e6-a326-83038cc10e0c\\\",\\\"systemUUID\\\":\\\"3b71cf61-a3fa-4076-a23c-5d695e40fc0d\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:30:01Z is after 2025-08-24T17:21:41Z" Jan 27 14:30:01 crc kubenswrapper[4698]: I0127 14:30:01.764113 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:01 crc kubenswrapper[4698]: I0127 14:30:01.764157 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
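Each setters.go:603 entry above embeds the node's Ready condition as a JSON object after condition=. A small sketch that cuts that payload out of such a line and decodes it with encoding/json; the line shape is assumed from this log, and the sample message is shortened:

    // parse_condition.go: decode the condition={...} payload from a kubelet line.
    package main

    import (
        "encoding/json"
        "fmt"
        "strings"
    )

    type nodeCondition struct {
        Type               string `json:"type"`
        Status             string `json:"status"`
        LastHeartbeatTime  string `json:"lastHeartbeatTime"`
        LastTransitionTime string `json:"lastTransitionTime"`
        Reason             string `json:"reason"`
        Message            string `json:"message"`
    }

    func main() {
        // Shortened sample in the shape of the setters.go:603 lines above.
        line := `I0127 14:30:01.019729 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:01Z","lastTransitionTime":"2026-01-27T14:30:01Z","reason":"KubeletNotReady","message":"container runtime network not ready"}`

        // Everything after "condition=" is one JSON object; cut it out.
        _, raw, ok := strings.Cut(line, "condition=")
        if !ok {
            fmt.Println("no condition payload in line")
            return
        }
        var c nodeCondition
        if err := json.Unmarshal([]byte(raw), &c); err != nil {
            fmt.Println("decode failed:", err)
            return
        }
        fmt.Printf("%s=%s reason=%s\n  %s\n", c.Type, c.Status, c.Reason, c.Message)
    }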
event="NodeHasNoDiskPressure" Jan 27 14:30:01 crc kubenswrapper[4698]: I0127 14:30:01.764180 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:01 crc kubenswrapper[4698]: I0127 14:30:01.764196 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:01 crc kubenswrapper[4698]: I0127 14:30:01.764208 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:01Z","lastTransitionTime":"2026-01-27T14:30:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:30:01 crc kubenswrapper[4698]: E0127 14:30:01.778403 4698 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T14:30:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T14:30:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T14:30:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T14:30:01Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T14:30:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T14:30:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T14:30:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T14:30:01Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9d78a2be-22ac-47e6-a326-83038cc10e0c\\\",\\\"systemUUID\\\":\\\"3b71cf61-a3fa-4076-a23c-5d695e40fc0d\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:30:01Z is after 2025-08-24T17:21:41Z" Jan 27 14:30:01 crc kubenswrapper[4698]: I0127 14:30:01.781408 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:01 crc kubenswrapper[4698]: I0127 14:30:01.781525 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
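On the certificate_manager.go:356 entry earlier (kubelet-serving certificate expiring 2026-02-24 05:53:03 UTC, rotation deadline 2025-11-08 21:53:03 UTC, already past by the node clock): client-go's certificate manager picks the rotation deadline at a jittered point, roughly 70 to 90 percent of the way through the certificate's validity. A sketch under that assumption; the notBefore below is a guessed one-year issuance date, not a value from the log:

    // rotation_deadline.go: approximate the jittered rotation-deadline policy.
    package main

    import (
        "fmt"
        "math/rand"
        "time"
    )

    // rotationDeadline lands uniformly in the 70-90% span of the validity window
    // (assumed policy; mirrors the shape of the logged deadline, not its code).
    func rotationDeadline(notBefore, notAfter time.Time) time.Time {
        total := notAfter.Sub(notBefore)
        frac := 0.7 + 0.2*rand.Float64()
        return notBefore.Add(time.Duration(frac * float64(total)))
    }

    func main() {
        // notAfter is the logged expiration; notBefore is an assumed issuance
        // date one year earlier (typical for a kubelet-serving cert).
        notBefore := time.Date(2025, 2, 24, 5, 53, 3, 0, time.UTC)
        notAfter := time.Date(2026, 2, 24, 5, 53, 3, 0, time.UTC)

        fmt.Println("rotation deadline:", rotationDeadline(notBefore, notAfter).Format(time.RFC3339))
        // The logged deadline 2025-11-08T21:53:03Z sits about 70% into this
        // window, consistent with the jitter sketched here.
    }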
event="NodeHasNoDiskPressure" Jan 27 14:30:01 crc kubenswrapper[4698]: I0127 14:30:01.781591 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:01 crc kubenswrapper[4698]: I0127 14:30:01.781677 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:01 crc kubenswrapper[4698]: I0127 14:30:01.781739 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:01Z","lastTransitionTime":"2026-01-27T14:30:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:30:01 crc kubenswrapper[4698]: E0127 14:30:01.793602 4698 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T14:30:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T14:30:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T14:30:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T14:30:01Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T14:30:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T14:30:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T14:30:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T14:30:01Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9d78a2be-22ac-47e6-a326-83038cc10e0c\\\",\\\"systemUUID\\\":\\\"3b71cf61-a3fa-4076-a23c-5d695e40fc0d\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:30:01Z is after 2025-08-24T17:21:41Z" Jan 27 14:30:01 crc kubenswrapper[4698]: I0127 14:30:01.797470 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:01 crc kubenswrapper[4698]: I0127 14:30:01.797587 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 27 14:30:01 crc kubenswrapper[4698]: I0127 14:30:01.797686 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:01 crc kubenswrapper[4698]: I0127 14:30:01.797770 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:01 crc kubenswrapper[4698]: I0127 14:30:01.797851 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:01Z","lastTransitionTime":"2026-01-27T14:30:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:30:01 crc kubenswrapper[4698]: E0127 14:30:01.809970 4698 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T14:30:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T14:30:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T14:30:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T14:30:01Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T14:30:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T14:30:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T14:30:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T14:30:01Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9d78a2be-22ac-47e6-a326-83038cc10e0c\\\",\\\"systemUUID\\\":\\\"3b71cf61-a3fa-4076-a23c-5d695e40fc0d\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:30:01Z is after 2025-08-24T17:21:41Z" Jan 27 14:30:01 crc kubenswrapper[4698]: E0127 14:30:01.810342 4698 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 27 14:30:01 crc kubenswrapper[4698]: I0127 14:30:01.811814 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
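
Every failed status patch in this stretch bottoms out on the same root cause: the serving certificate of the "node.network-node-identity.openshift.io" webhook behind https://127.0.0.1:9743 expired on 2025-08-24T17:21:41Z, while the node clock reads 2026-01-27. A minimal diagnostic sketch, assuming Python 3 on the node with the third-party cryptography package installed and the webhook still listening on that port, fetches the certificate without verification and prints its validity window:

    import ssl
    from datetime import datetime, timezone
    from cryptography import x509  # third-party package, assumed available here

    HOST, PORT = "127.0.0.1", 9743  # webhook endpoint taken from the log line above

    # get_server_certificate() completes the handshake without verifying the
    # peer, so it still works even though the certificate is already expired.
    pem = ssl.get_server_certificate((HOST, PORT))
    cert = x509.load_pem_x509_certificate(pem.encode())

    not_after = cert.not_valid_after.replace(tzinfo=timezone.utc)
    now = datetime.now(timezone.utc)
    print("notAfter:", not_after)
    print("expired by:", now - not_after if now > not_after else "not expired")

If the printed window matches the 2025-08-24 expiry from the error, the webhook certificate itself needs to be rotated; if it does not, a skewed node clock would produce the same x509 failure.
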
event="NodeHasSufficientMemory" Jan 27 14:30:01 crc kubenswrapper[4698]: I0127 14:30:01.811916 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:01 crc kubenswrapper[4698]: I0127 14:30:01.811992 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:01 crc kubenswrapper[4698]: I0127 14:30:01.812056 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:01 crc kubenswrapper[4698]: I0127 14:30:01.812123 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:01Z","lastTransitionTime":"2026-01-27T14:30:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:30:01 crc kubenswrapper[4698]: I0127 14:30:01.914391 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:01 crc kubenswrapper[4698]: I0127 14:30:01.914446 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:01 crc kubenswrapper[4698]: I0127 14:30:01.914458 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:01 crc kubenswrapper[4698]: I0127 14:30:01.914477 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:01 crc kubenswrapper[4698]: I0127 14:30:01.914490 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:01Z","lastTransitionTime":"2026-01-27T14:30:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:30:01 crc kubenswrapper[4698]: I0127 14:30:01.979308 4698 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-25 01:18:04.475587622 +0000 UTC Jan 27 14:30:01 crc kubenswrapper[4698]: I0127 14:30:01.991613 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lpvsw" Jan 27 14:30:01 crc kubenswrapper[4698]: E0127 14:30:01.991778 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lpvsw" podUID="621bb20d-2ffa-4e89-b522-d04b4764fcc3" Jan 27 14:30:01 crc kubenswrapper[4698]: I0127 14:30:01.992024 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 14:30:01 crc kubenswrapper[4698]: E0127 14:30:01.992228 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 14:30:02 crc kubenswrapper[4698]: I0127 14:30:02.016280 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:02 crc kubenswrapper[4698]: I0127 14:30:02.016331 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:02 crc kubenswrapper[4698]: I0127 14:30:02.016347 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:02 crc kubenswrapper[4698]: I0127 14:30:02.016377 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:02 crc kubenswrapper[4698]: I0127 14:30:02.016395 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:02Z","lastTransitionTime":"2026-01-27T14:30:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:30:02 crc kubenswrapper[4698]: I0127 14:30:02.123044 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:02 crc kubenswrapper[4698]: I0127 14:30:02.123113 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:02 crc kubenswrapper[4698]: I0127 14:30:02.123126 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:02 crc kubenswrapper[4698]: I0127 14:30:02.123147 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:02 crc kubenswrapper[4698]: I0127 14:30:02.123161 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:02Z","lastTransitionTime":"2026-01-27T14:30:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:30:02 crc kubenswrapper[4698]: I0127 14:30:02.226200 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:02 crc kubenswrapper[4698]: I0127 14:30:02.226251 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:02 crc kubenswrapper[4698]: I0127 14:30:02.226261 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:02 crc kubenswrapper[4698]: I0127 14:30:02.226276 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:02 crc kubenswrapper[4698]: I0127 14:30:02.226287 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:02Z","lastTransitionTime":"2026-01-27T14:30:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:30:02 crc kubenswrapper[4698]: I0127 14:30:02.329741 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:02 crc kubenswrapper[4698]: I0127 14:30:02.329796 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:02 crc kubenswrapper[4698]: I0127 14:30:02.329826 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:02 crc kubenswrapper[4698]: I0127 14:30:02.329848 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:02 crc kubenswrapper[4698]: I0127 14:30:02.329861 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:02Z","lastTransitionTime":"2026-01-27T14:30:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:30:02 crc kubenswrapper[4698]: I0127 14:30:02.432796 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:02 crc kubenswrapper[4698]: I0127 14:30:02.432849 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:02 crc kubenswrapper[4698]: I0127 14:30:02.432860 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:02 crc kubenswrapper[4698]: I0127 14:30:02.432875 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:02 crc kubenswrapper[4698]: I0127 14:30:02.432884 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:02Z","lastTransitionTime":"2026-01-27T14:30:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:30:02 crc kubenswrapper[4698]: I0127 14:30:02.535718 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:02 crc kubenswrapper[4698]: I0127 14:30:02.535771 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:02 crc kubenswrapper[4698]: I0127 14:30:02.535784 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:02 crc kubenswrapper[4698]: I0127 14:30:02.535803 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:02 crc kubenswrapper[4698]: I0127 14:30:02.535817 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:02Z","lastTransitionTime":"2026-01-27T14:30:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:30:02 crc kubenswrapper[4698]: I0127 14:30:02.639630 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:02 crc kubenswrapper[4698]: I0127 14:30:02.639704 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:02 crc kubenswrapper[4698]: I0127 14:30:02.639717 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:02 crc kubenswrapper[4698]: I0127 14:30:02.639742 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:02 crc kubenswrapper[4698]: I0127 14:30:02.639758 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:02Z","lastTransitionTime":"2026-01-27T14:30:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:30:02 crc kubenswrapper[4698]: I0127 14:30:02.741925 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:02 crc kubenswrapper[4698]: I0127 14:30:02.741961 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:02 crc kubenswrapper[4698]: I0127 14:30:02.741972 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:02 crc kubenswrapper[4698]: I0127 14:30:02.741991 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:02 crc kubenswrapper[4698]: I0127 14:30:02.742004 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:02Z","lastTransitionTime":"2026-01-27T14:30:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:30:02 crc kubenswrapper[4698]: I0127 14:30:02.843836 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:02 crc kubenswrapper[4698]: I0127 14:30:02.843879 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:02 crc kubenswrapper[4698]: I0127 14:30:02.843890 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:02 crc kubenswrapper[4698]: I0127 14:30:02.843905 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:02 crc kubenswrapper[4698]: I0127 14:30:02.843916 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:02Z","lastTransitionTime":"2026-01-27T14:30:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:30:02 crc kubenswrapper[4698]: I0127 14:30:02.947195 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:02 crc kubenswrapper[4698]: I0127 14:30:02.947241 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:02 crc kubenswrapper[4698]: I0127 14:30:02.947253 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:02 crc kubenswrapper[4698]: I0127 14:30:02.947272 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:02 crc kubenswrapper[4698]: I0127 14:30:02.947282 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:02Z","lastTransitionTime":"2026-01-27T14:30:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:30:02 crc kubenswrapper[4698]: I0127 14:30:02.980970 4698 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-11 12:25:16.499121277 +0000 UTC Jan 27 14:30:02 crc kubenswrapper[4698]: I0127 14:30:02.991368 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 14:30:02 crc kubenswrapper[4698]: I0127 14:30:02.991457 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 14:30:02 crc kubenswrapper[4698]: E0127 14:30:02.991509 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 14:30:02 crc kubenswrapper[4698]: E0127 14:30:02.991619 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 14:30:03 crc kubenswrapper[4698]: I0127 14:30:03.050595 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:03 crc kubenswrapper[4698]: I0127 14:30:03.050668 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:03 crc kubenswrapper[4698]: I0127 14:30:03.050686 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:03 crc kubenswrapper[4698]: I0127 14:30:03.050706 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:03 crc kubenswrapper[4698]: I0127 14:30:03.050721 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:03Z","lastTransitionTime":"2026-01-27T14:30:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:30:03 crc kubenswrapper[4698]: I0127 14:30:03.153576 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:03 crc kubenswrapper[4698]: I0127 14:30:03.153621 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:03 crc kubenswrapper[4698]: I0127 14:30:03.153630 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:03 crc kubenswrapper[4698]: I0127 14:30:03.153658 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:03 crc kubenswrapper[4698]: I0127 14:30:03.153668 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:03Z","lastTransitionTime":"2026-01-27T14:30:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:30:03 crc kubenswrapper[4698]: I0127 14:30:03.256371 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:03 crc kubenswrapper[4698]: I0127 14:30:03.256418 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:03 crc kubenswrapper[4698]: I0127 14:30:03.256428 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:03 crc kubenswrapper[4698]: I0127 14:30:03.256445 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:03 crc kubenswrapper[4698]: I0127 14:30:03.256456 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:03Z","lastTransitionTime":"2026-01-27T14:30:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:30:03 crc kubenswrapper[4698]: I0127 14:30:03.358686 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:03 crc kubenswrapper[4698]: I0127 14:30:03.358731 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:03 crc kubenswrapper[4698]: I0127 14:30:03.358742 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:03 crc kubenswrapper[4698]: I0127 14:30:03.358758 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:03 crc kubenswrapper[4698]: I0127 14:30:03.358770 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:03Z","lastTransitionTime":"2026-01-27T14:30:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:30:03 crc kubenswrapper[4698]: I0127 14:30:03.462146 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:03 crc kubenswrapper[4698]: I0127 14:30:03.462185 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:03 crc kubenswrapper[4698]: I0127 14:30:03.462194 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:03 crc kubenswrapper[4698]: I0127 14:30:03.462209 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:03 crc kubenswrapper[4698]: I0127 14:30:03.462218 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:03Z","lastTransitionTime":"2026-01-27T14:30:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:30:03 crc kubenswrapper[4698]: I0127 14:30:03.565551 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:03 crc kubenswrapper[4698]: I0127 14:30:03.565603 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:03 crc kubenswrapper[4698]: I0127 14:30:03.565612 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:03 crc kubenswrapper[4698]: I0127 14:30:03.565630 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:03 crc kubenswrapper[4698]: I0127 14:30:03.565655 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:03Z","lastTransitionTime":"2026-01-27T14:30:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:30:03 crc kubenswrapper[4698]: I0127 14:30:03.668449 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:03 crc kubenswrapper[4698]: I0127 14:30:03.668487 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:03 crc kubenswrapper[4698]: I0127 14:30:03.668497 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:03 crc kubenswrapper[4698]: I0127 14:30:03.668513 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:03 crc kubenswrapper[4698]: I0127 14:30:03.668525 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:03Z","lastTransitionTime":"2026-01-27T14:30:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:30:03 crc kubenswrapper[4698]: I0127 14:30:03.771274 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:03 crc kubenswrapper[4698]: I0127 14:30:03.771336 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:03 crc kubenswrapper[4698]: I0127 14:30:03.771347 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:03 crc kubenswrapper[4698]: I0127 14:30:03.771364 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:03 crc kubenswrapper[4698]: I0127 14:30:03.771376 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:03Z","lastTransitionTime":"2026-01-27T14:30:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:30:03 crc kubenswrapper[4698]: I0127 14:30:03.873389 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:03 crc kubenswrapper[4698]: I0127 14:30:03.873441 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:03 crc kubenswrapper[4698]: I0127 14:30:03.873453 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:03 crc kubenswrapper[4698]: I0127 14:30:03.873469 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:03 crc kubenswrapper[4698]: I0127 14:30:03.873481 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:03Z","lastTransitionTime":"2026-01-27T14:30:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:30:03 crc kubenswrapper[4698]: I0127 14:30:03.976063 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:03 crc kubenswrapper[4698]: I0127 14:30:03.976111 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:03 crc kubenswrapper[4698]: I0127 14:30:03.976123 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:03 crc kubenswrapper[4698]: I0127 14:30:03.976141 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:03 crc kubenswrapper[4698]: I0127 14:30:03.976152 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:03Z","lastTransitionTime":"2026-01-27T14:30:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:30:03 crc kubenswrapper[4698]: I0127 14:30:03.981471 4698 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-09 13:32:47.930538788 +0000 UTC Jan 27 14:30:03 crc kubenswrapper[4698]: I0127 14:30:03.991521 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lpvsw" Jan 27 14:30:03 crc kubenswrapper[4698]: E0127 14:30:03.991666 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lpvsw" podUID="621bb20d-2ffa-4e89-b522-d04b4764fcc3" Jan 27 14:30:03 crc kubenswrapper[4698]: I0127 14:30:03.991521 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 14:30:03 crc kubenswrapper[4698]: E0127 14:30:03.991885 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 14:30:04 crc kubenswrapper[4698]: I0127 14:30:04.079460 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:04 crc kubenswrapper[4698]: I0127 14:30:04.079522 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:04 crc kubenswrapper[4698]: I0127 14:30:04.079539 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:04 crc kubenswrapper[4698]: I0127 14:30:04.079560 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:04 crc kubenswrapper[4698]: I0127 14:30:04.079582 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:04Z","lastTransitionTime":"2026-01-27T14:30:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:30:04 crc kubenswrapper[4698]: I0127 14:30:04.181811 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:04 crc kubenswrapper[4698]: I0127 14:30:04.181866 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:04 crc kubenswrapper[4698]: I0127 14:30:04.181878 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:04 crc kubenswrapper[4698]: I0127 14:30:04.181898 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:04 crc kubenswrapper[4698]: I0127 14:30:04.181911 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:04Z","lastTransitionTime":"2026-01-27T14:30:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:30:04 crc kubenswrapper[4698]: I0127 14:30:04.284732 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:04 crc kubenswrapper[4698]: I0127 14:30:04.284766 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:04 crc kubenswrapper[4698]: I0127 14:30:04.284775 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:04 crc kubenswrapper[4698]: I0127 14:30:04.284795 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:04 crc kubenswrapper[4698]: I0127 14:30:04.284804 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:04Z","lastTransitionTime":"2026-01-27T14:30:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:30:04 crc kubenswrapper[4698]: I0127 14:30:04.386675 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:04 crc kubenswrapper[4698]: I0127 14:30:04.386713 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:04 crc kubenswrapper[4698]: I0127 14:30:04.386723 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:04 crc kubenswrapper[4698]: I0127 14:30:04.386737 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:04 crc kubenswrapper[4698]: I0127 14:30:04.386747 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:04Z","lastTransitionTime":"2026-01-27T14:30:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:30:04 crc kubenswrapper[4698]: I0127 14:30:04.489133 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:04 crc kubenswrapper[4698]: I0127 14:30:04.489180 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:04 crc kubenswrapper[4698]: I0127 14:30:04.489190 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:04 crc kubenswrapper[4698]: I0127 14:30:04.489205 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:04 crc kubenswrapper[4698]: I0127 14:30:04.489217 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:04Z","lastTransitionTime":"2026-01-27T14:30:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:30:04 crc kubenswrapper[4698]: I0127 14:30:04.592166 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:04 crc kubenswrapper[4698]: I0127 14:30:04.592208 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:04 crc kubenswrapper[4698]: I0127 14:30:04.592216 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:04 crc kubenswrapper[4698]: I0127 14:30:04.592229 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:04 crc kubenswrapper[4698]: I0127 14:30:04.592238 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:04Z","lastTransitionTime":"2026-01-27T14:30:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:30:04 crc kubenswrapper[4698]: I0127 14:30:04.694311 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:04 crc kubenswrapper[4698]: I0127 14:30:04.694339 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:04 crc kubenswrapper[4698]: I0127 14:30:04.694348 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:04 crc kubenswrapper[4698]: I0127 14:30:04.694360 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:04 crc kubenswrapper[4698]: I0127 14:30:04.694381 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:04Z","lastTransitionTime":"2026-01-27T14:30:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:30:04 crc kubenswrapper[4698]: I0127 14:30:04.797203 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:04 crc kubenswrapper[4698]: I0127 14:30:04.797261 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:04 crc kubenswrapper[4698]: I0127 14:30:04.797274 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:04 crc kubenswrapper[4698]: I0127 14:30:04.797298 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:04 crc kubenswrapper[4698]: I0127 14:30:04.797312 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:04Z","lastTransitionTime":"2026-01-27T14:30:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:30:04 crc kubenswrapper[4698]: I0127 14:30:04.900240 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:04 crc kubenswrapper[4698]: I0127 14:30:04.900297 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:04 crc kubenswrapper[4698]: I0127 14:30:04.900311 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:04 crc kubenswrapper[4698]: I0127 14:30:04.900332 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:04 crc kubenswrapper[4698]: I0127 14:30:04.900342 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:04Z","lastTransitionTime":"2026-01-27T14:30:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:30:04 crc kubenswrapper[4698]: I0127 14:30:04.981552 4698 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-24 04:38:44.953459979 +0000 UTC Jan 27 14:30:04 crc kubenswrapper[4698]: I0127 14:30:04.991364 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 14:30:04 crc kubenswrapper[4698]: I0127 14:30:04.991367 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 14:30:04 crc kubenswrapper[4698]: E0127 14:30:04.991492 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 14:30:04 crc kubenswrapper[4698]: E0127 14:30:04.991607 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 14:30:05 crc kubenswrapper[4698]: I0127 14:30:05.002384 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:05 crc kubenswrapper[4698]: I0127 14:30:05.002426 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:05 crc kubenswrapper[4698]: I0127 14:30:05.002437 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:05 crc kubenswrapper[4698]: I0127 14:30:05.002452 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:05 crc kubenswrapper[4698]: I0127 14:30:05.002464 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:05Z","lastTransitionTime":"2026-01-27T14:30:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:30:05 crc kubenswrapper[4698]: I0127 14:30:05.007088 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:24Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:30:05Z is after 2025-08-24T17:21:41Z" Jan 27 14:30:05 crc kubenswrapper[4698]: I0127 14:30:05.019082 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://625a1f8c29f9277903045e66df7a55a4d791239b141aba99572db228ea2735ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://314dcaad0f8949a4671b47667f654bdfacfb6b200438cfbd70401455f6e188b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mount
Path\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:30:05Z is after 2025-08-24T17:21:41Z" Jan 27 14:30:05 crc kubenswrapper[4698]: I0127 14:30:05.029474 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"44c70587-abb1-4e02-ab7e-223d57817925\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b1a209c31294456c43022bb9df5fd230415596fc43ecdb2b6349114989c3ad8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://66b4b527d7836ceed419f9b8a1d851047efc8c052c110ac857316a6a73444333\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://94a4d586611f4ca0864567739d1ed225b7c63ebd6e6be9e5c0e385379de2ad01\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-clu
ster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://95357819c206b188c1fd6bb937b2a57b480a9a61e95143de444141b16e4963ea\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:29:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:30:05Z is after 2025-08-24T17:21:41Z" Jan 27 14:30:05 crc kubenswrapper[4698]: I0127 14:30:05.043322 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-g2kkn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4e135f0c-0c36-44f4-afeb-06994affb352\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d17862aabd67b485023ca018499f7224e0bd91ec0aed6bbce5a187f319e3140\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97l66\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:29:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-g2kkn\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:30:05Z is after 2025-08-24T17:21:41Z" Jan 27 14:30:05 crc kubenswrapper[4698]: I0127 14:30:05.053817 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zpvcj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"709dfdd7-f928-4f0b-8f5a-c356614219cb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6bec8d64d7f0b7a45b57b810dcf1de846a388902b480230742da0a40da4e67b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pftbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5fb54d7728b2438444c924295722867c673c8b82768ed17b6dd2104d404e198e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pftbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\
\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:29:41Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-zpvcj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:30:05Z is after 2025-08-24T17:21:41Z" Jan 27 14:30:05 crc kubenswrapper[4698]: I0127 14:30:05.070869 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0ce1118e-e5ad-4adb-8d50-c758116b45ec\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dbbe49aa918c711f9ec638a3fb42ef3be38a9ed3bc0cdde639ba8bba11eab4de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9525d1a6372b19bc2f30eb4e8e522833badb6319bac25ecae77b3313acf1b1a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e384a95fdadc32733a7213ba468af64a1a33e7dd08722cbffde3ad0c461d8ba\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-oper
ator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9ee5f5f6bb400359196a28fe605c173c38223c8982a991750c91601aeace150\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e780429dcca871712b55832b0cd0b3e78b7343569cfa71a87bbb2be1bd3f129\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T14:29:24Z\\\",\\\"message\\\":\\\"espace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0127 14:29:09.491666 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 14:29:09.493926 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-900123067/tls.crt::/tmp/serving-cert-900123067/tls.key\\\\\\\"\\\\nI0127 14:29:24.319898 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 14:29:24.322681 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 14:29:24.322759 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 14:29:24.322814 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 14:29:24.322847 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 14:29:24.331581 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 14:29:24.331608 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 14:29:24.331613 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 14:29:24.331618 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 14:29:24.331622 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 14:29:24.331626 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 14:29:24.331630 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 14:29:24.331748 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 14:29:24.333757 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T14:29:08Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a526cfa90f2ba7d074aa6007c909e68f72d5e7a4b91d893c955e7460ff208cfb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:08Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e1a3e5a71b3094750c736335cd70f45977566e58702231351d9d6b195a0cb4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5e1a3e5a71b3094750c736335cd70f45977566e58702231351d9d6b195a0cb4b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:29:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:29:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:29:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:30:05Z is after 2025-08-24T17:21:41Z" Jan 27 14:30:05 crc kubenswrapper[4698]: I0127 14:30:05.081331 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-g9vj8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2776dfc9-913b-42b0-9cf2-6fea98d83bc9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f471cea0dc934e7c77ac6ead7ec77f8feb7379b50a7bb5255d7100fbea635a3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bk7ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:29:28Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-g9vj8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:30:05Z is after 2025-08-24T17:21:41Z" Jan 27 14:30:05 crc kubenswrapper[4698]: I0127 14:30:05.091035 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-ndrd6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e403fc5-7005-474c-8c75-b7906b481677\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6f9227503b5cf0962f3fb8ece18f54d84ff2cdfe5628ebf0fa0cef7fe5695c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tt2dz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c533539135e25b310541c9f0b12adcf526b4651a2f6272bce12ea9686dd39ac9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tt2dz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:29:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-ndrd6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:30:05Z is after 2025-08-24T17:21:41Z" Jan 27 14:30:05 crc kubenswrapper[4698]: I0127 14:30:05.105549 4698 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-additional-cni-plugins-vg6nd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e045926d-2303-47ea-b25d-dc23982427e4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e1bf1729f089f253e8e1259e40ef976b85e433358642b6231f081c494851048\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66snv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c935f517f9ca140d63d1637ef8c01a4a41b94fa72e1086eb6184ae78a6d8da8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c935f517f9ca140d63d1637ef8c01a4a41b94fa72e1086eb6184ae78a6d8da8c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:29:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:29:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66snv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa061cb18143e07dc72a6b75d78b5959b40a9969030ff21d402863b85eba6f91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2c
c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa061cb18143e07dc72a6b75d78b5959b40a9969030ff21d402863b85eba6f91\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:29:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:29:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66snv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be40eff60b17b2c010a5e4394fb6843ceb8ee2752781ccbf0030afdd212b81e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be40eff60b17b2c010a5e4394fb6843ceb8ee2752781ccbf0030afdd212b81e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:29:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:29:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66snv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3018d87c9354e8b8d90c59a4a45e982d0b35b89a0b0a9fde703e9452cf5f0024\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3018d87c9354e8b8d90c59a4a45e982d0b35b89a0b0a9fde703e9452cf5f0024\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:29:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:29:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-re
lease\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66snv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de493498bbac49791f6b7409916fbbb62f64bed0def214ea6a06f5182bf7ad3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://de493498bbac49791f6b7409916fbbb62f64bed0def214ea6a06f5182bf7ad3b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:29:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:29:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66snv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://14af7d5dd3526aef22f7e95409ff88c3bbbc29e6dd9dd7c41b9be5bab49bf074\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://14af7d5dd3526aef22f7e95409ff88c3bbbc29e6dd9dd7c41b9be5bab49bf074\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:29:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:29:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66snv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:29:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-vg6nd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:30:05Z is after 2025-08-24T17:21:41Z" Jan 27 14:30:05 crc kubenswrapper[4698]: I0127 14:30:05.105669 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:05 crc kubenswrapper[4698]: I0127 14:30:05.105724 4698 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:05 crc kubenswrapper[4698]: I0127 14:30:05.105740 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:05 crc kubenswrapper[4698]: I0127 14:30:05.105798 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:05 crc kubenswrapper[4698]: I0127 14:30:05.105818 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:05Z","lastTransitionTime":"2026-01-27T14:30:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:30:05 crc kubenswrapper[4698]: I0127 14:30:05.125679 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xmpm6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c59a9d01-79ce-42d9-a41d-39d7d73cb03e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:29Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d81778ae40167de932debf7266f3c793d77edeace2116220c21cf89e15e4cc98\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r7gpg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea6d17f5861064198539f0b84b559f9d28cfa5decaa89a9059469af5a3e12e09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r7gpg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c98fd6e3005c979f95419686ec2143aa282fa56d7306ecaff766759e34b68b2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r7gpg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e38585fe9db52e6e7cc5c3a41e23a0bb761977a3b25fd281d86458f422e3791\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r7gpg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b80c023f92c9854f8c16b6b50c10dac58f968938615375019ba2875d9cd69e5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r7gpg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://87e7499ab247dff50eee2a5e113ed36ff00c467e8e1bcdfee45e8835815d4b81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r7gpg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a99e5f8df86301415e9b49b4f197383e67e6a1b
00cbf7e4e7f543002a0f1db4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0a99e5f8df86301415e9b49b4f197383e67e6a1b00cbf7e4e7f543002a0f1db4\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T14:29:58Z\\\",\\\"message\\\":\\\"il\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0127 14:29:58.516254 6338 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-diagnostics/network-check-source-55646444c4-trplf\\\\nI0127 14:29:58.516264 6338 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-diagnostics/network-check-source-55646444c4-trplf\\\\nI0127 14:29:58.516271 6338 ovn.go:134] Ensuring zone local for Pod openshift-network-diagnostics/network-check-source-55646444c4-trplf in node crc\\\\nF0127 14:29:58.516287 6338 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:29:58Z is after 2025-08-24T17:21:41Z]\\\\nI0127 14:29:58.516302 6338 obj_retry.go:303\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T14:29:56Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-xmpm6_openshift-ovn-kubernetes(c59a9d01-79ce-42d9-a41d-39d7d73cb03e)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r7gpg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0356c9c90e9c8e65b9452361aa486ffe95757bd69803077b32fe7ba0223ae1da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r7gpg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3a80917737e61fdb510dabf332c3bd87fb858d27849c42b77703a8becb66b4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b3a80917737e61fdb510dabf332c3bd87fb858d27849c42b77703a8becb66b4c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:29:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:29:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r7gpg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:29:29Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-xmpm6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:30:05Z is after 2025-08-24T17:21:41Z" Jan 27 14:30:05 crc kubenswrapper[4698]: I0127 14:30:05.136981 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-lpvsw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"621bb20d-2ffa-4e89-b522-d04b4764fcc3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:42Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:42Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gz87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gz87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:29:42Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-lpvsw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:30:05Z is after 2025-08-24T17:21:41Z" Jan 27 14:30:05 crc kubenswrapper[4698]: I0127 14:30:05.147659 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:30:05Z is after 2025-08-24T17:21:41Z" Jan 27 14:30:05 crc kubenswrapper[4698]: I0127 14:30:05.159205 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae7402df1ba165eb747293c1d23e4ea89c50cf2342bf52db437c11c283f68755\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:30:05Z is after 2025-08-24T17:21:41Z" Jan 27 14:30:05 crc kubenswrapper[4698]: I0127 14:30:05.170856 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:24Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:30:05Z is after 2025-08-24T17:21:41Z" Jan 27 14:30:05 crc kubenswrapper[4698]: I0127 14:30:05.181592 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8dae57de05788973b988fee332168db475b66bb71b4a0570e7d95341dfe84a96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:30:05Z is after 2025-08-24T17:21:41Z" Jan 27 14:30:05 crc kubenswrapper[4698]: I0127 14:30:05.193724 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-flx9b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2c9fcf55-4a50-4a87-937b-975bc7e00bfa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0cb86e0c67991a076cebc33eb390f49c0f4c40332c423936dd8f3c59b9f7474b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-spqpw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:29:31Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-flx9b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:30:05Z is after 2025-08-24T17:21:41Z" Jan 27 14:30:05 crc kubenswrapper[4698]: I0127 14:30:05.205771 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a67da6e9-5dc3-469a-8a1a-a2b287e96281\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b99ac884289d1dcab871d1db10e9992389170de25aeb71d84aaad1348eafd4fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://21e39904f887483a435cadd506be29c1513b2c9dbc144a61549f74f2c93fa6a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8cd43402ec8ba658de9fc9d84d14600829a8ae019aceb606fc2bf781dbe13ddb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c9cbb07f8bc0c4d5171813dd406a52fc3c99985102f65c43e8e04bf8acc21f1b\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c9cbb07f8bc0c4d5171813dd406a52fc3c99985102f65c43e8e04bf8acc21f1b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:29:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:29:06Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:29:05Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:30:05Z is after 2025-08-24T17:21:41Z" Jan 27 14:30:05 crc kubenswrapper[4698]: I0127 14:30:05.209760 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:05 crc kubenswrapper[4698]: I0127 14:30:05.209793 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:05 crc kubenswrapper[4698]: I0127 14:30:05.209804 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:05 crc kubenswrapper[4698]: I0127 14:30:05.209819 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:05 crc kubenswrapper[4698]: I0127 14:30:05.209833 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:05Z","lastTransitionTime":"2026-01-27T14:30:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:30:05 crc kubenswrapper[4698]: I0127 14:30:05.311743 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:05 crc kubenswrapper[4698]: I0127 14:30:05.312036 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:05 crc kubenswrapper[4698]: I0127 14:30:05.312045 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:05 crc kubenswrapper[4698]: I0127 14:30:05.312057 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:05 crc kubenswrapper[4698]: I0127 14:30:05.312065 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:05Z","lastTransitionTime":"2026-01-27T14:30:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:30:05 crc kubenswrapper[4698]: I0127 14:30:05.414333 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:05 crc kubenswrapper[4698]: I0127 14:30:05.414387 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:05 crc kubenswrapper[4698]: I0127 14:30:05.414400 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:05 crc kubenswrapper[4698]: I0127 14:30:05.414450 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:05 crc kubenswrapper[4698]: I0127 14:30:05.414463 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:05Z","lastTransitionTime":"2026-01-27T14:30:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:30:05 crc kubenswrapper[4698]: I0127 14:30:05.516685 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:05 crc kubenswrapper[4698]: I0127 14:30:05.516770 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:05 crc kubenswrapper[4698]: I0127 14:30:05.516782 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:05 crc kubenswrapper[4698]: I0127 14:30:05.516817 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:05 crc kubenswrapper[4698]: I0127 14:30:05.516831 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:05Z","lastTransitionTime":"2026-01-27T14:30:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:30:05 crc kubenswrapper[4698]: I0127 14:30:05.619325 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:05 crc kubenswrapper[4698]: I0127 14:30:05.619389 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:05 crc kubenswrapper[4698]: I0127 14:30:05.619413 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:05 crc kubenswrapper[4698]: I0127 14:30:05.619439 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:05 crc kubenswrapper[4698]: I0127 14:30:05.619459 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:05Z","lastTransitionTime":"2026-01-27T14:30:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:30:05 crc kubenswrapper[4698]: I0127 14:30:05.722177 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:05 crc kubenswrapper[4698]: I0127 14:30:05.722218 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:05 crc kubenswrapper[4698]: I0127 14:30:05.722227 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:05 crc kubenswrapper[4698]: I0127 14:30:05.722239 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:05 crc kubenswrapper[4698]: I0127 14:30:05.722248 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:05Z","lastTransitionTime":"2026-01-27T14:30:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:30:05 crc kubenswrapper[4698]: I0127 14:30:05.825363 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:05 crc kubenswrapper[4698]: I0127 14:30:05.825428 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:05 crc kubenswrapper[4698]: I0127 14:30:05.825450 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:05 crc kubenswrapper[4698]: I0127 14:30:05.825478 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:05 crc kubenswrapper[4698]: I0127 14:30:05.825499 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:05Z","lastTransitionTime":"2026-01-27T14:30:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:30:05 crc kubenswrapper[4698]: I0127 14:30:05.928445 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:05 crc kubenswrapper[4698]: I0127 14:30:05.928523 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:05 crc kubenswrapper[4698]: I0127 14:30:05.928545 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:05 crc kubenswrapper[4698]: I0127 14:30:05.928573 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:05 crc kubenswrapper[4698]: I0127 14:30:05.928599 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:05Z","lastTransitionTime":"2026-01-27T14:30:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:30:05 crc kubenswrapper[4698]: I0127 14:30:05.982473 4698 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-29 03:12:31.105192197 +0000 UTC Jan 27 14:30:05 crc kubenswrapper[4698]: I0127 14:30:05.991868 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lpvsw" Jan 27 14:30:05 crc kubenswrapper[4698]: I0127 14:30:05.991900 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 14:30:05 crc kubenswrapper[4698]: E0127 14:30:05.992035 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lpvsw" podUID="621bb20d-2ffa-4e89-b522-d04b4764fcc3" Jan 27 14:30:05 crc kubenswrapper[4698]: E0127 14:30:05.992106 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 14:30:06 crc kubenswrapper[4698]: I0127 14:30:06.030801 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:06 crc kubenswrapper[4698]: I0127 14:30:06.030842 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:06 crc kubenswrapper[4698]: I0127 14:30:06.030857 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:06 crc kubenswrapper[4698]: I0127 14:30:06.030875 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:06 crc kubenswrapper[4698]: I0127 14:30:06.030889 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:06Z","lastTransitionTime":"2026-01-27T14:30:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:30:06 crc kubenswrapper[4698]: I0127 14:30:06.133256 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:06 crc kubenswrapper[4698]: I0127 14:30:06.133301 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:06 crc kubenswrapper[4698]: I0127 14:30:06.133313 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:06 crc kubenswrapper[4698]: I0127 14:30:06.133331 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:06 crc kubenswrapper[4698]: I0127 14:30:06.133345 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:06Z","lastTransitionTime":"2026-01-27T14:30:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:30:06 crc kubenswrapper[4698]: I0127 14:30:06.236470 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:06 crc kubenswrapper[4698]: I0127 14:30:06.236507 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:06 crc kubenswrapper[4698]: I0127 14:30:06.236515 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:06 crc kubenswrapper[4698]: I0127 14:30:06.236530 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:06 crc kubenswrapper[4698]: I0127 14:30:06.236539 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:06Z","lastTransitionTime":"2026-01-27T14:30:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:30:06 crc kubenswrapper[4698]: I0127 14:30:06.340004 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:06 crc kubenswrapper[4698]: I0127 14:30:06.340313 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:06 crc kubenswrapper[4698]: I0127 14:30:06.340343 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:06 crc kubenswrapper[4698]: I0127 14:30:06.340363 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:06 crc kubenswrapper[4698]: I0127 14:30:06.340374 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:06Z","lastTransitionTime":"2026-01-27T14:30:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:30:06 crc kubenswrapper[4698]: I0127 14:30:06.443478 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:06 crc kubenswrapper[4698]: I0127 14:30:06.443542 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:06 crc kubenswrapper[4698]: I0127 14:30:06.443553 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:06 crc kubenswrapper[4698]: I0127 14:30:06.443580 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:06 crc kubenswrapper[4698]: I0127 14:30:06.443597 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:06Z","lastTransitionTime":"2026-01-27T14:30:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:30:06 crc kubenswrapper[4698]: I0127 14:30:06.546621 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:06 crc kubenswrapper[4698]: I0127 14:30:06.546684 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:06 crc kubenswrapper[4698]: I0127 14:30:06.546693 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:06 crc kubenswrapper[4698]: I0127 14:30:06.546706 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:06 crc kubenswrapper[4698]: I0127 14:30:06.546718 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:06Z","lastTransitionTime":"2026-01-27T14:30:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:30:06 crc kubenswrapper[4698]: I0127 14:30:06.649329 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:06 crc kubenswrapper[4698]: I0127 14:30:06.649406 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:06 crc kubenswrapper[4698]: I0127 14:30:06.649417 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:06 crc kubenswrapper[4698]: I0127 14:30:06.649433 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:06 crc kubenswrapper[4698]: I0127 14:30:06.649444 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:06Z","lastTransitionTime":"2026-01-27T14:30:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:30:06 crc kubenswrapper[4698]: I0127 14:30:06.751857 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:06 crc kubenswrapper[4698]: I0127 14:30:06.751912 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:06 crc kubenswrapper[4698]: I0127 14:30:06.751928 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:06 crc kubenswrapper[4698]: I0127 14:30:06.751950 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:06 crc kubenswrapper[4698]: I0127 14:30:06.751966 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:06Z","lastTransitionTime":"2026-01-27T14:30:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:30:06 crc kubenswrapper[4698]: I0127 14:30:06.854769 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:06 crc kubenswrapper[4698]: I0127 14:30:06.854811 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:06 crc kubenswrapper[4698]: I0127 14:30:06.854823 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:06 crc kubenswrapper[4698]: I0127 14:30:06.854840 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:06 crc kubenswrapper[4698]: I0127 14:30:06.854852 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:06Z","lastTransitionTime":"2026-01-27T14:30:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:30:06 crc kubenswrapper[4698]: I0127 14:30:06.958106 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:06 crc kubenswrapper[4698]: I0127 14:30:06.958146 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:06 crc kubenswrapper[4698]: I0127 14:30:06.958157 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:06 crc kubenswrapper[4698]: I0127 14:30:06.958208 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:06 crc kubenswrapper[4698]: I0127 14:30:06.958220 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:06Z","lastTransitionTime":"2026-01-27T14:30:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:30:06 crc kubenswrapper[4698]: I0127 14:30:06.983056 4698 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-02 08:17:27.227321561 +0000 UTC Jan 27 14:30:06 crc kubenswrapper[4698]: I0127 14:30:06.991380 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 14:30:06 crc kubenswrapper[4698]: I0127 14:30:06.991477 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 14:30:06 crc kubenswrapper[4698]: E0127 14:30:06.991530 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 14:30:06 crc kubenswrapper[4698]: E0127 14:30:06.991598 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 14:30:07 crc kubenswrapper[4698]: I0127 14:30:07.060837 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:07 crc kubenswrapper[4698]: I0127 14:30:07.061053 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:07 crc kubenswrapper[4698]: I0127 14:30:07.061380 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:07 crc kubenswrapper[4698]: I0127 14:30:07.061434 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:07 crc kubenswrapper[4698]: I0127 14:30:07.061477 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:07Z","lastTransitionTime":"2026-01-27T14:30:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:30:07 crc kubenswrapper[4698]: I0127 14:30:07.163979 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:07 crc kubenswrapper[4698]: I0127 14:30:07.164026 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:07 crc kubenswrapper[4698]: I0127 14:30:07.164039 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:07 crc kubenswrapper[4698]: I0127 14:30:07.164053 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:07 crc kubenswrapper[4698]: I0127 14:30:07.164064 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:07Z","lastTransitionTime":"2026-01-27T14:30:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:30:07 crc kubenswrapper[4698]: I0127 14:30:07.266832 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:07 crc kubenswrapper[4698]: I0127 14:30:07.266867 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:07 crc kubenswrapper[4698]: I0127 14:30:07.266876 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:07 crc kubenswrapper[4698]: I0127 14:30:07.266887 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:07 crc kubenswrapper[4698]: I0127 14:30:07.266896 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:07Z","lastTransitionTime":"2026-01-27T14:30:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:30:07 crc kubenswrapper[4698]: I0127 14:30:07.369880 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:07 crc kubenswrapper[4698]: I0127 14:30:07.369922 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:07 crc kubenswrapper[4698]: I0127 14:30:07.369933 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:07 crc kubenswrapper[4698]: I0127 14:30:07.369950 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:07 crc kubenswrapper[4698]: I0127 14:30:07.369961 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:07Z","lastTransitionTime":"2026-01-27T14:30:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:30:07 crc kubenswrapper[4698]: I0127 14:30:07.471973 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:07 crc kubenswrapper[4698]: I0127 14:30:07.472011 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:07 crc kubenswrapper[4698]: I0127 14:30:07.472023 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:07 crc kubenswrapper[4698]: I0127 14:30:07.472036 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:07 crc kubenswrapper[4698]: I0127 14:30:07.472045 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:07Z","lastTransitionTime":"2026-01-27T14:30:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:30:07 crc kubenswrapper[4698]: I0127 14:30:07.574570 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:07 crc kubenswrapper[4698]: I0127 14:30:07.575012 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:07 crc kubenswrapper[4698]: I0127 14:30:07.575155 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:07 crc kubenswrapper[4698]: I0127 14:30:07.575300 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:07 crc kubenswrapper[4698]: I0127 14:30:07.575441 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:07Z","lastTransitionTime":"2026-01-27T14:30:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:30:07 crc kubenswrapper[4698]: I0127 14:30:07.678847 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:07 crc kubenswrapper[4698]: I0127 14:30:07.679085 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:07 crc kubenswrapper[4698]: I0127 14:30:07.679194 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:07 crc kubenswrapper[4698]: I0127 14:30:07.679268 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:07 crc kubenswrapper[4698]: I0127 14:30:07.679333 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:07Z","lastTransitionTime":"2026-01-27T14:30:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:30:07 crc kubenswrapper[4698]: I0127 14:30:07.781836 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:07 crc kubenswrapper[4698]: I0127 14:30:07.781889 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:07 crc kubenswrapper[4698]: I0127 14:30:07.781901 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:07 crc kubenswrapper[4698]: I0127 14:30:07.781919 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:07 crc kubenswrapper[4698]: I0127 14:30:07.781932 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:07Z","lastTransitionTime":"2026-01-27T14:30:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:30:07 crc kubenswrapper[4698]: I0127 14:30:07.884386 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:07 crc kubenswrapper[4698]: I0127 14:30:07.884433 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:07 crc kubenswrapper[4698]: I0127 14:30:07.884446 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:07 crc kubenswrapper[4698]: I0127 14:30:07.884461 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:07 crc kubenswrapper[4698]: I0127 14:30:07.884472 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:07Z","lastTransitionTime":"2026-01-27T14:30:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:30:07 crc kubenswrapper[4698]: I0127 14:30:07.983746 4698 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-17 20:55:42.275145771 +0000 UTC Jan 27 14:30:07 crc kubenswrapper[4698]: I0127 14:30:07.986424 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:07 crc kubenswrapper[4698]: I0127 14:30:07.986450 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:07 crc kubenswrapper[4698]: I0127 14:30:07.986458 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:07 crc kubenswrapper[4698]: I0127 14:30:07.986470 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:07 crc kubenswrapper[4698]: I0127 14:30:07.986479 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:07Z","lastTransitionTime":"2026-01-27T14:30:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:30:07 crc kubenswrapper[4698]: I0127 14:30:07.991775 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lpvsw" Jan 27 14:30:07 crc kubenswrapper[4698]: I0127 14:30:07.991820 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 14:30:07 crc kubenswrapper[4698]: E0127 14:30:07.991868 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-lpvsw" podUID="621bb20d-2ffa-4e89-b522-d04b4764fcc3" Jan 27 14:30:07 crc kubenswrapper[4698]: E0127 14:30:07.991924 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 14:30:08 crc kubenswrapper[4698]: I0127 14:30:08.089137 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:08 crc kubenswrapper[4698]: I0127 14:30:08.089185 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:08 crc kubenswrapper[4698]: I0127 14:30:08.089197 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:08 crc kubenswrapper[4698]: I0127 14:30:08.089214 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:08 crc kubenswrapper[4698]: I0127 14:30:08.089225 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:08Z","lastTransitionTime":"2026-01-27T14:30:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:30:08 crc kubenswrapper[4698]: I0127 14:30:08.191877 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:08 crc kubenswrapper[4698]: I0127 14:30:08.191923 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:08 crc kubenswrapper[4698]: I0127 14:30:08.191933 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:08 crc kubenswrapper[4698]: I0127 14:30:08.191950 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:08 crc kubenswrapper[4698]: I0127 14:30:08.191960 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:08Z","lastTransitionTime":"2026-01-27T14:30:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:30:08 crc kubenswrapper[4698]: I0127 14:30:08.294493 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:08 crc kubenswrapper[4698]: I0127 14:30:08.294544 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:08 crc kubenswrapper[4698]: I0127 14:30:08.294553 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:08 crc kubenswrapper[4698]: I0127 14:30:08.294568 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:08 crc kubenswrapper[4698]: I0127 14:30:08.294580 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:08Z","lastTransitionTime":"2026-01-27T14:30:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:30:08 crc kubenswrapper[4698]: I0127 14:30:08.396717 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:08 crc kubenswrapper[4698]: I0127 14:30:08.396768 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:08 crc kubenswrapper[4698]: I0127 14:30:08.396777 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:08 crc kubenswrapper[4698]: I0127 14:30:08.396792 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:08 crc kubenswrapper[4698]: I0127 14:30:08.396801 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:08Z","lastTransitionTime":"2026-01-27T14:30:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:30:08 crc kubenswrapper[4698]: I0127 14:30:08.499124 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:08 crc kubenswrapper[4698]: I0127 14:30:08.499174 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:08 crc kubenswrapper[4698]: I0127 14:30:08.499186 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:08 crc kubenswrapper[4698]: I0127 14:30:08.499205 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:08 crc kubenswrapper[4698]: I0127 14:30:08.499217 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:08Z","lastTransitionTime":"2026-01-27T14:30:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:30:08 crc kubenswrapper[4698]: I0127 14:30:08.601436 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:08 crc kubenswrapper[4698]: I0127 14:30:08.601478 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:08 crc kubenswrapper[4698]: I0127 14:30:08.601488 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:08 crc kubenswrapper[4698]: I0127 14:30:08.601504 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:08 crc kubenswrapper[4698]: I0127 14:30:08.601516 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:08Z","lastTransitionTime":"2026-01-27T14:30:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:30:08 crc kubenswrapper[4698]: I0127 14:30:08.703610 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:08 crc kubenswrapper[4698]: I0127 14:30:08.703664 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:08 crc kubenswrapper[4698]: I0127 14:30:08.703674 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:08 crc kubenswrapper[4698]: I0127 14:30:08.703690 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:08 crc kubenswrapper[4698]: I0127 14:30:08.703699 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:08Z","lastTransitionTime":"2026-01-27T14:30:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:30:08 crc kubenswrapper[4698]: I0127 14:30:08.805933 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:08 crc kubenswrapper[4698]: I0127 14:30:08.805981 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:08 crc kubenswrapper[4698]: I0127 14:30:08.805994 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:08 crc kubenswrapper[4698]: I0127 14:30:08.806012 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:08 crc kubenswrapper[4698]: I0127 14:30:08.806025 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:08Z","lastTransitionTime":"2026-01-27T14:30:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:30:08 crc kubenswrapper[4698]: I0127 14:30:08.909097 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:08 crc kubenswrapper[4698]: I0127 14:30:08.909185 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:08 crc kubenswrapper[4698]: I0127 14:30:08.909199 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:08 crc kubenswrapper[4698]: I0127 14:30:08.909238 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:08 crc kubenswrapper[4698]: I0127 14:30:08.909252 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:08Z","lastTransitionTime":"2026-01-27T14:30:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:30:08 crc kubenswrapper[4698]: I0127 14:30:08.984297 4698 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-02 14:04:35.078739856 +0000 UTC Jan 27 14:30:08 crc kubenswrapper[4698]: I0127 14:30:08.991591 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 14:30:08 crc kubenswrapper[4698]: E0127 14:30:08.991734 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 14:30:08 crc kubenswrapper[4698]: I0127 14:30:08.991842 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 14:30:08 crc kubenswrapper[4698]: E0127 14:30:08.991998 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 14:30:09 crc kubenswrapper[4698]: I0127 14:30:09.012712 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:09 crc kubenswrapper[4698]: I0127 14:30:09.012748 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:09 crc kubenswrapper[4698]: I0127 14:30:09.012758 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:09 crc kubenswrapper[4698]: I0127 14:30:09.012771 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:09 crc kubenswrapper[4698]: I0127 14:30:09.012781 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:09Z","lastTransitionTime":"2026-01-27T14:30:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:30:09 crc kubenswrapper[4698]: I0127 14:30:09.115734 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:09 crc kubenswrapper[4698]: I0127 14:30:09.115793 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:09 crc kubenswrapper[4698]: I0127 14:30:09.115812 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:09 crc kubenswrapper[4698]: I0127 14:30:09.115834 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:09 crc kubenswrapper[4698]: I0127 14:30:09.115849 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:09Z","lastTransitionTime":"2026-01-27T14:30:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:30:09 crc kubenswrapper[4698]: I0127 14:30:09.218241 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:09 crc kubenswrapper[4698]: I0127 14:30:09.218279 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:09 crc kubenswrapper[4698]: I0127 14:30:09.218287 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:09 crc kubenswrapper[4698]: I0127 14:30:09.218301 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:09 crc kubenswrapper[4698]: I0127 14:30:09.218311 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:09Z","lastTransitionTime":"2026-01-27T14:30:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:30:09 crc kubenswrapper[4698]: I0127 14:30:09.320027 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:09 crc kubenswrapper[4698]: I0127 14:30:09.320068 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:09 crc kubenswrapper[4698]: I0127 14:30:09.320099 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:09 crc kubenswrapper[4698]: I0127 14:30:09.320115 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:09 crc kubenswrapper[4698]: I0127 14:30:09.320124 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:09Z","lastTransitionTime":"2026-01-27T14:30:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:30:09 crc kubenswrapper[4698]: I0127 14:30:09.422745 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:09 crc kubenswrapper[4698]: I0127 14:30:09.422778 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:09 crc kubenswrapper[4698]: I0127 14:30:09.422788 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:09 crc kubenswrapper[4698]: I0127 14:30:09.422804 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:09 crc kubenswrapper[4698]: I0127 14:30:09.422815 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:09Z","lastTransitionTime":"2026-01-27T14:30:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:30:09 crc kubenswrapper[4698]: I0127 14:30:09.524479 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:09 crc kubenswrapper[4698]: I0127 14:30:09.524518 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:09 crc kubenswrapper[4698]: I0127 14:30:09.524529 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:09 crc kubenswrapper[4698]: I0127 14:30:09.524547 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:09 crc kubenswrapper[4698]: I0127 14:30:09.524558 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:09Z","lastTransitionTime":"2026-01-27T14:30:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:30:09 crc kubenswrapper[4698]: I0127 14:30:09.627169 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:09 crc kubenswrapper[4698]: I0127 14:30:09.627221 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:09 crc kubenswrapper[4698]: I0127 14:30:09.627235 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:09 crc kubenswrapper[4698]: I0127 14:30:09.627254 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:09 crc kubenswrapper[4698]: I0127 14:30:09.627265 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:09Z","lastTransitionTime":"2026-01-27T14:30:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:30:09 crc kubenswrapper[4698]: I0127 14:30:09.729183 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:09 crc kubenswrapper[4698]: I0127 14:30:09.729384 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:09 crc kubenswrapper[4698]: I0127 14:30:09.729440 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:09 crc kubenswrapper[4698]: I0127 14:30:09.729511 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:09 crc kubenswrapper[4698]: I0127 14:30:09.729592 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:09Z","lastTransitionTime":"2026-01-27T14:30:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:30:09 crc kubenswrapper[4698]: I0127 14:30:09.835031 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:09 crc kubenswrapper[4698]: I0127 14:30:09.835115 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:09 crc kubenswrapper[4698]: I0127 14:30:09.835132 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:09 crc kubenswrapper[4698]: I0127 14:30:09.835150 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:09 crc kubenswrapper[4698]: I0127 14:30:09.835203 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:09Z","lastTransitionTime":"2026-01-27T14:30:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:30:09 crc kubenswrapper[4698]: I0127 14:30:09.938382 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:09 crc kubenswrapper[4698]: I0127 14:30:09.938429 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:09 crc kubenswrapper[4698]: I0127 14:30:09.938439 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:09 crc kubenswrapper[4698]: I0127 14:30:09.938455 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:09 crc kubenswrapper[4698]: I0127 14:30:09.938465 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:09Z","lastTransitionTime":"2026-01-27T14:30:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:30:09 crc kubenswrapper[4698]: I0127 14:30:09.984841 4698 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-01 08:12:12.914025682 +0000 UTC Jan 27 14:30:09 crc kubenswrapper[4698]: I0127 14:30:09.991852 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lpvsw" Jan 27 14:30:09 crc kubenswrapper[4698]: I0127 14:30:09.991942 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 14:30:09 crc kubenswrapper[4698]: E0127 14:30:09.991995 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-lpvsw" podUID="621bb20d-2ffa-4e89-b522-d04b4764fcc3" Jan 27 14:30:09 crc kubenswrapper[4698]: E0127 14:30:09.992147 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 14:30:10 crc kubenswrapper[4698]: I0127 14:30:10.040598 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:10 crc kubenswrapper[4698]: I0127 14:30:10.040666 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:10 crc kubenswrapper[4698]: I0127 14:30:10.040678 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:10 crc kubenswrapper[4698]: I0127 14:30:10.040692 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:10 crc kubenswrapper[4698]: I0127 14:30:10.040701 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:10Z","lastTransitionTime":"2026-01-27T14:30:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:30:10 crc kubenswrapper[4698]: I0127 14:30:10.142816 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:10 crc kubenswrapper[4698]: I0127 14:30:10.142873 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:10 crc kubenswrapper[4698]: I0127 14:30:10.142884 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:10 crc kubenswrapper[4698]: I0127 14:30:10.142898 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:10 crc kubenswrapper[4698]: I0127 14:30:10.142907 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:10Z","lastTransitionTime":"2026-01-27T14:30:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:30:10 crc kubenswrapper[4698]: I0127 14:30:10.244578 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:10 crc kubenswrapper[4698]: I0127 14:30:10.244616 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:10 crc kubenswrapper[4698]: I0127 14:30:10.244627 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:10 crc kubenswrapper[4698]: I0127 14:30:10.244664 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:10 crc kubenswrapper[4698]: I0127 14:30:10.244676 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:10Z","lastTransitionTime":"2026-01-27T14:30:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:30:10 crc kubenswrapper[4698]: I0127 14:30:10.347363 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:10 crc kubenswrapper[4698]: I0127 14:30:10.347396 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:10 crc kubenswrapper[4698]: I0127 14:30:10.347404 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:10 crc kubenswrapper[4698]: I0127 14:30:10.347416 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:10 crc kubenswrapper[4698]: I0127 14:30:10.347425 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:10Z","lastTransitionTime":"2026-01-27T14:30:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:30:10 crc kubenswrapper[4698]: I0127 14:30:10.449303 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:10 crc kubenswrapper[4698]: I0127 14:30:10.449345 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:10 crc kubenswrapper[4698]: I0127 14:30:10.449357 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:10 crc kubenswrapper[4698]: I0127 14:30:10.449373 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:10 crc kubenswrapper[4698]: I0127 14:30:10.449383 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:10Z","lastTransitionTime":"2026-01-27T14:30:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:30:10 crc kubenswrapper[4698]: I0127 14:30:10.551524 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:10 crc kubenswrapper[4698]: I0127 14:30:10.551591 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:10 crc kubenswrapper[4698]: I0127 14:30:10.551603 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:10 crc kubenswrapper[4698]: I0127 14:30:10.551620 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:10 crc kubenswrapper[4698]: I0127 14:30:10.551658 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:10Z","lastTransitionTime":"2026-01-27T14:30:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:30:10 crc kubenswrapper[4698]: I0127 14:30:10.653532 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:10 crc kubenswrapper[4698]: I0127 14:30:10.653570 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:10 crc kubenswrapper[4698]: I0127 14:30:10.653580 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:10 crc kubenswrapper[4698]: I0127 14:30:10.653594 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:10 crc kubenswrapper[4698]: I0127 14:30:10.653604 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:10Z","lastTransitionTime":"2026-01-27T14:30:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:30:10 crc kubenswrapper[4698]: I0127 14:30:10.755852 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:10 crc kubenswrapper[4698]: I0127 14:30:10.755895 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:10 crc kubenswrapper[4698]: I0127 14:30:10.755906 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:10 crc kubenswrapper[4698]: I0127 14:30:10.755924 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:10 crc kubenswrapper[4698]: I0127 14:30:10.755938 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:10Z","lastTransitionTime":"2026-01-27T14:30:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:30:10 crc kubenswrapper[4698]: I0127 14:30:10.858299 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:10 crc kubenswrapper[4698]: I0127 14:30:10.858365 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:10 crc kubenswrapper[4698]: I0127 14:30:10.858379 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:10 crc kubenswrapper[4698]: I0127 14:30:10.858394 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:10 crc kubenswrapper[4698]: I0127 14:30:10.858407 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:10Z","lastTransitionTime":"2026-01-27T14:30:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:30:10 crc kubenswrapper[4698]: I0127 14:30:10.960359 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:10 crc kubenswrapper[4698]: I0127 14:30:10.960416 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:10 crc kubenswrapper[4698]: I0127 14:30:10.960427 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:10 crc kubenswrapper[4698]: I0127 14:30:10.960441 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:10 crc kubenswrapper[4698]: I0127 14:30:10.960452 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:10Z","lastTransitionTime":"2026-01-27T14:30:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:30:10 crc kubenswrapper[4698]: I0127 14:30:10.985804 4698 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-26 14:43:29.68917128 +0000 UTC Jan 27 14:30:10 crc kubenswrapper[4698]: I0127 14:30:10.991383 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 14:30:10 crc kubenswrapper[4698]: I0127 14:30:10.991456 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 14:30:10 crc kubenswrapper[4698]: E0127 14:30:10.991520 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 14:30:10 crc kubenswrapper[4698]: E0127 14:30:10.991582 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 14:30:11 crc kubenswrapper[4698]: I0127 14:30:11.062828 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:11 crc kubenswrapper[4698]: I0127 14:30:11.062862 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:11 crc kubenswrapper[4698]: I0127 14:30:11.062871 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:11 crc kubenswrapper[4698]: I0127 14:30:11.062883 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:11 crc kubenswrapper[4698]: I0127 14:30:11.062892 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:11Z","lastTransitionTime":"2026-01-27T14:30:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:30:11 crc kubenswrapper[4698]: I0127 14:30:11.166170 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:11 crc kubenswrapper[4698]: I0127 14:30:11.166206 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:11 crc kubenswrapper[4698]: I0127 14:30:11.166216 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:11 crc kubenswrapper[4698]: I0127 14:30:11.166229 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:11 crc kubenswrapper[4698]: I0127 14:30:11.166238 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:11Z","lastTransitionTime":"2026-01-27T14:30:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:30:11 crc kubenswrapper[4698]: I0127 14:30:11.268806 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:11 crc kubenswrapper[4698]: I0127 14:30:11.268852 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:11 crc kubenswrapper[4698]: I0127 14:30:11.268862 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:11 crc kubenswrapper[4698]: I0127 14:30:11.268877 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:11 crc kubenswrapper[4698]: I0127 14:30:11.268887 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:11Z","lastTransitionTime":"2026-01-27T14:30:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:30:11 crc kubenswrapper[4698]: I0127 14:30:11.371661 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:11 crc kubenswrapper[4698]: I0127 14:30:11.371714 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:11 crc kubenswrapper[4698]: I0127 14:30:11.371727 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:11 crc kubenswrapper[4698]: I0127 14:30:11.371745 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:11 crc kubenswrapper[4698]: I0127 14:30:11.371758 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:11Z","lastTransitionTime":"2026-01-27T14:30:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:30:11 crc kubenswrapper[4698]: I0127 14:30:11.473680 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:11 crc kubenswrapper[4698]: I0127 14:30:11.473719 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:11 crc kubenswrapper[4698]: I0127 14:30:11.473732 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:11 crc kubenswrapper[4698]: I0127 14:30:11.473749 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:11 crc kubenswrapper[4698]: I0127 14:30:11.473761 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:11Z","lastTransitionTime":"2026-01-27T14:30:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:30:11 crc kubenswrapper[4698]: I0127 14:30:11.576273 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:11 crc kubenswrapper[4698]: I0127 14:30:11.576323 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:11 crc kubenswrapper[4698]: I0127 14:30:11.576335 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:11 crc kubenswrapper[4698]: I0127 14:30:11.576352 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:11 crc kubenswrapper[4698]: I0127 14:30:11.576364 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:11Z","lastTransitionTime":"2026-01-27T14:30:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:30:11 crc kubenswrapper[4698]: I0127 14:30:11.678363 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:11 crc kubenswrapper[4698]: I0127 14:30:11.678399 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:11 crc kubenswrapper[4698]: I0127 14:30:11.678408 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:11 crc kubenswrapper[4698]: I0127 14:30:11.678421 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:11 crc kubenswrapper[4698]: I0127 14:30:11.678430 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:11Z","lastTransitionTime":"2026-01-27T14:30:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:30:11 crc kubenswrapper[4698]: I0127 14:30:11.780631 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:11 crc kubenswrapper[4698]: I0127 14:30:11.780685 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:11 crc kubenswrapper[4698]: I0127 14:30:11.780696 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:11 crc kubenswrapper[4698]: I0127 14:30:11.780712 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:11 crc kubenswrapper[4698]: I0127 14:30:11.780723 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:11Z","lastTransitionTime":"2026-01-27T14:30:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:30:11 crc kubenswrapper[4698]: I0127 14:30:11.883732 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:11 crc kubenswrapper[4698]: I0127 14:30:11.883768 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:11 crc kubenswrapper[4698]: I0127 14:30:11.883776 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:11 crc kubenswrapper[4698]: I0127 14:30:11.883789 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:11 crc kubenswrapper[4698]: I0127 14:30:11.883798 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:11Z","lastTransitionTime":"2026-01-27T14:30:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:30:11 crc kubenswrapper[4698]: I0127 14:30:11.986008 4698 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-09 15:10:00.564924022 +0000 UTC Jan 27 14:30:11 crc kubenswrapper[4698]: I0127 14:30:11.986503 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:11 crc kubenswrapper[4698]: I0127 14:30:11.986548 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:11 crc kubenswrapper[4698]: I0127 14:30:11.986559 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:11 crc kubenswrapper[4698]: I0127 14:30:11.986572 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:11 crc kubenswrapper[4698]: I0127 14:30:11.986580 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:11Z","lastTransitionTime":"2026-01-27T14:30:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:30:11 crc kubenswrapper[4698]: I0127 14:30:11.991861 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 14:30:11 crc kubenswrapper[4698]: I0127 14:30:11.991886 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lpvsw" Jan 27 14:30:11 crc kubenswrapper[4698]: E0127 14:30:11.991977 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 14:30:11 crc kubenswrapper[4698]: E0127 14:30:11.992455 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lpvsw" podUID="621bb20d-2ffa-4e89-b522-d04b4764fcc3" Jan 27 14:30:11 crc kubenswrapper[4698]: I0127 14:30:11.992688 4698 scope.go:117] "RemoveContainer" containerID="0a99e5f8df86301415e9b49b4f197383e67e6a1b00cbf7e4e7f543002a0f1db4" Jan 27 14:30:11 crc kubenswrapper[4698]: E0127 14:30:11.992815 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-xmpm6_openshift-ovn-kubernetes(c59a9d01-79ce-42d9-a41d-39d7d73cb03e)\"" pod="openshift-ovn-kubernetes/ovnkube-node-xmpm6" podUID="c59a9d01-79ce-42d9-a41d-39d7d73cb03e" Jan 27 14:30:11 crc kubenswrapper[4698]: I0127 14:30:11.998834 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:11 crc kubenswrapper[4698]: I0127 14:30:11.998869 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:11 crc kubenswrapper[4698]: I0127 14:30:11.998881 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:11 crc kubenswrapper[4698]: I0127 14:30:11.998899 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:11 crc kubenswrapper[4698]: I0127 14:30:11.998910 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:11Z","lastTransitionTime":"2026-01-27T14:30:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:30:12 crc kubenswrapper[4698]: I0127 14:30:12.002989 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc"] Jan 27 14:30:12 crc kubenswrapper[4698]: E0127 14:30:12.011968 4698 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T14:30:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T14:30:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T14:30:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T14:30:11Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T14:30:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T14:30:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T14:30:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T14:30:11Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9d78a2be-22ac-47e6-a326-83038cc10e0c\\\",\\\"systemUUID\\\":\\\"3b71cf61-a3fa-4076-a23c-5d695e40fc0d\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:30:12Z is after 2025-08-24T17:21:41Z" Jan 27 14:30:12 crc kubenswrapper[4698]: I0127 14:30:12.015914 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:12 crc kubenswrapper[4698]: I0127 14:30:12.015960 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 27 14:30:12 crc kubenswrapper[4698]: I0127 14:30:12.015971 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:12 crc kubenswrapper[4698]: I0127 14:30:12.015986 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:12 crc kubenswrapper[4698]: I0127 14:30:12.015997 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:12Z","lastTransitionTime":"2026-01-27T14:30:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:30:12 crc kubenswrapper[4698]: E0127 14:30:12.031412 4698 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T14:30:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T14:30:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T14:30:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T14:30:12Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T14:30:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T14:30:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T14:30:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T14:30:12Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9d78a2be-22ac-47e6-a326-83038cc10e0c\\\",\\\"systemUUID\\\":\\\"3b71cf61-a3fa-4076-a23c-5d695e40fc0d\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:30:12Z is after 2025-08-24T17:21:41Z" Jan 27 14:30:12 crc kubenswrapper[4698]: I0127 14:30:12.034527 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:12 crc kubenswrapper[4698]: I0127 14:30:12.034558 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 27 14:30:12 crc kubenswrapper[4698]: I0127 14:30:12.034567 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:12 crc kubenswrapper[4698]: I0127 14:30:12.034580 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:12 crc kubenswrapper[4698]: I0127 14:30:12.034589 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:12Z","lastTransitionTime":"2026-01-27T14:30:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:30:12 crc kubenswrapper[4698]: E0127 14:30:12.045007 4698 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T14:30:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T14:30:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T14:30:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T14:30:12Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T14:30:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T14:30:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T14:30:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T14:30:12Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9d78a2be-22ac-47e6-a326-83038cc10e0c\\\",\\\"systemUUID\\\":\\\"3b71cf61-a3fa-4076-a23c-5d695e40fc0d\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:30:12Z is after 2025-08-24T17:21:41Z" Jan 27 14:30:12 crc kubenswrapper[4698]: I0127 14:30:12.047910 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:12 crc kubenswrapper[4698]: I0127 14:30:12.047961 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 27 14:30:12 crc kubenswrapper[4698]: I0127 14:30:12.047974 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:12 crc kubenswrapper[4698]: I0127 14:30:12.047989 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:12 crc kubenswrapper[4698]: I0127 14:30:12.048000 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:12Z","lastTransitionTime":"2026-01-27T14:30:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:30:12 crc kubenswrapper[4698]: E0127 14:30:12.060326 4698 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T14:30:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T14:30:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T14:30:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T14:30:12Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T14:30:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T14:30:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T14:30:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T14:30:12Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9d78a2be-22ac-47e6-a326-83038cc10e0c\\\",\\\"systemUUID\\\":\\\"3b71cf61-a3fa-4076-a23c-5d695e40fc0d\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:30:12Z is after 2025-08-24T17:21:41Z" Jan 27 14:30:12 crc kubenswrapper[4698]: I0127 14:30:12.063548 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:12 crc kubenswrapper[4698]: I0127 14:30:12.063703 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 27 14:30:12 crc kubenswrapper[4698]: I0127 14:30:12.063732 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:12 crc kubenswrapper[4698]: I0127 14:30:12.063746 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:12 crc kubenswrapper[4698]: I0127 14:30:12.063755 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:12Z","lastTransitionTime":"2026-01-27T14:30:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:30:12 crc kubenswrapper[4698]: E0127 14:30:12.073875 4698 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T14:30:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T14:30:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T14:30:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T14:30:12Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T14:30:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T14:30:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T14:30:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T14:30:12Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9d78a2be-22ac-47e6-a326-83038cc10e0c\\\",\\\"systemUUID\\\":\\\"3b71cf61-a3fa-4076-a23c-5d695e40fc0d\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:30:12Z is after 2025-08-24T17:21:41Z" Jan 27 14:30:12 crc kubenswrapper[4698]: E0127 14:30:12.074024 4698 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 27 14:30:12 crc kubenswrapper[4698]: I0127 14:30:12.089117 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 27 14:30:12 crc kubenswrapper[4698]: I0127 14:30:12.089156 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:12 crc kubenswrapper[4698]: I0127 14:30:12.089164 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:12 crc kubenswrapper[4698]: I0127 14:30:12.089178 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:12 crc kubenswrapper[4698]: I0127 14:30:12.089189 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:12Z","lastTransitionTime":"2026-01-27T14:30:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:30:12 crc kubenswrapper[4698]: I0127 14:30:12.192836 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:12 crc kubenswrapper[4698]: I0127 14:30:12.192895 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:12 crc kubenswrapper[4698]: I0127 14:30:12.192912 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:12 crc kubenswrapper[4698]: I0127 14:30:12.192935 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:12 crc kubenswrapper[4698]: I0127 14:30:12.192951 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:12Z","lastTransitionTime":"2026-01-27T14:30:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:30:12 crc kubenswrapper[4698]: I0127 14:30:12.295311 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:12 crc kubenswrapper[4698]: I0127 14:30:12.295359 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:12 crc kubenswrapper[4698]: I0127 14:30:12.295370 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:12 crc kubenswrapper[4698]: I0127 14:30:12.295384 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:12 crc kubenswrapper[4698]: I0127 14:30:12.295393 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:12Z","lastTransitionTime":"2026-01-27T14:30:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:30:12 crc kubenswrapper[4698]: I0127 14:30:12.397549 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:12 crc kubenswrapper[4698]: I0127 14:30:12.397586 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:12 crc kubenswrapper[4698]: I0127 14:30:12.397599 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:12 crc kubenswrapper[4698]: I0127 14:30:12.397613 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:12 crc kubenswrapper[4698]: I0127 14:30:12.397624 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:12Z","lastTransitionTime":"2026-01-27T14:30:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:30:12 crc kubenswrapper[4698]: I0127 14:30:12.500423 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:12 crc kubenswrapper[4698]: I0127 14:30:12.500462 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:12 crc kubenswrapper[4698]: I0127 14:30:12.500473 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:12 crc kubenswrapper[4698]: I0127 14:30:12.500493 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:12 crc kubenswrapper[4698]: I0127 14:30:12.500506 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:12Z","lastTransitionTime":"2026-01-27T14:30:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:30:12 crc kubenswrapper[4698]: I0127 14:30:12.602968 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:12 crc kubenswrapper[4698]: I0127 14:30:12.603016 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:12 crc kubenswrapper[4698]: I0127 14:30:12.603028 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:12 crc kubenswrapper[4698]: I0127 14:30:12.603045 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:12 crc kubenswrapper[4698]: I0127 14:30:12.603056 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:12Z","lastTransitionTime":"2026-01-27T14:30:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:30:12 crc kubenswrapper[4698]: I0127 14:30:12.705443 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:12 crc kubenswrapper[4698]: I0127 14:30:12.705493 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:12 crc kubenswrapper[4698]: I0127 14:30:12.705503 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:12 crc kubenswrapper[4698]: I0127 14:30:12.705517 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:12 crc kubenswrapper[4698]: I0127 14:30:12.705526 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:12Z","lastTransitionTime":"2026-01-27T14:30:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:30:12 crc kubenswrapper[4698]: I0127 14:30:12.808056 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:12 crc kubenswrapper[4698]: I0127 14:30:12.808113 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:12 crc kubenswrapper[4698]: I0127 14:30:12.808124 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:12 crc kubenswrapper[4698]: I0127 14:30:12.808141 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:12 crc kubenswrapper[4698]: I0127 14:30:12.808155 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:12Z","lastTransitionTime":"2026-01-27T14:30:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:30:12 crc kubenswrapper[4698]: I0127 14:30:12.910758 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:12 crc kubenswrapper[4698]: I0127 14:30:12.910800 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:12 crc kubenswrapper[4698]: I0127 14:30:12.910810 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:12 crc kubenswrapper[4698]: I0127 14:30:12.910828 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:12 crc kubenswrapper[4698]: I0127 14:30:12.910839 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:12Z","lastTransitionTime":"2026-01-27T14:30:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:30:12 crc kubenswrapper[4698]: I0127 14:30:12.986396 4698 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-12 18:40:34.298780388 +0000 UTC Jan 27 14:30:12 crc kubenswrapper[4698]: I0127 14:30:12.991856 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 14:30:12 crc kubenswrapper[4698]: I0127 14:30:12.991903 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 14:30:12 crc kubenswrapper[4698]: E0127 14:30:12.992019 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 14:30:12 crc kubenswrapper[4698]: E0127 14:30:12.992247 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 14:30:13 crc kubenswrapper[4698]: I0127 14:30:13.013772 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:13 crc kubenswrapper[4698]: I0127 14:30:13.013820 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:13 crc kubenswrapper[4698]: I0127 14:30:13.013832 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:13 crc kubenswrapper[4698]: I0127 14:30:13.013851 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:13 crc kubenswrapper[4698]: I0127 14:30:13.013862 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:13Z","lastTransitionTime":"2026-01-27T14:30:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:30:13 crc kubenswrapper[4698]: I0127 14:30:13.116266 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:13 crc kubenswrapper[4698]: I0127 14:30:13.116305 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:13 crc kubenswrapper[4698]: I0127 14:30:13.116314 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:13 crc kubenswrapper[4698]: I0127 14:30:13.116328 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:13 crc kubenswrapper[4698]: I0127 14:30:13.116338 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:13Z","lastTransitionTime":"2026-01-27T14:30:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:30:13 crc kubenswrapper[4698]: I0127 14:30:13.219147 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:13 crc kubenswrapper[4698]: I0127 14:30:13.219184 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:13 crc kubenswrapper[4698]: I0127 14:30:13.219195 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:13 crc kubenswrapper[4698]: I0127 14:30:13.219208 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:13 crc kubenswrapper[4698]: I0127 14:30:13.219216 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:13Z","lastTransitionTime":"2026-01-27T14:30:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:30:13 crc kubenswrapper[4698]: I0127 14:30:13.322726 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:13 crc kubenswrapper[4698]: I0127 14:30:13.322791 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:13 crc kubenswrapper[4698]: I0127 14:30:13.322801 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:13 crc kubenswrapper[4698]: I0127 14:30:13.322852 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:13 crc kubenswrapper[4698]: I0127 14:30:13.322872 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:13Z","lastTransitionTime":"2026-01-27T14:30:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:30:13 crc kubenswrapper[4698]: I0127 14:30:13.425966 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:13 crc kubenswrapper[4698]: I0127 14:30:13.426001 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:13 crc kubenswrapper[4698]: I0127 14:30:13.426014 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:13 crc kubenswrapper[4698]: I0127 14:30:13.426029 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:13 crc kubenswrapper[4698]: I0127 14:30:13.426040 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:13Z","lastTransitionTime":"2026-01-27T14:30:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:30:13 crc kubenswrapper[4698]: I0127 14:30:13.528106 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:13 crc kubenswrapper[4698]: I0127 14:30:13.528135 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:13 crc kubenswrapper[4698]: I0127 14:30:13.528143 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:13 crc kubenswrapper[4698]: I0127 14:30:13.528156 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:13 crc kubenswrapper[4698]: I0127 14:30:13.528166 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:13Z","lastTransitionTime":"2026-01-27T14:30:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:30:13 crc kubenswrapper[4698]: I0127 14:30:13.630088 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:13 crc kubenswrapper[4698]: I0127 14:30:13.630138 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:13 crc kubenswrapper[4698]: I0127 14:30:13.630153 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:13 crc kubenswrapper[4698]: I0127 14:30:13.630171 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:13 crc kubenswrapper[4698]: I0127 14:30:13.630181 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:13Z","lastTransitionTime":"2026-01-27T14:30:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:30:13 crc kubenswrapper[4698]: I0127 14:30:13.732411 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:13 crc kubenswrapper[4698]: I0127 14:30:13.732462 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:13 crc kubenswrapper[4698]: I0127 14:30:13.732475 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:13 crc kubenswrapper[4698]: I0127 14:30:13.732492 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:13 crc kubenswrapper[4698]: I0127 14:30:13.732504 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:13Z","lastTransitionTime":"2026-01-27T14:30:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:30:13 crc kubenswrapper[4698]: I0127 14:30:13.835049 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:13 crc kubenswrapper[4698]: I0127 14:30:13.835099 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:13 crc kubenswrapper[4698]: I0127 14:30:13.835111 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:13 crc kubenswrapper[4698]: I0127 14:30:13.835129 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:13 crc kubenswrapper[4698]: I0127 14:30:13.835141 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:13Z","lastTransitionTime":"2026-01-27T14:30:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:30:13 crc kubenswrapper[4698]: I0127 14:30:13.937481 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:13 crc kubenswrapper[4698]: I0127 14:30:13.937519 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:13 crc kubenswrapper[4698]: I0127 14:30:13.937532 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:13 crc kubenswrapper[4698]: I0127 14:30:13.937548 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:13 crc kubenswrapper[4698]: I0127 14:30:13.937562 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:13Z","lastTransitionTime":"2026-01-27T14:30:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:30:13 crc kubenswrapper[4698]: I0127 14:30:13.986857 4698 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-07 08:33:36.265925641 +0000 UTC Jan 27 14:30:13 crc kubenswrapper[4698]: I0127 14:30:13.991074 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lpvsw" Jan 27 14:30:13 crc kubenswrapper[4698]: I0127 14:30:13.991119 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 14:30:13 crc kubenswrapper[4698]: E0127 14:30:13.991205 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lpvsw" podUID="621bb20d-2ffa-4e89-b522-d04b4764fcc3" Jan 27 14:30:13 crc kubenswrapper[4698]: E0127 14:30:13.991300 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 14:30:14 crc kubenswrapper[4698]: I0127 14:30:14.040342 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:14 crc kubenswrapper[4698]: I0127 14:30:14.040384 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:14 crc kubenswrapper[4698]: I0127 14:30:14.040395 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:14 crc kubenswrapper[4698]: I0127 14:30:14.040414 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:14 crc kubenswrapper[4698]: I0127 14:30:14.040427 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:14Z","lastTransitionTime":"2026-01-27T14:30:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:30:14 crc kubenswrapper[4698]: I0127 14:30:14.143020 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:14 crc kubenswrapper[4698]: I0127 14:30:14.143119 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:14 crc kubenswrapper[4698]: I0127 14:30:14.143137 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:14 crc kubenswrapper[4698]: I0127 14:30:14.143154 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:14 crc kubenswrapper[4698]: I0127 14:30:14.143166 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:14Z","lastTransitionTime":"2026-01-27T14:30:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:30:14 crc kubenswrapper[4698]: I0127 14:30:14.245400 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:14 crc kubenswrapper[4698]: I0127 14:30:14.245459 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:14 crc kubenswrapper[4698]: I0127 14:30:14.245472 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:14 crc kubenswrapper[4698]: I0127 14:30:14.245488 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:14 crc kubenswrapper[4698]: I0127 14:30:14.245500 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:14Z","lastTransitionTime":"2026-01-27T14:30:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:30:14 crc kubenswrapper[4698]: I0127 14:30:14.348013 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:14 crc kubenswrapper[4698]: I0127 14:30:14.348056 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:14 crc kubenswrapper[4698]: I0127 14:30:14.348070 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:14 crc kubenswrapper[4698]: I0127 14:30:14.348085 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:14 crc kubenswrapper[4698]: I0127 14:30:14.348097 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:14Z","lastTransitionTime":"2026-01-27T14:30:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:30:14 crc kubenswrapper[4698]: I0127 14:30:14.448457 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/621bb20d-2ffa-4e89-b522-d04b4764fcc3-metrics-certs\") pod \"network-metrics-daemon-lpvsw\" (UID: \"621bb20d-2ffa-4e89-b522-d04b4764fcc3\") " pod="openshift-multus/network-metrics-daemon-lpvsw" Jan 27 14:30:14 crc kubenswrapper[4698]: E0127 14:30:14.448692 4698 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 27 14:30:14 crc kubenswrapper[4698]: E0127 14:30:14.448794 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/621bb20d-2ffa-4e89-b522-d04b4764fcc3-metrics-certs podName:621bb20d-2ffa-4e89-b522-d04b4764fcc3 nodeName:}" failed. No retries permitted until 2026-01-27 14:30:46.448770592 +0000 UTC m=+102.125548127 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/621bb20d-2ffa-4e89-b522-d04b4764fcc3-metrics-certs") pod "network-metrics-daemon-lpvsw" (UID: "621bb20d-2ffa-4e89-b522-d04b4764fcc3") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 27 14:30:14 crc kubenswrapper[4698]: I0127 14:30:14.450098 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:14 crc kubenswrapper[4698]: I0127 14:30:14.450135 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:14 crc kubenswrapper[4698]: I0127 14:30:14.450150 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:14 crc kubenswrapper[4698]: I0127 14:30:14.450165 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:14 crc kubenswrapper[4698]: I0127 14:30:14.450176 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:14Z","lastTransitionTime":"2026-01-27T14:30:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:30:14 crc kubenswrapper[4698]: I0127 14:30:14.552905 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:14 crc kubenswrapper[4698]: I0127 14:30:14.552940 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:14 crc kubenswrapper[4698]: I0127 14:30:14.552948 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:14 crc kubenswrapper[4698]: I0127 14:30:14.552961 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:14 crc kubenswrapper[4698]: I0127 14:30:14.552972 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:14Z","lastTransitionTime":"2026-01-27T14:30:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:30:14 crc kubenswrapper[4698]: I0127 14:30:14.655983 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:14 crc kubenswrapper[4698]: I0127 14:30:14.656032 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:14 crc kubenswrapper[4698]: I0127 14:30:14.656045 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:14 crc kubenswrapper[4698]: I0127 14:30:14.656061 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:14 crc kubenswrapper[4698]: I0127 14:30:14.656072 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:14Z","lastTransitionTime":"2026-01-27T14:30:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:30:14 crc kubenswrapper[4698]: I0127 14:30:14.758495 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:14 crc kubenswrapper[4698]: I0127 14:30:14.758532 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:14 crc kubenswrapper[4698]: I0127 14:30:14.758543 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:14 crc kubenswrapper[4698]: I0127 14:30:14.758560 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:14 crc kubenswrapper[4698]: I0127 14:30:14.758575 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:14Z","lastTransitionTime":"2026-01-27T14:30:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:30:14 crc kubenswrapper[4698]: I0127 14:30:14.860918 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:14 crc kubenswrapper[4698]: I0127 14:30:14.860957 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:14 crc kubenswrapper[4698]: I0127 14:30:14.860968 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:14 crc kubenswrapper[4698]: I0127 14:30:14.860984 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:14 crc kubenswrapper[4698]: I0127 14:30:14.860995 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:14Z","lastTransitionTime":"2026-01-27T14:30:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:30:14 crc kubenswrapper[4698]: I0127 14:30:14.962947 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:14 crc kubenswrapper[4698]: I0127 14:30:14.962982 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:14 crc kubenswrapper[4698]: I0127 14:30:14.962993 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:14 crc kubenswrapper[4698]: I0127 14:30:14.963009 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:14 crc kubenswrapper[4698]: I0127 14:30:14.963033 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:14Z","lastTransitionTime":"2026-01-27T14:30:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:30:14 crc kubenswrapper[4698]: I0127 14:30:14.987831 4698 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-16 02:40:42.588044798 +0000 UTC Jan 27 14:30:14 crc kubenswrapper[4698]: I0127 14:30:14.991172 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 14:30:14 crc kubenswrapper[4698]: I0127 14:30:14.991261 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 14:30:14 crc kubenswrapper[4698]: E0127 14:30:14.991334 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 14:30:14 crc kubenswrapper[4698]: E0127 14:30:14.991388 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 14:30:15 crc kubenswrapper[4698]: I0127 14:30:15.004809 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"44c70587-abb1-4e02-ab7e-223d57817925\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b1a209c31294456c43022bb9df5fd230415596fc43ecdb2b6349114989c3ad8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://66b4b527d7836ceed419f9b8a1d851047efc8c052c110ac857316a6a73444333\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://94a4d586611f4ca0864567739d1ed225b7c63ebd6e6be9e5c0e385379de2ad01\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0
bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://95357819c206b188c1fd6bb937b2a57b480a9a61e95143de444141b16e4963ea\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:29:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:30:15Z is after 2025-08-24T17:21:41Z" Jan 27 14:30:15 crc kubenswrapper[4698]: I0127 14:30:15.018562 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-g2kkn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4e135f0c-0c36-44f4-afeb-06994affb352\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d17862aabd67b485023ca018499f7224e0bd91ec0aed6bbce5a187f319e3140\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97l66\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:29:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-g2kkn\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:30:15Z is after 2025-08-24T17:21:41Z" Jan 27 14:30:15 crc kubenswrapper[4698]: I0127 14:30:15.029618 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zpvcj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"709dfdd7-f928-4f0b-8f5a-c356614219cb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6bec8d64d7f0b7a45b57b810dcf1de846a388902b480230742da0a40da4e67b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pftbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5fb54d7728b2438444c924295722867c673c8b82768ed17b6dd2104d404e198e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pftbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\
\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:29:41Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-zpvcj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:30:15Z is after 2025-08-24T17:21:41Z" Jan 27 14:30:15 crc kubenswrapper[4698]: I0127 14:30:15.043304 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0ce1118e-e5ad-4adb-8d50-c758116b45ec\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dbbe49aa918c711f9ec638a3fb42ef3be38a9ed3bc0cdde639ba8bba11eab4de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9525d1a6372b19bc2f30eb4e8e522833badb6319bac25ecae77b3313acf1b1a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e384a95fdadc32733a7213ba468af64a1a33e7dd08722cbffde3ad0c461d8ba\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-oper
ator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9ee5f5f6bb400359196a28fe605c173c38223c8982a991750c91601aeace150\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e780429dcca871712b55832b0cd0b3e78b7343569cfa71a87bbb2be1bd3f129\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T14:29:24Z\\\",\\\"message\\\":\\\"espace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0127 14:29:09.491666 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 14:29:09.493926 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-900123067/tls.crt::/tmp/serving-cert-900123067/tls.key\\\\\\\"\\\\nI0127 14:29:24.319898 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 14:29:24.322681 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 14:29:24.322759 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 14:29:24.322814 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 14:29:24.322847 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 14:29:24.331581 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 14:29:24.331608 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 14:29:24.331613 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 14:29:24.331618 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 14:29:24.331622 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 14:29:24.331626 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 14:29:24.331630 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 14:29:24.331748 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 14:29:24.333757 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T14:29:08Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a526cfa90f2ba7d074aa6007c909e68f72d5e7a4b91d893c955e7460ff208cfb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:08Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e1a3e5a71b3094750c736335cd70f45977566e58702231351d9d6b195a0cb4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5e1a3e5a71b3094750c736335cd70f45977566e58702231351d9d6b195a0cb4b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:29:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:29:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:29:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:30:15Z is after 2025-08-24T17:21:41Z" Jan 27 14:30:15 crc kubenswrapper[4698]: I0127 14:30:15.053374 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-g9vj8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2776dfc9-913b-42b0-9cf2-6fea98d83bc9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f471cea0dc934e7c77ac6ead7ec77f8feb7379b50a7bb5255d7100fbea635a3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bk7ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:29:28Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-g9vj8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:30:15Z is after 2025-08-24T17:21:41Z" Jan 27 14:30:15 crc kubenswrapper[4698]: I0127 14:30:15.063908 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-ndrd6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e403fc5-7005-474c-8c75-b7906b481677\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6f9227503b5cf0962f3fb8ece18f54d84ff2cdfe5628ebf0fa0cef7fe5695c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tt2dz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c533539135e25b310541c9f0b12adcf526b4651a2f6272bce12ea9686dd39ac9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tt2dz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:29:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-ndrd6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:30:15Z is after 2025-08-24T17:21:41Z" Jan 27 14:30:15 crc kubenswrapper[4698]: I0127 14:30:15.066135 4698 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:15 crc kubenswrapper[4698]: I0127 14:30:15.066173 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:15 crc kubenswrapper[4698]: I0127 14:30:15.066187 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:15 crc kubenswrapper[4698]: I0127 14:30:15.066207 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:15 crc kubenswrapper[4698]: I0127 14:30:15.066221 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:15Z","lastTransitionTime":"2026-01-27T14:30:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:30:15 crc kubenswrapper[4698]: I0127 14:30:15.081069 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-vg6nd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e045926d-2303-47ea-b25d-dc23982427e4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e1bf1729f089f253e8e1259e40ef976b85e433358642b6231f081c494851048\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66snv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c935f517f9ca140d63d1637ef8c01a4a41b94fa72e1086eb6184ae78a6d8da8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2
c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c935f517f9ca140d63d1637ef8c01a4a41b94fa72e1086eb6184ae78a6d8da8c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:29:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:29:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66snv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa061cb18143e07dc72a6b75d78b5959b40a9969030ff21d402863b85eba6f91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa061cb18143e07dc72a6b75d78b5959b40a9969030ff21d402863b85eba6f91\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:29:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:29:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66snv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be40eff60b17b2c010a5e4394fb6843ceb8ee2752781ccbf0030afdd212b81e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be40eff60b17b2c010a5e4394fb6843ceb8ee2752781ccbf0030afdd212b81e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:29:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:29:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/
secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66snv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3018d87c9354e8b8d90c59a4a45e982d0b35b89a0b0a9fde703e9452cf5f0024\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3018d87c9354e8b8d90c59a4a45e982d0b35b89a0b0a9fde703e9452cf5f0024\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:29:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:29:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66snv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de493498bbac49791f6b7409916fbbb62f64bed0def214ea6a06f5182bf7ad3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://de493498bbac49791f6b7409916fbbb62f64bed0def214ea6a06f5182bf7ad3b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:29:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:29:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66snv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://14af7d5dd3526aef22f7e95409ff88c3bbbc29e6dd9dd7c41b9be5bab49bf074\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://14af7d5dd3526aef22f7e95409ff88c3bbbc29e6dd9dd7c41b9be5bab49bf074\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:29:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:29:36Z\\\"}},\\\"volu
meMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66snv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:29:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-vg6nd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:30:15Z is after 2025-08-24T17:21:41Z" Jan 27 14:30:15 crc kubenswrapper[4698]: I0127 14:30:15.102284 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xmpm6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c59a9d01-79ce-42d9-a41d-39d7d73cb03e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:29Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d81778ae40167de932debf7266f3c793d77edeace2116220c21cf89e15e4cc98\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r7gpg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea6d17f5861064198539f0b84b559f9d28cfa5decaa89a9059469af5a3e12e09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r7gpg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c98fd6e3005c979f95419686ec2143aa282fa56d7306ecaff766759e34b68b2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r7gpg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e38585fe9db52e6e7cc5c3a41e23a0bb761977a3b25fd281d86458f422e3791\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r7gpg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b80c023f92c9854f8c16b6b50c10dac58f968938615375019ba2875d9cd69e5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r7gpg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://87e7499ab247dff50eee2a5e113ed36ff00c467e8e1bcdfee45e8835815d4b81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r7gpg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a99e5f8df86301415e9b49b4f197383e67e6a1b
00cbf7e4e7f543002a0f1db4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0a99e5f8df86301415e9b49b4f197383e67e6a1b00cbf7e4e7f543002a0f1db4\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T14:29:58Z\\\",\\\"message\\\":\\\"il\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0127 14:29:58.516254 6338 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-diagnostics/network-check-source-55646444c4-trplf\\\\nI0127 14:29:58.516264 6338 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-diagnostics/network-check-source-55646444c4-trplf\\\\nI0127 14:29:58.516271 6338 ovn.go:134] Ensuring zone local for Pod openshift-network-diagnostics/network-check-source-55646444c4-trplf in node crc\\\\nF0127 14:29:58.516287 6338 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:29:58Z is after 2025-08-24T17:21:41Z]\\\\nI0127 14:29:58.516302 6338 obj_retry.go:303\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T14:29:56Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-xmpm6_openshift-ovn-kubernetes(c59a9d01-79ce-42d9-a41d-39d7d73cb03e)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r7gpg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0356c9c90e9c8e65b9452361aa486ffe95757bd69803077b32fe7ba0223ae1da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r7gpg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3a80917737e61fdb510dabf332c3bd87fb858d27849c42b77703a8becb66b4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b3a80917737e61fdb510dabf332c3bd87fb858d27849c42b77703a8becb66b4c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:29:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:29:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r7gpg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:29:29Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-xmpm6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:30:15Z is after 2025-08-24T17:21:41Z" Jan 27 14:30:15 crc kubenswrapper[4698]: I0127 14:30:15.112753 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-lpvsw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"621bb20d-2ffa-4e89-b522-d04b4764fcc3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:42Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:42Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gz87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gz87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:29:42Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-lpvsw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:30:15Z is after 2025-08-24T17:21:41Z" Jan 27 14:30:15 crc kubenswrapper[4698]: I0127 14:30:15.125416 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:30:15Z is after 2025-08-24T17:21:41Z" Jan 27 14:30:15 crc kubenswrapper[4698]: I0127 14:30:15.135472 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6c951f69-23b8-41c0-8d43-60097686223a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f5f7ffc9337f1ee226951fc2bac9235815704df49855f4ee6c9fe391970df0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cff6cddc7b21de4968fbb9eacccd070d4e8ff2d0514da0a0b878313bb11a5188\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cff6cddc7b21de4968fbb9eacccd070d4e8ff2d0514da0a0b
878313bb11a5188\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:29:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:29:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:29:05Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:30:15Z is after 2025-08-24T17:21:41Z" Jan 27 14:30:15 crc kubenswrapper[4698]: I0127 14:30:15.147224 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae7402df1ba165eb747293c1d23e4ea89c50cf2342bf52db437c11c283f68755\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:30:15Z is after 2025-08-24T17:21:41Z" Jan 27 14:30:15 crc kubenswrapper[4698]: I0127 14:30:15.157956 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:24Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:30:15Z is after 2025-08-24T17:21:41Z" Jan 27 14:30:15 crc kubenswrapper[4698]: I0127 14:30:15.168493 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:15 crc kubenswrapper[4698]: I0127 14:30:15.168527 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:15 crc kubenswrapper[4698]: I0127 14:30:15.168537 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:15 crc kubenswrapper[4698]: I0127 14:30:15.168552 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:15 crc kubenswrapper[4698]: I0127 14:30:15.168563 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:15Z","lastTransitionTime":"2026-01-27T14:30:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:30:15 crc kubenswrapper[4698]: I0127 14:30:15.168880 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8dae57de05788973b988fee332168db475b66bb71b4a0570e7d95341dfe84a96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:30:15Z is after 2025-08-24T17:21:41Z" Jan 27 14:30:15 crc kubenswrapper[4698]: I0127 14:30:15.182576 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-flx9b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2c9fcf55-4a50-4a87-937b-975bc7e00bfa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0cb86e0c67991a076cebc33eb390f49c0f4c40332c423936dd8f3c59b9f7474b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-spqpw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:29:31Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-flx9b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:30:15Z is after 2025-08-24T17:21:41Z" Jan 27 14:30:15 crc kubenswrapper[4698]: I0127 14:30:15.195313 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a67da6e9-5dc3-469a-8a1a-a2b287e96281\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b99ac884289d1dcab871d1db10e9992389170de25aeb71d84aaad1348eafd4fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://21e39904f887483a435cadd506be29c1513b2c9dbc144a61549f74f2c93fa6a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8cd43402ec8ba658de9fc9d84d14600829a8ae019aceb606fc2bf781dbe13ddb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c9cbb07f8bc0c4d5171813dd406a52fc3c99985102f65c43e8e04bf8acc21f1b\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c9cbb07f8bc0c4d5171813dd406a52fc3c99985102f65c43e8e04bf8acc21f1b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:29:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:29:06Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:29:05Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:30:15Z is after 2025-08-24T17:21:41Z" Jan 27 14:30:15 crc kubenswrapper[4698]: I0127 14:30:15.208134 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:24Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:30:15Z is after 2025-08-24T17:21:41Z" Jan 27 14:30:15 crc kubenswrapper[4698]: I0127 14:30:15.222410 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://625a1f8c29f9277903045e66df7a55a4d791239b141aba99572db228ea2735ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://314dcaad0f8949a4671b47667f654bdfacfb6b200438cfbd70401455f6e188b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mount
Path\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:30:15Z is after 2025-08-24T17:21:41Z" Jan 27 14:30:15 crc kubenswrapper[4698]: I0127 14:30:15.271805 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:15 crc kubenswrapper[4698]: I0127 14:30:15.271904 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:15 crc kubenswrapper[4698]: I0127 14:30:15.271921 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:15 crc kubenswrapper[4698]: I0127 14:30:15.271939 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:15 crc kubenswrapper[4698]: I0127 14:30:15.271949 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:15Z","lastTransitionTime":"2026-01-27T14:30:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:30:15 crc kubenswrapper[4698]: I0127 14:30:15.374531 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:15 crc kubenswrapper[4698]: I0127 14:30:15.374576 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:15 crc kubenswrapper[4698]: I0127 14:30:15.374586 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:15 crc kubenswrapper[4698]: I0127 14:30:15.374603 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:15 crc kubenswrapper[4698]: I0127 14:30:15.374613 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:15Z","lastTransitionTime":"2026-01-27T14:30:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:30:15 crc kubenswrapper[4698]: I0127 14:30:15.476924 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:15 crc kubenswrapper[4698]: I0127 14:30:15.476973 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:15 crc kubenswrapper[4698]: I0127 14:30:15.476986 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:15 crc kubenswrapper[4698]: I0127 14:30:15.477007 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:15 crc kubenswrapper[4698]: I0127 14:30:15.477018 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:15Z","lastTransitionTime":"2026-01-27T14:30:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:30:15 crc kubenswrapper[4698]: I0127 14:30:15.579186 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:15 crc kubenswrapper[4698]: I0127 14:30:15.579249 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:15 crc kubenswrapper[4698]: I0127 14:30:15.579257 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:15 crc kubenswrapper[4698]: I0127 14:30:15.579289 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:15 crc kubenswrapper[4698]: I0127 14:30:15.579299 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:15Z","lastTransitionTime":"2026-01-27T14:30:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:30:15 crc kubenswrapper[4698]: I0127 14:30:15.682680 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:15 crc kubenswrapper[4698]: I0127 14:30:15.682737 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:15 crc kubenswrapper[4698]: I0127 14:30:15.682747 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:15 crc kubenswrapper[4698]: I0127 14:30:15.682761 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:15 crc kubenswrapper[4698]: I0127 14:30:15.682769 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:15Z","lastTransitionTime":"2026-01-27T14:30:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:30:15 crc kubenswrapper[4698]: I0127 14:30:15.785785 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:15 crc kubenswrapper[4698]: I0127 14:30:15.785843 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:15 crc kubenswrapper[4698]: I0127 14:30:15.785862 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:15 crc kubenswrapper[4698]: I0127 14:30:15.785885 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:15 crc kubenswrapper[4698]: I0127 14:30:15.785900 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:15Z","lastTransitionTime":"2026-01-27T14:30:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:30:15 crc kubenswrapper[4698]: I0127 14:30:15.888243 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:15 crc kubenswrapper[4698]: I0127 14:30:15.888315 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:15 crc kubenswrapper[4698]: I0127 14:30:15.888328 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:15 crc kubenswrapper[4698]: I0127 14:30:15.888351 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:15 crc kubenswrapper[4698]: I0127 14:30:15.888364 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:15Z","lastTransitionTime":"2026-01-27T14:30:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:30:15 crc kubenswrapper[4698]: I0127 14:30:15.989087 4698 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-19 12:52:21.504603332 +0000 UTC Jan 27 14:30:15 crc kubenswrapper[4698]: I0127 14:30:15.991318 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:15 crc kubenswrapper[4698]: I0127 14:30:15.991362 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 14:30:15 crc kubenswrapper[4698]: I0127 14:30:15.991321 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lpvsw" Jan 27 14:30:15 crc kubenswrapper[4698]: E0127 14:30:15.991467 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
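The certificate_manager line above reports a rotation deadline (2025-12-19) that is already in the past relative to the node clock, so the kubelet keeps retrying rotation; the deadline also differs on the next pass (2025-12-14, a few lines down) because client-go re-draws it each time at a jittered point 70-90% of the way through the certificate's validity. A sketch of that draw, with an assumed one-year lifetime since the log only shows the expiry:

package main

import (
	"fmt"
	"math/rand"
	"time"
)

// rotationDeadline imitates client-go's certificate manager
// (k8s.io/client-go/util/certificate), which schedules rotation at a
// jittered point 70-90% of the way through the certificate's validity
// window, re-drawing the point on every pass.
func rotationDeadline(notBefore, notAfter time.Time) time.Time {
	total := notAfter.Sub(notBefore)
	jittered := time.Duration(float64(total) * (0.7 + 0.2*rand.Float64()))
	return notBefore.Add(jittered)
}

func main() {
	// Expiry copied from the kubelet-serving line above; the one-year
	// lifetime (and hence the NotBefore) is an assumption, not in the log.
	notAfter, _ := time.Parse(time.RFC3339, "2026-02-24T05:53:03Z")
	notBefore := notAfter.Add(-365 * 24 * time.Hour)

	for i := 0; i < 3; i++ {
		fmt.Println("rotation deadline:", rotationDeadline(notBefore, notAfter).UTC())
	}
}
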
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 14:30:15 crc kubenswrapper[4698]: E0127 14:30:15.991774 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lpvsw" podUID="621bb20d-2ffa-4e89-b522-d04b4764fcc3" Jan 27 14:30:15 crc kubenswrapper[4698]: I0127 14:30:15.991363 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:15 crc kubenswrapper[4698]: I0127 14:30:15.991821 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:15 crc kubenswrapper[4698]: I0127 14:30:15.991837 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:15 crc kubenswrapper[4698]: I0127 14:30:15.991849 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:15Z","lastTransitionTime":"2026-01-27T14:30:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:30:16 crc kubenswrapper[4698]: I0127 14:30:16.094338 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:16 crc kubenswrapper[4698]: I0127 14:30:16.094448 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:16 crc kubenswrapper[4698]: I0127 14:30:16.094806 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:16 crc kubenswrapper[4698]: I0127 14:30:16.094904 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:16 crc kubenswrapper[4698]: I0127 14:30:16.095145 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:16Z","lastTransitionTime":"2026-01-27T14:30:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:30:16 crc kubenswrapper[4698]: I0127 14:30:16.197205 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:16 crc kubenswrapper[4698]: I0127 14:30:16.197254 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:16 crc kubenswrapper[4698]: I0127 14:30:16.197266 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:16 crc kubenswrapper[4698]: I0127 14:30:16.197284 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:16 crc kubenswrapper[4698]: I0127 14:30:16.197296 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:16Z","lastTransitionTime":"2026-01-27T14:30:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:30:16 crc kubenswrapper[4698]: I0127 14:30:16.300725 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:16 crc kubenswrapper[4698]: I0127 14:30:16.300768 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:16 crc kubenswrapper[4698]: I0127 14:30:16.300780 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:16 crc kubenswrapper[4698]: I0127 14:30:16.300797 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:16 crc kubenswrapper[4698]: I0127 14:30:16.300812 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:16Z","lastTransitionTime":"2026-01-27T14:30:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:30:16 crc kubenswrapper[4698]: I0127 14:30:16.403528 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:16 crc kubenswrapper[4698]: I0127 14:30:16.403567 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:16 crc kubenswrapper[4698]: I0127 14:30:16.403579 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:16 crc kubenswrapper[4698]: I0127 14:30:16.403595 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:16 crc kubenswrapper[4698]: I0127 14:30:16.403617 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:16Z","lastTransitionTime":"2026-01-27T14:30:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
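The setters.go entries repeating every ~100 ms above are the kubelet's node-status loop re-recording the same Ready=False condition; the condition={...} payload is ordinary JSON. A hand-rolled struct (not the real k8s.io/api/core/v1.NodeCondition) that reproduces one of those payloads:

package main

import (
	"encoding/json"
	"fmt"
	"time"
)

// nodeCondition mirrors the field names and order of the condition printed
// by setters.go in the entries above.
type nodeCondition struct {
	Type               string `json:"type"`
	Status             string `json:"status"`
	LastHeartbeatTime  string `json:"lastHeartbeatTime"`
	LastTransitionTime string `json:"lastTransitionTime"`
	Reason             string `json:"reason"`
	Message            string `json:"message"`
}

func main() {
	now := time.Date(2026, 1, 27, 14, 30, 15, 0, time.UTC).Format(time.RFC3339)
	c := nodeCondition{
		Type:               "Ready",
		Status:             "False",
		LastHeartbeatTime:  now,
		LastTransitionTime: now,
		Reason:             "KubeletNotReady",
		Message: "container runtime network not ready: NetworkReady=false " +
			"reason:NetworkPluginNotReady message:Network plugin returns error: " +
			"no CNI configuration file in /etc/kubernetes/cni/net.d/. " +
			"Has your network provider started?",
	}
	out, _ := json.Marshal(c)
	fmt.Println(string(out)) // same shape as the condition={...} field in the log
}
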
Has your network provider started?"} Jan 27 14:30:16 crc kubenswrapper[4698]: I0127 14:30:16.506147 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:16 crc kubenswrapper[4698]: I0127 14:30:16.506188 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:16 crc kubenswrapper[4698]: I0127 14:30:16.506205 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:16 crc kubenswrapper[4698]: I0127 14:30:16.506223 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:16 crc kubenswrapper[4698]: I0127 14:30:16.506234 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:16Z","lastTransitionTime":"2026-01-27T14:30:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:30:16 crc kubenswrapper[4698]: I0127 14:30:16.609678 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:16 crc kubenswrapper[4698]: I0127 14:30:16.609733 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:16 crc kubenswrapper[4698]: I0127 14:30:16.609744 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:16 crc kubenswrapper[4698]: I0127 14:30:16.609760 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:16 crc kubenswrapper[4698]: I0127 14:30:16.609773 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:16Z","lastTransitionTime":"2026-01-27T14:30:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:30:16 crc kubenswrapper[4698]: I0127 14:30:16.712862 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:16 crc kubenswrapper[4698]: I0127 14:30:16.712910 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:16 crc kubenswrapper[4698]: I0127 14:30:16.712920 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:16 crc kubenswrapper[4698]: I0127 14:30:16.712944 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:16 crc kubenswrapper[4698]: I0127 14:30:16.712955 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:16Z","lastTransitionTime":"2026-01-27T14:30:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:30:16 crc kubenswrapper[4698]: I0127 14:30:16.815609 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:16 crc kubenswrapper[4698]: I0127 14:30:16.815663 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:16 crc kubenswrapper[4698]: I0127 14:30:16.815672 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:16 crc kubenswrapper[4698]: I0127 14:30:16.815686 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:16 crc kubenswrapper[4698]: I0127 14:30:16.815694 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:16Z","lastTransitionTime":"2026-01-27T14:30:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:30:16 crc kubenswrapper[4698]: I0127 14:30:16.917997 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:16 crc kubenswrapper[4698]: I0127 14:30:16.918027 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:16 crc kubenswrapper[4698]: I0127 14:30:16.918036 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:16 crc kubenswrapper[4698]: I0127 14:30:16.918054 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:16 crc kubenswrapper[4698]: I0127 14:30:16.918070 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:16Z","lastTransitionTime":"2026-01-27T14:30:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:30:16 crc kubenswrapper[4698]: I0127 14:30:16.989843 4698 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-14 08:35:18.746105565 +0000 UTC Jan 27 14:30:16 crc kubenswrapper[4698]: I0127 14:30:16.992180 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 14:30:16 crc kubenswrapper[4698]: I0127 14:30:16.992217 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 14:30:16 crc kubenswrapper[4698]: E0127 14:30:16.992318 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
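The util.go and pod_workers.go lines above show the other half of the deadlock: pods that lost their sandboxes cannot get new ones while NetworkReady=false, yet host-network static pods such as openshift-kube-scheduler-crc keep running because they are exempt from the network-ready gate. A simplified stand-in for that gate; the names and types below are hypothetical, not kubelet's real ones:

package main

import (
	"errors"
	"fmt"
)

var errNetworkNotReady = errors.New(
	"network is not ready: container runtime network not ready: NetworkReady=false")

type pod struct {
	name        string
	hostNetwork bool
}

// ensureSandbox refuses to start a sandbox for cluster-network pods while
// the runtime network is not ready, but lets host-network pods through,
// matching the split visible in this log.
func ensureSandbox(p pod, networkReady bool) error {
	if !p.hostNetwork && !networkReady {
		return errNetworkNotReady
	}
	fmt.Printf("starting a new sandbox for %s\n", p.name)
	return nil
}

func main() {
	pods := []pod{
		{"openshift-network-diagnostics/network-check-target-xd92c", false},
		{"openshift-kube-scheduler/openshift-kube-scheduler-crc", true},
	}
	for _, p := range pods {
		if err := ensureSandbox(p, false); err != nil {
			fmt.Printf("error syncing pod %q, skipping: %v\n", p.name, err)
		}
	}
}
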
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 14:30:16 crc kubenswrapper[4698]: E0127 14:30:16.992480 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 14:30:17 crc kubenswrapper[4698]: I0127 14:30:17.020421 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:17 crc kubenswrapper[4698]: I0127 14:30:17.020461 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:17 crc kubenswrapper[4698]: I0127 14:30:17.020471 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:17 crc kubenswrapper[4698]: I0127 14:30:17.020485 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:17 crc kubenswrapper[4698]: I0127 14:30:17.020495 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:17Z","lastTransitionTime":"2026-01-27T14:30:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:30:17 crc kubenswrapper[4698]: I0127 14:30:17.123368 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:17 crc kubenswrapper[4698]: I0127 14:30:17.123424 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:17 crc kubenswrapper[4698]: I0127 14:30:17.123435 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:17 crc kubenswrapper[4698]: I0127 14:30:17.123450 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:17 crc kubenswrapper[4698]: I0127 14:30:17.123460 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:17Z","lastTransitionTime":"2026-01-27T14:30:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:30:17 crc kubenswrapper[4698]: I0127 14:30:17.226080 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:17 crc kubenswrapper[4698]: I0127 14:30:17.226137 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:17 crc kubenswrapper[4698]: I0127 14:30:17.226147 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:17 crc kubenswrapper[4698]: I0127 14:30:17.226164 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:17 crc kubenswrapper[4698]: I0127 14:30:17.226192 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:17Z","lastTransitionTime":"2026-01-27T14:30:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:30:17 crc kubenswrapper[4698]: I0127 14:30:17.328507 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:17 crc kubenswrapper[4698]: I0127 14:30:17.328557 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:17 crc kubenswrapper[4698]: I0127 14:30:17.328569 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:17 crc kubenswrapper[4698]: I0127 14:30:17.328585 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:17 crc kubenswrapper[4698]: I0127 14:30:17.328597 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:17Z","lastTransitionTime":"2026-01-27T14:30:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
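The complaint underneath all of this is literal: the runtime finds no CNI network configuration in /etc/kubernetes/cni/net.d/, and the node cannot report Ready until ovn-kubernetes writes one. A minimal scan in the spirit of that check; the real logic lives in the container runtime's libcni, not here:

package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// cniConfigPresent reports whether dir contains at least one CNI network
// configuration file, mirroring in spirit the check behind the repeated
// "no CNI configuration file in /etc/kubernetes/cni/net.d/" message.
func cniConfigPresent(dir string) (bool, error) {
	for _, pattern := range []string{"*.conf", "*.conflist", "*.json"} {
		matches, err := filepath.Glob(filepath.Join(dir, pattern))
		if err != nil {
			return false, err
		}
		if len(matches) > 0 {
			return true, nil
		}
	}
	return false, nil
}

func main() {
	dir := "/etc/kubernetes/cni/net.d"
	ok, err := cniConfigPresent(dir)
	if err != nil {
		fmt.Fprintln(os.Stderr, "scan failed:", err)
		os.Exit(1)
	}
	if !ok {
		// The condition kubelet keeps reporting in this log:
		// NetworkReady=false, reason NetworkPluginNotReady.
		fmt.Printf("container runtime network not ready: no CNI config in %s\n", dir)
		return
	}
	fmt.Println("CNI configuration found; network plugin can initialize")
}
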
Has your network provider started?"} Jan 27 14:30:17 crc kubenswrapper[4698]: I0127 14:30:17.343414 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-g2kkn_4e135f0c-0c36-44f4-afeb-06994affb352/kube-multus/0.log" Jan 27 14:30:17 crc kubenswrapper[4698]: I0127 14:30:17.343470 4698 generic.go:334] "Generic (PLEG): container finished" podID="4e135f0c-0c36-44f4-afeb-06994affb352" containerID="6d17862aabd67b485023ca018499f7224e0bd91ec0aed6bbce5a187f319e3140" exitCode=1 Jan 27 14:30:17 crc kubenswrapper[4698]: I0127 14:30:17.343506 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-g2kkn" event={"ID":"4e135f0c-0c36-44f4-afeb-06994affb352","Type":"ContainerDied","Data":"6d17862aabd67b485023ca018499f7224e0bd91ec0aed6bbce5a187f319e3140"} Jan 27 14:30:17 crc kubenswrapper[4698]: I0127 14:30:17.343954 4698 scope.go:117] "RemoveContainer" containerID="6d17862aabd67b485023ca018499f7224e0bd91ec0aed6bbce5a187f319e3140" Jan 27 14:30:17 crc kubenswrapper[4698]: I0127 14:30:17.361186 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://625a1f8c29f9277903045e66df7a55a4d791239b141aba99572db228ea2735ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://314dcaad0f8949a4671b47667f654bdfacfb6b200438cfbd70401455f6e188b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/en
v\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:30:17Z is after 2025-08-24T17:21:41Z" Jan 27 14:30:17 crc kubenswrapper[4698]: I0127 14:30:17.379952 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:24Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:30:17Z is after 2025-08-24T17:21:41Z" Jan 27 14:30:17 crc kubenswrapper[4698]: I0127 14:30:17.401149 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-g2kkn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4e135f0c-0c36-44f4-afeb-06994affb352\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:30:17Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:30:17Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d17862aabd67b485023ca018499f7224e0bd91ec0aed6bbce5a187f319e3140\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d17862aabd67b485023ca018499f7224e0bd91ec0aed6bbce5a187f319e3140\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T14:30:16Z\\\",\\\"message\\\":\\\"2026-01-27T14:29:31+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_953276ac-8835-451e-a4b4-8b6397c0df9b\\\\n2026-01-27T14:29:31+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_953276ac-8835-451e-a4b4-8b6397c0df9b to /host/opt/cni/bin/\\\\n2026-01-27T14:29:31Z [verbose] multus-daemon started\\\\n2026-01-27T14:29:31Z [verbose] Readiness Indicator file check\\\\n2026-01-27T14:30:16Z [error] have you checked that your 
default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T14:29:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97l66\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:29:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-g2kkn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:30:17Z is after 2025-08-24T17:21:41Z" Jan 27 14:30:17 crc kubenswrapper[4698]: I0127 14:30:17.414618 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zpvcj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"709dfdd7-f928-4f0b-8f5a-c356614219cb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6bec8d64d7f0b7a45b57b810dcf1de846a388902b480230742da0a40da4e67b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pftbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5fb54d7728b2438444c924295722867c673c8b82768ed17b6dd2104d404e198e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pftbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:29:41Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-zpvcj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:30:17Z is after 2025-08-24T17:21:41Z" Jan 27 
14:30:17 crc kubenswrapper[4698]: I0127 14:30:17.432000 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:17 crc kubenswrapper[4698]: I0127 14:30:17.432035 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:17 crc kubenswrapper[4698]: I0127 14:30:17.432049 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:17 crc kubenswrapper[4698]: I0127 14:30:17.432064 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:17 crc kubenswrapper[4698]: I0127 14:30:17.432074 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:17Z","lastTransitionTime":"2026-01-27T14:30:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:30:17 crc kubenswrapper[4698]: I0127 14:30:17.433808 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0ce1118e-e5ad-4adb-8d50-c758116b45ec\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dbbe49aa918c711f9ec638a3fb42ef3be38a9ed3bc0cdde639ba8bba11eab4de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9525d1a6372b19bc2f30eb4e8e522833badb6319bac25ecae77b3313acf1b1a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d
7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e384a95fdadc32733a7213ba468af64a1a33e7dd08722cbffde3ad0c461d8ba\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9ee5f5f6bb400359196a28fe605c173c38223c8982a991750c91601aeace150\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e780429dcca871712b55832b0cd0b3e78b7343569cfa71a87bbb2be1bd3f129\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T14:29:24Z\\\",\\\"message\\\":\\\"espace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0127 14:29:09.491666 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 14:29:09.493926 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-900123067/tls.crt::/tmp/serving-cert-900123067/tls.key\\\\\\\"\\\\nI0127 14:29:24.319898 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 14:29:24.322681 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 14:29:24.322759 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 14:29:24.322814 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 14:29:24.322847 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 14:29:24.331581 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 14:29:24.331608 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 14:29:24.331613 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 14:29:24.331618 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 14:29:24.331622 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 14:29:24.331626 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 14:29:24.331630 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 14:29:24.331748 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 14:29:24.333757 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T14:29:08Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a526cfa90f2ba7d074aa6007c909e68f72d5e7a4b91d893c955e7460ff208cfb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:08Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e1a3e5a71b3094750c736335cd70f45977566e58702231351d9d6b195a0cb4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5e1a3e5a71b3094750c736335cd70f45977566e58702231351d9d6b195a0cb4b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:29:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:29:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:29:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:30:17Z is after 2025-08-24T17:21:41Z" Jan 27 14:30:17 crc kubenswrapper[4698]: I0127 14:30:17.446142 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"44c70587-abb1-4e02-ab7e-223d57817925\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b1a209c31294456c43022bb9df5fd230415596fc43ecdb2b6349114989c3ad8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://66b4b527d7836ceed419f9b8a1d851047efc8c052c110ac857316a6a73444333\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://94a4d586611f4ca0864567739d1ed225b7c63ebd6e6be9e5c0e385379de2ad01\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://95357819c206b188c1fd6bb937b2a57b480a9a61e95143de444141b16e4963ea\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:29:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:30:17Z is after 2025-08-24T17:21:41Z" Jan 27 14:30:17 crc kubenswrapper[4698]: I0127 14:30:17.458946 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-ndrd6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e403fc5-7005-474c-8c75-b7906b481677\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6f9227503b5cf0962f3fb8ece18f54d84ff2cdfe5628ebf0fa0cef7fe5695c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tt2dz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c533539135e25b3105
41c9f0b12adcf526b4651a2f6272bce12ea9686dd39ac9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tt2dz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:29:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-ndrd6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:30:17Z is after 2025-08-24T17:21:41Z" Jan 27 14:30:17 crc kubenswrapper[4698]: I0127 14:30:17.474261 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-vg6nd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e045926d-2303-47ea-b25d-dc23982427e4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e1bf1729f089f253e8e1259e40ef976b85e433358642b6231f081c494851048\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66snv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],
\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c935f517f9ca140d63d1637ef8c01a4a41b94fa72e1086eb6184ae78a6d8da8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c935f517f9ca140d63d1637ef8c01a4a41b94fa72e1086eb6184ae78a6d8da8c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:29:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:29:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66snv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa061cb18143e07dc72a6b75d78b5959b40a9969030ff21d402863b85eba6f91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa061cb18143e07dc72a6b75d78b5959b40a9969030ff21d402863b85eba6f91\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:29:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:29:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66snv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be40eff60b17b2c010a5e4394fb6843ceb8ee2752781ccbf0030afdd212b81e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be40eff60b17b2c010a5e4394fb6843ceb8ee2752781ccbf0030afdd212b81e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:29:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\
":\\\"2026-01-27T14:29:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66snv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3018d87c9354e8b8d90c59a4a45e982d0b35b89a0b0a9fde703e9452cf5f0024\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3018d87c9354e8b8d90c59a4a45e982d0b35b89a0b0a9fde703e9452cf5f0024\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:29:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:29:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66snv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de493498bbac49791f6b7409916fbbb62f64bed0def214ea6a06f5182bf7ad3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://de493498bbac49791f6b7409916fbbb62f64bed0def214ea6a06f5182bf7ad3b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:29:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:29:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66snv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://14af7d5dd3526aef22f7e95409ff88c3bbbc29e6dd9dd7c41b9be5bab49bf074\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://14af7d5dd3526aef22f7e95409ff88c3bbbc29e6dd9dd7c41b9be5bab49bf074\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:29:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:29:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66snv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:29:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-vg6nd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:30:17Z is after 2025-08-24T17:21:41Z" Jan 27 14:30:17 crc kubenswrapper[4698]: I0127 14:30:17.495098 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xmpm6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c59a9d01-79ce-42d9-a41d-39d7d73cb03e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:29Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d81778ae40167de932debf7266f3c793d77edeace2116220c21cf89e15e4cc98\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r7gpg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea6d17f5861064198539f0b84b559f9d28cfa5decaa89a9059469af5a3e12e09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r7gpg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c98fd6e3005c979f95419686ec2143aa282fa56d7306ecaff766759e34b68b2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r7gpg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e38585fe9db52e6e7cc5c3a41e23a0bb761977a3b25fd281d86458f422e3791\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r7gpg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b80c023f92c9854f8c16b6b50c10dac58f968938615375019ba2875d9cd69e5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r7gpg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://87e7499ab247dff50eee2a5e113ed36ff00c467e8e1bcdfee45e8835815d4b81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r7gpg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a99e5f8df86301415e9b49b4f197383e67e6a1b
00cbf7e4e7f543002a0f1db4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0a99e5f8df86301415e9b49b4f197383e67e6a1b00cbf7e4e7f543002a0f1db4\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T14:29:58Z\\\",\\\"message\\\":\\\"il\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0127 14:29:58.516254 6338 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-diagnostics/network-check-source-55646444c4-trplf\\\\nI0127 14:29:58.516264 6338 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-diagnostics/network-check-source-55646444c4-trplf\\\\nI0127 14:29:58.516271 6338 ovn.go:134] Ensuring zone local for Pod openshift-network-diagnostics/network-check-source-55646444c4-trplf in node crc\\\\nF0127 14:29:58.516287 6338 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:29:58Z is after 2025-08-24T17:21:41Z]\\\\nI0127 14:29:58.516302 6338 obj_retry.go:303\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T14:29:56Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-xmpm6_openshift-ovn-kubernetes(c59a9d01-79ce-42d9-a41d-39d7d73cb03e)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r7gpg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0356c9c90e9c8e65b9452361aa486ffe95757bd69803077b32fe7ba0223ae1da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r7gpg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3a80917737e61fdb510dabf332c3bd87fb858d27849c42b77703a8becb66b4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b3a80917737e61fdb510dabf332c3bd87fb858d27849c42b77703a8becb66b4c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:29:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:29:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r7gpg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:29:29Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-xmpm6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:30:17Z is after 2025-08-24T17:21:41Z" Jan 27 14:30:17 crc kubenswrapper[4698]: I0127 14:30:17.511704 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-lpvsw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"621bb20d-2ffa-4e89-b522-d04b4764fcc3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:42Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:42Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gz87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gz87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:29:42Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-lpvsw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:30:17Z is after 2025-08-24T17:21:41Z" Jan 27 14:30:17 crc kubenswrapper[4698]: I0127 14:30:17.526033 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:30:17Z is after 2025-08-24T17:21:41Z" Jan 27 14:30:17 crc kubenswrapper[4698]: I0127 14:30:17.535042 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:17 crc kubenswrapper[4698]: I0127 14:30:17.535073 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:17 crc kubenswrapper[4698]: I0127 14:30:17.535083 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:17 crc kubenswrapper[4698]: I0127 14:30:17.535120 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:17 crc kubenswrapper[4698]: I0127 14:30:17.535131 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:17Z","lastTransitionTime":"2026-01-27T14:30:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:30:17 crc kubenswrapper[4698]: I0127 14:30:17.540035 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-g9vj8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2776dfc9-913b-42b0-9cf2-6fea98d83bc9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f471cea0dc934e7c77ac6ead7ec77f8feb7379b50a7bb5255d7100fbea635a3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bk7ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:29:28Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-g9vj8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:30:17Z is after 2025-08-24T17:21:41Z" Jan 27 14:30:17 crc kubenswrapper[4698]: I0127 14:30:17.556073 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae7402df1ba165eb747293c1d23e4ea89c50cf2342bf52db437c11c283f68755\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:30:17Z is after 2025-08-24T17:21:41Z" Jan 27 14:30:17 crc kubenswrapper[4698]: I0127 14:30:17.569029 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:24Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:30:17Z is after 2025-08-24T17:21:41Z" Jan 27 14:30:17 crc kubenswrapper[4698]: I0127 14:30:17.582678 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8dae57de05788973b988fee332168db475b66bb71b4a0570e7d95341dfe84a96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:30:17Z is after 2025-08-24T17:21:41Z" Jan 27 14:30:17 crc kubenswrapper[4698]: I0127 14:30:17.594286 4698 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-image-registry/node-ca-flx9b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2c9fcf55-4a50-4a87-937b-975bc7e00bfa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0cb86e0c67991a076cebc33eb390f49c0f4c40332c423936dd8f3c59b9f7474b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-spqpw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:29:31Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-flx9b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:30:17Z is after 2025-08-24T17:21:41Z" Jan 27 14:30:17 crc kubenswrapper[4698]: I0127 14:30:17.607999 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a67da6e9-5dc3-469a-8a1a-a2b287e96281\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b99ac884289d1dcab871d1db10e9992389170de25aeb71d84aaad1348eafd4fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://21e39904f887483a435cadd506be29c1513b2c9dbc144a61549f74f2c93fa6a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8cd43402ec8ba658de9fc9d84d14600829a8ae019aceb606fc2bf781dbe13ddb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c9cbb07f8bc0c4d5171813dd406a52fc3c99985102f65c43e8e04bf8acc21f1b\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c9cbb07f8bc0c4d5171813dd406a52fc3c99985102f65c43e8e04bf8acc21f1b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:29:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:29:06Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:29:05Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:30:17Z is after 2025-08-24T17:21:41Z" Jan 27 14:30:17 crc kubenswrapper[4698]: I0127 14:30:17.618629 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6c951f69-23b8-41c0-8d43-60097686223a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f5f7ffc9337f1ee226951fc2bac9235815704df49855f4ee6c9fe391970df0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cff6cddc7b21de4968fbb9eacccd070d4e8ff2d0514da0a0b878313bb11a5188\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-de
v@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cff6cddc7b21de4968fbb9eacccd070d4e8ff2d0514da0a0b878313bb11a5188\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:29:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:29:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:29:05Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:30:17Z is after 2025-08-24T17:21:41Z" Jan 27 14:30:17 crc kubenswrapper[4698]: I0127 14:30:17.637530 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:17 crc kubenswrapper[4698]: I0127 14:30:17.637569 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:17 crc kubenswrapper[4698]: I0127 14:30:17.637580 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:17 crc kubenswrapper[4698]: I0127 14:30:17.637596 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:17 crc kubenswrapper[4698]: I0127 14:30:17.637607 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:17Z","lastTransitionTime":"2026-01-27T14:30:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:30:17 crc kubenswrapper[4698]: I0127 14:30:17.739979 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:17 crc kubenswrapper[4698]: I0127 14:30:17.740041 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:17 crc kubenswrapper[4698]: I0127 14:30:17.740053 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:17 crc kubenswrapper[4698]: I0127 14:30:17.740069 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:17 crc kubenswrapper[4698]: I0127 14:30:17.740082 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:17Z","lastTransitionTime":"2026-01-27T14:30:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:30:17 crc kubenswrapper[4698]: I0127 14:30:17.843107 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:17 crc kubenswrapper[4698]: I0127 14:30:17.843153 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:17 crc kubenswrapper[4698]: I0127 14:30:17.843168 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:17 crc kubenswrapper[4698]: I0127 14:30:17.843186 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:17 crc kubenswrapper[4698]: I0127 14:30:17.843199 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:17Z","lastTransitionTime":"2026-01-27T14:30:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:30:17 crc kubenswrapper[4698]: I0127 14:30:17.946233 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:17 crc kubenswrapper[4698]: I0127 14:30:17.946274 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:17 crc kubenswrapper[4698]: I0127 14:30:17.946286 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:17 crc kubenswrapper[4698]: I0127 14:30:17.946303 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:17 crc kubenswrapper[4698]: I0127 14:30:17.946314 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:17Z","lastTransitionTime":"2026-01-27T14:30:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:30:17 crc kubenswrapper[4698]: I0127 14:30:17.990008 4698 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-11 21:33:43.849183428 +0000 UTC Jan 27 14:30:17 crc kubenswrapper[4698]: I0127 14:30:17.991368 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lpvsw" Jan 27 14:30:17 crc kubenswrapper[4698]: I0127 14:30:17.991415 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 14:30:17 crc kubenswrapper[4698]: E0127 14:30:17.991501 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-lpvsw" podUID="621bb20d-2ffa-4e89-b522-d04b4764fcc3" Jan 27 14:30:17 crc kubenswrapper[4698]: E0127 14:30:17.991587 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 14:30:18 crc kubenswrapper[4698]: I0127 14:30:18.048513 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:18 crc kubenswrapper[4698]: I0127 14:30:18.048555 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:18 crc kubenswrapper[4698]: I0127 14:30:18.048565 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:18 crc kubenswrapper[4698]: I0127 14:30:18.048582 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:18 crc kubenswrapper[4698]: I0127 14:30:18.048607 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:18Z","lastTransitionTime":"2026-01-27T14:30:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:30:18 crc kubenswrapper[4698]: I0127 14:30:18.150962 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:18 crc kubenswrapper[4698]: I0127 14:30:18.151004 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:18 crc kubenswrapper[4698]: I0127 14:30:18.151012 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:18 crc kubenswrapper[4698]: I0127 14:30:18.151026 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:18 crc kubenswrapper[4698]: I0127 14:30:18.151034 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:18Z","lastTransitionTime":"2026-01-27T14:30:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:30:18 crc kubenswrapper[4698]: I0127 14:30:18.253500 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:18 crc kubenswrapper[4698]: I0127 14:30:18.253582 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:18 crc kubenswrapper[4698]: I0127 14:30:18.253594 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:18 crc kubenswrapper[4698]: I0127 14:30:18.253629 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:18 crc kubenswrapper[4698]: I0127 14:30:18.253659 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:18Z","lastTransitionTime":"2026-01-27T14:30:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:30:18 crc kubenswrapper[4698]: I0127 14:30:18.348264 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-g2kkn_4e135f0c-0c36-44f4-afeb-06994affb352/kube-multus/0.log" Jan 27 14:30:18 crc kubenswrapper[4698]: I0127 14:30:18.348331 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-g2kkn" event={"ID":"4e135f0c-0c36-44f4-afeb-06994affb352","Type":"ContainerStarted","Data":"89ea9fe8283890c94741924f7a0d219ad6a55833e836517077a72a10f87427d9"} Jan 27 14:30:18 crc kubenswrapper[4698]: I0127 14:30:18.356005 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:18 crc kubenswrapper[4698]: I0127 14:30:18.356048 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:18 crc kubenswrapper[4698]: I0127 14:30:18.356058 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:18 crc kubenswrapper[4698]: I0127 14:30:18.356104 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:18 crc kubenswrapper[4698]: I0127 14:30:18.356115 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:18Z","lastTransitionTime":"2026-01-27T14:30:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:30:18 crc kubenswrapper[4698]: I0127 14:30:18.362807 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0ce1118e-e5ad-4adb-8d50-c758116b45ec\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dbbe49aa918c711f9ec638a3fb42ef3be38a9ed3bc0cdde639ba8bba11eab4de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9525d1a6372b19bc2f30eb4e8e522833badb6319bac25ecae77b3313acf1b1a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e384a95fdadc32733a7213ba468af64a1a33e7dd08722cbffde3ad0c461d8ba\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/ku
bernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9ee5f5f6bb400359196a28fe605c173c38223c8982a991750c91601aeace150\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e780429dcca871712b55832b0cd0b3e78b7343569cfa71a87bbb2be1bd3f129\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T14:29:24Z\\\",\\\"message\\\":\\\"espace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0127 14:29:09.491666 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 14:29:09.493926 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-900123067/tls.crt::/tmp/serving-cert-900123067/tls.key\\\\\\\"\\\\nI0127 14:29:24.319898 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 14:29:24.322681 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 14:29:24.322759 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 14:29:24.322814 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 14:29:24.322847 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 14:29:24.331581 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 14:29:24.331608 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 14:29:24.331613 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 14:29:24.331618 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 14:29:24.331622 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 14:29:24.331626 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 14:29:24.331630 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 14:29:24.331748 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 14:29:24.333757 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T14:29:08Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a526cfa90f2ba7d074aa6007c909e68f72d5e7a4b91d893c955e7460ff208cfb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:08Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e1a3e5a71b3094750c736335cd70f45977566e58702231351d9d6b195a0cb4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5e1a3e5a71b3094750c736335cd70f45977566e58702231351d9d6b195a0cb4b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:29:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:29:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:29:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:30:18Z is after 2025-08-24T17:21:41Z" Jan 27 14:30:18 crc kubenswrapper[4698]: I0127 14:30:18.378870 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"44c70587-abb1-4e02-ab7e-223d57817925\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b1a209c31294456c43022bb9df5fd230415596fc43ecdb2b6349114989c3ad8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://66b4b527d7836ceed419f9b8a1d851047efc8c052c110ac857316a6a73444333\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://94a4d586611f4ca0864567739d1ed225b7c63ebd6e6be9e5c0e385379de2ad01\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://95357819c206b188c1fd6bb937b2a57b480a9a61e95143de444141b16e4963ea\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:29:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:30:18Z is after 2025-08-24T17:21:41Z" Jan 27 14:30:18 crc kubenswrapper[4698]: I0127 14:30:18.392622 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-g2kkn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4e135f0c-0c36-44f4-afeb-06994affb352\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:30:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:30:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89ea9fe8283890c94741924f7a0d219ad6a55833e836517077a72a10f87427d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d17862aabd67b485023ca018499f7224e0bd91ec0aed6bbce5a187f319e3140\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T14:30:16Z\\\",\\\"message\\\":\\\"2026-01-27T14:29:31+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_953276ac-8835-451e-a4b4-8b6397c0df9b\\\\n2026-01-27T14:29:31+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_953276ac-8835-451e-a4b4-8b6397c0df9b to /host/opt/cni/bin/\\\\n2026-01-27T14:29:31Z [verbose] multus-daemon started\\\\n2026-01-27T14:29:31Z [verbose] Readiness 
Indicator file check\\\\n2026-01-27T14:30:16Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T14:29:29Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:30:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97l66\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:29:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-g2kkn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:30:18Z is after 2025-08-24T17:21:41Z" Jan 27 14:30:18 crc kubenswrapper[4698]: I0127 14:30:18.405877 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zpvcj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"709dfdd7-f928-4f0b-8f5a-c356614219cb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6bec8d64d7f0b7a45b57b810dcf1de846a388902b480230742da0a40da4e67b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pftbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5fb54d7728b2438444c924295722867c673c8b82768ed17b6dd2104d404e198e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pftbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:29:41Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-zpvcj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:30:18Z is after 2025-08-24T17:21:41Z" Jan 27 
14:30:18 crc kubenswrapper[4698]: I0127 14:30:18.424825 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xmpm6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c59a9d01-79ce-42d9-a41d-39d7d73cb03e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:29Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:29Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d81778ae40167de932debf7266f3c793d77edeace2116220c21cf89e15e4cc98\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r7gpg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea6d17f5861064198539f0b84b559f9d28cfa5decaa89a9059469af5a3e12e09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r7gpg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c98fd6e3005c979f95419686ec2143
aa282fa56d7306ecaff766759e34b68b2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r7gpg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e38585fe9db52e6e7cc5c3a41e23a0bb761977a3b25fd281d86458f422e3791\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r7gpg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b80c023f92c9854f8c16b6b50c10dac58f968938615375019ba2875d9cd69e5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r7gpg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://87e7499ab247dff50eee2a5e113ed36ff00c467e8e1bcdfee45e8835815d4b81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r7gpg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a99e5f8df86301415e9b49b4f197383e67e6a1b00cbf7e4e7f543002a0f1db4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0a99e5f8df86301415e9b49b4f197383e67e6a1b00cbf7e4e7f543002a0f1db4\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T14:29:58Z\\\",\\\"message\\\":\\\"il\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0127 14:29:58.516254 6338 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-diagnostics/network-check-source-55646444c4-trplf\\\\nI0127 14:29:58.516264 6338 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-diagnostics/network-check-source-55646444c4-trplf\\\\nI0127 14:29:58.516271 6338 ovn.go:134] Ensuring zone local for Pod openshift-network-diagnostics/network-check-source-55646444c4-trplf in node crc\\\\nF0127 14:29:58.516287 6338 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:29:58Z is after 2025-08-24T17:21:41Z]\\\\nI0127 14:29:58.516302 6338 obj_retry.go:303\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T14:29:56Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-xmpm6_openshift-ovn-kubernetes(c59a9d01-79ce-42d9-a41d-39d7d73cb03e)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r7gpg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0356c9c90e9c8e65b9452361aa486ffe95757bd69803077b32fe7ba0223ae1da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r7gpg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3a80917737e61fdb510dabf332c3bd87fb858d27849c42b77703a8becb66b4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b3a80917737e61fdb510dabf332c3bd87fb858d27849c42b77703a8becb66b4c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:29:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:29:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r7gpg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:29:29Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-xmpm6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:30:18Z is after 2025-08-24T17:21:41Z" Jan 27 14:30:18 crc kubenswrapper[4698]: I0127 14:30:18.438099 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-lpvsw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"621bb20d-2ffa-4e89-b522-d04b4764fcc3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:42Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:42Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gz87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gz87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:29:42Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-lpvsw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:30:18Z is after 2025-08-24T17:21:41Z" Jan 27 14:30:18 crc kubenswrapper[4698]: I0127 14:30:18.452261 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:30:18Z is after 2025-08-24T17:21:41Z" Jan 27 14:30:18 crc kubenswrapper[4698]: I0127 14:30:18.458041 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:18 crc kubenswrapper[4698]: I0127 14:30:18.458079 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:18 crc kubenswrapper[4698]: I0127 14:30:18.458095 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:18 crc kubenswrapper[4698]: I0127 14:30:18.458113 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:18 crc kubenswrapper[4698]: I0127 14:30:18.458126 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:18Z","lastTransitionTime":"2026-01-27T14:30:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:30:18 crc kubenswrapper[4698]: I0127 14:30:18.465417 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-g9vj8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2776dfc9-913b-42b0-9cf2-6fea98d83bc9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f471cea0dc934e7c77ac6ead7ec77f8feb7379b50a7bb5255d7100fbea635a3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bk7ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:29:28Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-g9vj8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:30:18Z is after 2025-08-24T17:21:41Z" Jan 27 14:30:18 crc kubenswrapper[4698]: I0127 14:30:18.476680 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-ndrd6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e403fc5-7005-474c-8c75-b7906b481677\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6f9227503b5cf0962f3fb8ece18f54d84ff2cdfe5628ebf0fa0cef7fe5695c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tt2dz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c533539135e25b310541c9f0b12adcf526b4651a2f6272bce12ea9686dd39ac9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tt2dz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:29:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-ndrd6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:30:18Z is after 2025-08-24T17:21:41Z" Jan 27 14:30:18 crc kubenswrapper[4698]: I0127 14:30:18.495665 4698 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-additional-cni-plugins-vg6nd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e045926d-2303-47ea-b25d-dc23982427e4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e1bf1729f089f253e8e1259e40ef976b85e433358642b6231f081c494851048\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66snv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c935f517f9ca140d63d1637ef8c01a4a41b94fa72e1086eb6184ae78a6d8da8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c935f517f9ca140d63d1637ef8c01a4a41b94fa72e1086eb6184ae78a6d8da8c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:29:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:29:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66snv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa061cb18143e07dc72a6b75d78b5959b40a9969030ff21d402863b85eba6f91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2c
c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa061cb18143e07dc72a6b75d78b5959b40a9969030ff21d402863b85eba6f91\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:29:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:29:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66snv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be40eff60b17b2c010a5e4394fb6843ceb8ee2752781ccbf0030afdd212b81e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be40eff60b17b2c010a5e4394fb6843ceb8ee2752781ccbf0030afdd212b81e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:29:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:29:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66snv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3018d87c9354e8b8d90c59a4a45e982d0b35b89a0b0a9fde703e9452cf5f0024\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3018d87c9354e8b8d90c59a4a45e982d0b35b89a0b0a9fde703e9452cf5f0024\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:29:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:29:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-re
lease\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66snv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de493498bbac49791f6b7409916fbbb62f64bed0def214ea6a06f5182bf7ad3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://de493498bbac49791f6b7409916fbbb62f64bed0def214ea6a06f5182bf7ad3b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:29:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:29:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66snv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://14af7d5dd3526aef22f7e95409ff88c3bbbc29e6dd9dd7c41b9be5bab49bf074\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://14af7d5dd3526aef22f7e95409ff88c3bbbc29e6dd9dd7c41b9be5bab49bf074\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:29:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:29:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66snv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:29:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-vg6nd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:30:18Z is after 2025-08-24T17:21:41Z" Jan 27 14:30:18 crc kubenswrapper[4698]: I0127 14:30:18.512886 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8dae57de05788973b988fee332168db475b66bb71b4a0570e7d95341dfe84a96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:30:18Z is after 2025-08-24T17:21:41Z" Jan 27 14:30:18 crc kubenswrapper[4698]: I0127 14:30:18.525297 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-flx9b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2c9fcf55-4a50-4a87-937b-975bc7e00bfa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0cb86e0c67991a076cebc33eb390f49c0f4c40332c423936dd8f3c59b9f7474b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-spqpw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:29:31Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-flx9b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:30:18Z is after 2025-08-24T17:21:41Z" Jan 27 14:30:18 crc kubenswrapper[4698]: I0127 14:30:18.538053 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a67da6e9-5dc3-469a-8a1a-a2b287e96281\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b99ac884289d1dcab871d1db10e9992389170de25aeb71d84aaad1348eafd4fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://21e39904f887483a435cadd506be29c1513b2c9dbc144a61549f74f2c93fa6a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8cd43402ec8ba658de9fc9d84d14600829a8ae019aceb606fc2bf781dbe13ddb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c9cbb07f8bc0c4d5171813dd406a52fc3c99985102f65c43e8e04bf8acc21f1b\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c9cbb07f8bc0c4d5171813dd406a52fc3c99985102f65c43e8e04bf8acc21f1b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:29:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:29:06Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:29:05Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:30:18Z is after 2025-08-24T17:21:41Z" Jan 27 14:30:18 crc kubenswrapper[4698]: I0127 14:30:18.551539 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6c951f69-23b8-41c0-8d43-60097686223a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f5f7ffc9337f1ee226951fc2bac9235815704df49855f4ee6c9fe391970df0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cff6cddc7b21de4968fbb9eacccd070d4e8ff2d0514da0a0b878313bb11a5188\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-de
v@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cff6cddc7b21de4968fbb9eacccd070d4e8ff2d0514da0a0b878313bb11a5188\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:29:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:29:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:29:05Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:30:18Z is after 2025-08-24T17:21:41Z" Jan 27 14:30:18 crc kubenswrapper[4698]: I0127 14:30:18.560585 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:18 crc kubenswrapper[4698]: I0127 14:30:18.560651 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:18 crc kubenswrapper[4698]: I0127 14:30:18.560669 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:18 crc kubenswrapper[4698]: I0127 14:30:18.560689 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:18 crc kubenswrapper[4698]: I0127 14:30:18.560708 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:18Z","lastTransitionTime":"2026-01-27T14:30:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:30:18 crc kubenswrapper[4698]: I0127 14:30:18.570013 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae7402df1ba165eb747293c1d23e4ea89c50cf2342bf52db437c11c283f68755\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:30:18Z is after 2025-08-24T17:21:41Z" Jan 27 14:30:18 crc kubenswrapper[4698]: I0127 14:30:18.583672 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:24Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:30:18Z is after 2025-08-24T17:21:41Z" Jan 27 14:30:18 crc kubenswrapper[4698]: I0127 14:30:18.598172 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:24Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:30:18Z is after 2025-08-24T17:21:41Z" Jan 27 14:30:18 crc kubenswrapper[4698]: I0127 14:30:18.609871 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://625a1f8c29f9277903045e66df7a55a4d791239b141aba99572db228ea2735ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://314dcaad0f8949a4671b47667f654bdfacfb6b200438cfbd70401455f6e188b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mount
Path\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:30:18Z is after 2025-08-24T17:21:41Z" Jan 27 14:30:18 crc kubenswrapper[4698]: I0127 14:30:18.663591 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:18 crc kubenswrapper[4698]: I0127 14:30:18.663632 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:18 crc kubenswrapper[4698]: I0127 14:30:18.663660 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:18 crc kubenswrapper[4698]: I0127 14:30:18.663673 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:18 crc kubenswrapper[4698]: I0127 14:30:18.663682 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:18Z","lastTransitionTime":"2026-01-27T14:30:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:30:18 crc kubenswrapper[4698]: I0127 14:30:18.766130 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:18 crc kubenswrapper[4698]: I0127 14:30:18.766191 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:18 crc kubenswrapper[4698]: I0127 14:30:18.766203 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:18 crc kubenswrapper[4698]: I0127 14:30:18.766238 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:18 crc kubenswrapper[4698]: I0127 14:30:18.766250 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:18Z","lastTransitionTime":"2026-01-27T14:30:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Jan 27 14:30:18 crc kubenswrapper[4698]: I0127 14:30:18.868759 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:18 crc kubenswrapper[4698]: I0127 14:30:18.868807 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:18 crc kubenswrapper[4698]: I0127 14:30:18.868819 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:18 crc kubenswrapper[4698]: I0127 14:30:18.868837 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:18 crc kubenswrapper[4698]: I0127 14:30:18.868850 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:18Z","lastTransitionTime":"2026-01-27T14:30:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:30:18 crc kubenswrapper[4698]: I0127 14:30:18.971391 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:18 crc kubenswrapper[4698]: I0127 14:30:18.971429 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:18 crc kubenswrapper[4698]: I0127 14:30:18.971452 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:18 crc kubenswrapper[4698]: I0127 14:30:18.971468 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:18 crc kubenswrapper[4698]: I0127 14:30:18.971481 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:18Z","lastTransitionTime":"2026-01-27T14:30:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:30:18 crc kubenswrapper[4698]: I0127 14:30:18.990337 4698 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-06 05:47:14.464662074 +0000 UTC Jan 27 14:30:18 crc kubenswrapper[4698]: I0127 14:30:18.991577 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 14:30:18 crc kubenswrapper[4698]: I0127 14:30:18.991617 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 14:30:18 crc kubenswrapper[4698]: E0127 14:30:18.991718 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 14:30:18 crc kubenswrapper[4698]: E0127 14:30:18.991864 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 14:30:19 crc kubenswrapper[4698]: I0127 14:30:19.074052 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:19 crc kubenswrapper[4698]: I0127 14:30:19.074097 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:19 crc kubenswrapper[4698]: I0127 14:30:19.074109 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:19 crc kubenswrapper[4698]: I0127 14:30:19.074124 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:19 crc kubenswrapper[4698]: I0127 14:30:19.074136 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:19Z","lastTransitionTime":"2026-01-27T14:30:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:30:19 crc kubenswrapper[4698]: I0127 14:30:19.178518 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:19 crc kubenswrapper[4698]: I0127 14:30:19.178561 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:19 crc kubenswrapper[4698]: I0127 14:30:19.178569 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:19 crc kubenswrapper[4698]: I0127 14:30:19.178615 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:19 crc kubenswrapper[4698]: I0127 14:30:19.178673 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:19Z","lastTransitionTime":"2026-01-27T14:30:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:30:19 crc kubenswrapper[4698]: I0127 14:30:19.281293 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:19 crc kubenswrapper[4698]: I0127 14:30:19.281349 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:19 crc kubenswrapper[4698]: I0127 14:30:19.281366 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:19 crc kubenswrapper[4698]: I0127 14:30:19.281388 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:19 crc kubenswrapper[4698]: I0127 14:30:19.281401 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:19Z","lastTransitionTime":"2026-01-27T14:30:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:30:19 crc kubenswrapper[4698]: I0127 14:30:19.383360 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:19 crc kubenswrapper[4698]: I0127 14:30:19.383398 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:19 crc kubenswrapper[4698]: I0127 14:30:19.383410 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:19 crc kubenswrapper[4698]: I0127 14:30:19.383427 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:19 crc kubenswrapper[4698]: I0127 14:30:19.383440 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:19Z","lastTransitionTime":"2026-01-27T14:30:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:30:19 crc kubenswrapper[4698]: I0127 14:30:19.486665 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:19 crc kubenswrapper[4698]: I0127 14:30:19.486702 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:19 crc kubenswrapper[4698]: I0127 14:30:19.486714 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:19 crc kubenswrapper[4698]: I0127 14:30:19.486730 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:19 crc kubenswrapper[4698]: I0127 14:30:19.486742 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:19Z","lastTransitionTime":"2026-01-27T14:30:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:30:19 crc kubenswrapper[4698]: I0127 14:30:19.588824 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:19 crc kubenswrapper[4698]: I0127 14:30:19.588870 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:19 crc kubenswrapper[4698]: I0127 14:30:19.588882 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:19 crc kubenswrapper[4698]: I0127 14:30:19.588898 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:19 crc kubenswrapper[4698]: I0127 14:30:19.588909 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:19Z","lastTransitionTime":"2026-01-27T14:30:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:30:19 crc kubenswrapper[4698]: I0127 14:30:19.691683 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:19 crc kubenswrapper[4698]: I0127 14:30:19.691734 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:19 crc kubenswrapper[4698]: I0127 14:30:19.691747 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:19 crc kubenswrapper[4698]: I0127 14:30:19.691767 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:19 crc kubenswrapper[4698]: I0127 14:30:19.691779 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:19Z","lastTransitionTime":"2026-01-27T14:30:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:30:19 crc kubenswrapper[4698]: I0127 14:30:19.795298 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:19 crc kubenswrapper[4698]: I0127 14:30:19.795594 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:19 crc kubenswrapper[4698]: I0127 14:30:19.795886 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:19 crc kubenswrapper[4698]: I0127 14:30:19.796097 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:19 crc kubenswrapper[4698]: I0127 14:30:19.796262 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:19Z","lastTransitionTime":"2026-01-27T14:30:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:30:19 crc kubenswrapper[4698]: I0127 14:30:19.899054 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:19 crc kubenswrapper[4698]: I0127 14:30:19.899094 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:19 crc kubenswrapper[4698]: I0127 14:30:19.899102 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:19 crc kubenswrapper[4698]: I0127 14:30:19.899116 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:19 crc kubenswrapper[4698]: I0127 14:30:19.899127 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:19Z","lastTransitionTime":"2026-01-27T14:30:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:30:19 crc kubenswrapper[4698]: I0127 14:30:19.991363 4698 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-09 08:09:16.78368088 +0000 UTC Jan 27 14:30:19 crc kubenswrapper[4698]: I0127 14:30:19.991532 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 14:30:19 crc kubenswrapper[4698]: I0127 14:30:19.991532 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lpvsw" Jan 27 14:30:19 crc kubenswrapper[4698]: E0127 14:30:19.991694 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 14:30:19 crc kubenswrapper[4698]: E0127 14:30:19.991770 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lpvsw" podUID="621bb20d-2ffa-4e89-b522-d04b4764fcc3"
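
Every NodeNotReady heartbeat in this stretch carries the same message: the kubelet finds no CNI configuration file in /etc/kubernetes/cni/net.d/, reports NetworkReady=false, and skips syncing any pod that needs the pod network. The gate is simply the presence of a config file in that directory (libcni accepts .conf, .conflist, and .json files); a rough sketch of the equivalent check, with the directory name taken from the log:

    from pathlib import Path

    CNI_DIR = Path("/etc/kubernetes/cni/net.d")  # directory named in the log message

    # List candidate CNI config files the way the runtime's config loader would.
    configs = sorted(
        p for p in CNI_DIR.glob("*")
        if p.suffix in {".conf", ".conflist", ".json"}
    ) if CNI_DIR.is_dir() else []

    if configs:
        print("CNI config present:", ", ".join(p.name for p in configs))
    else:
        print("no CNI configuration file in", CNI_DIR, "- network plugin not ready")

On this cluster the config would normally be written by the network plugin's own pods (OVN-Kubernetes, judging by the ovnkube-identity volumes earlier in the log) once they come up, which has not happened at this point.
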
pod="openshift-multus/network-metrics-daemon-lpvsw" podUID="621bb20d-2ffa-4e89-b522-d04b4764fcc3" Jan 27 14:30:20 crc kubenswrapper[4698]: I0127 14:30:20.001491 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:20 crc kubenswrapper[4698]: I0127 14:30:20.001544 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:20 crc kubenswrapper[4698]: I0127 14:30:20.001556 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:20 crc kubenswrapper[4698]: I0127 14:30:20.001570 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:20 crc kubenswrapper[4698]: I0127 14:30:20.001581 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:20Z","lastTransitionTime":"2026-01-27T14:30:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:30:20 crc kubenswrapper[4698]: I0127 14:30:20.104306 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:20 crc kubenswrapper[4698]: I0127 14:30:20.104353 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:20 crc kubenswrapper[4698]: I0127 14:30:20.104367 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:20 crc kubenswrapper[4698]: I0127 14:30:20.104384 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:20 crc kubenswrapper[4698]: I0127 14:30:20.104398 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:20Z","lastTransitionTime":"2026-01-27T14:30:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:30:20 crc kubenswrapper[4698]: I0127 14:30:20.206394 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:20 crc kubenswrapper[4698]: I0127 14:30:20.206430 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:20 crc kubenswrapper[4698]: I0127 14:30:20.206441 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:20 crc kubenswrapper[4698]: I0127 14:30:20.206455 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:20 crc kubenswrapper[4698]: I0127 14:30:20.206465 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:20Z","lastTransitionTime":"2026-01-27T14:30:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:30:20 crc kubenswrapper[4698]: I0127 14:30:20.308537 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:20 crc kubenswrapper[4698]: I0127 14:30:20.308582 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:20 crc kubenswrapper[4698]: I0127 14:30:20.308594 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:20 crc kubenswrapper[4698]: I0127 14:30:20.308610 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:20 crc kubenswrapper[4698]: I0127 14:30:20.308622 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:20Z","lastTransitionTime":"2026-01-27T14:30:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:30:20 crc kubenswrapper[4698]: I0127 14:30:20.411303 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:20 crc kubenswrapper[4698]: I0127 14:30:20.411369 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:20 crc kubenswrapper[4698]: I0127 14:30:20.411381 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:20 crc kubenswrapper[4698]: I0127 14:30:20.411397 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:20 crc kubenswrapper[4698]: I0127 14:30:20.411428 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:20Z","lastTransitionTime":"2026-01-27T14:30:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:30:20 crc kubenswrapper[4698]: I0127 14:30:20.514107 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:20 crc kubenswrapper[4698]: I0127 14:30:20.514145 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:20 crc kubenswrapper[4698]: I0127 14:30:20.514153 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:20 crc kubenswrapper[4698]: I0127 14:30:20.514167 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:20 crc kubenswrapper[4698]: I0127 14:30:20.514177 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:20Z","lastTransitionTime":"2026-01-27T14:30:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:30:20 crc kubenswrapper[4698]: I0127 14:30:20.615904 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:20 crc kubenswrapper[4698]: I0127 14:30:20.615956 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:20 crc kubenswrapper[4698]: I0127 14:30:20.615966 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:20 crc kubenswrapper[4698]: I0127 14:30:20.615978 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:20 crc kubenswrapper[4698]: I0127 14:30:20.615988 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:20Z","lastTransitionTime":"2026-01-27T14:30:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:30:20 crc kubenswrapper[4698]: I0127 14:30:20.718564 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:20 crc kubenswrapper[4698]: I0127 14:30:20.718620 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:20 crc kubenswrapper[4698]: I0127 14:30:20.718651 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:20 crc kubenswrapper[4698]: I0127 14:30:20.718679 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:20 crc kubenswrapper[4698]: I0127 14:30:20.718703 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:20Z","lastTransitionTime":"2026-01-27T14:30:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:30:20 crc kubenswrapper[4698]: I0127 14:30:20.821426 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:20 crc kubenswrapper[4698]: I0127 14:30:20.821508 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:20 crc kubenswrapper[4698]: I0127 14:30:20.821524 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:20 crc kubenswrapper[4698]: I0127 14:30:20.821546 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:20 crc kubenswrapper[4698]: I0127 14:30:20.821558 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:20Z","lastTransitionTime":"2026-01-27T14:30:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:30:20 crc kubenswrapper[4698]: I0127 14:30:20.923546 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:20 crc kubenswrapper[4698]: I0127 14:30:20.923595 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:20 crc kubenswrapper[4698]: I0127 14:30:20.923605 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:20 crc kubenswrapper[4698]: I0127 14:30:20.923624 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:20 crc kubenswrapper[4698]: I0127 14:30:20.923654 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:20Z","lastTransitionTime":"2026-01-27T14:30:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:30:20 crc kubenswrapper[4698]: I0127 14:30:20.991401 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 14:30:20 crc kubenswrapper[4698]: E0127 14:30:20.991535 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 14:30:20 crc kubenswrapper[4698]: I0127 14:30:20.991568 4698 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-07 07:38:17.152472421 +0000 UTC Jan 27 14:30:20 crc kubenswrapper[4698]: I0127 14:30:20.991589 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
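
The certificate_manager.go:356 lines concern a different certificate from the failing webhook one: the kubelet-serving pair, which is still valid until 2026-02-24 05:53:03 UTC. A sketch for inspecting that certificate directly on disk, assuming the stock kubelet PKI location (the path is the usual kubelet default, not taken from this log, and the file typically holds the certificate block first, followed by its key):

    from pathlib import Path
    from cryptography import x509

    # Usual kubelet default location; adjust if this node is configured differently.
    PEM = Path("/var/lib/kubelet/pki/kubelet-server-current.pem")

    # Parses the first certificate block in the file.
    cert = x509.load_pem_x509_certificate(PEM.read_bytes())
    print("subject: ", cert.subject.rfc4514_string())
    print("notAfter:", cert.not_valid_after_utc)  # expect 2026-02-24 05:53:03 UTC per the log
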
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 14:30:20 crc kubenswrapper[4698]: E0127 14:30:20.991768 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 14:30:21 crc kubenswrapper[4698]: I0127 14:30:21.025387 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:21 crc kubenswrapper[4698]: I0127 14:30:21.025425 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:21 crc kubenswrapper[4698]: I0127 14:30:21.025433 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:21 crc kubenswrapper[4698]: I0127 14:30:21.025447 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:21 crc kubenswrapper[4698]: I0127 14:30:21.025456 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:21Z","lastTransitionTime":"2026-01-27T14:30:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:30:21 crc kubenswrapper[4698]: I0127 14:30:21.127822 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:21 crc kubenswrapper[4698]: I0127 14:30:21.127881 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:21 crc kubenswrapper[4698]: I0127 14:30:21.127892 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:21 crc kubenswrapper[4698]: I0127 14:30:21.127904 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:21 crc kubenswrapper[4698]: I0127 14:30:21.127914 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:21Z","lastTransitionTime":"2026-01-27T14:30:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:30:21 crc kubenswrapper[4698]: I0127 14:30:21.229943 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:21 crc kubenswrapper[4698]: I0127 14:30:21.230179 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:21 crc kubenswrapper[4698]: I0127 14:30:21.230270 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:21 crc kubenswrapper[4698]: I0127 14:30:21.230343 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:21 crc kubenswrapper[4698]: I0127 14:30:21.230407 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:21Z","lastTransitionTime":"2026-01-27T14:30:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:30:21 crc kubenswrapper[4698]: I0127 14:30:21.332718 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:21 crc kubenswrapper[4698]: I0127 14:30:21.332966 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:21 crc kubenswrapper[4698]: I0127 14:30:21.333085 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:21 crc kubenswrapper[4698]: I0127 14:30:21.333201 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:21 crc kubenswrapper[4698]: I0127 14:30:21.333291 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:21Z","lastTransitionTime":"2026-01-27T14:30:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:30:21 crc kubenswrapper[4698]: I0127 14:30:21.435945 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:21 crc kubenswrapper[4698]: I0127 14:30:21.436013 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:21 crc kubenswrapper[4698]: I0127 14:30:21.436027 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:21 crc kubenswrapper[4698]: I0127 14:30:21.436042 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:21 crc kubenswrapper[4698]: I0127 14:30:21.436074 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:21Z","lastTransitionTime":"2026-01-27T14:30:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:30:21 crc kubenswrapper[4698]: I0127 14:30:21.538609 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:21 crc kubenswrapper[4698]: I0127 14:30:21.538940 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:21 crc kubenswrapper[4698]: I0127 14:30:21.539042 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:21 crc kubenswrapper[4698]: I0127 14:30:21.539169 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:21 crc kubenswrapper[4698]: I0127 14:30:21.539254 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:21Z","lastTransitionTime":"2026-01-27T14:30:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:30:21 crc kubenswrapper[4698]: I0127 14:30:21.642104 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:21 crc kubenswrapper[4698]: I0127 14:30:21.642145 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:21 crc kubenswrapper[4698]: I0127 14:30:21.642213 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:21 crc kubenswrapper[4698]: I0127 14:30:21.642230 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:21 crc kubenswrapper[4698]: I0127 14:30:21.642244 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:21Z","lastTransitionTime":"2026-01-27T14:30:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:30:21 crc kubenswrapper[4698]: I0127 14:30:21.744713 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:21 crc kubenswrapper[4698]: I0127 14:30:21.744965 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:21 crc kubenswrapper[4698]: I0127 14:30:21.745051 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:21 crc kubenswrapper[4698]: I0127 14:30:21.745135 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:21 crc kubenswrapper[4698]: I0127 14:30:21.745222 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:21Z","lastTransitionTime":"2026-01-27T14:30:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:30:21 crc kubenswrapper[4698]: I0127 14:30:21.847652 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:21 crc kubenswrapper[4698]: I0127 14:30:21.847688 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:21 crc kubenswrapper[4698]: I0127 14:30:21.847698 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:21 crc kubenswrapper[4698]: I0127 14:30:21.847711 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:21 crc kubenswrapper[4698]: I0127 14:30:21.847720 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:21Z","lastTransitionTime":"2026-01-27T14:30:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:30:21 crc kubenswrapper[4698]: I0127 14:30:21.949684 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:21 crc kubenswrapper[4698]: I0127 14:30:21.949713 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:21 crc kubenswrapper[4698]: I0127 14:30:21.949722 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:21 crc kubenswrapper[4698]: I0127 14:30:21.949733 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:21 crc kubenswrapper[4698]: I0127 14:30:21.949742 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:21Z","lastTransitionTime":"2026-01-27T14:30:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:30:21 crc kubenswrapper[4698]: I0127 14:30:21.991798 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lpvsw" Jan 27 14:30:21 crc kubenswrapper[4698]: I0127 14:30:21.991889 4698 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-31 08:47:14.196969589 +0000 UTC Jan 27 14:30:21 crc kubenswrapper[4698]: E0127 14:30:21.991963 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lpvsw" podUID="621bb20d-2ffa-4e89-b522-d04b4764fcc3" Jan 27 14:30:21 crc kubenswrapper[4698]: I0127 14:30:21.992002 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
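
Note that every rotation deadline logged so far (2025-11-09, 2025-12-06, 2025-12-07, 2025-12-31) already lies in the past relative to the node clock, so the serving certificate is overdue for rotation and the manager recomputes a deadline on each pass. The deadline is deliberately not fixed: client-go picks a jittered point late in the certificate's lifetime (documented as roughly 80% +/- 10% of the validity period), which is why each line prints a different value. A sketch of that behavior; the jitter function below is an approximation of, not a copy of, the upstream code, and the notBefore is assumed purely for illustration:

    import random
    from datetime import datetime, timedelta, timezone

    def next_rotation_deadline(not_before: datetime, not_after: datetime) -> datetime:
        # Roughly 80% +/- 10% of the lifetime, re-rolled on every call.
        total = not_after - not_before
        return not_before + total * random.uniform(0.7, 0.9)

    # notAfter taken from the log's kubelet-serving certificate; the length
    # of the validity window is an assumption for the example.
    not_after = datetime(2026, 2, 24, 5, 53, 3, tzinfo=timezone.utc)
    not_before = not_after - timedelta(days=365)

    for _ in range(3):
        print("rotation deadline:", next_rotation_deadline(not_before, not_after))
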
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 14:30:21 crc kubenswrapper[4698]: E0127 14:30:21.992122 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 14:30:22 crc kubenswrapper[4698]: I0127 14:30:22.052560 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:22 crc kubenswrapper[4698]: I0127 14:30:22.052616 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:22 crc kubenswrapper[4698]: I0127 14:30:22.052632 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:22 crc kubenswrapper[4698]: I0127 14:30:22.052677 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:22 crc kubenswrapper[4698]: I0127 14:30:22.052690 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:22Z","lastTransitionTime":"2026-01-27T14:30:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:30:22 crc kubenswrapper[4698]: I0127 14:30:22.108440 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:22 crc kubenswrapper[4698]: I0127 14:30:22.108489 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:22 crc kubenswrapper[4698]: I0127 14:30:22.108501 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:22 crc kubenswrapper[4698]: I0127 14:30:22.108517 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:22 crc kubenswrapper[4698]: I0127 14:30:22.108528 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:22Z","lastTransitionTime":"2026-01-27T14:30:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:30:22 crc kubenswrapper[4698]: E0127 14:30:22.120994 4698 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T14:30:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T14:30:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T14:30:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T14:30:22Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T14:30:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T14:30:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T14:30:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T14:30:22Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9d78a2be-22ac-47e6-a326-83038cc10e0c\\\",\\\"systemUUID\\\":\\\"3b71cf61-a3fa-4076-a23c-5d695e40fc0d\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:30:22Z is after 2025-08-24T17:21:41Z" Jan 27 14:30:22 crc kubenswrapper[4698]: I0127 14:30:22.125263 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:22 crc kubenswrapper[4698]: I0127 14:30:22.125347 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 27 14:30:22 crc kubenswrapper[4698]: I0127 14:30:22.125363 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:22 crc kubenswrapper[4698]: I0127 14:30:22.125387 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:22 crc kubenswrapper[4698]: I0127 14:30:22.125402 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:22Z","lastTransitionTime":"2026-01-27T14:30:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:30:22 crc kubenswrapper[4698]: E0127 14:30:22.137462 4698 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T14:30:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T14:30:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T14:30:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T14:30:22Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T14:30:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T14:30:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T14:30:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T14:30:22Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9d78a2be-22ac-47e6-a326-83038cc10e0c\\\",\\\"systemUUID\\\":\\\"3b71cf61-a3fa-4076-a23c-5d695e40fc0d\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:30:22Z is after 2025-08-24T17:21:41Z" Jan 27 14:30:22 crc kubenswrapper[4698]: I0127 14:30:22.141697 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:22 crc kubenswrapper[4698]: I0127 14:30:22.141739 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 27 14:30:22 crc kubenswrapper[4698]: I0127 14:30:22.141751 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:22 crc kubenswrapper[4698]: I0127 14:30:22.141779 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:22 crc kubenswrapper[4698]: I0127 14:30:22.141793 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:22Z","lastTransitionTime":"2026-01-27T14:30:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:30:22 crc kubenswrapper[4698]: E0127 14:30:22.154688 4698 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T14:30:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T14:30:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T14:30:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T14:30:22Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T14:30:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T14:30:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T14:30:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T14:30:22Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9d78a2be-22ac-47e6-a326-83038cc10e0c\\\",\\\"systemUUID\\\":\\\"3b71cf61-a3fa-4076-a23c-5d695e40fc0d\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:30:22Z is after 2025-08-24T17:21:41Z" Jan 27 14:30:22 crc kubenswrapper[4698]: I0127 14:30:22.159142 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:22 crc kubenswrapper[4698]: I0127 14:30:22.159199 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 27 14:30:22 crc kubenswrapper[4698]: I0127 14:30:22.159223 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:22 crc kubenswrapper[4698]: I0127 14:30:22.159249 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:22 crc kubenswrapper[4698]: I0127 14:30:22.159262 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:22Z","lastTransitionTime":"2026-01-27T14:30:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:30:22 crc kubenswrapper[4698]: E0127 14:30:22.173967 4698 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T14:30:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T14:30:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T14:30:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T14:30:22Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T14:30:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T14:30:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T14:30:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T14:30:22Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9d78a2be-22ac-47e6-a326-83038cc10e0c\\\",\\\"systemUUID\\\":\\\"3b71cf61-a3fa-4076-a23c-5d695e40fc0d\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:30:22Z is after 2025-08-24T17:21:41Z" Jan 27 14:30:22 crc kubenswrapper[4698]: I0127 14:30:22.178317 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:22 crc kubenswrapper[4698]: I0127 14:30:22.178370 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 27 14:30:22 crc kubenswrapper[4698]: I0127 14:30:22.178381 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:22 crc kubenswrapper[4698]: I0127 14:30:22.178400 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:22 crc kubenswrapper[4698]: I0127 14:30:22.178411 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:22Z","lastTransitionTime":"2026-01-27T14:30:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:30:22 crc kubenswrapper[4698]: E0127 14:30:22.191792 4698 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T14:30:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T14:30:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T14:30:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T14:30:22Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T14:30:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T14:30:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T14:30:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T14:30:22Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9d78a2be-22ac-47e6-a326-83038cc10e0c\\\",\\\"systemUUID\\\":\\\"3b71cf61-a3fa-4076-a23c-5d695e40fc0d\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:30:22Z is after 2025-08-24T17:21:41Z" Jan 27 14:30:22 crc kubenswrapper[4698]: E0127 14:30:22.191980 4698 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 27 14:30:22 crc kubenswrapper[4698]: I0127 14:30:22.193896 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 27 14:30:22 crc kubenswrapper[4698]: I0127 14:30:22.193954 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:22 crc kubenswrapper[4698]: I0127 14:30:22.193970 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:22 crc kubenswrapper[4698]: I0127 14:30:22.193995 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:22 crc kubenswrapper[4698]: I0127 14:30:22.194009 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:22Z","lastTransitionTime":"2026-01-27T14:30:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:30:22 crc kubenswrapper[4698]: I0127 14:30:22.296681 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:22 crc kubenswrapper[4698]: I0127 14:30:22.296733 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:22 crc kubenswrapper[4698]: I0127 14:30:22.296744 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:22 crc kubenswrapper[4698]: I0127 14:30:22.296761 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:22 crc kubenswrapper[4698]: I0127 14:30:22.296771 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:22Z","lastTransitionTime":"2026-01-27T14:30:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:30:22 crc kubenswrapper[4698]: I0127 14:30:22.399835 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:22 crc kubenswrapper[4698]: I0127 14:30:22.399899 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:22 crc kubenswrapper[4698]: I0127 14:30:22.399909 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:22 crc kubenswrapper[4698]: I0127 14:30:22.399924 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:22 crc kubenswrapper[4698]: I0127 14:30:22.399935 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:22Z","lastTransitionTime":"2026-01-27T14:30:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:30:22 crc kubenswrapper[4698]: I0127 14:30:22.504029 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:22 crc kubenswrapper[4698]: I0127 14:30:22.504065 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:22 crc kubenswrapper[4698]: I0127 14:30:22.504077 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:22 crc kubenswrapper[4698]: I0127 14:30:22.504093 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:22 crc kubenswrapper[4698]: I0127 14:30:22.504105 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:22Z","lastTransitionTime":"2026-01-27T14:30:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:30:22 crc kubenswrapper[4698]: I0127 14:30:22.606444 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:22 crc kubenswrapper[4698]: I0127 14:30:22.606502 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:22 crc kubenswrapper[4698]: I0127 14:30:22.606514 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:22 crc kubenswrapper[4698]: I0127 14:30:22.606530 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:22 crc kubenswrapper[4698]: I0127 14:30:22.606544 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:22Z","lastTransitionTime":"2026-01-27T14:30:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:30:22 crc kubenswrapper[4698]: I0127 14:30:22.708984 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:22 crc kubenswrapper[4698]: I0127 14:30:22.709021 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:22 crc kubenswrapper[4698]: I0127 14:30:22.709032 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:22 crc kubenswrapper[4698]: I0127 14:30:22.709047 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:22 crc kubenswrapper[4698]: I0127 14:30:22.709060 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:22Z","lastTransitionTime":"2026-01-27T14:30:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:30:22 crc kubenswrapper[4698]: I0127 14:30:22.811437 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:22 crc kubenswrapper[4698]: I0127 14:30:22.811682 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:22 crc kubenswrapper[4698]: I0127 14:30:22.811702 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:22 crc kubenswrapper[4698]: I0127 14:30:22.811718 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:22 crc kubenswrapper[4698]: I0127 14:30:22.811732 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:22Z","lastTransitionTime":"2026-01-27T14:30:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:30:22 crc kubenswrapper[4698]: I0127 14:30:22.914155 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:22 crc kubenswrapper[4698]: I0127 14:30:22.914192 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:22 crc kubenswrapper[4698]: I0127 14:30:22.914202 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:22 crc kubenswrapper[4698]: I0127 14:30:22.914217 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:22 crc kubenswrapper[4698]: I0127 14:30:22.914228 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:22Z","lastTransitionTime":"2026-01-27T14:30:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:30:22 crc kubenswrapper[4698]: I0127 14:30:22.991762 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 14:30:22 crc kubenswrapper[4698]: I0127 14:30:22.991793 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 14:30:22 crc kubenswrapper[4698]: E0127 14:30:22.992011 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 14:30:22 crc kubenswrapper[4698]: I0127 14:30:22.992279 4698 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-18 08:03:54.744598126 +0000 UTC Jan 27 14:30:22 crc kubenswrapper[4698]: E0127 14:30:22.992296 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 14:30:23 crc kubenswrapper[4698]: I0127 14:30:23.017035 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:23 crc kubenswrapper[4698]: I0127 14:30:23.017095 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:23 crc kubenswrapper[4698]: I0127 14:30:23.017109 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:23 crc kubenswrapper[4698]: I0127 14:30:23.017165 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:23 crc kubenswrapper[4698]: I0127 14:30:23.017181 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:23Z","lastTransitionTime":"2026-01-27T14:30:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:30:23 crc kubenswrapper[4698]: I0127 14:30:23.119313 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:23 crc kubenswrapper[4698]: I0127 14:30:23.119361 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:23 crc kubenswrapper[4698]: I0127 14:30:23.119374 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:23 crc kubenswrapper[4698]: I0127 14:30:23.119392 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:23 crc kubenswrapper[4698]: I0127 14:30:23.119407 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:23Z","lastTransitionTime":"2026-01-27T14:30:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:30:23 crc kubenswrapper[4698]: I0127 14:30:23.222075 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:23 crc kubenswrapper[4698]: I0127 14:30:23.222137 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:23 crc kubenswrapper[4698]: I0127 14:30:23.222148 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:23 crc kubenswrapper[4698]: I0127 14:30:23.222161 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:23 crc kubenswrapper[4698]: I0127 14:30:23.222171 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:23Z","lastTransitionTime":"2026-01-27T14:30:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:30:23 crc kubenswrapper[4698]: I0127 14:30:23.325028 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:23 crc kubenswrapper[4698]: I0127 14:30:23.325065 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:23 crc kubenswrapper[4698]: I0127 14:30:23.325074 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:23 crc kubenswrapper[4698]: I0127 14:30:23.325089 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:23 crc kubenswrapper[4698]: I0127 14:30:23.325100 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:23Z","lastTransitionTime":"2026-01-27T14:30:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:30:23 crc kubenswrapper[4698]: I0127 14:30:23.426860 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:23 crc kubenswrapper[4698]: I0127 14:30:23.426924 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:23 crc kubenswrapper[4698]: I0127 14:30:23.426937 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:23 crc kubenswrapper[4698]: I0127 14:30:23.426953 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:23 crc kubenswrapper[4698]: I0127 14:30:23.426964 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:23Z","lastTransitionTime":"2026-01-27T14:30:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:30:23 crc kubenswrapper[4698]: I0127 14:30:23.529259 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:23 crc kubenswrapper[4698]: I0127 14:30:23.529293 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:23 crc kubenswrapper[4698]: I0127 14:30:23.529302 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:23 crc kubenswrapper[4698]: I0127 14:30:23.529315 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:23 crc kubenswrapper[4698]: I0127 14:30:23.529324 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:23Z","lastTransitionTime":"2026-01-27T14:30:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:30:23 crc kubenswrapper[4698]: I0127 14:30:23.631747 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:23 crc kubenswrapper[4698]: I0127 14:30:23.631782 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:23 crc kubenswrapper[4698]: I0127 14:30:23.631790 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:23 crc kubenswrapper[4698]: I0127 14:30:23.631802 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:23 crc kubenswrapper[4698]: I0127 14:30:23.631811 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:23Z","lastTransitionTime":"2026-01-27T14:30:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:30:23 crc kubenswrapper[4698]: I0127 14:30:23.733747 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:23 crc kubenswrapper[4698]: I0127 14:30:23.733797 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:23 crc kubenswrapper[4698]: I0127 14:30:23.733810 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:23 crc kubenswrapper[4698]: I0127 14:30:23.733830 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:23 crc kubenswrapper[4698]: I0127 14:30:23.733842 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:23Z","lastTransitionTime":"2026-01-27T14:30:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:30:23 crc kubenswrapper[4698]: I0127 14:30:23.836190 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:23 crc kubenswrapper[4698]: I0127 14:30:23.836231 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:23 crc kubenswrapper[4698]: I0127 14:30:23.836240 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:23 crc kubenswrapper[4698]: I0127 14:30:23.836264 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:23 crc kubenswrapper[4698]: I0127 14:30:23.836274 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:23Z","lastTransitionTime":"2026-01-27T14:30:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:30:23 crc kubenswrapper[4698]: I0127 14:30:23.938829 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:23 crc kubenswrapper[4698]: I0127 14:30:23.938871 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:23 crc kubenswrapper[4698]: I0127 14:30:23.938882 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:23 crc kubenswrapper[4698]: I0127 14:30:23.938899 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:23 crc kubenswrapper[4698]: I0127 14:30:23.938910 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:23Z","lastTransitionTime":"2026-01-27T14:30:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:30:23 crc kubenswrapper[4698]: I0127 14:30:23.992788 4698 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-21 13:29:16.713700651 +0000 UTC Jan 27 14:30:23 crc kubenswrapper[4698]: I0127 14:30:23.992987 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 14:30:23 crc kubenswrapper[4698]: E0127 14:30:23.993104 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 14:30:23 crc kubenswrapper[4698]: I0127 14:30:23.992987 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-lpvsw" Jan 27 14:30:23 crc kubenswrapper[4698]: E0127 14:30:23.993198 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lpvsw" podUID="621bb20d-2ffa-4e89-b522-d04b4764fcc3" Jan 27 14:30:24 crc kubenswrapper[4698]: I0127 14:30:24.040768 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:24 crc kubenswrapper[4698]: I0127 14:30:24.040811 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:24 crc kubenswrapper[4698]: I0127 14:30:24.040822 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:24 crc kubenswrapper[4698]: I0127 14:30:24.040837 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:24 crc kubenswrapper[4698]: I0127 14:30:24.040849 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:24Z","lastTransitionTime":"2026-01-27T14:30:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:30:24 crc kubenswrapper[4698]: I0127 14:30:24.142894 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:24 crc kubenswrapper[4698]: I0127 14:30:24.142961 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:24 crc kubenswrapper[4698]: I0127 14:30:24.142983 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:24 crc kubenswrapper[4698]: I0127 14:30:24.143011 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:24 crc kubenswrapper[4698]: I0127 14:30:24.143032 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:24Z","lastTransitionTime":"2026-01-27T14:30:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:30:24 crc kubenswrapper[4698]: I0127 14:30:24.245449 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:24 crc kubenswrapper[4698]: I0127 14:30:24.245486 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:24 crc kubenswrapper[4698]: I0127 14:30:24.245496 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:24 crc kubenswrapper[4698]: I0127 14:30:24.245508 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:24 crc kubenswrapper[4698]: I0127 14:30:24.245517 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:24Z","lastTransitionTime":"2026-01-27T14:30:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:30:24 crc kubenswrapper[4698]: I0127 14:30:24.347757 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:24 crc kubenswrapper[4698]: I0127 14:30:24.347824 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:24 crc kubenswrapper[4698]: I0127 14:30:24.347836 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:24 crc kubenswrapper[4698]: I0127 14:30:24.347852 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:24 crc kubenswrapper[4698]: I0127 14:30:24.347863 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:24Z","lastTransitionTime":"2026-01-27T14:30:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:30:24 crc kubenswrapper[4698]: I0127 14:30:24.449737 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:24 crc kubenswrapper[4698]: I0127 14:30:24.449779 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:24 crc kubenswrapper[4698]: I0127 14:30:24.449794 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:24 crc kubenswrapper[4698]: I0127 14:30:24.449815 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:24 crc kubenswrapper[4698]: I0127 14:30:24.449824 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:24Z","lastTransitionTime":"2026-01-27T14:30:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:30:24 crc kubenswrapper[4698]: I0127 14:30:24.551861 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:24 crc kubenswrapper[4698]: I0127 14:30:24.551907 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:24 crc kubenswrapper[4698]: I0127 14:30:24.551916 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:24 crc kubenswrapper[4698]: I0127 14:30:24.551930 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:24 crc kubenswrapper[4698]: I0127 14:30:24.551940 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:24Z","lastTransitionTime":"2026-01-27T14:30:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:30:24 crc kubenswrapper[4698]: I0127 14:30:24.654327 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:24 crc kubenswrapper[4698]: I0127 14:30:24.654377 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:24 crc kubenswrapper[4698]: I0127 14:30:24.654388 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:24 crc kubenswrapper[4698]: I0127 14:30:24.654404 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:24 crc kubenswrapper[4698]: I0127 14:30:24.654415 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:24Z","lastTransitionTime":"2026-01-27T14:30:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:30:24 crc kubenswrapper[4698]: I0127 14:30:24.757269 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:24 crc kubenswrapper[4698]: I0127 14:30:24.757318 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:24 crc kubenswrapper[4698]: I0127 14:30:24.757330 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:24 crc kubenswrapper[4698]: I0127 14:30:24.757348 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:24 crc kubenswrapper[4698]: I0127 14:30:24.757359 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:24Z","lastTransitionTime":"2026-01-27T14:30:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:30:24 crc kubenswrapper[4698]: I0127 14:30:24.859317 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:24 crc kubenswrapper[4698]: I0127 14:30:24.859371 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:24 crc kubenswrapper[4698]: I0127 14:30:24.859384 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:24 crc kubenswrapper[4698]: I0127 14:30:24.859404 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:24 crc kubenswrapper[4698]: I0127 14:30:24.859418 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:24Z","lastTransitionTime":"2026-01-27T14:30:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:30:24 crc kubenswrapper[4698]: I0127 14:30:24.961483 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:24 crc kubenswrapper[4698]: I0127 14:30:24.961530 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:24 crc kubenswrapper[4698]: I0127 14:30:24.961541 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:24 crc kubenswrapper[4698]: I0127 14:30:24.961557 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:24 crc kubenswrapper[4698]: I0127 14:30:24.961569 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:24Z","lastTransitionTime":"2026-01-27T14:30:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:30:24 crc kubenswrapper[4698]: I0127 14:30:24.992151 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 14:30:24 crc kubenswrapper[4698]: I0127 14:30:24.992254 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 14:30:24 crc kubenswrapper[4698]: E0127 14:30:24.992414 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 14:30:24 crc kubenswrapper[4698]: E0127 14:30:24.992578 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 14:30:24 crc kubenswrapper[4698]: I0127 14:30:24.993308 4698 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-10 16:01:17.881415459 +0000 UTC Jan 27 14:30:25 crc kubenswrapper[4698]: I0127 14:30:25.004736 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:30:25Z is after 2025-08-24T17:21:41Z" Jan 27 14:30:25 crc kubenswrapper[4698]: I0127 14:30:25.015621 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-g9vj8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2776dfc9-913b-42b0-9cf2-6fea98d83bc9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f471cea0dc934e7c77ac6ead7ec77f8feb7379b50a7bb5255d7100fbea635a3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bk7ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:29:28Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-g9vj8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-01-27T14:30:25Z is after 2025-08-24T17:21:41Z" Jan 27 14:30:25 crc kubenswrapper[4698]: I0127 14:30:25.027945 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-ndrd6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e403fc5-7005-474c-8c75-b7906b481677\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6f9227503b5cf0962f3fb8ece18f54d84ff2cdfe5628ebf0fa0cef7fe5695c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tt2dz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c533539135e25b310541c9f0b12adcf526b4651a2f6272bce12ea9686dd39ac9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tt2dz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:29:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-ndrd6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:30:25Z is after 2025-08-24T17:21:41Z" Jan 27 14:30:25 crc kubenswrapper[4698]: I0127 14:30:25.042668 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-vg6nd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e045926d-2303-47ea-b25d-dc23982427e4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e1bf1729f089f253e8e1259e40ef976b85e433358642b6231f081c494851048\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66snv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c935f517f9ca140d63d1637ef8c01a4a41b94fa72e1086eb6184ae78a6d8da8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c935f517f9ca140d63d1637ef8c01a4a41b94fa72e1086eb6184ae78a6d8da8c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:29:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:29:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66snv\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa061cb18143e07dc72a6b75d78b5959b40a9969030ff21d402863b85eba6f91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa061cb18143e07dc72a6b75d78b5959b40a9969030ff21d402863b85eba6f91\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:29:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:29:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66snv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be40eff60b17b2c010a5e4394fb6843ceb8ee2752781ccbf0030afdd212b81e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be40eff60b17b2c010a5e4394fb6843ceb8ee2752781ccbf0030afdd212b81e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:29:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:29:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66snv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3018d87c9354e8b8d90c59a4a45e982d0b35b89a0b0a9fde703e9452cf5f0024\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3018d87c9354e8b8d90c59a4a45e982d0b35b89a0b0a9fde703e9452cf5f0024\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:29:32Z\\\",\\\"reason\\\":\\\"Completed
\\\",\\\"startedAt\\\":\\\"2026-01-27T14:29:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66snv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de493498bbac49791f6b7409916fbbb62f64bed0def214ea6a06f5182bf7ad3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://de493498bbac49791f6b7409916fbbb62f64bed0def214ea6a06f5182bf7ad3b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:29:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:29:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66snv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://14af7d5dd3526aef22f7e95409ff88c3bbbc29e6dd9dd7c41b9be5bab49bf074\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://14af7d5dd3526aef22f7e95409ff88c3bbbc29e6dd9dd7c41b9be5bab49bf074\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:29:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:29:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66snv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:29:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-vg6nd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:30:25Z is after 
2025-08-24T17:21:41Z" Jan 27 14:30:25 crc kubenswrapper[4698]: I0127 14:30:25.059318 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xmpm6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c59a9d01-79ce-42d9-a41d-39d7d73cb03e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:29Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:29Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d81778ae40167de932debf7266f3c793d77edeace2116220c21cf89e15e4cc98\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r7gpg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea6d17f5861064198539f0b84b559f9d28cfa5decaa89a9059469af5a3e12e09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r7gpg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c
98fd6e3005c979f95419686ec2143aa282fa56d7306ecaff766759e34b68b2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r7gpg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e38585fe9db52e6e7cc5c3a41e23a0bb761977a3b25fd281d86458f422e3791\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r7gpg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b80c023f92c9854f8c16b6b50c10dac58f968938615375019ba2875d9cd69e5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r7gpg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://87e7499ab247dff50eee2a5e113ed36ff00c467e8e1bcdfee45e8835815d4b81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r7gpg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a99e5f8df86301415e9b49b4f197383e67e6a1b00cbf7e4e7f543002a0f1db4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0a99e5f8df86301415e9b49b4f197383e67e6a1b00cbf7e4e7f543002a0f1db4\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T14:29:58Z\\\",\\\"message\\\":\\\"il\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0127 14:29:58.516254 6338 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-diagnostics/network-check-source-55646444c4-trplf\\\\nI0127 14:29:58.516264 6338 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-diagnostics/network-check-source-55646444c4-trplf\\\\nI0127 14:29:58.516271 6338 ovn.go:134] Ensuring zone local for Pod openshift-network-diagnostics/network-check-source-55646444c4-trplf in node crc\\\\nF0127 14:29:58.516287 6338 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:29:58Z is after 2025-08-24T17:21:41Z]\\\\nI0127 14:29:58.516302 6338 obj_retry.go:303\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T14:29:56Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-xmpm6_openshift-ovn-kubernetes(c59a9d01-79ce-42d9-a41d-39d7d73cb03e)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r7gpg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0356c9c90e9c8e65b9452361aa486ffe95757bd69803077b32fe7ba0223ae1da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r7gpg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3a80917737e61fdb510dabf332c3bd87fb858d27849c42b77703a8becb66b4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b3a80917737e61fdb510dabf332c3bd87fb858d27849c42b77703a8becb66b4c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:29:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:29:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r7gpg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:29:29Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-xmpm6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:30:25Z is after 2025-08-24T17:21:41Z"
Jan 27 14:30:25 crc kubenswrapper[4698]: I0127 14:30:25.063340 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 14:30:25 crc kubenswrapper[4698]: I0127 14:30:25.063379 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 14:30:25 crc kubenswrapper[4698]: I0127 14:30:25.063388 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 14:30:25 crc kubenswrapper[4698]: I0127 14:30:25.063403 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 14:30:25 crc kubenswrapper[4698]: I0127 14:30:25.063414 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:25Z","lastTransitionTime":"2026-01-27T14:30:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 14:30:25 crc kubenswrapper[4698]: I0127 14:30:25.069950 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-lpvsw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"621bb20d-2ffa-4e89-b522-d04b4764fcc3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:42Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:42Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gz87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gz87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:29:42Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-lpvsw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:30:25Z is after 2025-08-24T17:21:41Z"
Jan 27 14:30:25 crc kubenswrapper[4698]: I0127 14:30:25.084311 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a67da6e9-5dc3-469a-8a1a-a2b287e96281\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b99ac884289d1dcab871d1db10e9992389170de25aeb71d84aaad1348eafd4fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://21e39904f887483a435cadd506be29c1513b2c9dbc144a61549f74f2c93fa6a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8cd43402ec8ba658de9fc9d84d14600829a8ae019aceb606fc2bf781dbe13ddb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c9cbb07f8bc0c4d5171813dd406a52fc3c99985102f65c43e8e04bf8acc21f1b\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c9cbb07f8bc0c4d5171813dd406a52fc3c99985102f65c43e8e04bf8acc21f1b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:29:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:29:06Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:29:05Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:30:25Z is after 2025-08-24T17:21:41Z" Jan 27 14:30:25 crc kubenswrapper[4698]: I0127 14:30:25.093990 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6c951f69-23b8-41c0-8d43-60097686223a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f5f7ffc9337f1ee226951fc2bac9235815704df49855f4ee6c9fe391970df0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cff6cddc7b21de4968fbb9eacccd070d4e8ff2d0514da0a0b878313bb11a5188\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-de
v@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cff6cddc7b21de4968fbb9eacccd070d4e8ff2d0514da0a0b878313bb11a5188\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:29:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:29:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:29:05Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:30:25Z is after 2025-08-24T17:21:41Z" Jan 27 14:30:25 crc kubenswrapper[4698]: I0127 14:30:25.107353 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae7402df1ba165eb747293c1d23e4ea89c50cf2342bf52db437c11c283f68755\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:30:25Z is after 2025-08-24T17:21:41Z" Jan 27 14:30:25 crc kubenswrapper[4698]: I0127 14:30:25.118266 4698 status_manager.go:875] 
"Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:24Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:30:25Z is after 2025-08-24T17:21:41Z" Jan 27 14:30:25 crc kubenswrapper[4698]: I0127 14:30:25.127793 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8dae57de05788973b988fee332168db475b66bb71b4a0570e7d95341dfe84a96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:30:25Z is after 2025-08-24T17:21:41Z" Jan 27 14:30:25 crc kubenswrapper[4698]: I0127 14:30:25.138271 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-flx9b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2c9fcf55-4a50-4a87-937b-975bc7e00bfa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0cb86e0c67991a076cebc33eb390f49c0f4c40332c423936dd8f3c59b9f7474b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-spqpw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:29:31Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-flx9b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:30:25Z is after 2025-08-24T17:21:41Z" Jan 27 14:30:25 crc kubenswrapper[4698]: I0127 14:30:25.149115 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:24Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:30:25Z is after 2025-08-24T17:21:41Z" Jan 27 14:30:25 crc kubenswrapper[4698]: I0127 14:30:25.161599 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://625a1f8c29f9277903045e66df7a55a4d791239b141aba99572db228ea2735ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://314dcaad0f8949a4671b47667f654bdfacfb6b200438cfbd70401455f6e188b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io
/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:30:25Z is after 2025-08-24T17:21:41Z"
Jan 27 14:30:25 crc kubenswrapper[4698]: I0127 14:30:25.165417 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 14:30:25 crc kubenswrapper[4698]: I0127 14:30:25.165449 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 14:30:25 crc kubenswrapper[4698]: I0127 14:30:25.165459 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 14:30:25 crc kubenswrapper[4698]: I0127 14:30:25.165473 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 14:30:25 crc kubenswrapper[4698]: I0127 14:30:25.165482 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:25Z","lastTransitionTime":"2026-01-27T14:30:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 14:30:25 crc kubenswrapper[4698]: I0127 14:30:25.175916 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0ce1118e-e5ad-4adb-8d50-c758116b45ec\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dbbe49aa918c711f9ec638a3fb42ef3be38a9ed3bc0cdde639ba8bba11eab4de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9525d1a6372b19bc2f30eb4e8e522833badb6319bac25ecae77b3313acf1b1a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e384a95fdadc32733a7213ba468af64a1a33e7dd08722cbffde3ad0c461d8ba\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/ku
bernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9ee5f5f6bb400359196a28fe605c173c38223c8982a991750c91601aeace150\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e780429dcca871712b55832b0cd0b3e78b7343569cfa71a87bbb2be1bd3f129\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T14:29:24Z\\\",\\\"message\\\":\\\"espace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0127 14:29:09.491666 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 14:29:09.493926 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-900123067/tls.crt::/tmp/serving-cert-900123067/tls.key\\\\\\\"\\\\nI0127 14:29:24.319898 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 14:29:24.322681 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 14:29:24.322759 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 14:29:24.322814 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 14:29:24.322847 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 14:29:24.331581 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 14:29:24.331608 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 14:29:24.331613 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 14:29:24.331618 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 14:29:24.331622 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 14:29:24.331626 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 14:29:24.331630 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 14:29:24.331748 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 14:29:24.333757 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T14:29:08Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a526cfa90f2ba7d074aa6007c909e68f72d5e7a4b91d893c955e7460ff208cfb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:08Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e1a3e5a71b3094750c736335cd70f45977566e58702231351d9d6b195a0cb4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5e1a3e5a71b3094750c736335cd70f45977566e58702231351d9d6b195a0cb4b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:29:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:29:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:29:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:30:25Z is after 2025-08-24T17:21:41Z" Jan 27 14:30:25 crc kubenswrapper[4698]: I0127 14:30:25.190966 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"44c70587-abb1-4e02-ab7e-223d57817925\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b1a209c31294456c43022bb9df5fd230415596fc43ecdb2b6349114989c3ad8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://66b4b527d7836ceed419f9b8a1d851047efc8c052c110ac857316a6a73444333\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://94a4d586611f4ca0864567739d1ed225b7c63ebd6e6be9e5c0e385379de2ad01\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://95357819c206b188c1fd6bb937b2a57b480a9a61e95143de444141b16e4963ea\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:29:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:30:25Z is after 2025-08-24T17:21:41Z" Jan 27 14:30:25 crc kubenswrapper[4698]: I0127 14:30:25.203481 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-g2kkn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4e135f0c-0c36-44f4-afeb-06994affb352\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:30:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:30:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89ea9fe8283890c94741924f7a0d219ad6a55833e836517077a72a10f87427d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d17862aabd67b485023ca018499f7224e0bd91ec0aed6bbce5a187f319e3140\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T14:30:16Z\\\",\\\"message\\\":\\\"2026-01-27T14:29:31+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_953276ac-8835-451e-a4b4-8b6397c0df9b\\\\n2026-01-27T14:29:31+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_953276ac-8835-451e-a4b4-8b6397c0df9b to /host/opt/cni/bin/\\\\n2026-01-27T14:29:31Z [verbose] multus-daemon started\\\\n2026-01-27T14:29:31Z [verbose] Readiness 
Indicator file check\\\\n2026-01-27T14:30:16Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T14:29:29Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:30:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97l66\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:29:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-g2kkn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:30:25Z is after 2025-08-24T17:21:41Z" Jan 27 14:30:25 crc kubenswrapper[4698]: I0127 14:30:25.215681 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zpvcj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"709dfdd7-f928-4f0b-8f5a-c356614219cb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6bec8d64d7f0b7a45b57b810dcf1de846a388902b480230742da0a40da4e67b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pftbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5fb54d7728b2438444c924295722867c673c8b82768ed17b6dd2104d404e198e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pftbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:29:41Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-zpvcj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:30:25Z is after 2025-08-24T17:21:41Z" Jan 27 
14:30:25 crc kubenswrapper[4698]: I0127 14:30:25.268031 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:25 crc kubenswrapper[4698]: I0127 14:30:25.268081 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:25 crc kubenswrapper[4698]: I0127 14:30:25.268090 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:25 crc kubenswrapper[4698]: I0127 14:30:25.268104 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:25 crc kubenswrapper[4698]: I0127 14:30:25.268115 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:25Z","lastTransitionTime":"2026-01-27T14:30:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:30:25 crc kubenswrapper[4698]: I0127 14:30:25.369707 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:25 crc kubenswrapper[4698]: I0127 14:30:25.369736 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:25 crc kubenswrapper[4698]: I0127 14:30:25.369744 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:25 crc kubenswrapper[4698]: I0127 14:30:25.369756 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:25 crc kubenswrapper[4698]: I0127 14:30:25.369765 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:25Z","lastTransitionTime":"2026-01-27T14:30:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:30:25 crc kubenswrapper[4698]: I0127 14:30:25.471728 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:25 crc kubenswrapper[4698]: I0127 14:30:25.471782 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:25 crc kubenswrapper[4698]: I0127 14:30:25.471794 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:25 crc kubenswrapper[4698]: I0127 14:30:25.471812 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:25 crc kubenswrapper[4698]: I0127 14:30:25.471823 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:25Z","lastTransitionTime":"2026-01-27T14:30:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:30:25 crc kubenswrapper[4698]: I0127 14:30:25.574473 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:25 crc kubenswrapper[4698]: I0127 14:30:25.574509 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:25 crc kubenswrapper[4698]: I0127 14:30:25.574519 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:25 crc kubenswrapper[4698]: I0127 14:30:25.574535 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:25 crc kubenswrapper[4698]: I0127 14:30:25.574544 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:25Z","lastTransitionTime":"2026-01-27T14:30:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:30:25 crc kubenswrapper[4698]: I0127 14:30:25.677590 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:25 crc kubenswrapper[4698]: I0127 14:30:25.677649 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:25 crc kubenswrapper[4698]: I0127 14:30:25.677663 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:25 crc kubenswrapper[4698]: I0127 14:30:25.677680 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:25 crc kubenswrapper[4698]: I0127 14:30:25.677692 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:25Z","lastTransitionTime":"2026-01-27T14:30:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:30:25 crc kubenswrapper[4698]: I0127 14:30:25.780053 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:25 crc kubenswrapper[4698]: I0127 14:30:25.780108 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:25 crc kubenswrapper[4698]: I0127 14:30:25.780121 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:25 crc kubenswrapper[4698]: I0127 14:30:25.780135 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:25 crc kubenswrapper[4698]: I0127 14:30:25.780148 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:25Z","lastTransitionTime":"2026-01-27T14:30:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:30:25 crc kubenswrapper[4698]: I0127 14:30:25.883013 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:25 crc kubenswrapper[4698]: I0127 14:30:25.883059 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:25 crc kubenswrapper[4698]: I0127 14:30:25.883071 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:25 crc kubenswrapper[4698]: I0127 14:30:25.883085 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:25 crc kubenswrapper[4698]: I0127 14:30:25.883097 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:25Z","lastTransitionTime":"2026-01-27T14:30:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:30:25 crc kubenswrapper[4698]: I0127 14:30:25.984989 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:25 crc kubenswrapper[4698]: I0127 14:30:25.985034 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:25 crc kubenswrapper[4698]: I0127 14:30:25.985050 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:25 crc kubenswrapper[4698]: I0127 14:30:25.985068 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:25 crc kubenswrapper[4698]: I0127 14:30:25.985085 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:25Z","lastTransitionTime":"2026-01-27T14:30:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:30:25 crc kubenswrapper[4698]: I0127 14:30:25.991402 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lpvsw" Jan 27 14:30:25 crc kubenswrapper[4698]: I0127 14:30:25.991461 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 14:30:25 crc kubenswrapper[4698]: E0127 14:30:25.991565 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-lpvsw" podUID="621bb20d-2ffa-4e89-b522-d04b4764fcc3" Jan 27 14:30:25 crc kubenswrapper[4698]: E0127 14:30:25.991707 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 14:30:25 crc kubenswrapper[4698]: I0127 14:30:25.993570 4698 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-09 22:35:13.449509852 +0000 UTC Jan 27 14:30:26 crc kubenswrapper[4698]: I0127 14:30:26.087335 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:26 crc kubenswrapper[4698]: I0127 14:30:26.087386 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:26 crc kubenswrapper[4698]: I0127 14:30:26.087398 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:26 crc kubenswrapper[4698]: I0127 14:30:26.087417 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:26 crc kubenswrapper[4698]: I0127 14:30:26.087428 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:26Z","lastTransitionTime":"2026-01-27T14:30:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:30:26 crc kubenswrapper[4698]: I0127 14:30:26.189944 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:26 crc kubenswrapper[4698]: I0127 14:30:26.189999 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:26 crc kubenswrapper[4698]: I0127 14:30:26.190011 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:26 crc kubenswrapper[4698]: I0127 14:30:26.190027 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:26 crc kubenswrapper[4698]: I0127 14:30:26.190040 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:26Z","lastTransitionTime":"2026-01-27T14:30:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:30:26 crc kubenswrapper[4698]: I0127 14:30:26.291817 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:26 crc kubenswrapper[4698]: I0127 14:30:26.291864 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:26 crc kubenswrapper[4698]: I0127 14:30:26.291874 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:26 crc kubenswrapper[4698]: I0127 14:30:26.291887 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:26 crc kubenswrapper[4698]: I0127 14:30:26.291896 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:26Z","lastTransitionTime":"2026-01-27T14:30:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:30:26 crc kubenswrapper[4698]: I0127 14:30:26.394146 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:26 crc kubenswrapper[4698]: I0127 14:30:26.394184 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:26 crc kubenswrapper[4698]: I0127 14:30:26.394192 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:26 crc kubenswrapper[4698]: I0127 14:30:26.394206 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:26 crc kubenswrapper[4698]: I0127 14:30:26.394216 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:26Z","lastTransitionTime":"2026-01-27T14:30:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:30:26 crc kubenswrapper[4698]: I0127 14:30:26.497364 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:26 crc kubenswrapper[4698]: I0127 14:30:26.497413 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:26 crc kubenswrapper[4698]: I0127 14:30:26.497428 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:26 crc kubenswrapper[4698]: I0127 14:30:26.497443 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:26 crc kubenswrapper[4698]: I0127 14:30:26.497453 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:26Z","lastTransitionTime":"2026-01-27T14:30:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:30:26 crc kubenswrapper[4698]: I0127 14:30:26.599096 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:26 crc kubenswrapper[4698]: I0127 14:30:26.599139 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:26 crc kubenswrapper[4698]: I0127 14:30:26.599150 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:26 crc kubenswrapper[4698]: I0127 14:30:26.599166 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:26 crc kubenswrapper[4698]: I0127 14:30:26.599177 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:26Z","lastTransitionTime":"2026-01-27T14:30:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:30:26 crc kubenswrapper[4698]: I0127 14:30:26.701754 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:26 crc kubenswrapper[4698]: I0127 14:30:26.701798 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:26 crc kubenswrapper[4698]: I0127 14:30:26.701811 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:26 crc kubenswrapper[4698]: I0127 14:30:26.701827 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:26 crc kubenswrapper[4698]: I0127 14:30:26.701838 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:26Z","lastTransitionTime":"2026-01-27T14:30:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:30:26 crc kubenswrapper[4698]: I0127 14:30:26.804188 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:26 crc kubenswrapper[4698]: I0127 14:30:26.804232 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:26 crc kubenswrapper[4698]: I0127 14:30:26.804243 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:26 crc kubenswrapper[4698]: I0127 14:30:26.804258 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:26 crc kubenswrapper[4698]: I0127 14:30:26.804284 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:26Z","lastTransitionTime":"2026-01-27T14:30:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:30:26 crc kubenswrapper[4698]: I0127 14:30:26.906161 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:26 crc kubenswrapper[4698]: I0127 14:30:26.906200 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:26 crc kubenswrapper[4698]: I0127 14:30:26.906211 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:26 crc kubenswrapper[4698]: I0127 14:30:26.906228 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:26 crc kubenswrapper[4698]: I0127 14:30:26.906241 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:26Z","lastTransitionTime":"2026-01-27T14:30:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:30:26 crc kubenswrapper[4698]: I0127 14:30:26.991805 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 14:30:26 crc kubenswrapper[4698]: I0127 14:30:26.991855 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 14:30:26 crc kubenswrapper[4698]: E0127 14:30:26.992139 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 14:30:26 crc kubenswrapper[4698]: E0127 14:30:26.992240 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 14:30:26 crc kubenswrapper[4698]: I0127 14:30:26.992429 4698 scope.go:117] "RemoveContainer" containerID="0a99e5f8df86301415e9b49b4f197383e67e6a1b00cbf7e4e7f543002a0f1db4" Jan 27 14:30:26 crc kubenswrapper[4698]: I0127 14:30:26.993814 4698 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-03 15:59:19.009715797 +0000 UTC Jan 27 14:30:27 crc kubenswrapper[4698]: I0127 14:30:27.008142 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:27 crc kubenswrapper[4698]: I0127 14:30:27.008186 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:27 crc kubenswrapper[4698]: I0127 14:30:27.008197 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:27 crc kubenswrapper[4698]: I0127 14:30:27.008212 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:27 crc kubenswrapper[4698]: I0127 14:30:27.008222 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:27Z","lastTransitionTime":"2026-01-27T14:30:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:30:27 crc kubenswrapper[4698]: I0127 14:30:27.110295 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:27 crc kubenswrapper[4698]: I0127 14:30:27.110589 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:27 crc kubenswrapper[4698]: I0127 14:30:27.110597 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:27 crc kubenswrapper[4698]: I0127 14:30:27.110612 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:27 crc kubenswrapper[4698]: I0127 14:30:27.110621 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:27Z","lastTransitionTime":"2026-01-27T14:30:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:30:27 crc kubenswrapper[4698]: I0127 14:30:27.214205 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:27 crc kubenswrapper[4698]: I0127 14:30:27.214293 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:27 crc kubenswrapper[4698]: I0127 14:30:27.214306 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:27 crc kubenswrapper[4698]: I0127 14:30:27.214328 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:27 crc kubenswrapper[4698]: I0127 14:30:27.214343 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:27Z","lastTransitionTime":"2026-01-27T14:30:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:30:27 crc kubenswrapper[4698]: I0127 14:30:27.317041 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:27 crc kubenswrapper[4698]: I0127 14:30:27.317108 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:27 crc kubenswrapper[4698]: I0127 14:30:27.317121 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:27 crc kubenswrapper[4698]: I0127 14:30:27.317140 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:27 crc kubenswrapper[4698]: I0127 14:30:27.317155 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:27Z","lastTransitionTime":"2026-01-27T14:30:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:30:27 crc kubenswrapper[4698]: I0127 14:30:27.379077 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-xmpm6_c59a9d01-79ce-42d9-a41d-39d7d73cb03e/ovnkube-controller/2.log" Jan 27 14:30:27 crc kubenswrapper[4698]: I0127 14:30:27.381847 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xmpm6" event={"ID":"c59a9d01-79ce-42d9-a41d-39d7d73cb03e","Type":"ContainerStarted","Data":"de8c9c2d3589a38d1cf6232c63b2ef2b7e60ba5c9efe357cfa5169dfe0448239"} Jan 27 14:30:27 crc kubenswrapper[4698]: I0127 14:30:27.382544 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-xmpm6" Jan 27 14:30:27 crc kubenswrapper[4698]: I0127 14:30:27.400976 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0ce1118e-e5ad-4adb-8d50-c758116b45ec\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dbbe49aa918c711f9ec638a3fb42ef3be38a9ed3bc0cdde639ba8bba11eab4de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9525d1a6372b19bc2f30eb4e8e522833badb6319bac25ecae77b3313acf1b1a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e384a95f
dadc32733a7213ba468af64a1a33e7dd08722cbffde3ad0c461d8ba\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9ee5f5f6bb400359196a28fe605c173c38223c8982a991750c91601aeace150\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e780429dcca871712b55832b0cd0b3e78b7343569cfa71a87bbb2be1bd3f129\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T14:29:24Z\\\",\\\"message\\\":\\\"espace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0127 14:29:09.491666 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 14:29:09.493926 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-900123067/tls.crt::/tmp/serving-cert-900123067/tls.key\\\\\\\"\\\\nI0127 14:29:24.319898 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 14:29:24.322681 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 14:29:24.322759 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 14:29:24.322814 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 14:29:24.322847 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 14:29:24.331581 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 14:29:24.331608 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 14:29:24.331613 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 14:29:24.331618 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 14:29:24.331622 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 14:29:24.331626 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 14:29:24.331630 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 14:29:24.331748 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 14:29:24.333757 1 cmd.go:182] pods 
\\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T14:29:08Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a526cfa90f2ba7d074aa6007c909e68f72d5e7a4b91d893c955e7460ff208cfb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:08Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e1a3e5a71b3094750c736335cd70f45977566e58702231351d9d6b195a0cb4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5e1a3e5a71b3094750c736335cd70f45977566e58702231351d9d6b195a0cb4b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:29:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:29:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:29:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:30:27Z is after 2025-08-24T17:21:41Z" Jan 27 14:30:27 crc kubenswrapper[4698]: I0127 14:30:27.414196 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"44c70587-abb1-4e02-ab7e-223d57817925\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b1a209c31294456c43022bb9df5fd230415596fc43ecdb2b6349114989c3ad8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://66b4b527d7836ceed419f9b8a1d851047efc8c052c110ac857316a6a73444333\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://94a4d586611f4ca0864567739d1ed225b7c63ebd6e6be9e5c0e385379de2ad01\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://95357819c206b188c1fd6bb937b2a57b480a9a61e95143de444141b16e4963ea\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:29:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:30:27Z is after 2025-08-24T17:21:41Z" Jan 27 14:30:27 crc kubenswrapper[4698]: I0127 14:30:27.419420 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:27 crc kubenswrapper[4698]: I0127 14:30:27.419452 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:27 crc kubenswrapper[4698]: I0127 14:30:27.419460 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:27 crc kubenswrapper[4698]: I0127 14:30:27.419474 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:27 crc kubenswrapper[4698]: I0127 14:30:27.419484 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:27Z","lastTransitionTime":"2026-01-27T14:30:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:30:27 crc kubenswrapper[4698]: I0127 14:30:27.428315 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-g2kkn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4e135f0c-0c36-44f4-afeb-06994affb352\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:30:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:30:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89ea9fe8283890c94741924f7a0d219ad6a55833e836517077a72a10f87427d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d17862aabd67b485023ca018499f7224e0bd91ec0aed6bbce5a187f319e3140\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T14:30:16Z\\\",\\\"message\\\":\\\"2026-01-27T14:29:31+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_953276ac-8835-451e-a4b4-8b6397c0df9b\\\\n2026-01-27T14:29:31+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_953276ac-8835-451e-a4b4-8b6397c0df9b to /host/opt/cni/bin/\\\\n2026-01-27T14:29:31Z [verbose] multus-daemon started\\\\n2026-01-27T14:29:31Z [verbose] Readiness Indicator file check\\\\n2026-01-27T14:30:16Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T14:29:29Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:30:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97l66\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:29:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-g2kkn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:30:27Z is after 2025-08-24T17:21:41Z" Jan 27 14:30:27 crc kubenswrapper[4698]: I0127 14:30:27.440664 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zpvcj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"709dfdd7-f928-4f0b-8f5a-c356614219cb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6bec8d64d7f0b7a45b57b810dcf1de846a388902b480230742da0a40da4e67b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pftbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5fb54d7728b2438444c924295722867c673c8b82768ed17b6dd2104d404e198e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pftbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:29:41Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-zpvcj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:30:27Z is after 2025-08-24T17:21:41Z" Jan 27 
14:30:27 crc kubenswrapper[4698]: I0127 14:30:27.452730 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:30:27Z is after 2025-08-24T17:21:41Z" Jan 27 14:30:27 crc kubenswrapper[4698]: I0127 14:30:27.462831 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-g9vj8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2776dfc9-913b-42b0-9cf2-6fea98d83bc9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f471cea0dc934e7c77ac6ead7ec77f8feb7379b50a7bb5255d7100fbea635a3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bk7ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:29:28Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-g9vj8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:30:27Z is after 2025-08-24T17:21:41Z" Jan 27 14:30:27 crc kubenswrapper[4698]: I0127 14:30:27.473988 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-ndrd6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e403fc5-7005-474c-8c75-b7906b481677\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6f9227503b5cf0962f3fb8ece18f54d84ff2cdfe5628ebf0fa0cef7fe5695c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tt2dz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c533539135e25b310541c9f0b12adcf526b4651a2f6272bce12ea9686dd39ac9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tt2dz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:29:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-ndrd6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:30:27Z is after 2025-08-24T17:21:41Z" Jan 27 14:30:27 crc kubenswrapper[4698]: I0127 14:30:27.489009 4698 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-additional-cni-plugins-vg6nd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e045926d-2303-47ea-b25d-dc23982427e4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e1bf1729f089f253e8e1259e40ef976b85e433358642b6231f081c494851048\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66snv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c935f517f9ca140d63d1637ef8c01a4a41b94fa72e1086eb6184ae78a6d8da8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c935f517f9ca140d63d1637ef8c01a4a41b94fa72e1086eb6184ae78a6d8da8c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:29:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:29:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66snv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa061cb18143e07dc72a6b75d78b5959b40a9969030ff21d402863b85eba6f91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2c
c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa061cb18143e07dc72a6b75d78b5959b40a9969030ff21d402863b85eba6f91\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:29:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:29:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66snv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be40eff60b17b2c010a5e4394fb6843ceb8ee2752781ccbf0030afdd212b81e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be40eff60b17b2c010a5e4394fb6843ceb8ee2752781ccbf0030afdd212b81e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:29:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:29:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66snv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3018d87c9354e8b8d90c59a4a45e982d0b35b89a0b0a9fde703e9452cf5f0024\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3018d87c9354e8b8d90c59a4a45e982d0b35b89a0b0a9fde703e9452cf5f0024\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:29:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:29:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-re
lease\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66snv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de493498bbac49791f6b7409916fbbb62f64bed0def214ea6a06f5182bf7ad3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://de493498bbac49791f6b7409916fbbb62f64bed0def214ea6a06f5182bf7ad3b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:29:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:29:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66snv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://14af7d5dd3526aef22f7e95409ff88c3bbbc29e6dd9dd7c41b9be5bab49bf074\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://14af7d5dd3526aef22f7e95409ff88c3bbbc29e6dd9dd7c41b9be5bab49bf074\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:29:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:29:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66snv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:29:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-vg6nd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:30:27Z is after 2025-08-24T17:21:41Z" Jan 27 14:30:27 crc kubenswrapper[4698]: I0127 14:30:27.507158 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xmpm6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c59a9d01-79ce-42d9-a41d-39d7d73cb03e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:29Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:29Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d81778ae40167de932debf7266f3c793d77edeace2116220c21cf89e15e4cc98\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r7gpg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea6d17f5861064198539f0b84b559f9d28cfa5decaa89a9059469af5a3e12e09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r7gpg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c98fd6e3005c979f95419686ec2143aa282fa56d7306ecaff766759e34b68b2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r7gpg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e38585fe9db52e6e7cc5c3a41e23a0bb761977a3b25fd281d86458f422e3791\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r7gpg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b80c023f92c9854f8c16b6b50c10dac58f968938615375019ba2875d9cd69e5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r7gpg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://87e7499ab247dff50eee2a5e113ed36ff00c467e8e1bcdfee45e8835815d4b81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r7gpg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de8c9c2d3589a38d1cf6232c63b2ef2b7e60ba5c9efe357cfa5169dfe0448239\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0a99e5f8df86301415e9b49b4f197383e67e6a1b00cbf7e4e7f543002a0f1db4\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T14:29:58Z\\\",\\\"message\\\":\\\"il\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0127 14:29:58.516254 6338 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-diagnostics/network-check-source-55646444c4-trplf\\\\nI0127 14:29:58.516264 6338 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-diagnostics/network-check-source-55646444c4-trplf\\\\nI0127 14:29:58.516271 6338 ovn.go:134] Ensuring zone local for Pod openshift-network-diagnostics/network-check-source-55646444c4-trplf in node crc\\\\nF0127 14:29:58.516287 6338 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:29:58Z is after 2025-08-24T17:21:41Z]\\\\nI0127 14:29:58.516302 6338 
obj_retry.go:303\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T14:29:56Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:30:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r7gpg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0356c9c90e9c8e65b9452361aa486ffe95757bd69803077b32fe7ba0223ae1da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r7gpg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses
\\\":[{\\\"containerID\\\":\\\"cri-o://b3a80917737e61fdb510dabf332c3bd87fb858d27849c42b77703a8becb66b4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b3a80917737e61fdb510dabf332c3bd87fb858d27849c42b77703a8becb66b4c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:29:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:29:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r7gpg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:29:29Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-xmpm6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:30:27Z is after 2025-08-24T17:21:41Z" Jan 27 14:30:27 crc kubenswrapper[4698]: I0127 14:30:27.518219 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-lpvsw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"621bb20d-2ffa-4e89-b522-d04b4764fcc3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:42Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:42Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gz87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gz87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:29:42Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-lpvsw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:30:27Z is after 2025-08-24T17:21:41Z" Jan 27 14:30:27 crc kubenswrapper[4698]: I0127 14:30:27.521610 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:27 crc kubenswrapper[4698]: I0127 14:30:27.521671 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:27 crc kubenswrapper[4698]: I0127 14:30:27.521685 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:27 crc kubenswrapper[4698]: I0127 14:30:27.521700 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:27 crc kubenswrapper[4698]: I0127 14:30:27.521711 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:27Z","lastTransitionTime":"2026-01-27T14:30:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:30:27 crc kubenswrapper[4698]: I0127 14:30:27.531034 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a67da6e9-5dc3-469a-8a1a-a2b287e96281\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b99ac884289d1dcab871d1db10e9992389170de25aeb71d84aaad1348eafd4fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://21e39904f887483a435cadd506be29c1513b2c9dbc144a61549f74f2c93fa6a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8cd43402ec8ba658de9fc9d84d14600829a8ae019aceb606fc2bf781dbe13ddb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"
cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c9cbb07f8bc0c4d5171813dd406a52fc3c99985102f65c43e8e04bf8acc21f1b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c9cbb07f8bc0c4d5171813dd406a52fc3c99985102f65c43e8e04bf8acc21f1b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:29:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:29:06Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:29:05Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:30:27Z is after 2025-08-24T17:21:41Z" Jan 27 14:30:27 crc kubenswrapper[4698]: I0127 14:30:27.543000 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6c951f69-23b8-41c0-8d43-60097686223a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f5f7ffc9337f1ee226951fc2bac9235815704df49855f4ee6c9fe391970df0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cff6cddc7b21de4968fbb9eac
ccd070d4e8ff2d0514da0a0b878313bb11a5188\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cff6cddc7b21de4968fbb9eacccd070d4e8ff2d0514da0a0b878313bb11a5188\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:29:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:29:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:29:05Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:30:27Z is after 2025-08-24T17:21:41Z" Jan 27 14:30:27 crc kubenswrapper[4698]: I0127 14:30:27.554396 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae7402df1ba165eb747293c1d23e4ea89c50cf2342bf52db437c11c283f68755\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:30:27Z is after 2025-08-24T17:21:41Z" Jan 27 14:30:27 crc kubenswrapper[4698]: I0127 14:30:27.567030 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:24Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:30:27Z is after 2025-08-24T17:21:41Z" Jan 27 14:30:27 crc kubenswrapper[4698]: I0127 14:30:27.577406 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8dae57de05788973b988fee332168db475b66bb71b4a0570e7d95341dfe84a96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:30:27Z is after 2025-08-24T17:21:41Z" Jan 27 14:30:27 crc kubenswrapper[4698]: I0127 14:30:27.587460 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-flx9b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2c9fcf55-4a50-4a87-937b-975bc7e00bfa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0cb86e0c67991a076cebc33eb390f49c0f4c40332c423936dd8f3c59b9f7474b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-spqpw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:29:31Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-flx9b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:30:27Z is after 2025-08-24T17:21:41Z" Jan 27 14:30:27 crc kubenswrapper[4698]: I0127 14:30:27.599976 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:24Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:30:27Z is after 2025-08-24T17:21:41Z" Jan 27 14:30:27 crc kubenswrapper[4698]: I0127 14:30:27.611605 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://625a1f8c29f9277903045e66df7a55a4d791239b141aba99572db228ea2735ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://314dcaad0f8949a4671b47667f654bdfacfb6b200438cfbd70401455f6e188b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io
/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:30:27Z is after 2025-08-24T17:21:41Z" Jan 27 14:30:27 crc kubenswrapper[4698]: I0127 14:30:27.624012 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:27 crc kubenswrapper[4698]: I0127 14:30:27.624051 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:27 crc kubenswrapper[4698]: I0127 14:30:27.624067 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:27 crc kubenswrapper[4698]: I0127 14:30:27.624082 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:27 crc kubenswrapper[4698]: I0127 14:30:27.624091 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:27Z","lastTransitionTime":"2026-01-27T14:30:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:30:27 crc kubenswrapper[4698]: I0127 14:30:27.726630 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:27 crc kubenswrapper[4698]: I0127 14:30:27.726688 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:27 crc kubenswrapper[4698]: I0127 14:30:27.726698 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:27 crc kubenswrapper[4698]: I0127 14:30:27.726713 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:27 crc kubenswrapper[4698]: I0127 14:30:27.726724 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:27Z","lastTransitionTime":"2026-01-27T14:30:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:30:27 crc kubenswrapper[4698]: I0127 14:30:27.829438 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:27 crc kubenswrapper[4698]: I0127 14:30:27.829480 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:27 crc kubenswrapper[4698]: I0127 14:30:27.829490 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:27 crc kubenswrapper[4698]: I0127 14:30:27.829505 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:27 crc kubenswrapper[4698]: I0127 14:30:27.829517 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:27Z","lastTransitionTime":"2026-01-27T14:30:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:30:27 crc kubenswrapper[4698]: I0127 14:30:27.932027 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:27 crc kubenswrapper[4698]: I0127 14:30:27.932076 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:27 crc kubenswrapper[4698]: I0127 14:30:27.932089 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:27 crc kubenswrapper[4698]: I0127 14:30:27.932105 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:27 crc kubenswrapper[4698]: I0127 14:30:27.932117 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:27Z","lastTransitionTime":"2026-01-27T14:30:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:30:27 crc kubenswrapper[4698]: I0127 14:30:27.992160 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lpvsw" Jan 27 14:30:27 crc kubenswrapper[4698]: I0127 14:30:27.992181 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 14:30:27 crc kubenswrapper[4698]: E0127 14:30:27.992325 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-lpvsw" podUID="621bb20d-2ffa-4e89-b522-d04b4764fcc3" Jan 27 14:30:27 crc kubenswrapper[4698]: E0127 14:30:27.992393 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 14:30:27 crc kubenswrapper[4698]: I0127 14:30:27.994236 4698 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-07 15:17:54.92334353 +0000 UTC Jan 27 14:30:28 crc kubenswrapper[4698]: I0127 14:30:28.035110 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:28 crc kubenswrapper[4698]: I0127 14:30:28.035155 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:28 crc kubenswrapper[4698]: I0127 14:30:28.035164 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:28 crc kubenswrapper[4698]: I0127 14:30:28.035180 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:28 crc kubenswrapper[4698]: I0127 14:30:28.035190 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:28Z","lastTransitionTime":"2026-01-27T14:30:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:30:28 crc kubenswrapper[4698]: I0127 14:30:28.137162 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:28 crc kubenswrapper[4698]: I0127 14:30:28.137205 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:28 crc kubenswrapper[4698]: I0127 14:30:28.137215 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:28 crc kubenswrapper[4698]: I0127 14:30:28.137231 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:28 crc kubenswrapper[4698]: I0127 14:30:28.137243 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:28Z","lastTransitionTime":"2026-01-27T14:30:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:30:28 crc kubenswrapper[4698]: I0127 14:30:28.239360 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:28 crc kubenswrapper[4698]: I0127 14:30:28.239401 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:28 crc kubenswrapper[4698]: I0127 14:30:28.239422 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:28 crc kubenswrapper[4698]: I0127 14:30:28.239438 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:28 crc kubenswrapper[4698]: I0127 14:30:28.239448 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:28Z","lastTransitionTime":"2026-01-27T14:30:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:30:28 crc kubenswrapper[4698]: I0127 14:30:28.342034 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:28 crc kubenswrapper[4698]: I0127 14:30:28.342080 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:28 crc kubenswrapper[4698]: I0127 14:30:28.342096 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:28 crc kubenswrapper[4698]: I0127 14:30:28.342113 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:28 crc kubenswrapper[4698]: I0127 14:30:28.342125 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:28Z","lastTransitionTime":"2026-01-27T14:30:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:30:28 crc kubenswrapper[4698]: I0127 14:30:28.387314 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-xmpm6_c59a9d01-79ce-42d9-a41d-39d7d73cb03e/ovnkube-controller/3.log" Jan 27 14:30:28 crc kubenswrapper[4698]: I0127 14:30:28.387919 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-xmpm6_c59a9d01-79ce-42d9-a41d-39d7d73cb03e/ovnkube-controller/2.log" Jan 27 14:30:28 crc kubenswrapper[4698]: I0127 14:30:28.390490 4698 generic.go:334] "Generic (PLEG): container finished" podID="c59a9d01-79ce-42d9-a41d-39d7d73cb03e" containerID="de8c9c2d3589a38d1cf6232c63b2ef2b7e60ba5c9efe357cfa5169dfe0448239" exitCode=1 Jan 27 14:30:28 crc kubenswrapper[4698]: I0127 14:30:28.390544 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xmpm6" event={"ID":"c59a9d01-79ce-42d9-a41d-39d7d73cb03e","Type":"ContainerDied","Data":"de8c9c2d3589a38d1cf6232c63b2ef2b7e60ba5c9efe357cfa5169dfe0448239"} Jan 27 14:30:28 crc kubenswrapper[4698]: I0127 14:30:28.390589 4698 scope.go:117] "RemoveContainer" containerID="0a99e5f8df86301415e9b49b4f197383e67e6a1b00cbf7e4e7f543002a0f1db4" Jan 27 14:30:28 crc kubenswrapper[4698]: I0127 14:30:28.391096 4698 scope.go:117] "RemoveContainer" containerID="de8c9c2d3589a38d1cf6232c63b2ef2b7e60ba5c9efe357cfa5169dfe0448239" Jan 27 14:30:28 crc kubenswrapper[4698]: E0127 14:30:28.391248 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-xmpm6_openshift-ovn-kubernetes(c59a9d01-79ce-42d9-a41d-39d7d73cb03e)\"" pod="openshift-ovn-kubernetes/ovnkube-node-xmpm6" podUID="c59a9d01-79ce-42d9-a41d-39d7d73cb03e" Jan 27 14:30:28 crc kubenswrapper[4698]: I0127 14:30:28.403703 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8dae57de05788973b988fee332168db475b66bb71b4a0570e7d95341dfe84a96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:30:28Z is after 2025-08-24T17:21:41Z" Jan 27 14:30:28 crc kubenswrapper[4698]: I0127 14:30:28.415881 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-flx9b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2c9fcf55-4a50-4a87-937b-975bc7e00bfa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0cb86e0c67991a076cebc33eb390f49c0f4c40332c423936dd8f3c59b9f7474b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-spqpw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:29:31Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-flx9b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:30:28Z is after 2025-08-24T17:21:41Z" Jan 27 14:30:28 crc kubenswrapper[4698]: I0127 14:30:28.427169 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a67da6e9-5dc3-469a-8a1a-a2b287e96281\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b99ac884289d1dcab871d1db10e9992389170de25aeb71d84aaad1348eafd4fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://21e39904f887483a435cadd506be29c1513b2c9dbc144a61549f74f2c93fa6a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8cd43402ec8ba658de9fc9d84d14600829a8ae019aceb606fc2bf781dbe13ddb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c9cbb07f8bc0c4d5171813dd406a52fc3c99985102f65c43e8e04bf8acc21f1b\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c9cbb07f8bc0c4d5171813dd406a52fc3c99985102f65c43e8e04bf8acc21f1b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:29:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:29:06Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:29:05Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:30:28Z is after 2025-08-24T17:21:41Z" Jan 27 14:30:28 crc kubenswrapper[4698]: I0127 14:30:28.436460 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6c951f69-23b8-41c0-8d43-60097686223a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f5f7ffc9337f1ee226951fc2bac9235815704df49855f4ee6c9fe391970df0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cff6cddc7b21de4968fbb9eacccd070d4e8ff2d0514da0a0b878313bb11a5188\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-de
v@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cff6cddc7b21de4968fbb9eacccd070d4e8ff2d0514da0a0b878313bb11a5188\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:29:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:29:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:29:05Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:30:28Z is after 2025-08-24T17:21:41Z" Jan 27 14:30:28 crc kubenswrapper[4698]: I0127 14:30:28.444119 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:28 crc kubenswrapper[4698]: I0127 14:30:28.444160 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:28 crc kubenswrapper[4698]: I0127 14:30:28.444169 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:28 crc kubenswrapper[4698]: I0127 14:30:28.444185 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:28 crc kubenswrapper[4698]: I0127 14:30:28.444195 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:28Z","lastTransitionTime":"2026-01-27T14:30:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:30:28 crc kubenswrapper[4698]: I0127 14:30:28.450852 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae7402df1ba165eb747293c1d23e4ea89c50cf2342bf52db437c11c283f68755\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:30:28Z is after 2025-08-24T17:21:41Z" Jan 27 14:30:28 crc kubenswrapper[4698]: I0127 14:30:28.462427 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:24Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:30:28Z is after 2025-08-24T17:21:41Z" Jan 27 14:30:28 crc kubenswrapper[4698]: I0127 14:30:28.475099 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:24Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:30:28Z is after 2025-08-24T17:21:41Z" Jan 27 14:30:28 crc kubenswrapper[4698]: I0127 14:30:28.491525 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://625a1f8c29f9277903045e66df7a55a4d791239b141aba99572db228ea2735ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://314dcaad0f8949a4671b47667f654bdfacfb6b200438cfbd70401455f6e188b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mount
Path\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:30:28Z is after 2025-08-24T17:21:41Z" Jan 27 14:30:28 crc kubenswrapper[4698]: I0127 14:30:28.506382 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0ce1118e-e5ad-4adb-8d50-c758116b45ec\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dbbe49aa918c711f9ec638a3fb42ef3be38a9ed3bc0cdde639ba8bba11eab4de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9525d1a6372b19bc2f30eb4e8e522833badb6319bac25ecae77b3313acf1b1a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e384a95fdadc32733a7213ba468af64a1a33e7dd08722cbffde3ad0c461d8ba\\\",\\\"image\\\":\\\"quay.io/crcont/opensh
ift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9ee5f5f6bb400359196a28fe605c173c38223c8982a991750c91601aeace150\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e780429dcca871712b55832b0cd0b3e78b7343569cfa71a87bbb2be1bd3f129\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T14:29:24Z\\\",\\\"message\\\":\\\"espace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0127 14:29:09.491666 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 14:29:09.493926 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-900123067/tls.crt::/tmp/serving-cert-900123067/tls.key\\\\\\\"\\\\nI0127 14:29:24.319898 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 14:29:24.322681 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 14:29:24.322759 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 14:29:24.322814 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 14:29:24.322847 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 14:29:24.331581 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 14:29:24.331608 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 14:29:24.331613 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 14:29:24.331618 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 14:29:24.331622 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 14:29:24.331626 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 14:29:24.331630 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 14:29:24.331748 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 14:29:24.333757 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T14:29:08Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a526cfa90f2ba7d074aa6007c909e68f72d5e7a4b91d893c955e7460ff208cfb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:08Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e1a3e5a71b3094750c736335cd70f45977566e58702231351d9d6b195a0cb4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5e1a3e5a71b3094750c736335cd70f45977566e58702231351d9d6b195a0cb4b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:29:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:29:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:29:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:30:28Z is after 2025-08-24T17:21:41Z" Jan 27 14:30:28 crc kubenswrapper[4698]: I0127 14:30:28.519369 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"44c70587-abb1-4e02-ab7e-223d57817925\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b1a209c31294456c43022bb9df5fd230415596fc43ecdb2b6349114989c3ad8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://66b4b527d7836ceed419f9b8a1d851047efc8c052c110ac857316a6a73444333\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://94a4d586611f4ca0864567739d1ed225b7c63ebd6e6be9e5c0e385379de2ad01\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://95357819c206b188c1fd6bb937b2a57b480a9a61e95143de444141b16e4963ea\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:29:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:30:28Z is after 2025-08-24T17:21:41Z" Jan 27 14:30:28 crc kubenswrapper[4698]: I0127 14:30:28.533203 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-g2kkn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4e135f0c-0c36-44f4-afeb-06994affb352\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:30:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:30:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89ea9fe8283890c94741924f7a0d219ad6a55833e836517077a72a10f87427d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d17862aabd67b485023ca018499f7224e0bd91ec0aed6bbce5a187f319e3140\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T14:30:16Z\\\",\\\"message\\\":\\\"2026-01-27T14:29:31+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_953276ac-8835-451e-a4b4-8b6397c0df9b\\\\n2026-01-27T14:29:31+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_953276ac-8835-451e-a4b4-8b6397c0df9b to /host/opt/cni/bin/\\\\n2026-01-27T14:29:31Z [verbose] multus-daemon started\\\\n2026-01-27T14:29:31Z [verbose] Readiness 
Indicator file check\\\\n2026-01-27T14:30:16Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T14:29:29Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:30:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97l66\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:29:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-g2kkn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:30:28Z is after 2025-08-24T17:21:41Z" Jan 27 14:30:28 crc kubenswrapper[4698]: I0127 14:30:28.544716 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zpvcj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"709dfdd7-f928-4f0b-8f5a-c356614219cb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6bec8d64d7f0b7a45b57b810dcf1de846a388902b480230742da0a40da4e67b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pftbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5fb54d7728b2438444c924295722867c673c8b82768ed17b6dd2104d404e198e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pftbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:29:41Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-zpvcj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:30:28Z is after 2025-08-24T17:21:41Z" Jan 27 
14:30:28 crc kubenswrapper[4698]: I0127 14:30:28.546065 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:28 crc kubenswrapper[4698]: I0127 14:30:28.546101 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:28 crc kubenswrapper[4698]: I0127 14:30:28.546110 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:28 crc kubenswrapper[4698]: I0127 14:30:28.546125 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:28 crc kubenswrapper[4698]: I0127 14:30:28.546134 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:28Z","lastTransitionTime":"2026-01-27T14:30:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:30:28 crc kubenswrapper[4698]: I0127 14:30:28.564971 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xmpm6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c59a9d01-79ce-42d9-a41d-39d7d73cb03e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:29Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d81778ae40167de932debf7266f3c793d77edeace2116220c21cf89e15e4cc98\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r7gpg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea6d17f5861064198539f0b84b559f9d28cfa5decaa89a9059469af5a3e12e09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r7gpg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c98fd6e3005c979f95419686ec2143aa282fa56d7306ecaff766759e34b68b2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r7gpg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e38585fe9db52e6e7cc5c3a41e23a0bb761977a3b25fd281d86458f422e3791\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r7gpg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b80c023f92c9854f8c16b6b50c10dac58f968938615375019ba2875d9cd69e5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r7gpg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://87e7499ab247dff50eee2a5e113ed36ff00c467e8e1bcdfee45e8835815d4b81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r7gpg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de8c9c2d3589a38d1cf6232c63b2ef2b7e60ba5c
9efe357cfa5169dfe0448239\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0a99e5f8df86301415e9b49b4f197383e67e6a1b00cbf7e4e7f543002a0f1db4\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T14:29:58Z\\\",\\\"message\\\":\\\"il\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0127 14:29:58.516254 6338 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-diagnostics/network-check-source-55646444c4-trplf\\\\nI0127 14:29:58.516264 6338 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-diagnostics/network-check-source-55646444c4-trplf\\\\nI0127 14:29:58.516271 6338 ovn.go:134] Ensuring zone local for Pod openshift-network-diagnostics/network-check-source-55646444c4-trplf in node crc\\\\nF0127 14:29:58.516287 6338 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:29:58Z is after 2025-08-24T17:21:41Z]\\\\nI0127 14:29:58.516302 6338 obj_retry.go:303\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T14:29:56Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://de8c9c2d3589a38d1cf6232c63b2ef2b7e60ba5c9efe357cfa5169dfe0448239\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T14:30:27Z\\\",\\\"message\\\":\\\"ice.alpha.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168 service.beta.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168] [{config.openshift.io/v1 ClusterVersion version 9101b518-476b-4eea-8fa6-69b0534e5caa 0xc0078ea847 \\\\u003cnil\\\\u003e}] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:https,Protocol:TCP,Port:9443,TargetPort:{1 0 https},NodePort:0,AppProtocol:nil,},},Selector:map[string]string{k8s-app: control-plane-machine-set-operator,},ClusterIP:10.217.4.41,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.4.41],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,TrafficDistribution:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}\\\\nF0127 14:30:27.745340 6796 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start 
default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T14:30:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r7gpg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0356c9c90e9c8e65b9452361aa486ffe95757bd69803077b32fe7ba0223ae1da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r7gpg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatus
es\\\":[{\\\"containerID\\\":\\\"cri-o://b3a80917737e61fdb510dabf332c3bd87fb858d27849c42b77703a8becb66b4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b3a80917737e61fdb510dabf332c3bd87fb858d27849c42b77703a8becb66b4c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:29:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:29:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r7gpg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:29:29Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-xmpm6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:30:28Z is after 2025-08-24T17:21:41Z" Jan 27 14:30:28 crc kubenswrapper[4698]: I0127 14:30:28.575375 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-lpvsw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"621bb20d-2ffa-4e89-b522-d04b4764fcc3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:42Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:42Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gz87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gz87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:29:42Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-lpvsw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:30:28Z is after 2025-08-24T17:21:41Z" Jan 27 14:30:28 crc kubenswrapper[4698]: I0127 14:30:28.586941 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:30:28Z is after 2025-08-24T17:21:41Z" Jan 27 14:30:28 crc kubenswrapper[4698]: I0127 14:30:28.598563 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-g9vj8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2776dfc9-913b-42b0-9cf2-6fea98d83bc9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f471cea0dc934e7c77ac6ead7ec77f8feb7379b50a7bb5255d7100fbea635a3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bk7ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:29:28Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-g9vj8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-01-27T14:30:28Z is after 2025-08-24T17:21:41Z" Jan 27 14:30:28 crc kubenswrapper[4698]: I0127 14:30:28.610422 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-ndrd6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e403fc5-7005-474c-8c75-b7906b481677\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6f9227503b5cf0962f3fb8ece18f54d84ff2cdfe5628ebf0fa0cef7fe5695c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tt2dz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c533539135e25b310541c9f0b12adcf526b4651a2f6272bce12ea9686dd39ac9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tt2dz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:29:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-ndrd6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:30:28Z is after 2025-08-24T17:21:41Z" Jan 27 14:30:28 crc kubenswrapper[4698]: I0127 14:30:28.629144 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-vg6nd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e045926d-2303-47ea-b25d-dc23982427e4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e1bf1729f089f253e8e1259e40ef976b85e433358642b6231f081c494851048\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66snv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c935f517f9ca140d63d1637ef8c01a4a41b94fa72e1086eb6184ae78a6d8da8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c935f517f9ca140d63d1637ef8c01a4a41b94fa72e1086eb6184ae78a6d8da8c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:29:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:29:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66snv\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa061cb18143e07dc72a6b75d78b5959b40a9969030ff21d402863b85eba6f91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa061cb18143e07dc72a6b75d78b5959b40a9969030ff21d402863b85eba6f91\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:29:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:29:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66snv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be40eff60b17b2c010a5e4394fb6843ceb8ee2752781ccbf0030afdd212b81e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be40eff60b17b2c010a5e4394fb6843ceb8ee2752781ccbf0030afdd212b81e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:29:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:29:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66snv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3018d87c9354e8b8d90c59a4a45e982d0b35b89a0b0a9fde703e9452cf5f0024\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3018d87c9354e8b8d90c59a4a45e982d0b35b89a0b0a9fde703e9452cf5f0024\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:29:32Z\\\",\\\"reason\\\":\\\"Completed
\\\",\\\"startedAt\\\":\\\"2026-01-27T14:29:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66snv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de493498bbac49791f6b7409916fbbb62f64bed0def214ea6a06f5182bf7ad3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://de493498bbac49791f6b7409916fbbb62f64bed0def214ea6a06f5182bf7ad3b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:29:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:29:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66snv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://14af7d5dd3526aef22f7e95409ff88c3bbbc29e6dd9dd7c41b9be5bab49bf074\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://14af7d5dd3526aef22f7e95409ff88c3bbbc29e6dd9dd7c41b9be5bab49bf074\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:29:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:29:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66snv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:29:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-vg6nd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:30:28Z is after 
2025-08-24T17:21:41Z" Jan 27 14:30:28 crc kubenswrapper[4698]: I0127 14:30:28.648021 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:28 crc kubenswrapper[4698]: I0127 14:30:28.648060 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:28 crc kubenswrapper[4698]: I0127 14:30:28.648071 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:28 crc kubenswrapper[4698]: I0127 14:30:28.648087 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:28 crc kubenswrapper[4698]: I0127 14:30:28.648100 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:28Z","lastTransitionTime":"2026-01-27T14:30:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:30:28 crc kubenswrapper[4698]: I0127 14:30:28.751011 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:28 crc kubenswrapper[4698]: I0127 14:30:28.751096 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:28 crc kubenswrapper[4698]: I0127 14:30:28.751108 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:28 crc kubenswrapper[4698]: I0127 14:30:28.751126 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:28 crc kubenswrapper[4698]: I0127 14:30:28.751140 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:28Z","lastTransitionTime":"2026-01-27T14:30:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:30:28 crc kubenswrapper[4698]: I0127 14:30:28.788924 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 14:30:28 crc kubenswrapper[4698]: E0127 14:30:28.789051 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 14:31:32.789033869 +0000 UTC m=+148.465811334 (durationBeforeRetry 1m4s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 14:30:28 crc kubenswrapper[4698]: I0127 14:30:28.852921 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:28 crc kubenswrapper[4698]: I0127 14:30:28.852955 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:28 crc kubenswrapper[4698]: I0127 14:30:28.852964 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:28 crc kubenswrapper[4698]: I0127 14:30:28.852976 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:28 crc kubenswrapper[4698]: I0127 14:30:28.852985 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:28Z","lastTransitionTime":"2026-01-27T14:30:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:30:28 crc kubenswrapper[4698]: I0127 14:30:28.889667 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 14:30:28 crc kubenswrapper[4698]: I0127 14:30:28.889705 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 14:30:28 crc kubenswrapper[4698]: I0127 14:30:28.889724 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 14:30:28 crc kubenswrapper[4698]: I0127 14:30:28.889751 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 14:30:28 crc kubenswrapper[4698]: E0127 14:30:28.889859 4698 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object 
"openshift-network-console"/"networking-console-plugin-cert" not registered Jan 27 14:30:28 crc kubenswrapper[4698]: E0127 14:30:28.889861 4698 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 27 14:30:28 crc kubenswrapper[4698]: E0127 14:30:28.889898 4698 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 27 14:30:28 crc kubenswrapper[4698]: E0127 14:30:28.889935 4698 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 27 14:30:28 crc kubenswrapper[4698]: E0127 14:30:28.889919 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-27 14:31:32.8898996 +0000 UTC m=+148.566677065 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 27 14:30:28 crc kubenswrapper[4698]: E0127 14:30:28.889949 4698 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 14:30:28 crc kubenswrapper[4698]: E0127 14:30:28.889915 4698 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 27 14:30:28 crc kubenswrapper[4698]: E0127 14:30:28.890002 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-27 14:31:32.889978923 +0000 UTC m=+148.566756468 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 27 14:30:28 crc kubenswrapper[4698]: E0127 14:30:28.890011 4698 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 27 14:30:28 crc kubenswrapper[4698]: E0127 14:30:28.890021 4698 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 14:30:28 crc kubenswrapper[4698]: E0127 14:30:28.890054 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-27 14:31:32.890013844 +0000 UTC m=+148.566791419 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 14:30:28 crc kubenswrapper[4698]: E0127 14:30:28.890074 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-27 14:31:32.890066715 +0000 UTC m=+148.566844260 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 14:30:28 crc kubenswrapper[4698]: I0127 14:30:28.955742 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:28 crc kubenswrapper[4698]: I0127 14:30:28.955798 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:28 crc kubenswrapper[4698]: I0127 14:30:28.955807 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:28 crc kubenswrapper[4698]: I0127 14:30:28.955826 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:28 crc kubenswrapper[4698]: I0127 14:30:28.955846 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:28Z","lastTransitionTime":"2026-01-27T14:30:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:30:28 crc kubenswrapper[4698]: I0127 14:30:28.991264 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 14:30:28 crc kubenswrapper[4698]: I0127 14:30:28.991274 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 14:30:28 crc kubenswrapper[4698]: E0127 14:30:28.991422 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 14:30:28 crc kubenswrapper[4698]: E0127 14:30:28.991491 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 14:30:28 crc kubenswrapper[4698]: I0127 14:30:28.994415 4698 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-10 08:44:39.039664058 +0000 UTC Jan 27 14:30:29 crc kubenswrapper[4698]: I0127 14:30:29.057959 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:29 crc kubenswrapper[4698]: I0127 14:30:29.058006 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:29 crc kubenswrapper[4698]: I0127 14:30:29.058018 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:29 crc kubenswrapper[4698]: I0127 14:30:29.058034 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:29 crc kubenswrapper[4698]: I0127 14:30:29.058048 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:29Z","lastTransitionTime":"2026-01-27T14:30:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:30:29 crc kubenswrapper[4698]: I0127 14:30:29.160005 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:29 crc kubenswrapper[4698]: I0127 14:30:29.160044 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:29 crc kubenswrapper[4698]: I0127 14:30:29.160056 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:29 crc kubenswrapper[4698]: I0127 14:30:29.160070 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:29 crc kubenswrapper[4698]: I0127 14:30:29.160079 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:29Z","lastTransitionTime":"2026-01-27T14:30:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:30:29 crc kubenswrapper[4698]: I0127 14:30:29.261945 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:29 crc kubenswrapper[4698]: I0127 14:30:29.261986 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:29 crc kubenswrapper[4698]: I0127 14:30:29.261997 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:29 crc kubenswrapper[4698]: I0127 14:30:29.262014 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:29 crc kubenswrapper[4698]: I0127 14:30:29.262026 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:29Z","lastTransitionTime":"2026-01-27T14:30:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:30:29 crc kubenswrapper[4698]: I0127 14:30:29.364183 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:29 crc kubenswrapper[4698]: I0127 14:30:29.364235 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:29 crc kubenswrapper[4698]: I0127 14:30:29.364245 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:29 crc kubenswrapper[4698]: I0127 14:30:29.364262 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:29 crc kubenswrapper[4698]: I0127 14:30:29.364272 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:29Z","lastTransitionTime":"2026-01-27T14:30:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:30:29 crc kubenswrapper[4698]: I0127 14:30:29.395820 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-xmpm6_c59a9d01-79ce-42d9-a41d-39d7d73cb03e/ovnkube-controller/3.log" Jan 27 14:30:29 crc kubenswrapper[4698]: I0127 14:30:29.399091 4698 scope.go:117] "RemoveContainer" containerID="de8c9c2d3589a38d1cf6232c63b2ef2b7e60ba5c9efe357cfa5169dfe0448239" Jan 27 14:30:29 crc kubenswrapper[4698]: E0127 14:30:29.399309 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-xmpm6_openshift-ovn-kubernetes(c59a9d01-79ce-42d9-a41d-39d7d73cb03e)\"" pod="openshift-ovn-kubernetes/ovnkube-node-xmpm6" podUID="c59a9d01-79ce-42d9-a41d-39d7d73cb03e" Jan 27 14:30:29 crc kubenswrapper[4698]: I0127 14:30:29.413592 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-g2kkn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4e135f0c-0c36-44f4-afeb-06994affb352\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:30:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:30:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89ea9fe8283890c94741924f7a0d219ad6a55833e836517077a72a10f87427d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d17862aabd67b485023ca018499f7224e0bd91ec0aed6bbce5a187f319e3140\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T14:30:16Z\\\",\\\"message\\\":\\\"2026-01-27T14:29:31+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_953276ac-8835-451e-a4b4-8b6397c0df9b\\\\n2026-01-27T14:29:31+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_953276ac-8835-451e-a4b4-8b6397c0df9b to /host/opt/cni/bin/\\\\n2026-01-27T14:29:31Z [verbose] multus-daemon started\\\\n2026-01-27T14:29:31Z [verbose] Readiness Indicator file check\\\\n2026-01-27T14:30:16Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T14:29:29Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:30:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97l66\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:29:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-g2kkn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:30:29Z is after 2025-08-24T17:21:41Z" Jan 27 14:30:29 crc kubenswrapper[4698]: I0127 14:30:29.425985 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zpvcj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"709dfdd7-f928-4f0b-8f5a-c356614219cb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6bec8d64d7f0b7a45b57b810dcf1de846a388902b480230742da0a40da4e67b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pftbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5fb54d7728b2438444c924295722867c673c8b82768ed17b6dd2104d404e198e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pftbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:29:41Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-zpvcj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:30:29Z is after 2025-08-24T17:21:41Z" Jan 27 
14:30:29 crc kubenswrapper[4698]: I0127 14:30:29.442299 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0ce1118e-e5ad-4adb-8d50-c758116b45ec\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dbbe49aa918c711f9ec638a3fb42ef3be38a9ed3bc0cdde639ba8bba11eab4de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9525d1a6372b19bc2f30eb4e8e522833badb6319bac25ecae77b3313acf1b1a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e384a95fdadc32733a7213ba468af64a1a33e7dd08722cbffde3ad0c461d8ba\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\
\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9ee5f5f6bb400359196a28fe605c173c38223c8982a991750c91601aeace150\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e780429dcca871712b55832b0cd0b3e78b7343569cfa71a87bbb2be1bd3f129\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T14:29:24Z\\\",\\\"message\\\":\\\"espace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0127 14:29:09.491666 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 14:29:09.493926 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-900123067/tls.crt::/tmp/serving-cert-900123067/tls.key\\\\\\\"\\\\nI0127 14:29:24.319898 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 14:29:24.322681 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 14:29:24.322759 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 14:29:24.322814 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 14:29:24.322847 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 14:29:24.331581 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 14:29:24.331608 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 14:29:24.331613 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 14:29:24.331618 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 14:29:24.331622 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 14:29:24.331626 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 14:29:24.331630 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 14:29:24.331748 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 14:29:24.333757 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T14:29:08Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a526cfa90f2ba7d074aa6007c909e68f72d5e7a4b91d893c955e7460ff208cfb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:08Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e1a3e5a71b3094750c736335cd70f45977566e58702231351d9d6b195a0cb4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5e1a3e5a71b3094750c736335cd70f45977566e58702231351d9d6b195a0cb4b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:29:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:29:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:29:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:30:29Z is after 2025-08-24T17:21:41Z" Jan 27 14:30:29 crc kubenswrapper[4698]: I0127 14:30:29.456453 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"44c70587-abb1-4e02-ab7e-223d57817925\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b1a209c31294456c43022bb9df5fd230415596fc43ecdb2b6349114989c3ad8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://66b4b527d7836ceed419f9b8a1d851047efc8c052c110ac857316a6a73444333\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://94a4d586611f4ca0864567739d1ed225b7c63ebd6e6be9e5c0e385379de2ad01\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://95357819c206b188c1fd6bb937b2a57b480a9a61e95143de444141b16e4963ea\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:29:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:30:29Z is after 2025-08-24T17:21:41Z" Jan 27 14:30:29 crc kubenswrapper[4698]: I0127 14:30:29.466379 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:29 crc kubenswrapper[4698]: I0127 14:30:29.466420 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:29 crc kubenswrapper[4698]: I0127 14:30:29.466432 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:29 crc kubenswrapper[4698]: I0127 14:30:29.466451 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:29 crc kubenswrapper[4698]: I0127 14:30:29.466463 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:29Z","lastTransitionTime":"2026-01-27T14:30:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:30:29 crc kubenswrapper[4698]: I0127 14:30:29.471764 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-ndrd6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e403fc5-7005-474c-8c75-b7906b481677\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6f9227503b5cf0962f3fb8ece18f54d84ff2cdfe5628ebf0fa0cef7fe5695c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tt2dz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c533539135e25b310541c9f0b12adcf526b4651a2f6272bce12ea9686dd39ac9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tt2dz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:29:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-ndrd6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:30:29Z is after 2025-08-24T17:21:41Z" Jan 27 14:30:29 crc kubenswrapper[4698]: I0127 14:30:29.487796 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-vg6nd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e045926d-2303-47ea-b25d-dc23982427e4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e1bf1729f089f253e8e1259e40ef976b85e433358642b6231f081c494851048\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66snv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c935f517f9ca140d63d1637ef8c01a4a41b94fa72e1086eb6184ae78a6d8da8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c935f517f9ca140d63d1637ef8c01a4a41b94fa72e1086eb6184ae78a6d8da8c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:29:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:29:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66snv\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa061cb18143e07dc72a6b75d78b5959b40a9969030ff21d402863b85eba6f91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa061cb18143e07dc72a6b75d78b5959b40a9969030ff21d402863b85eba6f91\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:29:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:29:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66snv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be40eff60b17b2c010a5e4394fb6843ceb8ee2752781ccbf0030afdd212b81e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be40eff60b17b2c010a5e4394fb6843ceb8ee2752781ccbf0030afdd212b81e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:29:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:29:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66snv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3018d87c9354e8b8d90c59a4a45e982d0b35b89a0b0a9fde703e9452cf5f0024\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3018d87c9354e8b8d90c59a4a45e982d0b35b89a0b0a9fde703e9452cf5f0024\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:29:32Z\\\",\\\"reason\\\":\\\"Completed
\\\",\\\"startedAt\\\":\\\"2026-01-27T14:29:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66snv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de493498bbac49791f6b7409916fbbb62f64bed0def214ea6a06f5182bf7ad3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://de493498bbac49791f6b7409916fbbb62f64bed0def214ea6a06f5182bf7ad3b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:29:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:29:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66snv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://14af7d5dd3526aef22f7e95409ff88c3bbbc29e6dd9dd7c41b9be5bab49bf074\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://14af7d5dd3526aef22f7e95409ff88c3bbbc29e6dd9dd7c41b9be5bab49bf074\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:29:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:29:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66snv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:29:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-vg6nd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:30:29Z is after 
2025-08-24T17:21:41Z" Jan 27 14:30:29 crc kubenswrapper[4698]: I0127 14:30:29.509886 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xmpm6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c59a9d01-79ce-42d9-a41d-39d7d73cb03e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:29Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:29Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d81778ae40167de932debf7266f3c793d77edeace2116220c21cf89e15e4cc98\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r7gpg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea6d17f5861064198539f0b84b559f9d28cfa5decaa89a9059469af5a3e12e09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r7gpg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c
98fd6e3005c979f95419686ec2143aa282fa56d7306ecaff766759e34b68b2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r7gpg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e38585fe9db52e6e7cc5c3a41e23a0bb761977a3b25fd281d86458f422e3791\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r7gpg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b80c023f92c9854f8c16b6b50c10dac58f968938615375019ba2875d9cd69e5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r7gpg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://87e7499ab247dff50eee2a5e113ed36ff00c467e8e1bcdfee45e8835815d4b81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r7gpg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de8c9c2d3589a38d1cf6232c63b2ef2b7e60ba5c9efe357cfa5169dfe0448239\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://de8c9c2d3589a38d1cf6232c63b2ef2b7e60ba5c9efe357cfa5169dfe0448239\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T14:30:27Z\\\",\\\"message\\\":\\\"ice.alpha.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168 service.beta.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168] [{config.openshift.io/v1 ClusterVersion version 9101b518-476b-4eea-8fa6-69b0534e5caa 0xc0078ea847 \\\\u003cnil\\\\u003e}] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:https,Protocol:TCP,Port:9443,TargetPort:{1 0 https},NodePort:0,AppProtocol:nil,},},Selector:map[string]string{k8s-app: control-plane-machine-set-operator,},ClusterIP:10.217.4.41,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.4.41],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,TrafficDistribution:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}\\\\nF0127 14:30:27.745340 6796 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T14:30:27Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-xmpm6_openshift-ovn-kubernetes(c59a9d01-79ce-42d9-a41d-39d7d73cb03e)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r7gpg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0356c9c90e9c8e65b9452361aa486ffe95757bd69803077b32fe7ba0223ae1da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r7gpg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3a80917737e61fdb510dabf332c3bd87fb858d27849c42b77703a8becb66b4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b3a80917737e61fdb510dabf332c3bd87fb858d27849c42b77703a8becb66b4c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:29:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:29:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r7gpg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:29:29Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-xmpm6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:30:29Z is after 2025-08-24T17:21:41Z" Jan 27 14:30:29 crc kubenswrapper[4698]: I0127 14:30:29.523447 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-lpvsw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"621bb20d-2ffa-4e89-b522-d04b4764fcc3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:42Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:42Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gz87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gz87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:29:42Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-lpvsw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:30:29Z is after 2025-08-24T17:21:41Z" Jan 27 14:30:29 crc kubenswrapper[4698]: I0127 14:30:29.540624 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:30:29Z is after 2025-08-24T17:21:41Z" Jan 27 14:30:29 crc kubenswrapper[4698]: I0127 14:30:29.551774 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-g9vj8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2776dfc9-913b-42b0-9cf2-6fea98d83bc9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f471cea0dc934e7c77ac6ead7ec77f8feb7379b50a7bb5255d7100fbea635a3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bk7ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:29:28Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-g9vj8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-01-27T14:30:29Z is after 2025-08-24T17:21:41Z" Jan 27 14:30:29 crc kubenswrapper[4698]: I0127 14:30:29.564221 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae7402df1ba165eb747293c1d23e4ea89c50cf2342bf52db437c11c283f68755\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:30:29Z is after 2025-08-24T17:21:41Z" Jan 27 14:30:29 crc kubenswrapper[4698]: I0127 14:30:29.568341 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:29 crc kubenswrapper[4698]: I0127 14:30:29.568386 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:29 crc kubenswrapper[4698]: I0127 14:30:29.568396 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:29 crc kubenswrapper[4698]: I0127 14:30:29.568412 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:29 crc kubenswrapper[4698]: I0127 14:30:29.568424 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:29Z","lastTransitionTime":"2026-01-27T14:30:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:30:29 crc kubenswrapper[4698]: I0127 14:30:29.577226 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:24Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:30:29Z is after 2025-08-24T17:21:41Z" Jan 27 14:30:29 crc kubenswrapper[4698]: I0127 14:30:29.589871 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8dae57de05788973b988fee332168db475b66bb71b4a0570e7d95341dfe84a96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:30:29Z is after 2025-08-24T17:21:41Z" Jan 27 14:30:29 crc kubenswrapper[4698]: I0127 14:30:29.601200 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-flx9b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2c9fcf55-4a50-4a87-937b-975bc7e00bfa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0cb86e0c67991a076cebc33eb390f49c0f4c40332c423936dd8f3c59b9f7474b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-spqpw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:29:31Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-flx9b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:30:29Z is after 2025-08-24T17:21:41Z" Jan 27 14:30:29 crc kubenswrapper[4698]: I0127 14:30:29.616830 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a67da6e9-5dc3-469a-8a1a-a2b287e96281\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b99ac884289d1dcab871d1db10e9992389170de25aeb71d84aaad1348eafd4fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://21e39904f887483a435cadd506be29c1513b2c9dbc144a61549f74f2c93fa6a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8cd43402ec8ba658de9fc9d84d14600829a8ae019aceb606fc2bf781dbe13ddb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c9cbb07f8bc0c4d5171813dd406a52fc3c99985102f65c43e8e04bf8acc21f1b\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c9cbb07f8bc0c4d5171813dd406a52fc3c99985102f65c43e8e04bf8acc21f1b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:29:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:29:06Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:29:05Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:30:29Z is after 2025-08-24T17:21:41Z" Jan 27 14:30:29 crc kubenswrapper[4698]: I0127 14:30:29.627927 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6c951f69-23b8-41c0-8d43-60097686223a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f5f7ffc9337f1ee226951fc2bac9235815704df49855f4ee6c9fe391970df0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cff6cddc7b21de4968fbb9eacccd070d4e8ff2d0514da0a0b878313bb11a5188\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-de
v@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cff6cddc7b21de4968fbb9eacccd070d4e8ff2d0514da0a0b878313bb11a5188\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:29:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:29:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:29:05Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:30:29Z is after 2025-08-24T17:21:41Z" Jan 27 14:30:29 crc kubenswrapper[4698]: I0127 14:30:29.640516 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://625a1f8c29f9277903045e66df7a55a4d791239b141aba99572db228ea2735ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://314dcaad0f8949a4671b47667f654bdfacfb6b200438cfbd70401455f6e188b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T
14:29:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:30:29Z is after 2025-08-24T17:21:41Z" Jan 27 14:30:29 crc kubenswrapper[4698]: I0127 14:30:29.656309 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:24Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:30:29Z is after 2025-08-24T17:21:41Z" Jan 27 14:30:29 crc kubenswrapper[4698]: I0127 14:30:29.671222 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:29 crc kubenswrapper[4698]: I0127 14:30:29.671273 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:29 crc kubenswrapper[4698]: I0127 14:30:29.671284 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:29 crc kubenswrapper[4698]: I0127 14:30:29.671302 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:29 crc kubenswrapper[4698]: I0127 14:30:29.671314 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:29Z","lastTransitionTime":"2026-01-27T14:30:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:30:29 crc kubenswrapper[4698]: I0127 14:30:29.774173 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:29 crc kubenswrapper[4698]: I0127 14:30:29.774225 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:29 crc kubenswrapper[4698]: I0127 14:30:29.774236 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:29 crc kubenswrapper[4698]: I0127 14:30:29.774256 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:29 crc kubenswrapper[4698]: I0127 14:30:29.774268 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:29Z","lastTransitionTime":"2026-01-27T14:30:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:30:29 crc kubenswrapper[4698]: I0127 14:30:29.878963 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:29 crc kubenswrapper[4698]: I0127 14:30:29.879003 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:29 crc kubenswrapper[4698]: I0127 14:30:29.879013 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:29 crc kubenswrapper[4698]: I0127 14:30:29.879029 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:29 crc kubenswrapper[4698]: I0127 14:30:29.879040 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:29Z","lastTransitionTime":"2026-01-27T14:30:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:30:29 crc kubenswrapper[4698]: I0127 14:30:29.982023 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:29 crc kubenswrapper[4698]: I0127 14:30:29.982075 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:29 crc kubenswrapper[4698]: I0127 14:30:29.982086 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:29 crc kubenswrapper[4698]: I0127 14:30:29.982102 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:29 crc kubenswrapper[4698]: I0127 14:30:29.982116 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:29Z","lastTransitionTime":"2026-01-27T14:30:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:30:29 crc kubenswrapper[4698]: I0127 14:30:29.991508 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lpvsw" Jan 27 14:30:29 crc kubenswrapper[4698]: I0127 14:30:29.991536 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 14:30:29 crc kubenswrapper[4698]: E0127 14:30:29.991825 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 14:30:29 crc kubenswrapper[4698]: E0127 14:30:29.991692 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lpvsw" podUID="621bb20d-2ffa-4e89-b522-d04b4764fcc3" Jan 27 14:30:29 crc kubenswrapper[4698]: I0127 14:30:29.994744 4698 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-11 14:22:04.224250681 +0000 UTC Jan 27 14:30:30 crc kubenswrapper[4698]: I0127 14:30:30.084858 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:30 crc kubenswrapper[4698]: I0127 14:30:30.084911 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:30 crc kubenswrapper[4698]: I0127 14:30:30.084923 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:30 crc kubenswrapper[4698]: I0127 14:30:30.084941 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:30 crc kubenswrapper[4698]: I0127 14:30:30.084953 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:30Z","lastTransitionTime":"2026-01-27T14:30:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:30:30 crc kubenswrapper[4698]: I0127 14:30:30.188617 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:30 crc kubenswrapper[4698]: I0127 14:30:30.188681 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:30 crc kubenswrapper[4698]: I0127 14:30:30.188693 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:30 crc kubenswrapper[4698]: I0127 14:30:30.188709 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:30 crc kubenswrapper[4698]: I0127 14:30:30.188720 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:30Z","lastTransitionTime":"2026-01-27T14:30:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:30:30 crc kubenswrapper[4698]: I0127 14:30:30.290975 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:30 crc kubenswrapper[4698]: I0127 14:30:30.291024 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:30 crc kubenswrapper[4698]: I0127 14:30:30.291034 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:30 crc kubenswrapper[4698]: I0127 14:30:30.291050 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:30 crc kubenswrapper[4698]: I0127 14:30:30.291059 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:30Z","lastTransitionTime":"2026-01-27T14:30:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:30:30 crc kubenswrapper[4698]: I0127 14:30:30.393559 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:30 crc kubenswrapper[4698]: I0127 14:30:30.393602 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:30 crc kubenswrapper[4698]: I0127 14:30:30.393617 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:30 crc kubenswrapper[4698]: I0127 14:30:30.393647 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:30 crc kubenswrapper[4698]: I0127 14:30:30.393661 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:30Z","lastTransitionTime":"2026-01-27T14:30:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:30:30 crc kubenswrapper[4698]: I0127 14:30:30.496407 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:30 crc kubenswrapper[4698]: I0127 14:30:30.496452 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:30 crc kubenswrapper[4698]: I0127 14:30:30.496468 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:30 crc kubenswrapper[4698]: I0127 14:30:30.496486 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:30 crc kubenswrapper[4698]: I0127 14:30:30.496498 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:30Z","lastTransitionTime":"2026-01-27T14:30:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:30:30 crc kubenswrapper[4698]: I0127 14:30:30.599289 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:30 crc kubenswrapper[4698]: I0127 14:30:30.599331 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:30 crc kubenswrapper[4698]: I0127 14:30:30.599342 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:30 crc kubenswrapper[4698]: I0127 14:30:30.599357 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:30 crc kubenswrapper[4698]: I0127 14:30:30.599368 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:30Z","lastTransitionTime":"2026-01-27T14:30:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:30:30 crc kubenswrapper[4698]: I0127 14:30:30.702096 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:30 crc kubenswrapper[4698]: I0127 14:30:30.702136 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:30 crc kubenswrapper[4698]: I0127 14:30:30.702148 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:30 crc kubenswrapper[4698]: I0127 14:30:30.702168 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:30 crc kubenswrapper[4698]: I0127 14:30:30.702187 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:30Z","lastTransitionTime":"2026-01-27T14:30:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:30:30 crc kubenswrapper[4698]: I0127 14:30:30.804467 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:30 crc kubenswrapper[4698]: I0127 14:30:30.804527 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:30 crc kubenswrapper[4698]: I0127 14:30:30.804540 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:30 crc kubenswrapper[4698]: I0127 14:30:30.804557 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:30 crc kubenswrapper[4698]: I0127 14:30:30.804568 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:30Z","lastTransitionTime":"2026-01-27T14:30:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:30:30 crc kubenswrapper[4698]: I0127 14:30:30.906370 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:30 crc kubenswrapper[4698]: I0127 14:30:30.906418 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:30 crc kubenswrapper[4698]: I0127 14:30:30.906431 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:30 crc kubenswrapper[4698]: I0127 14:30:30.906448 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:30 crc kubenswrapper[4698]: I0127 14:30:30.906460 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:30Z","lastTransitionTime":"2026-01-27T14:30:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:30:30 crc kubenswrapper[4698]: I0127 14:30:30.991773 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 14:30:30 crc kubenswrapper[4698]: I0127 14:30:30.991816 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 14:30:30 crc kubenswrapper[4698]: E0127 14:30:30.991927 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 14:30:30 crc kubenswrapper[4698]: E0127 14:30:30.992005 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 14:30:30 crc kubenswrapper[4698]: I0127 14:30:30.994914 4698 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-10 21:13:08.553397262 +0000 UTC Jan 27 14:30:31 crc kubenswrapper[4698]: I0127 14:30:31.008037 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:31 crc kubenswrapper[4698]: I0127 14:30:31.008074 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:31 crc kubenswrapper[4698]: I0127 14:30:31.008091 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:31 crc kubenswrapper[4698]: I0127 14:30:31.008105 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:31 crc kubenswrapper[4698]: I0127 14:30:31.008116 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:31Z","lastTransitionTime":"2026-01-27T14:30:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:30:31 crc kubenswrapper[4698]: I0127 14:30:31.110716 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:31 crc kubenswrapper[4698]: I0127 14:30:31.110759 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:31 crc kubenswrapper[4698]: I0127 14:30:31.110770 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:31 crc kubenswrapper[4698]: I0127 14:30:31.110786 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:31 crc kubenswrapper[4698]: I0127 14:30:31.110797 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:31Z","lastTransitionTime":"2026-01-27T14:30:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:30:31 crc kubenswrapper[4698]: I0127 14:30:31.213369 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:31 crc kubenswrapper[4698]: I0127 14:30:31.213418 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:31 crc kubenswrapper[4698]: I0127 14:30:31.213428 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:31 crc kubenswrapper[4698]: I0127 14:30:31.213443 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:31 crc kubenswrapper[4698]: I0127 14:30:31.213454 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:31Z","lastTransitionTime":"2026-01-27T14:30:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:30:31 crc kubenswrapper[4698]: I0127 14:30:31.315100 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:31 crc kubenswrapper[4698]: I0127 14:30:31.315147 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:31 crc kubenswrapper[4698]: I0127 14:30:31.315157 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:31 crc kubenswrapper[4698]: I0127 14:30:31.315173 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:31 crc kubenswrapper[4698]: I0127 14:30:31.315184 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:31Z","lastTransitionTime":"2026-01-27T14:30:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:30:31 crc kubenswrapper[4698]: I0127 14:30:31.417958 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:31 crc kubenswrapper[4698]: I0127 14:30:31.418284 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:31 crc kubenswrapper[4698]: I0127 14:30:31.418453 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:31 crc kubenswrapper[4698]: I0127 14:30:31.418589 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:31 crc kubenswrapper[4698]: I0127 14:30:31.418748 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:31Z","lastTransitionTime":"2026-01-27T14:30:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:30:31 crc kubenswrapper[4698]: I0127 14:30:31.521191 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:31 crc kubenswrapper[4698]: I0127 14:30:31.521236 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:31 crc kubenswrapper[4698]: I0127 14:30:31.521248 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:31 crc kubenswrapper[4698]: I0127 14:30:31.521266 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:31 crc kubenswrapper[4698]: I0127 14:30:31.521277 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:31Z","lastTransitionTime":"2026-01-27T14:30:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:30:31 crc kubenswrapper[4698]: I0127 14:30:31.623890 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:31 crc kubenswrapper[4698]: I0127 14:30:31.623938 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:31 crc kubenswrapper[4698]: I0127 14:30:31.623949 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:31 crc kubenswrapper[4698]: I0127 14:30:31.623967 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:31 crc kubenswrapper[4698]: I0127 14:30:31.623979 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:31Z","lastTransitionTime":"2026-01-27T14:30:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:30:31 crc kubenswrapper[4698]: I0127 14:30:31.727322 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:31 crc kubenswrapper[4698]: I0127 14:30:31.727379 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:31 crc kubenswrapper[4698]: I0127 14:30:31.727389 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:31 crc kubenswrapper[4698]: I0127 14:30:31.727433 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:31 crc kubenswrapper[4698]: I0127 14:30:31.727445 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:31Z","lastTransitionTime":"2026-01-27T14:30:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:30:31 crc kubenswrapper[4698]: I0127 14:30:31.829571 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:31 crc kubenswrapper[4698]: I0127 14:30:31.829627 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:31 crc kubenswrapper[4698]: I0127 14:30:31.829655 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:31 crc kubenswrapper[4698]: I0127 14:30:31.829671 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:31 crc kubenswrapper[4698]: I0127 14:30:31.829680 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:31Z","lastTransitionTime":"2026-01-27T14:30:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:30:31 crc kubenswrapper[4698]: I0127 14:30:31.932246 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:31 crc kubenswrapper[4698]: I0127 14:30:31.932287 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:31 crc kubenswrapper[4698]: I0127 14:30:31.932298 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:31 crc kubenswrapper[4698]: I0127 14:30:31.932313 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:31 crc kubenswrapper[4698]: I0127 14:30:31.932323 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:31Z","lastTransitionTime":"2026-01-27T14:30:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:30:31 crc kubenswrapper[4698]: I0127 14:30:31.991543 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 14:30:31 crc kubenswrapper[4698]: I0127 14:30:31.991543 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lpvsw" Jan 27 14:30:31 crc kubenswrapper[4698]: E0127 14:30:31.991689 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 14:30:31 crc kubenswrapper[4698]: E0127 14:30:31.991850 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lpvsw" podUID="621bb20d-2ffa-4e89-b522-d04b4764fcc3" Jan 27 14:30:31 crc kubenswrapper[4698]: I0127 14:30:31.995698 4698 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-14 05:49:11.211069509 +0000 UTC Jan 27 14:30:32 crc kubenswrapper[4698]: I0127 14:30:32.034884 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:32 crc kubenswrapper[4698]: I0127 14:30:32.034929 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:32 crc kubenswrapper[4698]: I0127 14:30:32.034942 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:32 crc kubenswrapper[4698]: I0127 14:30:32.034958 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:32 crc kubenswrapper[4698]: I0127 14:30:32.034969 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:32Z","lastTransitionTime":"2026-01-27T14:30:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:30:32 crc kubenswrapper[4698]: I0127 14:30:32.138665 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:32 crc kubenswrapper[4698]: I0127 14:30:32.138807 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:32 crc kubenswrapper[4698]: I0127 14:30:32.138828 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:32 crc kubenswrapper[4698]: I0127 14:30:32.138851 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:32 crc kubenswrapper[4698]: I0127 14:30:32.138877 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:32Z","lastTransitionTime":"2026-01-27T14:30:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:30:32 crc kubenswrapper[4698]: I0127 14:30:32.241080 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:32 crc kubenswrapper[4698]: I0127 14:30:32.241157 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:32 crc kubenswrapper[4698]: I0127 14:30:32.241167 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:32 crc kubenswrapper[4698]: I0127 14:30:32.241181 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:32 crc kubenswrapper[4698]: I0127 14:30:32.241192 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:32Z","lastTransitionTime":"2026-01-27T14:30:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:30:32 crc kubenswrapper[4698]: I0127 14:30:32.344396 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:32 crc kubenswrapper[4698]: I0127 14:30:32.344427 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:32 crc kubenswrapper[4698]: I0127 14:30:32.344435 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:32 crc kubenswrapper[4698]: I0127 14:30:32.344457 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:32 crc kubenswrapper[4698]: I0127 14:30:32.344467 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:32Z","lastTransitionTime":"2026-01-27T14:30:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:30:32 crc kubenswrapper[4698]: I0127 14:30:32.446417 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:32 crc kubenswrapper[4698]: I0127 14:30:32.446474 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:32 crc kubenswrapper[4698]: I0127 14:30:32.446485 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:32 crc kubenswrapper[4698]: I0127 14:30:32.446503 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:32 crc kubenswrapper[4698]: I0127 14:30:32.446534 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:32Z","lastTransitionTime":"2026-01-27T14:30:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:30:32 crc kubenswrapper[4698]: I0127 14:30:32.532470 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:32 crc kubenswrapper[4698]: I0127 14:30:32.532515 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:32 crc kubenswrapper[4698]: I0127 14:30:32.532524 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:32 crc kubenswrapper[4698]: I0127 14:30:32.532539 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:32 crc kubenswrapper[4698]: I0127 14:30:32.532552 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:32Z","lastTransitionTime":"2026-01-27T14:30:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:30:32 crc kubenswrapper[4698]: E0127 14:30:32.552254 4698 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T14:30:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T14:30:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T14:30:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T14:30:32Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T14:30:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T14:30:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T14:30:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T14:30:32Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9d78a2be-22ac-47e6-a326-83038cc10e0c\\\",\\\"systemUUID\\\":\\\"3b71cf61-a3fa-4076-a23c-5d695e40fc0d\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:30:32Z is after 2025-08-24T17:21:41Z"
Jan 27 14:30:32 crc kubenswrapper[4698]: I0127 14:30:32.558104 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 14:30:32 crc kubenswrapper[4698]: I0127 14:30:32.558169 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
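The status patch itself is also being rejected: the node.network-node-identity.openshift.io webhook at 127.0.0.1:9743 presents a serving certificate that expired on 2025-08-24, while the node clock reads 2026-01-27, so every retry below fails with the same x509 error. The validity window can be confirmed directly from the node (a sketch, assuming openssl is installed there):

    # Print notBefore/notAfter for the certificate named in the webhook error above
    openssl s_client -connect 127.0.0.1:9743 </dev/null 2>/dev/null | openssl x509 -noout -dates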
event="NodeHasNoDiskPressure" Jan 27 14:30:32 crc kubenswrapper[4698]: I0127 14:30:32.558180 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:32 crc kubenswrapper[4698]: I0127 14:30:32.558196 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:32 crc kubenswrapper[4698]: I0127 14:30:32.558498 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:32Z","lastTransitionTime":"2026-01-27T14:30:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:30:32 crc kubenswrapper[4698]: E0127 14:30:32.573503 4698 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T14:30:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T14:30:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T14:30:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T14:30:32Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T14:30:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T14:30:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T14:30:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T14:30:32Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9d78a2be-22ac-47e6-a326-83038cc10e0c\\\",\\\"systemUUID\\\":\\\"3b71cf61-a3fa-4076-a23c-5d695e40fc0d\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:30:32Z is after 2025-08-24T17:21:41Z"
Jan 27 14:30:32 crc kubenswrapper[4698]: I0127 14:30:32.577578 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 14:30:32 crc kubenswrapper[4698]: I0127 14:30:32.577622 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
event="NodeHasNoDiskPressure" Jan 27 14:30:32 crc kubenswrapper[4698]: I0127 14:30:32.577651 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:32 crc kubenswrapper[4698]: I0127 14:30:32.577668 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:32 crc kubenswrapper[4698]: I0127 14:30:32.577680 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:32Z","lastTransitionTime":"2026-01-27T14:30:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:30:32 crc kubenswrapper[4698]: E0127 14:30:32.592055 4698 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T14:30:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T14:30:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T14:30:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T14:30:32Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T14:30:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T14:30:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T14:30:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T14:30:32Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9d78a2be-22ac-47e6-a326-83038cc10e0c\\\",\\\"systemUUID\\\":\\\"3b71cf61-a3fa-4076-a23c-5d695e40fc0d\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:30:32Z is after 2025-08-24T17:21:41Z"
Jan 27 14:30:32 crc kubenswrapper[4698]: I0127 14:30:32.596990 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 14:30:32 crc kubenswrapper[4698]: I0127 14:30:32.597057 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
event="NodeHasNoDiskPressure" Jan 27 14:30:32 crc kubenswrapper[4698]: I0127 14:30:32.597070 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:32 crc kubenswrapper[4698]: I0127 14:30:32.597091 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:32 crc kubenswrapper[4698]: I0127 14:30:32.597103 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:32Z","lastTransitionTime":"2026-01-27T14:30:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:30:32 crc kubenswrapper[4698]: E0127 14:30:32.611595 4698 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T14:30:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T14:30:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T14:30:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T14:30:32Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T14:30:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T14:30:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T14:30:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T14:30:32Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9d78a2be-22ac-47e6-a326-83038cc10e0c\\\",\\\"systemUUID\\\":\\\"3b71cf61-a3fa-4076-a23c-5d695e40fc0d\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:30:32Z is after 2025-08-24T17:21:41Z" Jan 27 14:30:32 crc kubenswrapper[4698]: I0127 14:30:32.616926 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:32 crc kubenswrapper[4698]: I0127 14:30:32.616964 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 27 14:30:32 crc kubenswrapper[4698]: I0127 14:30:32.616976 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:32 crc kubenswrapper[4698]: I0127 14:30:32.616993 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:32 crc kubenswrapper[4698]: I0127 14:30:32.617007 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:32Z","lastTransitionTime":"2026-01-27T14:30:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:30:32 crc kubenswrapper[4698]: E0127 14:30:32.634920 4698 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T14:30:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T14:30:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T14:30:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T14:30:32Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T14:30:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T14:30:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T14:30:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T14:30:32Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9d78a2be-22ac-47e6-a326-83038cc10e0c\\\",\\\"systemUUID\\\":\\\"3b71cf61-a3fa-4076-a23c-5d695e40fc0d\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:30:32Z is after 2025-08-24T17:21:41Z" Jan 27 14:30:32 crc kubenswrapper[4698]: E0127 14:30:32.635039 4698 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 27 14:30:32 crc kubenswrapper[4698]: I0127 14:30:32.637074 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 27 14:30:32 crc kubenswrapper[4698]: I0127 14:30:32.637107 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:32 crc kubenswrapper[4698]: I0127 14:30:32.637121 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:32 crc kubenswrapper[4698]: I0127 14:30:32.637139 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:32 crc kubenswrapper[4698]: I0127 14:30:32.637151 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:32Z","lastTransitionTime":"2026-01-27T14:30:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:30:32 crc kubenswrapper[4698]: I0127 14:30:32.739878 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:32 crc kubenswrapper[4698]: I0127 14:30:32.739938 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:32 crc kubenswrapper[4698]: I0127 14:30:32.739956 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:32 crc kubenswrapper[4698]: I0127 14:30:32.739975 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:32 crc kubenswrapper[4698]: I0127 14:30:32.739987 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:32Z","lastTransitionTime":"2026-01-27T14:30:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:30:32 crc kubenswrapper[4698]: I0127 14:30:32.842361 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:32 crc kubenswrapper[4698]: I0127 14:30:32.842401 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:32 crc kubenswrapper[4698]: I0127 14:30:32.842410 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:32 crc kubenswrapper[4698]: I0127 14:30:32.842421 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:32 crc kubenswrapper[4698]: I0127 14:30:32.842430 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:32Z","lastTransitionTime":"2026-01-27T14:30:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:30:32 crc kubenswrapper[4698]: I0127 14:30:32.944470 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:32 crc kubenswrapper[4698]: I0127 14:30:32.944503 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:32 crc kubenswrapper[4698]: I0127 14:30:32.944512 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:32 crc kubenswrapper[4698]: I0127 14:30:32.944526 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:32 crc kubenswrapper[4698]: I0127 14:30:32.944535 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:32Z","lastTransitionTime":"2026-01-27T14:30:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:30:32 crc kubenswrapper[4698]: I0127 14:30:32.992277 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 14:30:32 crc kubenswrapper[4698]: I0127 14:30:32.992400 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 14:30:32 crc kubenswrapper[4698]: E0127 14:30:32.992566 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 14:30:32 crc kubenswrapper[4698]: E0127 14:30:32.992678 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 14:30:32 crc kubenswrapper[4698]: I0127 14:30:32.995792 4698 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-23 04:48:13.389998176 +0000 UTC Jan 27 14:30:33 crc kubenswrapper[4698]: I0127 14:30:33.047202 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:33 crc kubenswrapper[4698]: I0127 14:30:33.047257 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:33 crc kubenswrapper[4698]: I0127 14:30:33.047271 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:33 crc kubenswrapper[4698]: I0127 14:30:33.047287 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:33 crc kubenswrapper[4698]: I0127 14:30:33.047299 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:33Z","lastTransitionTime":"2026-01-27T14:30:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:30:33 crc kubenswrapper[4698]: I0127 14:30:33.149456 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:33 crc kubenswrapper[4698]: I0127 14:30:33.149530 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:33 crc kubenswrapper[4698]: I0127 14:30:33.149544 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:33 crc kubenswrapper[4698]: I0127 14:30:33.149560 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:33 crc kubenswrapper[4698]: I0127 14:30:33.149572 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:33Z","lastTransitionTime":"2026-01-27T14:30:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:30:33 crc kubenswrapper[4698]: I0127 14:30:33.252270 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:33 crc kubenswrapper[4698]: I0127 14:30:33.252303 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:33 crc kubenswrapper[4698]: I0127 14:30:33.252311 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:33 crc kubenswrapper[4698]: I0127 14:30:33.252324 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:33 crc kubenswrapper[4698]: I0127 14:30:33.252333 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:33Z","lastTransitionTime":"2026-01-27T14:30:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:30:33 crc kubenswrapper[4698]: I0127 14:30:33.354507 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:33 crc kubenswrapper[4698]: I0127 14:30:33.354562 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:33 crc kubenswrapper[4698]: I0127 14:30:33.354575 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:33 crc kubenswrapper[4698]: I0127 14:30:33.354591 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:33 crc kubenswrapper[4698]: I0127 14:30:33.354602 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:33Z","lastTransitionTime":"2026-01-27T14:30:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:30:33 crc kubenswrapper[4698]: I0127 14:30:33.457697 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:33 crc kubenswrapper[4698]: I0127 14:30:33.457749 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:33 crc kubenswrapper[4698]: I0127 14:30:33.457761 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:33 crc kubenswrapper[4698]: I0127 14:30:33.457777 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:33 crc kubenswrapper[4698]: I0127 14:30:33.457791 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:33Z","lastTransitionTime":"2026-01-27T14:30:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:30:33 crc kubenswrapper[4698]: I0127 14:30:33.559801 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:33 crc kubenswrapper[4698]: I0127 14:30:33.559841 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:33 crc kubenswrapper[4698]: I0127 14:30:33.559850 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:33 crc kubenswrapper[4698]: I0127 14:30:33.559865 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:33 crc kubenswrapper[4698]: I0127 14:30:33.559877 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:33Z","lastTransitionTime":"2026-01-27T14:30:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:30:33 crc kubenswrapper[4698]: I0127 14:30:33.662236 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:33 crc kubenswrapper[4698]: I0127 14:30:33.662275 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:33 crc kubenswrapper[4698]: I0127 14:30:33.662285 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:33 crc kubenswrapper[4698]: I0127 14:30:33.662300 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:33 crc kubenswrapper[4698]: I0127 14:30:33.662311 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:33Z","lastTransitionTime":"2026-01-27T14:30:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:30:33 crc kubenswrapper[4698]: I0127 14:30:33.764043 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:33 crc kubenswrapper[4698]: I0127 14:30:33.764085 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:33 crc kubenswrapper[4698]: I0127 14:30:33.764094 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:33 crc kubenswrapper[4698]: I0127 14:30:33.764112 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:33 crc kubenswrapper[4698]: I0127 14:30:33.764122 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:33Z","lastTransitionTime":"2026-01-27T14:30:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:30:33 crc kubenswrapper[4698]: I0127 14:30:33.867042 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:33 crc kubenswrapper[4698]: I0127 14:30:33.867089 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:33 crc kubenswrapper[4698]: I0127 14:30:33.867100 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:33 crc kubenswrapper[4698]: I0127 14:30:33.867116 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:33 crc kubenswrapper[4698]: I0127 14:30:33.867127 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:33Z","lastTransitionTime":"2026-01-27T14:30:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:30:33 crc kubenswrapper[4698]: I0127 14:30:33.969737 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:33 crc kubenswrapper[4698]: I0127 14:30:33.969771 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:33 crc kubenswrapper[4698]: I0127 14:30:33.969779 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:33 crc kubenswrapper[4698]: I0127 14:30:33.969795 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:33 crc kubenswrapper[4698]: I0127 14:30:33.969804 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:33Z","lastTransitionTime":"2026-01-27T14:30:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:30:33 crc kubenswrapper[4698]: I0127 14:30:33.991913 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lpvsw" Jan 27 14:30:33 crc kubenswrapper[4698]: E0127 14:30:33.992060 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lpvsw" podUID="621bb20d-2ffa-4e89-b522-d04b4764fcc3" Jan 27 14:30:33 crc kubenswrapper[4698]: I0127 14:30:33.992261 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 14:30:33 crc kubenswrapper[4698]: E0127 14:30:33.992328 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 14:30:33 crc kubenswrapper[4698]: I0127 14:30:33.996493 4698 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-15 14:44:14.506755244 +0000 UTC Jan 27 14:30:34 crc kubenswrapper[4698]: I0127 14:30:34.071751 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:34 crc kubenswrapper[4698]: I0127 14:30:34.071811 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:34 crc kubenswrapper[4698]: I0127 14:30:34.071820 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:34 crc kubenswrapper[4698]: I0127 14:30:34.071838 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:34 crc kubenswrapper[4698]: I0127 14:30:34.071857 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:34Z","lastTransitionTime":"2026-01-27T14:30:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:30:34 crc kubenswrapper[4698]: I0127 14:30:34.174379 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:34 crc kubenswrapper[4698]: I0127 14:30:34.174411 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:34 crc kubenswrapper[4698]: I0127 14:30:34.174419 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:34 crc kubenswrapper[4698]: I0127 14:30:34.174431 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:34 crc kubenswrapper[4698]: I0127 14:30:34.174440 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:34Z","lastTransitionTime":"2026-01-27T14:30:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:30:34 crc kubenswrapper[4698]: I0127 14:30:34.276658 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:34 crc kubenswrapper[4698]: I0127 14:30:34.276696 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:34 crc kubenswrapper[4698]: I0127 14:30:34.276706 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:34 crc kubenswrapper[4698]: I0127 14:30:34.276721 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:34 crc kubenswrapper[4698]: I0127 14:30:34.276732 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:34Z","lastTransitionTime":"2026-01-27T14:30:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:30:34 crc kubenswrapper[4698]: I0127 14:30:34.379045 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:34 crc kubenswrapper[4698]: I0127 14:30:34.379340 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:34 crc kubenswrapper[4698]: I0127 14:30:34.379412 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:34 crc kubenswrapper[4698]: I0127 14:30:34.379479 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:34 crc kubenswrapper[4698]: I0127 14:30:34.379535 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:34Z","lastTransitionTime":"2026-01-27T14:30:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:30:34 crc kubenswrapper[4698]: I0127 14:30:34.481995 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:34 crc kubenswrapper[4698]: I0127 14:30:34.482046 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:34 crc kubenswrapper[4698]: I0127 14:30:34.482062 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:34 crc kubenswrapper[4698]: I0127 14:30:34.482080 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:34 crc kubenswrapper[4698]: I0127 14:30:34.482091 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:34Z","lastTransitionTime":"2026-01-27T14:30:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:30:34 crc kubenswrapper[4698]: I0127 14:30:34.584375 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:34 crc kubenswrapper[4698]: I0127 14:30:34.584432 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:34 crc kubenswrapper[4698]: I0127 14:30:34.584444 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:34 crc kubenswrapper[4698]: I0127 14:30:34.584467 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:34 crc kubenswrapper[4698]: I0127 14:30:34.584513 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:34Z","lastTransitionTime":"2026-01-27T14:30:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:30:34 crc kubenswrapper[4698]: I0127 14:30:34.686507 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:34 crc kubenswrapper[4698]: I0127 14:30:34.686552 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:34 crc kubenswrapper[4698]: I0127 14:30:34.686565 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:34 crc kubenswrapper[4698]: I0127 14:30:34.686580 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:34 crc kubenswrapper[4698]: I0127 14:30:34.686592 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:34Z","lastTransitionTime":"2026-01-27T14:30:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:30:34 crc kubenswrapper[4698]: I0127 14:30:34.789587 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:34 crc kubenswrapper[4698]: I0127 14:30:34.789652 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:34 crc kubenswrapper[4698]: I0127 14:30:34.789722 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:34 crc kubenswrapper[4698]: I0127 14:30:34.789785 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:34 crc kubenswrapper[4698]: I0127 14:30:34.789798 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:34Z","lastTransitionTime":"2026-01-27T14:30:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:30:34 crc kubenswrapper[4698]: I0127 14:30:34.891770 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:34 crc kubenswrapper[4698]: I0127 14:30:34.891868 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:34 crc kubenswrapper[4698]: I0127 14:30:34.891878 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:34 crc kubenswrapper[4698]: I0127 14:30:34.891892 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:34 crc kubenswrapper[4698]: I0127 14:30:34.891900 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:34Z","lastTransitionTime":"2026-01-27T14:30:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:30:34 crc kubenswrapper[4698]: I0127 14:30:34.991351 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 14:30:34 crc kubenswrapper[4698]: I0127 14:30:34.991401 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 14:30:34 crc kubenswrapper[4698]: E0127 14:30:34.991601 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 14:30:34 crc kubenswrapper[4698]: E0127 14:30:34.991707 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 14:30:34 crc kubenswrapper[4698]: I0127 14:30:34.994593 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:34 crc kubenswrapper[4698]: I0127 14:30:34.994688 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:34 crc kubenswrapper[4698]: I0127 14:30:34.994709 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:34 crc kubenswrapper[4698]: I0127 14:30:34.994731 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:34 crc kubenswrapper[4698]: I0127 14:30:34.994747 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:34Z","lastTransitionTime":"2026-01-27T14:30:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:30:34 crc kubenswrapper[4698]: I0127 14:30:34.996724 4698 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-04 02:44:04.617742567 +0000 UTC Jan 27 14:30:35 crc kubenswrapper[4698]: I0127 14:30:35.012572 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xmpm6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c59a9d01-79ce-42d9-a41d-39d7d73cb03e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:29Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d81778ae40167de932debf7266f3c793d77edeace2116220c21cf89e15e4cc98\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r7gpg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea6d17f5861064198539f0b84b559f9d28cfa5decaa89a9059469af5a3e12e09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r7gpg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c98fd6e3005c979f95419686ec2143aa282fa56d7306ecaff766759e34b68b2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r7gpg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e38585fe9db52e6e7cc5c3a41e23a0bb761977a3b25fd281d86458f422e3791\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r7gpg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b80c023f92c9854f8c16b6b50c10dac58f968938615375019ba2875d9cd69e5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r7gpg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://87e7499ab247dff50eee2a5e113ed36ff00c467e8e1bcdfee45e8835815d4b81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r7gpg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de8c9c2d3589a38d1cf6232c63b2ef2b7e60ba5c
9efe357cfa5169dfe0448239\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://de8c9c2d3589a38d1cf6232c63b2ef2b7e60ba5c9efe357cfa5169dfe0448239\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T14:30:27Z\\\",\\\"message\\\":\\\"ice.alpha.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168 service.beta.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168] [{config.openshift.io/v1 ClusterVersion version 9101b518-476b-4eea-8fa6-69b0534e5caa 0xc0078ea847 \\\\u003cnil\\\\u003e}] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:https,Protocol:TCP,Port:9443,TargetPort:{1 0 https},NodePort:0,AppProtocol:nil,},},Selector:map[string]string{k8s-app: control-plane-machine-set-operator,},ClusterIP:10.217.4.41,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.4.41],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,TrafficDistribution:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}\\\\nF0127 14:30:27.745340 6796 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T14:30:27Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-xmpm6_openshift-ovn-kubernetes(c59a9d01-79ce-42d9-a41d-39d7d73cb03e)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r7gpg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0356c9c90e9c8e65b9452361aa486ffe95757bd69803077b32fe7ba0223ae1da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r7gpg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3a80917737e61fdb510dabf332c3bd87fb858d27849c42b77703a8becb66b4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b3a80917737e61fdb510dabf332c3bd87fb858d27849c42b77703a8becb66b4c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:29:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:29:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r7gpg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:29:29Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-xmpm6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:30:35Z is after 2025-08-24T17:21:41Z" Jan 27 14:30:35 crc kubenswrapper[4698]: I0127 14:30:35.023944 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-lpvsw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"621bb20d-2ffa-4e89-b522-d04b4764fcc3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:42Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:42Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gz87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gz87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:29:42Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-lpvsw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:30:35Z is after 2025-08-24T17:21:41Z" Jan 27 14:30:35 crc kubenswrapper[4698]: I0127 14:30:35.036080 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:30:35Z is after 2025-08-24T17:21:41Z" Jan 27 14:30:35 crc kubenswrapper[4698]: I0127 14:30:35.045303 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-g9vj8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2776dfc9-913b-42b0-9cf2-6fea98d83bc9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f471cea0dc934e7c77ac6ead7ec77f8feb7379b50a7bb5255d7100fbea635a3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bk7ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:29:28Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-g9vj8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-01-27T14:30:35Z is after 2025-08-24T17:21:41Z" Jan 27 14:30:35 crc kubenswrapper[4698]: I0127 14:30:35.057547 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-ndrd6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e403fc5-7005-474c-8c75-b7906b481677\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6f9227503b5cf0962f3fb8ece18f54d84ff2cdfe5628ebf0fa0cef7fe5695c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tt2dz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c533539135e25b310541c9f0b12adcf526b4651a2f6272bce12ea9686dd39ac9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tt2dz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:29:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-ndrd6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:30:35Z is after 2025-08-24T17:21:41Z" Jan 27 14:30:35 crc kubenswrapper[4698]: I0127 14:30:35.072676 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-vg6nd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e045926d-2303-47ea-b25d-dc23982427e4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e1bf1729f089f253e8e1259e40ef976b85e433358642b6231f081c494851048\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66snv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c935f517f9ca140d63d1637ef8c01a4a41b94fa72e1086eb6184ae78a6d8da8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c935f517f9ca140d63d1637ef8c01a4a41b94fa72e1086eb6184ae78a6d8da8c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:29:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:29:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66snv\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa061cb18143e07dc72a6b75d78b5959b40a9969030ff21d402863b85eba6f91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa061cb18143e07dc72a6b75d78b5959b40a9969030ff21d402863b85eba6f91\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:29:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:29:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66snv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be40eff60b17b2c010a5e4394fb6843ceb8ee2752781ccbf0030afdd212b81e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be40eff60b17b2c010a5e4394fb6843ceb8ee2752781ccbf0030afdd212b81e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:29:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:29:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66snv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3018d87c9354e8b8d90c59a4a45e982d0b35b89a0b0a9fde703e9452cf5f0024\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3018d87c9354e8b8d90c59a4a45e982d0b35b89a0b0a9fde703e9452cf5f0024\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:29:32Z\\\",\\\"reason\\\":\\\"Completed
\\\",\\\"startedAt\\\":\\\"2026-01-27T14:29:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66snv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de493498bbac49791f6b7409916fbbb62f64bed0def214ea6a06f5182bf7ad3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://de493498bbac49791f6b7409916fbbb62f64bed0def214ea6a06f5182bf7ad3b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:29:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:29:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66snv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://14af7d5dd3526aef22f7e95409ff88c3bbbc29e6dd9dd7c41b9be5bab49bf074\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://14af7d5dd3526aef22f7e95409ff88c3bbbc29e6dd9dd7c41b9be5bab49bf074\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:29:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:29:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66snv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:29:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-vg6nd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:30:35Z is after 
2025-08-24T17:21:41Z" Jan 27 14:30:35 crc kubenswrapper[4698]: I0127 14:30:35.086426 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8dae57de05788973b988fee332168db475b66bb71b4a0570e7d95341dfe84a96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:30:35Z is after 2025-08-24T17:21:41Z" Jan 27 14:30:35 crc kubenswrapper[4698]: I0127 14:30:35.095929 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-flx9b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2c9fcf55-4a50-4a87-937b-975bc7e00bfa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0cb86e0c67991a076cebc33eb390f49c0f4c40332c423936dd8f3c59b9f7474b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-spqpw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:29:31Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-flx9b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:30:35Z is after 2025-08-24T17:21:41Z" Jan 27 14:30:35 crc kubenswrapper[4698]: I0127 14:30:35.096846 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:35 crc kubenswrapper[4698]: I0127 14:30:35.096871 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:35 crc kubenswrapper[4698]: I0127 14:30:35.096879 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:35 crc kubenswrapper[4698]: I0127 14:30:35.096891 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:35 crc kubenswrapper[4698]: I0127 14:30:35.096901 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:35Z","lastTransitionTime":"2026-01-27T14:30:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:30:35 crc kubenswrapper[4698]: I0127 14:30:35.107544 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a67da6e9-5dc3-469a-8a1a-a2b287e96281\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b99ac884289d1dcab871d1db10e9992389170de25aeb71d84aaad1348eafd4fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://21e39904f887483a435cadd506be29c1513b2c9dbc144a61549f74f2c93fa6a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8cd43402ec8ba658de9fc9d84d14600829a8ae019aceb606fc2bf781dbe13ddb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kuber
netes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c9cbb07f8bc0c4d5171813dd406a52fc3c99985102f65c43e8e04bf8acc21f1b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c9cbb07f8bc0c4d5171813dd406a52fc3c99985102f65c43e8e04bf8acc21f1b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:29:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:29:06Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:29:05Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:30:35Z is after 2025-08-24T17:21:41Z" Jan 27 14:30:35 crc kubenswrapper[4698]: I0127 14:30:35.117600 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6c951f69-23b8-41c0-8d43-60097686223a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f5f7ffc9337f1ee226951fc2bac9235815704df49855f4ee6c9fe391970df0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cff6cddc7b21de4968fbb9eacccd070d4e8ff2d0514da0a0b878313bb11a5188\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cff6cddc7b21de4968fbb9eacccd070d4e8ff2d0514da0a0b878313bb11a5188\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:29:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:29:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:29:05Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:30:35Z is after 2025-08-24T17:21:41Z" Jan 27 14:30:35 crc kubenswrapper[4698]: I0127 14:30:35.132269 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae7402df1ba165eb747293c1d23e4ea89c50cf2342bf52db437c11c283f68755\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:30:35Z is after 2025-08-24T17:21:41Z" Jan 27 14:30:35 crc kubenswrapper[4698]: I0127 14:30:35.144448 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:24Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:30:35Z is after 2025-08-24T17:21:41Z" Jan 27 14:30:35 crc kubenswrapper[4698]: I0127 14:30:35.157310 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:24Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:30:35Z is after 2025-08-24T17:21:41Z" Jan 27 14:30:35 crc kubenswrapper[4698]: I0127 14:30:35.169427 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://625a1f8c29f9277903045e66df7a55a4d791239b141aba99572db228ea2735ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://314dcaad0f8949a4671b47667f654bdfacfb6b200438cfbd70401455f6e188b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mount
Path\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:30:35Z is after 2025-08-24T17:21:41Z" Jan 27 14:30:35 crc kubenswrapper[4698]: I0127 14:30:35.184243 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0ce1118e-e5ad-4adb-8d50-c758116b45ec\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dbbe49aa918c711f9ec638a3fb42ef3be38a9ed3bc0cdde639ba8bba11eab4de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9525d1a6372b19bc2f30eb4e8e522833badb6319bac25ecae77b3313acf1b1a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e384a95fdadc32733a7213ba468af64a1a33e7dd08722cbffde3ad0c461d8ba\\\",\\\"image\\\":\\\"quay.io/crcont/opensh
ift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9ee5f5f6bb400359196a28fe605c173c38223c8982a991750c91601aeace150\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e780429dcca871712b55832b0cd0b3e78b7343569cfa71a87bbb2be1bd3f129\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T14:29:24Z\\\",\\\"message\\\":\\\"espace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0127 14:29:09.491666 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 14:29:09.493926 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-900123067/tls.crt::/tmp/serving-cert-900123067/tls.key\\\\\\\"\\\\nI0127 14:29:24.319898 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 14:29:24.322681 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 14:29:24.322759 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 14:29:24.322814 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 14:29:24.322847 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 14:29:24.331581 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 14:29:24.331608 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 14:29:24.331613 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 14:29:24.331618 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 14:29:24.331622 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 14:29:24.331626 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 14:29:24.331630 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 14:29:24.331748 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 14:29:24.333757 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T14:29:08Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a526cfa90f2ba7d074aa6007c909e68f72d5e7a4b91d893c955e7460ff208cfb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:08Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e1a3e5a71b3094750c736335cd70f45977566e58702231351d9d6b195a0cb4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5e1a3e5a71b3094750c736335cd70f45977566e58702231351d9d6b195a0cb4b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:29:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:29:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:29:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:30:35Z is after 2025-08-24T17:21:41Z" Jan 27 14:30:35 crc kubenswrapper[4698]: I0127 14:30:35.197533 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"44c70587-abb1-4e02-ab7e-223d57817925\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b1a209c31294456c43022bb9df5fd230415596fc43ecdb2b6349114989c3ad8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://66b4b527d7836ceed419f9b8a1d851047efc8c052c110ac857316a6a73444333\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://94a4d586611f4ca0864567739d1ed225b7c63ebd6e6be9e5c0e385379de2ad01\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://95357819c206b188c1fd6bb937b2a57b480a9a61e95143de444141b16e4963ea\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:29:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:30:35Z is after 2025-08-24T17:21:41Z" Jan 27 14:30:35 crc kubenswrapper[4698]: I0127 14:30:35.199573 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:35 crc kubenswrapper[4698]: I0127 14:30:35.199633 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:35 crc kubenswrapper[4698]: I0127 14:30:35.199677 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:35 crc kubenswrapper[4698]: I0127 14:30:35.199694 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:35 crc kubenswrapper[4698]: I0127 14:30:35.199706 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:35Z","lastTransitionTime":"2026-01-27T14:30:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:30:35 crc kubenswrapper[4698]: I0127 14:30:35.210946 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-g2kkn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4e135f0c-0c36-44f4-afeb-06994affb352\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:30:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:30:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89ea9fe8283890c94741924f7a0d219ad6a55833e836517077a72a10f87427d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d17862aabd67b485023ca018499f7224e0bd91ec0aed6bbce5a187f319e3140\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T14:30:16Z\\\",\\\"message\\\":\\\"2026-01-27T14:29:31+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_953276ac-8835-451e-a4b4-8b6397c0df9b\\\\n2026-01-27T14:29:31+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_953276ac-8835-451e-a4b4-8b6397c0df9b to /host/opt/cni/bin/\\\\n2026-01-27T14:29:31Z [verbose] multus-daemon started\\\\n2026-01-27T14:29:31Z [verbose] Readiness Indicator file check\\\\n2026-01-27T14:30:16Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T14:29:29Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:30:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97l66\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:29:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-g2kkn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:30:35Z is after 2025-08-24T17:21:41Z" Jan 27 14:30:35 crc kubenswrapper[4698]: I0127 14:30:35.223356 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zpvcj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"709dfdd7-f928-4f0b-8f5a-c356614219cb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6bec8d64d7f0b7a45b57b810dcf1de846a388902b480230742da0a40da4e67b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pftbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5fb54d7728b2438444c924295722867c673c8b82768ed17b6dd2104d404e198e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pftbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:29:41Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-zpvcj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:30:35Z is after 2025-08-24T17:21:41Z" Jan 27 
14:30:35 crc kubenswrapper[4698]: I0127 14:30:35.302475 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:35 crc kubenswrapper[4698]: I0127 14:30:35.302522 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:35 crc kubenswrapper[4698]: I0127 14:30:35.302535 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:35 crc kubenswrapper[4698]: I0127 14:30:35.302554 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:35 crc kubenswrapper[4698]: I0127 14:30:35.302566 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:35Z","lastTransitionTime":"2026-01-27T14:30:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:30:35 crc kubenswrapper[4698]: I0127 14:30:35.404241 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:35 crc kubenswrapper[4698]: I0127 14:30:35.404274 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:35 crc kubenswrapper[4698]: I0127 14:30:35.404283 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:35 crc kubenswrapper[4698]: I0127 14:30:35.404297 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:35 crc kubenswrapper[4698]: I0127 14:30:35.404309 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:35Z","lastTransitionTime":"2026-01-27T14:30:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:30:35 crc kubenswrapper[4698]: I0127 14:30:35.506880 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:35 crc kubenswrapper[4698]: I0127 14:30:35.506916 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:35 crc kubenswrapper[4698]: I0127 14:30:35.506925 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:35 crc kubenswrapper[4698]: I0127 14:30:35.506943 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:35 crc kubenswrapper[4698]: I0127 14:30:35.506964 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:35Z","lastTransitionTime":"2026-01-27T14:30:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:30:35 crc kubenswrapper[4698]: I0127 14:30:35.609002 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:35 crc kubenswrapper[4698]: I0127 14:30:35.609054 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:35 crc kubenswrapper[4698]: I0127 14:30:35.609066 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:35 crc kubenswrapper[4698]: I0127 14:30:35.609082 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:35 crc kubenswrapper[4698]: I0127 14:30:35.609093 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:35Z","lastTransitionTime":"2026-01-27T14:30:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:30:35 crc kubenswrapper[4698]: I0127 14:30:35.711224 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:35 crc kubenswrapper[4698]: I0127 14:30:35.711261 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:35 crc kubenswrapper[4698]: I0127 14:30:35.711269 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:35 crc kubenswrapper[4698]: I0127 14:30:35.711287 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:35 crc kubenswrapper[4698]: I0127 14:30:35.711298 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:35Z","lastTransitionTime":"2026-01-27T14:30:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:30:35 crc kubenswrapper[4698]: I0127 14:30:35.813527 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:35 crc kubenswrapper[4698]: I0127 14:30:35.813573 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:35 crc kubenswrapper[4698]: I0127 14:30:35.813583 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:35 crc kubenswrapper[4698]: I0127 14:30:35.813598 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:35 crc kubenswrapper[4698]: I0127 14:30:35.813609 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:35Z","lastTransitionTime":"2026-01-27T14:30:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:30:35 crc kubenswrapper[4698]: I0127 14:30:35.916232 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:35 crc kubenswrapper[4698]: I0127 14:30:35.916293 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:35 crc kubenswrapper[4698]: I0127 14:30:35.916306 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:35 crc kubenswrapper[4698]: I0127 14:30:35.916323 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:35 crc kubenswrapper[4698]: I0127 14:30:35.916336 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:35Z","lastTransitionTime":"2026-01-27T14:30:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:30:35 crc kubenswrapper[4698]: I0127 14:30:35.991744 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 14:30:35 crc kubenswrapper[4698]: I0127 14:30:35.991744 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lpvsw" Jan 27 14:30:35 crc kubenswrapper[4698]: E0127 14:30:35.991967 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lpvsw" podUID="621bb20d-2ffa-4e89-b522-d04b4764fcc3" Jan 27 14:30:35 crc kubenswrapper[4698]: E0127 14:30:35.991865 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 14:30:35 crc kubenswrapper[4698]: I0127 14:30:35.997908 4698 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-02 04:29:36.503606114 +0000 UTC Jan 27 14:30:36 crc kubenswrapper[4698]: I0127 14:30:36.018423 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:36 crc kubenswrapper[4698]: I0127 14:30:36.018486 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:36 crc kubenswrapper[4698]: I0127 14:30:36.018500 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:36 crc kubenswrapper[4698]: I0127 14:30:36.018515 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:36 crc kubenswrapper[4698]: I0127 14:30:36.018525 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:36Z","lastTransitionTime":"2026-01-27T14:30:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:30:36 crc kubenswrapper[4698]: I0127 14:30:36.120977 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:36 crc kubenswrapper[4698]: I0127 14:30:36.121007 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:36 crc kubenswrapper[4698]: I0127 14:30:36.121016 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:36 crc kubenswrapper[4698]: I0127 14:30:36.121028 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:36 crc kubenswrapper[4698]: I0127 14:30:36.121037 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:36Z","lastTransitionTime":"2026-01-27T14:30:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:30:36 crc kubenswrapper[4698]: I0127 14:30:36.224166 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:36 crc kubenswrapper[4698]: I0127 14:30:36.224210 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:36 crc kubenswrapper[4698]: I0127 14:30:36.224221 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:36 crc kubenswrapper[4698]: I0127 14:30:36.224256 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:36 crc kubenswrapper[4698]: I0127 14:30:36.224267 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:36Z","lastTransitionTime":"2026-01-27T14:30:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:30:36 crc kubenswrapper[4698]: I0127 14:30:36.327175 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:36 crc kubenswrapper[4698]: I0127 14:30:36.327214 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:36 crc kubenswrapper[4698]: I0127 14:30:36.327225 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:36 crc kubenswrapper[4698]: I0127 14:30:36.327247 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:36 crc kubenswrapper[4698]: I0127 14:30:36.327257 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:36Z","lastTransitionTime":"2026-01-27T14:30:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:30:36 crc kubenswrapper[4698]: I0127 14:30:36.429897 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:36 crc kubenswrapper[4698]: I0127 14:30:36.429963 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:36 crc kubenswrapper[4698]: I0127 14:30:36.429973 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:36 crc kubenswrapper[4698]: I0127 14:30:36.429987 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:36 crc kubenswrapper[4698]: I0127 14:30:36.429997 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:36Z","lastTransitionTime":"2026-01-27T14:30:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:30:36 crc kubenswrapper[4698]: I0127 14:30:36.532279 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:36 crc kubenswrapper[4698]: I0127 14:30:36.532331 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:36 crc kubenswrapper[4698]: I0127 14:30:36.532344 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:36 crc kubenswrapper[4698]: I0127 14:30:36.532362 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:36 crc kubenswrapper[4698]: I0127 14:30:36.532375 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:36Z","lastTransitionTime":"2026-01-27T14:30:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:30:36 crc kubenswrapper[4698]: I0127 14:30:36.634946 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:36 crc kubenswrapper[4698]: I0127 14:30:36.635021 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:36 crc kubenswrapper[4698]: I0127 14:30:36.635038 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:36 crc kubenswrapper[4698]: I0127 14:30:36.635058 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:36 crc kubenswrapper[4698]: I0127 14:30:36.635111 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:36Z","lastTransitionTime":"2026-01-27T14:30:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:30:36 crc kubenswrapper[4698]: I0127 14:30:36.737913 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:36 crc kubenswrapper[4698]: I0127 14:30:36.737950 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:36 crc kubenswrapper[4698]: I0127 14:30:36.737961 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:36 crc kubenswrapper[4698]: I0127 14:30:36.737977 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:36 crc kubenswrapper[4698]: I0127 14:30:36.737989 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:36Z","lastTransitionTime":"2026-01-27T14:30:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:30:36 crc kubenswrapper[4698]: I0127 14:30:36.840503 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:36 crc kubenswrapper[4698]: I0127 14:30:36.840554 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:36 crc kubenswrapper[4698]: I0127 14:30:36.840565 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:36 crc kubenswrapper[4698]: I0127 14:30:36.840586 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:36 crc kubenswrapper[4698]: I0127 14:30:36.840598 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:36Z","lastTransitionTime":"2026-01-27T14:30:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:30:36 crc kubenswrapper[4698]: I0127 14:30:36.943695 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:36 crc kubenswrapper[4698]: I0127 14:30:36.943740 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:36 crc kubenswrapper[4698]: I0127 14:30:36.943755 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:36 crc kubenswrapper[4698]: I0127 14:30:36.943772 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:36 crc kubenswrapper[4698]: I0127 14:30:36.943784 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:36Z","lastTransitionTime":"2026-01-27T14:30:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:30:36 crc kubenswrapper[4698]: I0127 14:30:36.991998 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 14:30:36 crc kubenswrapper[4698]: I0127 14:30:36.992081 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 14:30:36 crc kubenswrapper[4698]: E0127 14:30:36.992428 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 14:30:36 crc kubenswrapper[4698]: E0127 14:30:36.992485 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 14:30:36 crc kubenswrapper[4698]: I0127 14:30:36.999073 4698 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-06 23:00:41.331000078 +0000 UTC Jan 27 14:30:37 crc kubenswrapper[4698]: I0127 14:30:37.006294 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-crc"] Jan 27 14:30:37 crc kubenswrapper[4698]: I0127 14:30:37.046684 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:37 crc kubenswrapper[4698]: I0127 14:30:37.046735 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:37 crc kubenswrapper[4698]: I0127 14:30:37.046745 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:37 crc kubenswrapper[4698]: I0127 14:30:37.046760 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:37 crc kubenswrapper[4698]: I0127 14:30:37.046770 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:37Z","lastTransitionTime":"2026-01-27T14:30:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:30:37 crc kubenswrapper[4698]: I0127 14:30:37.149364 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:37 crc kubenswrapper[4698]: I0127 14:30:37.149446 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:37 crc kubenswrapper[4698]: I0127 14:30:37.149462 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:37 crc kubenswrapper[4698]: I0127 14:30:37.149480 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:37 crc kubenswrapper[4698]: I0127 14:30:37.149491 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:37Z","lastTransitionTime":"2026-01-27T14:30:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:30:37 crc kubenswrapper[4698]: I0127 14:30:37.251741 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:37 crc kubenswrapper[4698]: I0127 14:30:37.251790 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:37 crc kubenswrapper[4698]: I0127 14:30:37.251825 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:37 crc kubenswrapper[4698]: I0127 14:30:37.251841 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:37 crc kubenswrapper[4698]: I0127 14:30:37.251852 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:37Z","lastTransitionTime":"2026-01-27T14:30:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:30:37 crc kubenswrapper[4698]: I0127 14:30:37.354185 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:37 crc kubenswrapper[4698]: I0127 14:30:37.354276 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:37 crc kubenswrapper[4698]: I0127 14:30:37.354293 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:37 crc kubenswrapper[4698]: I0127 14:30:37.354315 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:37 crc kubenswrapper[4698]: I0127 14:30:37.354334 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:37Z","lastTransitionTime":"2026-01-27T14:30:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:30:37 crc kubenswrapper[4698]: I0127 14:30:37.457761 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:37 crc kubenswrapper[4698]: I0127 14:30:37.457816 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:37 crc kubenswrapper[4698]: I0127 14:30:37.457827 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:37 crc kubenswrapper[4698]: I0127 14:30:37.457846 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:37 crc kubenswrapper[4698]: I0127 14:30:37.457860 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:37Z","lastTransitionTime":"2026-01-27T14:30:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:30:37 crc kubenswrapper[4698]: I0127 14:30:37.561683 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:37 crc kubenswrapper[4698]: I0127 14:30:37.561745 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:37 crc kubenswrapper[4698]: I0127 14:30:37.561763 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:37 crc kubenswrapper[4698]: I0127 14:30:37.561796 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:37 crc kubenswrapper[4698]: I0127 14:30:37.561814 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:37Z","lastTransitionTime":"2026-01-27T14:30:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:30:37 crc kubenswrapper[4698]: I0127 14:30:37.664774 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:37 crc kubenswrapper[4698]: I0127 14:30:37.664836 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:37 crc kubenswrapper[4698]: I0127 14:30:37.664854 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:37 crc kubenswrapper[4698]: I0127 14:30:37.664877 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:37 crc kubenswrapper[4698]: I0127 14:30:37.664894 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:37Z","lastTransitionTime":"2026-01-27T14:30:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:30:37 crc kubenswrapper[4698]: I0127 14:30:37.767698 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:37 crc kubenswrapper[4698]: I0127 14:30:37.767742 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:37 crc kubenswrapper[4698]: I0127 14:30:37.767751 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:37 crc kubenswrapper[4698]: I0127 14:30:37.767764 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:37 crc kubenswrapper[4698]: I0127 14:30:37.767773 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:37Z","lastTransitionTime":"2026-01-27T14:30:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:30:37 crc kubenswrapper[4698]: I0127 14:30:37.869881 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:37 crc kubenswrapper[4698]: I0127 14:30:37.869917 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:37 crc kubenswrapper[4698]: I0127 14:30:37.869926 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:37 crc kubenswrapper[4698]: I0127 14:30:37.869941 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:37 crc kubenswrapper[4698]: I0127 14:30:37.869951 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:37Z","lastTransitionTime":"2026-01-27T14:30:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:30:37 crc kubenswrapper[4698]: I0127 14:30:37.973014 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:37 crc kubenswrapper[4698]: I0127 14:30:37.973052 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:37 crc kubenswrapper[4698]: I0127 14:30:37.973063 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:37 crc kubenswrapper[4698]: I0127 14:30:37.973078 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:37 crc kubenswrapper[4698]: I0127 14:30:37.973091 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:37Z","lastTransitionTime":"2026-01-27T14:30:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:30:37 crc kubenswrapper[4698]: I0127 14:30:37.991854 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lpvsw" Jan 27 14:30:37 crc kubenswrapper[4698]: I0127 14:30:37.991959 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 14:30:37 crc kubenswrapper[4698]: E0127 14:30:37.992044 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-lpvsw" podUID="621bb20d-2ffa-4e89-b522-d04b4764fcc3" Jan 27 14:30:37 crc kubenswrapper[4698]: E0127 14:30:37.992096 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 14:30:37 crc kubenswrapper[4698]: I0127 14:30:37.999931 4698 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-04 08:30:14.611049373 +0000 UTC Jan 27 14:30:38 crc kubenswrapper[4698]: I0127 14:30:38.075885 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:38 crc kubenswrapper[4698]: I0127 14:30:38.075927 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:38 crc kubenswrapper[4698]: I0127 14:30:38.075943 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:38 crc kubenswrapper[4698]: I0127 14:30:38.075963 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:38 crc kubenswrapper[4698]: I0127 14:30:38.075975 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:38Z","lastTransitionTime":"2026-01-27T14:30:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:30:38 crc kubenswrapper[4698]: I0127 14:30:38.178478 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:38 crc kubenswrapper[4698]: I0127 14:30:38.178514 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:38 crc kubenswrapper[4698]: I0127 14:30:38.178522 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:38 crc kubenswrapper[4698]: I0127 14:30:38.178535 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:38 crc kubenswrapper[4698]: I0127 14:30:38.178545 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:38Z","lastTransitionTime":"2026-01-27T14:30:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:30:38 crc kubenswrapper[4698]: I0127 14:30:38.281174 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:38 crc kubenswrapper[4698]: I0127 14:30:38.281234 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:38 crc kubenswrapper[4698]: I0127 14:30:38.281248 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:38 crc kubenswrapper[4698]: I0127 14:30:38.281265 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:38 crc kubenswrapper[4698]: I0127 14:30:38.281278 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:38Z","lastTransitionTime":"2026-01-27T14:30:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:30:38 crc kubenswrapper[4698]: I0127 14:30:38.383540 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:38 crc kubenswrapper[4698]: I0127 14:30:38.383849 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:38 crc kubenswrapper[4698]: I0127 14:30:38.383915 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:38 crc kubenswrapper[4698]: I0127 14:30:38.383990 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:38 crc kubenswrapper[4698]: I0127 14:30:38.384064 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:38Z","lastTransitionTime":"2026-01-27T14:30:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:30:38 crc kubenswrapper[4698]: I0127 14:30:38.486522 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:38 crc kubenswrapper[4698]: I0127 14:30:38.486805 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:38 crc kubenswrapper[4698]: I0127 14:30:38.486870 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:38 crc kubenswrapper[4698]: I0127 14:30:38.486938 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:38 crc kubenswrapper[4698]: I0127 14:30:38.487008 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:38Z","lastTransitionTime":"2026-01-27T14:30:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:30:38 crc kubenswrapper[4698]: I0127 14:30:38.589515 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:38 crc kubenswrapper[4698]: I0127 14:30:38.589554 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:38 crc kubenswrapper[4698]: I0127 14:30:38.589565 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:38 crc kubenswrapper[4698]: I0127 14:30:38.589581 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:38 crc kubenswrapper[4698]: I0127 14:30:38.589592 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:38Z","lastTransitionTime":"2026-01-27T14:30:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:30:38 crc kubenswrapper[4698]: I0127 14:30:38.692336 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:38 crc kubenswrapper[4698]: I0127 14:30:38.692388 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:38 crc kubenswrapper[4698]: I0127 14:30:38.692398 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:38 crc kubenswrapper[4698]: I0127 14:30:38.692413 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:38 crc kubenswrapper[4698]: I0127 14:30:38.692428 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:38Z","lastTransitionTime":"2026-01-27T14:30:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:30:38 crc kubenswrapper[4698]: I0127 14:30:38.795263 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:38 crc kubenswrapper[4698]: I0127 14:30:38.795310 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:38 crc kubenswrapper[4698]: I0127 14:30:38.795322 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:38 crc kubenswrapper[4698]: I0127 14:30:38.795338 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:38 crc kubenswrapper[4698]: I0127 14:30:38.795351 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:38Z","lastTransitionTime":"2026-01-27T14:30:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:30:38 crc kubenswrapper[4698]: I0127 14:30:38.897631 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:38 crc kubenswrapper[4698]: I0127 14:30:38.897699 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:38 crc kubenswrapper[4698]: I0127 14:30:38.897713 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:38 crc kubenswrapper[4698]: I0127 14:30:38.897730 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:38 crc kubenswrapper[4698]: I0127 14:30:38.897745 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:38Z","lastTransitionTime":"2026-01-27T14:30:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:30:38 crc kubenswrapper[4698]: I0127 14:30:38.992048 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 14:30:38 crc kubenswrapper[4698]: I0127 14:30:38.992122 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 14:30:38 crc kubenswrapper[4698]: E0127 14:30:38.992174 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 14:30:38 crc kubenswrapper[4698]: E0127 14:30:38.992258 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 14:30:39 crc kubenswrapper[4698]: I0127 14:30:39.000069 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:39 crc kubenswrapper[4698]: I0127 14:30:39.000114 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:39 crc kubenswrapper[4698]: I0127 14:30:39.000124 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:39 crc kubenswrapper[4698]: I0127 14:30:39.000135 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:39 crc kubenswrapper[4698]: I0127 14:30:39.000145 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:39Z","lastTransitionTime":"2026-01-27T14:30:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:30:39 crc kubenswrapper[4698]: I0127 14:30:39.001112 4698 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-18 00:02:31.470102735 +0000 UTC Jan 27 14:30:39 crc kubenswrapper[4698]: I0127 14:30:39.102673 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:39 crc kubenswrapper[4698]: I0127 14:30:39.102727 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:39 crc kubenswrapper[4698]: I0127 14:30:39.102739 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:39 crc kubenswrapper[4698]: I0127 14:30:39.102770 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:39 crc kubenswrapper[4698]: I0127 14:30:39.102787 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:39Z","lastTransitionTime":"2026-01-27T14:30:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:30:39 crc kubenswrapper[4698]: I0127 14:30:39.205766 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:39 crc kubenswrapper[4698]: I0127 14:30:39.205809 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:39 crc kubenswrapper[4698]: I0127 14:30:39.205820 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:39 crc kubenswrapper[4698]: I0127 14:30:39.205834 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:39 crc kubenswrapper[4698]: I0127 14:30:39.205848 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:39Z","lastTransitionTime":"2026-01-27T14:30:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:30:39 crc kubenswrapper[4698]: I0127 14:30:39.307756 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:39 crc kubenswrapper[4698]: I0127 14:30:39.307798 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:39 crc kubenswrapper[4698]: I0127 14:30:39.307809 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:39 crc kubenswrapper[4698]: I0127 14:30:39.307825 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:39 crc kubenswrapper[4698]: I0127 14:30:39.307836 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:39Z","lastTransitionTime":"2026-01-27T14:30:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:30:39 crc kubenswrapper[4698]: I0127 14:30:39.410804 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:39 crc kubenswrapper[4698]: I0127 14:30:39.411090 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:39 crc kubenswrapper[4698]: I0127 14:30:39.411235 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:39 crc kubenswrapper[4698]: I0127 14:30:39.411357 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:39 crc kubenswrapper[4698]: I0127 14:30:39.411453 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:39Z","lastTransitionTime":"2026-01-27T14:30:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:30:39 crc kubenswrapper[4698]: I0127 14:30:39.514317 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:39 crc kubenswrapper[4698]: I0127 14:30:39.514343 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:39 crc kubenswrapper[4698]: I0127 14:30:39.514352 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:39 crc kubenswrapper[4698]: I0127 14:30:39.514364 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:39 crc kubenswrapper[4698]: I0127 14:30:39.514375 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:39Z","lastTransitionTime":"2026-01-27T14:30:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:30:39 crc kubenswrapper[4698]: I0127 14:30:39.617080 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:39 crc kubenswrapper[4698]: I0127 14:30:39.617382 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:39 crc kubenswrapper[4698]: I0127 14:30:39.617392 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:39 crc kubenswrapper[4698]: I0127 14:30:39.617406 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:39 crc kubenswrapper[4698]: I0127 14:30:39.617416 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:39Z","lastTransitionTime":"2026-01-27T14:30:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:30:39 crc kubenswrapper[4698]: I0127 14:30:39.720347 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:39 crc kubenswrapper[4698]: I0127 14:30:39.720416 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:39 crc kubenswrapper[4698]: I0127 14:30:39.720429 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:39 crc kubenswrapper[4698]: I0127 14:30:39.720445 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:39 crc kubenswrapper[4698]: I0127 14:30:39.720458 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:39Z","lastTransitionTime":"2026-01-27T14:30:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:30:39 crc kubenswrapper[4698]: I0127 14:30:39.822580 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:39 crc kubenswrapper[4698]: I0127 14:30:39.822619 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:39 crc kubenswrapper[4698]: I0127 14:30:39.822628 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:39 crc kubenswrapper[4698]: I0127 14:30:39.822656 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:39 crc kubenswrapper[4698]: I0127 14:30:39.822667 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:39Z","lastTransitionTime":"2026-01-27T14:30:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:30:39 crc kubenswrapper[4698]: I0127 14:30:39.925415 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:39 crc kubenswrapper[4698]: I0127 14:30:39.925481 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:39 crc kubenswrapper[4698]: I0127 14:30:39.925494 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:39 crc kubenswrapper[4698]: I0127 14:30:39.925512 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:39 crc kubenswrapper[4698]: I0127 14:30:39.925524 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:39Z","lastTransitionTime":"2026-01-27T14:30:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:30:39 crc kubenswrapper[4698]: I0127 14:30:39.991188 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 14:30:39 crc kubenswrapper[4698]: I0127 14:30:39.991192 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lpvsw" Jan 27 14:30:39 crc kubenswrapper[4698]: E0127 14:30:39.991466 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 14:30:39 crc kubenswrapper[4698]: E0127 14:30:39.991603 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lpvsw" podUID="621bb20d-2ffa-4e89-b522-d04b4764fcc3" Jan 27 14:30:40 crc kubenswrapper[4698]: I0127 14:30:40.002015 4698 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-14 02:55:01.517536897 +0000 UTC Jan 27 14:30:40 crc kubenswrapper[4698]: I0127 14:30:40.027956 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:40 crc kubenswrapper[4698]: I0127 14:30:40.028013 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:40 crc kubenswrapper[4698]: I0127 14:30:40.028033 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:40 crc kubenswrapper[4698]: I0127 14:30:40.028059 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:40 crc kubenswrapper[4698]: I0127 14:30:40.028075 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:40Z","lastTransitionTime":"2026-01-27T14:30:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:30:40 crc kubenswrapper[4698]: I0127 14:30:40.130085 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:40 crc kubenswrapper[4698]: I0127 14:30:40.130117 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:40 crc kubenswrapper[4698]: I0127 14:30:40.130146 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:40 crc kubenswrapper[4698]: I0127 14:30:40.130160 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:40 crc kubenswrapper[4698]: I0127 14:30:40.130169 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:40Z","lastTransitionTime":"2026-01-27T14:30:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:30:40 crc kubenswrapper[4698]: I0127 14:30:40.232609 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:40 crc kubenswrapper[4698]: I0127 14:30:40.232712 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:40 crc kubenswrapper[4698]: I0127 14:30:40.232725 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:40 crc kubenswrapper[4698]: I0127 14:30:40.232745 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:40 crc kubenswrapper[4698]: I0127 14:30:40.232757 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:40Z","lastTransitionTime":"2026-01-27T14:30:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:30:40 crc kubenswrapper[4698]: I0127 14:30:40.334885 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:40 crc kubenswrapper[4698]: I0127 14:30:40.334931 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:40 crc kubenswrapper[4698]: I0127 14:30:40.334944 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:40 crc kubenswrapper[4698]: I0127 14:30:40.334960 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:40 crc kubenswrapper[4698]: I0127 14:30:40.334974 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:40Z","lastTransitionTime":"2026-01-27T14:30:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:30:40 crc kubenswrapper[4698]: I0127 14:30:40.437791 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:40 crc kubenswrapper[4698]: I0127 14:30:40.437825 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:40 crc kubenswrapper[4698]: I0127 14:30:40.437853 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:40 crc kubenswrapper[4698]: I0127 14:30:40.437867 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:40 crc kubenswrapper[4698]: I0127 14:30:40.437875 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:40Z","lastTransitionTime":"2026-01-27T14:30:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:30:40 crc kubenswrapper[4698]: I0127 14:30:40.540343 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:40 crc kubenswrapper[4698]: I0127 14:30:40.540377 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:40 crc kubenswrapper[4698]: I0127 14:30:40.540388 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:40 crc kubenswrapper[4698]: I0127 14:30:40.540400 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:40 crc kubenswrapper[4698]: I0127 14:30:40.540409 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:40Z","lastTransitionTime":"2026-01-27T14:30:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:30:40 crc kubenswrapper[4698]: I0127 14:30:40.642891 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:40 crc kubenswrapper[4698]: I0127 14:30:40.642955 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:40 crc kubenswrapper[4698]: I0127 14:30:40.642966 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:40 crc kubenswrapper[4698]: I0127 14:30:40.642980 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:40 crc kubenswrapper[4698]: I0127 14:30:40.642992 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:40Z","lastTransitionTime":"2026-01-27T14:30:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:30:40 crc kubenswrapper[4698]: I0127 14:30:40.745527 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:40 crc kubenswrapper[4698]: I0127 14:30:40.745562 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:40 crc kubenswrapper[4698]: I0127 14:30:40.745571 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:40 crc kubenswrapper[4698]: I0127 14:30:40.745585 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:40 crc kubenswrapper[4698]: I0127 14:30:40.745595 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:40Z","lastTransitionTime":"2026-01-27T14:30:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:30:40 crc kubenswrapper[4698]: I0127 14:30:40.848038 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:40 crc kubenswrapper[4698]: I0127 14:30:40.848077 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:40 crc kubenswrapper[4698]: I0127 14:30:40.848098 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:40 crc kubenswrapper[4698]: I0127 14:30:40.848114 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:40 crc kubenswrapper[4698]: I0127 14:30:40.848124 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:40Z","lastTransitionTime":"2026-01-27T14:30:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:30:40 crc kubenswrapper[4698]: I0127 14:30:40.950515 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:40 crc kubenswrapper[4698]: I0127 14:30:40.950578 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:40 crc kubenswrapper[4698]: I0127 14:30:40.950595 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:40 crc kubenswrapper[4698]: I0127 14:30:40.950618 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:40 crc kubenswrapper[4698]: I0127 14:30:40.950725 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:40Z","lastTransitionTime":"2026-01-27T14:30:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:30:40 crc kubenswrapper[4698]: I0127 14:30:40.991705 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 14:30:40 crc kubenswrapper[4698]: I0127 14:30:40.991759 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 14:30:40 crc kubenswrapper[4698]: E0127 14:30:40.991843 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 14:30:40 crc kubenswrapper[4698]: E0127 14:30:40.991957 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 14:30:41 crc kubenswrapper[4698]: I0127 14:30:41.002777 4698 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-16 21:14:19.175949594 +0000 UTC Jan 27 14:30:41 crc kubenswrapper[4698]: I0127 14:30:41.053266 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:41 crc kubenswrapper[4698]: I0127 14:30:41.053302 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:41 crc kubenswrapper[4698]: I0127 14:30:41.053310 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:41 crc kubenswrapper[4698]: I0127 14:30:41.053323 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:41 crc kubenswrapper[4698]: I0127 14:30:41.053332 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:41Z","lastTransitionTime":"2026-01-27T14:30:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:30:41 crc kubenswrapper[4698]: I0127 14:30:41.156103 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:41 crc kubenswrapper[4698]: I0127 14:30:41.156149 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:41 crc kubenswrapper[4698]: I0127 14:30:41.156167 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:41 crc kubenswrapper[4698]: I0127 14:30:41.156189 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:41 crc kubenswrapper[4698]: I0127 14:30:41.156208 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:41Z","lastTransitionTime":"2026-01-27T14:30:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:30:41 crc kubenswrapper[4698]: I0127 14:30:41.258793 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:41 crc kubenswrapper[4698]: I0127 14:30:41.259069 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:41 crc kubenswrapper[4698]: I0127 14:30:41.259149 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:41 crc kubenswrapper[4698]: I0127 14:30:41.259170 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:41 crc kubenswrapper[4698]: I0127 14:30:41.259181 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:41Z","lastTransitionTime":"2026-01-27T14:30:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:30:41 crc kubenswrapper[4698]: I0127 14:30:41.360972 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:41 crc kubenswrapper[4698]: I0127 14:30:41.361013 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:41 crc kubenswrapper[4698]: I0127 14:30:41.361027 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:41 crc kubenswrapper[4698]: I0127 14:30:41.361045 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:41 crc kubenswrapper[4698]: I0127 14:30:41.361056 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:41Z","lastTransitionTime":"2026-01-27T14:30:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:30:41 crc kubenswrapper[4698]: I0127 14:30:41.464256 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:41 crc kubenswrapper[4698]: I0127 14:30:41.464517 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:41 crc kubenswrapper[4698]: I0127 14:30:41.464678 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:41 crc kubenswrapper[4698]: I0127 14:30:41.464826 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:41 crc kubenswrapper[4698]: I0127 14:30:41.464910 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:41Z","lastTransitionTime":"2026-01-27T14:30:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 27 14:30:41 crc kubenswrapper[4698]: I0127 14:30:41.567837 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 14:30:41 crc kubenswrapper[4698]: I0127 14:30:41.567877 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 14:30:41 crc kubenswrapper[4698]: I0127 14:30:41.567886 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 14:30:41 crc kubenswrapper[4698]: I0127 14:30:41.567901 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 14:30:41 crc kubenswrapper[4698]: I0127 14:30:41.567910 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:41Z","lastTransitionTime":"2026-01-27T14:30:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 14:30:41 crc kubenswrapper[4698]: I0127 14:30:41.991280 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lpvsw"
Jan 27 14:30:41 crc kubenswrapper[4698]: I0127 14:30:41.991299 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 27 14:30:41 crc kubenswrapper[4698]: E0127 14:30:41.991439 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lpvsw" podUID="621bb20d-2ffa-4e89-b522-d04b4764fcc3"
Jan 27 14:30:41 crc kubenswrapper[4698]: E0127 14:30:41.991940 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 27 14:30:41 crc kubenswrapper[4698]: I0127 14:30:41.992112 4698 scope.go:117] "RemoveContainer" containerID="de8c9c2d3589a38d1cf6232c63b2ef2b7e60ba5c9efe357cfa5169dfe0448239"
Jan 27 14:30:41 crc kubenswrapper[4698]: E0127 14:30:41.992246 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-xmpm6_openshift-ovn-kubernetes(c59a9d01-79ce-42d9-a41d-39d7d73cb03e)\"" pod="openshift-ovn-kubernetes/ovnkube-node-xmpm6" podUID="c59a9d01-79ce-42d9-a41d-39d7d73cb03e"
Jan 27 14:30:42 crc kubenswrapper[4698]: I0127 14:30:42.003051 4698 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-11 01:02:13.957166292 +0000 UTC
Jan 27 14:30:42 crc kubenswrapper[4698]: I0127 14:30:42.084813 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 14:30:42 crc kubenswrapper[4698]: I0127 14:30:42.084865 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 14:30:42 crc kubenswrapper[4698]: I0127 14:30:42.084880 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 14:30:42 crc kubenswrapper[4698]: I0127 14:30:42.084898 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 14:30:42 crc kubenswrapper[4698]: I0127 14:30:42.084909 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:42Z","lastTransitionTime":"2026-01-27T14:30:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
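The certificate_manager.go entries in this section report the same kubelet-serving certificate (expiring 2026-02-24 05:53:03 UTC) with two different rotation deadlines, 2025-12-11 above and 2026-01-05 below. That is expected: client-go's certificate manager re-derives the deadline with a random jitter on each evaluation, placing it somewhere in the tail of the certificate's validity window. A sketch of that computation; the 0.7 + 0.2*rand fraction follows upstream client-go's jitteryDuration as I understand it, and the one-year validity is an assumption, since the log shows only the expiration time:

// rotation.go: why the logged "rotation deadline" moves between evaluations.
package main

import (
	"fmt"
	"math/rand"
	"time"
)

// rotationDeadline picks a random point in roughly the last 30% of the
// certificate's validity, in the spirit of client-go's jitteryDuration.
func rotationDeadline(notBefore, notAfter time.Time) time.Time {
	total := float64(notAfter.Sub(notBefore))
	jitter := 0.7 + 0.2*rand.Float64()
	return notBefore.Add(time.Duration(total * jitter))
}

func main() {
	// Expiration copied from the log; the one-year validity is assumed.
	notAfter, err := time.Parse("2006-01-02 15:04:05 -0700 MST", "2026-02-24 05:53:03 +0000 UTC")
	if err != nil {
		panic(err)
	}
	notBefore := notAfter.AddDate(-1, 0, 0)
	for i := 0; i < 3; i++ {
		// A different deadline on each draw, like the two values in the log.
		fmt.Println("rotation deadline:", rotationDeadline(notBefore, notAfter).UTC())
	}
}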
Jan 27 14:30:42 crc kubenswrapper[4698]: I0127 14:30:42.991591 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 27 14:30:42 crc kubenswrapper[4698]: I0127 14:30:42.991701 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 27 14:30:42 crc kubenswrapper[4698]: E0127 14:30:42.991756 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 27 14:30:42 crc kubenswrapper[4698]: E0127 14:30:42.991834 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 27 14:30:43 crc kubenswrapper[4698]: I0127 14:30:43.003246 4698 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-05 09:17:37.215403224 +0000 UTC
Jan 27 14:30:43 crc kubenswrapper[4698]: I0127 14:30:43.025960 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 14:30:43 crc kubenswrapper[4698]: I0127 14:30:43.026002 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 14:30:43 crc kubenswrapper[4698]: I0127 14:30:43.026011 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 14:30:43 crc kubenswrapper[4698]: I0127 14:30:43.026026 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 14:30:43 crc kubenswrapper[4698]: I0127 14:30:43.026035 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:43Z","lastTransitionTime":"2026-01-27T14:30:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:30:43 crc kubenswrapper[4698]: E0127 14:30:43.043524 4698 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T14:30:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T14:30:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T14:30:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T14:30:43Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T14:30:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T14:30:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T14:30:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T14:30:43Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9d78a2be-22ac-47e6-a326-83038cc10e0c\\\",\\\"systemUUID\\\":\\\"3b71cf61-a3fa-4076-a23c-5d695e40fc0d\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:30:43Z is after 2025-08-24T17:21:41Z" Jan 27 14:30:43 crc kubenswrapper[4698]: I0127 14:30:43.048500 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:43 crc kubenswrapper[4698]: I0127 14:30:43.048592 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 27 14:30:43 crc kubenswrapper[4698]: I0127 14:30:43.048622 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:43 crc kubenswrapper[4698]: I0127 14:30:43.049282 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:43 crc kubenswrapper[4698]: I0127 14:30:43.049310 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:43Z","lastTransitionTime":"2026-01-27T14:30:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:30:43 crc kubenswrapper[4698]: E0127 14:30:43.062774 4698 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T14:30:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T14:30:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T14:30:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T14:30:43Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T14:30:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T14:30:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T14:30:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T14:30:43Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9d78a2be-22ac-47e6-a326-83038cc10e0c\\\",\\\"systemUUID\\\":\\\"3b71cf61-a3fa-4076-a23c-5d695e40fc0d\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:30:43Z is after 2025-08-24T17:21:41Z" Jan 27 14:30:43 crc kubenswrapper[4698]: I0127 14:30:43.066470 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:43 crc kubenswrapper[4698]: I0127 14:30:43.066527 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
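The status patch above is the root cause worth pulling out of the noise: the payload itself is well formed, but the API server must consult the node.network-node-identity.openshift.io validating webhook at https://127.0.0.1:9743, and that webhook serves a certificate whose validity ended 2025-08-24T17:21:41Z while the node clock reads 2026-01-27. The TLS handshake therefore fails and the PATCH is rejected with an Internal error. A standalone Go sketch of the same validity-window check crypto/x509 performs; the PEM file path is a placeholder argument, not something taken from this log:

// certwindow.go: reproduce the check behind "certificate has expired or is
// not yet valid". crypto/x509 rejects a chain when the current time falls
// outside [NotBefore, NotAfter].
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func main() {
	raw, err := os.ReadFile(os.Args[1]) // path to a PEM-encoded certificate
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	block, _ := pem.Decode(raw)
	if block == nil {
		fmt.Fprintln(os.Stderr, "no PEM block found")
		os.Exit(1)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	now := time.Now()
	switch {
	case now.Before(cert.NotBefore):
		fmt.Printf("not yet valid: current time %s is before %s\n",
			now.UTC().Format(time.RFC3339), cert.NotBefore.UTC().Format(time.RFC3339))
	case now.After(cert.NotAfter):
		// The branch this log is hitting: 2026-01-27T14:30:43Z is after the
		// webhook certificate's NotAfter of 2025-08-24T17:21:41Z.
		fmt.Printf("expired: current time %s is after %s\n",
			now.UTC().Format(time.RFC3339), cert.NotAfter.UTC().Format(time.RFC3339))
	default:
		fmt.Println("certificate is within its validity window")
	}
}

On a CRC/OpenShift Local instance this pattern typically means the cluster was resumed long after its internal certificates lapsed, so every admission-gated write fails until those certificates are rotated.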
event="NodeHasNoDiskPressure" Jan 27 14:30:43 crc kubenswrapper[4698]: I0127 14:30:43.066544 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:43 crc kubenswrapper[4698]: I0127 14:30:43.066560 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:43 crc kubenswrapper[4698]: I0127 14:30:43.066572 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:43Z","lastTransitionTime":"2026-01-27T14:30:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:30:43 crc kubenswrapper[4698]: E0127 14:30:43.080240 4698 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T14:30:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T14:30:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T14:30:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T14:30:43Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T14:30:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T14:30:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T14:30:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T14:30:43Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9d78a2be-22ac-47e6-a326-83038cc10e0c\\\",\\\"systemUUID\\\":\\\"3b71cf61-a3fa-4076-a23c-5d695e40fc0d\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:30:43Z is after 2025-08-24T17:21:41Z" Jan 27 14:30:43 crc kubenswrapper[4698]: I0127 14:30:43.084154 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:43 crc kubenswrapper[4698]: I0127 14:30:43.084202 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
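Each failed PATCH is followed by another attempt within the same sync pass, which is why the same payload recurs with only the microsecond timestamp of the log prefix changing. A sketch of that control flow; the retry count of 5 is meant to mirror the kubelet's nodeStatusUpdateRetry constant and should be treated as an assumption here:

// retry.go: shape of the retry loop behind the repeated
// "Error updating node status, will retry" records.
package main

import (
	"errors"
	"fmt"
)

const nodeStatusUpdateRetry = 5 // assumed; mirrors the upstream kubelet constant

func patchNodeStatus() error {
	// Stand-in for the real PATCH; here it always fails the way this log
	// shows: the admission webhook's TLS certificate is expired.
	return errors.New(`failed calling webhook "node.network-node-identity.openshift.io": certificate has expired`)
}

func main() {
	for i := 0; i < nodeStatusUpdateRetry; i++ {
		if err := patchNodeStatus(); err != nil {
			fmt.Println("Error updating node status, will retry:", err)
			continue
		}
		return
	}
	fmt.Println("Unable to update node status: giving up until the next sync")
}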
event="NodeHasNoDiskPressure" Jan 27 14:30:43 crc kubenswrapper[4698]: I0127 14:30:43.084213 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:43 crc kubenswrapper[4698]: I0127 14:30:43.084231 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:43 crc kubenswrapper[4698]: I0127 14:30:43.084241 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:43Z","lastTransitionTime":"2026-01-27T14:30:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:30:43 crc kubenswrapper[4698]: E0127 14:30:43.095685 4698 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T14:30:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T14:30:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T14:30:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T14:30:43Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T14:30:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T14:30:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T14:30:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T14:30:43Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9d78a2be-22ac-47e6-a326-83038cc10e0c\\\",\\\"systemUUID\\\":\\\"3b71cf61-a3fa-4076-a23c-5d695e40fc0d\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:30:43Z is after 2025-08-24T17:21:41Z" Jan 27 14:30:43 crc kubenswrapper[4698]: I0127 14:30:43.099873 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:43 crc kubenswrapper[4698]: I0127 14:30:43.099910 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 27 14:30:43 crc kubenswrapper[4698]: I0127 14:30:43.099923 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:43 crc kubenswrapper[4698]: I0127 14:30:43.099941 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:43 crc kubenswrapper[4698]: I0127 14:30:43.099956 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:43Z","lastTransitionTime":"2026-01-27T14:30:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:30:43 crc kubenswrapper[4698]: E0127 14:30:43.111598 4698 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T14:30:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T14:30:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T14:30:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T14:30:43Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T14:30:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T14:30:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T14:30:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T14:30:43Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9d78a2be-22ac-47e6-a326-83038cc10e0c\\\",\\\"systemUUID\\\":\\\"3b71cf61-a3fa-4076-a23c-5d695e40fc0d\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:30:43Z is after 2025-08-24T17:21:41Z" Jan 27 14:30:43 crc kubenswrapper[4698]: E0127 14:30:43.111758 4698 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 27 14:30:43 crc kubenswrapper[4698]: I0127 14:30:43.113277 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 27 14:30:43 crc kubenswrapper[4698]: I0127 14:30:43.113316 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:43 crc kubenswrapper[4698]: I0127 14:30:43.113330 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:43 crc kubenswrapper[4698]: I0127 14:30:43.113347 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:43 crc kubenswrapper[4698]: I0127 14:30:43.113359 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:43Z","lastTransitionTime":"2026-01-27T14:30:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:30:43 crc kubenswrapper[4698]: I0127 14:30:43.215487 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:43 crc kubenswrapper[4698]: I0127 14:30:43.215531 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:43 crc kubenswrapper[4698]: I0127 14:30:43.215542 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:43 crc kubenswrapper[4698]: I0127 14:30:43.215558 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:43 crc kubenswrapper[4698]: I0127 14:30:43.215568 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:43Z","lastTransitionTime":"2026-01-27T14:30:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:30:43 crc kubenswrapper[4698]: I0127 14:30:43.317451 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:43 crc kubenswrapper[4698]: I0127 14:30:43.317510 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:43 crc kubenswrapper[4698]: I0127 14:30:43.317520 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:43 crc kubenswrapper[4698]: I0127 14:30:43.317536 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:43 crc kubenswrapper[4698]: I0127 14:30:43.317547 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:43Z","lastTransitionTime":"2026-01-27T14:30:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:30:43 crc kubenswrapper[4698]: I0127 14:30:43.420721 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:43 crc kubenswrapper[4698]: I0127 14:30:43.420772 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:43 crc kubenswrapper[4698]: I0127 14:30:43.420782 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:43 crc kubenswrapper[4698]: I0127 14:30:43.420810 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:43 crc kubenswrapper[4698]: I0127 14:30:43.420821 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:43Z","lastTransitionTime":"2026-01-27T14:30:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:30:43 crc kubenswrapper[4698]: I0127 14:30:43.524268 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:43 crc kubenswrapper[4698]: I0127 14:30:43.524308 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:43 crc kubenswrapper[4698]: I0127 14:30:43.524316 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:43 crc kubenswrapper[4698]: I0127 14:30:43.524348 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:43 crc kubenswrapper[4698]: I0127 14:30:43.524358 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:43Z","lastTransitionTime":"2026-01-27T14:30:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:30:43 crc kubenswrapper[4698]: I0127 14:30:43.626941 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:43 crc kubenswrapper[4698]: I0127 14:30:43.626973 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:43 crc kubenswrapper[4698]: I0127 14:30:43.626981 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:43 crc kubenswrapper[4698]: I0127 14:30:43.626993 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:43 crc kubenswrapper[4698]: I0127 14:30:43.627002 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:43Z","lastTransitionTime":"2026-01-27T14:30:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
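Every node-status patch in this excerpt is rejected for the same reason: the node.network-node-identity.openshift.io webhook at https://127.0.0.1:9743 presents a serving certificate that expired on 2025-08-24T17:21:41Z, long before the node's current clock of 2026-01-27. A minimal diagnostic sketch that dials the endpoint and prints the peer certificate's validity window; skipping verification so the expired certificate can be inspected is a choice of this probe, not something the kubelet does:

```go
// Minimal sketch (not part of the log): probe the serving certificate of the
// webhook endpoint named in the "failed calling webhook" errors above.
package main

import (
	"crypto/tls"
	"fmt"
	"time"
)

func main() {
	conn, err := tls.Dial("tcp", "127.0.0.1:9743", &tls.Config{
		InsecureSkipVerify: true, // accept the expired cert so it can be read
	})
	if err != nil {
		fmt.Println("dial failed:", err)
		return
	}
	defer conn.Close()

	now := time.Now()
	for _, cert := range conn.ConnectionState().PeerCertificates {
		fmt.Printf("subject=%s notBefore=%s notAfter=%s expired=%t\n",
			cert.Subject,
			cert.NotBefore.Format(time.RFC3339),
			cert.NotAfter.Format(time.RFC3339),
			now.After(cert.NotAfter))
	}
}
```

Against the state shown in the log, this would report notAfter=2025-08-24T17:21:41Z and expired=true, matching the x509 error the kubelet keeps hitting.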
Has your network provider started?"} Jan 27 14:30:43 crc kubenswrapper[4698]: I0127 14:30:43.729613 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:43 crc kubenswrapper[4698]: I0127 14:30:43.729663 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:43 crc kubenswrapper[4698]: I0127 14:30:43.729673 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:43 crc kubenswrapper[4698]: I0127 14:30:43.729688 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:43 crc kubenswrapper[4698]: I0127 14:30:43.729698 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:43Z","lastTransitionTime":"2026-01-27T14:30:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:30:43 crc kubenswrapper[4698]: I0127 14:30:43.832208 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:43 crc kubenswrapper[4698]: I0127 14:30:43.832266 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:43 crc kubenswrapper[4698]: I0127 14:30:43.832283 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:43 crc kubenswrapper[4698]: I0127 14:30:43.832306 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:43 crc kubenswrapper[4698]: I0127 14:30:43.832323 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:43Z","lastTransitionTime":"2026-01-27T14:30:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:30:43 crc kubenswrapper[4698]: I0127 14:30:43.934886 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:43 crc kubenswrapper[4698]: I0127 14:30:43.934948 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:43 crc kubenswrapper[4698]: I0127 14:30:43.934964 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:43 crc kubenswrapper[4698]: I0127 14:30:43.934987 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:43 crc kubenswrapper[4698]: I0127 14:30:43.935001 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:43Z","lastTransitionTime":"2026-01-27T14:30:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
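The NodeNotReady condition itself is independent of the webhook failure: the kubelet reports NetworkReady=false because /etc/kubernetes/cni/net.d/ contains no CNI network definition. A minimal sketch that reproduces that check; the extensions matched (.conf, .conflist, .json) are the ones libcni-based runtimes load, and treating any match as "ready" is a simplification of the real readiness logic:

```go
// Minimal sketch, not the kubelet's own code: check whether the CNI config
// directory named in the NetworkPluginNotReady message holds any network
// definition files.
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	dir := "/etc/kubernetes/cni/net.d" // path taken from the log message
	entries, err := os.ReadDir(dir)
	if err != nil {
		fmt.Println("cannot read CNI config dir:", err)
		return
	}
	found := false
	for _, e := range entries {
		switch filepath.Ext(e.Name()) {
		case ".conf", ".conflist", ".json":
			fmt.Println("found CNI config:", e.Name())
			found = true
		}
	}
	if !found {
		fmt.Println("no CNI configuration file in", dir,
			"- network plugin will report NotReady")
	}
}
```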
Has your network provider started?"} Jan 27 14:30:43 crc kubenswrapper[4698]: I0127 14:30:43.991961 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 14:30:43 crc kubenswrapper[4698]: I0127 14:30:43.992030 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lpvsw" Jan 27 14:30:43 crc kubenswrapper[4698]: E0127 14:30:43.992105 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 14:30:43 crc kubenswrapper[4698]: E0127 14:30:43.992351 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lpvsw" podUID="621bb20d-2ffa-4e89-b522-d04b4764fcc3" Jan 27 14:30:44 crc kubenswrapper[4698]: I0127 14:30:44.004449 4698 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-20 01:29:51.082860281 +0000 UTC Jan 27 14:30:44 crc kubenswrapper[4698]: I0127 14:30:44.037897 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:44 crc kubenswrapper[4698]: I0127 14:30:44.037966 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:44 crc kubenswrapper[4698]: I0127 14:30:44.037983 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:44 crc kubenswrapper[4698]: I0127 14:30:44.038006 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:44 crc kubenswrapper[4698]: I0127 14:30:44.038023 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:44Z","lastTransitionTime":"2026-01-27T14:30:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
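The certificate_manager.go:356 records are also worth reading closely: the kubelet-serving certificate is valid until 2026-02-24, yet the logged rotation deadline (2025-12-20 here, 2025-11-12 one second later) already lies in the past, so the manager recomputes it on every pass. The deadline moves because it is drawn at a jittered fraction of the certificate's lifetime. A minimal sketch of that computation; the 70 to 90 percent window is an approximation of client-go's behavior rather than a verbatim copy, and the NotBefore value is assumed for illustration:

```go
// Minimal sketch of how a rotation deadline like the ones logged by
// certificate_manager.go can land months before the certificate's NotAfter:
// the deadline is a jittered fraction of the cert's lifetime, recomputed on
// each attempt.
package main

import (
	"fmt"
	"math/rand"
	"time"
)

func rotationDeadline(notBefore, notAfter time.Time) time.Time {
	lifetime := notAfter.Sub(notBefore)
	fraction := 0.7 + 0.2*rand.Float64() // jitter: somewhere in [70%, 90%)
	return notBefore.Add(time.Duration(float64(lifetime) * fraction))
}

func main() {
	// NotAfter comes from the log ("Certificate expiration is 2026-02-24
	// 05:53:03 +0000 UTC"); a one-year lifetime is an assumption.
	notAfter := time.Date(2026, 2, 24, 5, 53, 3, 0, time.UTC)
	notBefore := notAfter.AddDate(-1, 0, 0)
	fmt.Println("rotation deadline:",
		rotationDeadline(notBefore, notAfter).Format(time.RFC3339))
}
```

Each run yields a different deadline inside the jitter window, which is consistent with the two different deadlines logged one second apart in this excerpt.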
Has your network provider started?"} Jan 27 14:30:44 crc kubenswrapper[4698]: I0127 14:30:44.140310 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:44 crc kubenswrapper[4698]: I0127 14:30:44.140378 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:44 crc kubenswrapper[4698]: I0127 14:30:44.140401 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:44 crc kubenswrapper[4698]: I0127 14:30:44.140429 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:44 crc kubenswrapper[4698]: I0127 14:30:44.140451 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:44Z","lastTransitionTime":"2026-01-27T14:30:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:30:44 crc kubenswrapper[4698]: I0127 14:30:44.242621 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:44 crc kubenswrapper[4698]: I0127 14:30:44.242675 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:44 crc kubenswrapper[4698]: I0127 14:30:44.242686 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:44 crc kubenswrapper[4698]: I0127 14:30:44.242705 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:44 crc kubenswrapper[4698]: I0127 14:30:44.242715 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:44Z","lastTransitionTime":"2026-01-27T14:30:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:30:44 crc kubenswrapper[4698]: I0127 14:30:44.344688 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:44 crc kubenswrapper[4698]: I0127 14:30:44.344722 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:44 crc kubenswrapper[4698]: I0127 14:30:44.344732 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:44 crc kubenswrapper[4698]: I0127 14:30:44.344748 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:44 crc kubenswrapper[4698]: I0127 14:30:44.344758 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:44Z","lastTransitionTime":"2026-01-27T14:30:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:30:44 crc kubenswrapper[4698]: I0127 14:30:44.446232 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:44 crc kubenswrapper[4698]: I0127 14:30:44.446286 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:44 crc kubenswrapper[4698]: I0127 14:30:44.446297 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:44 crc kubenswrapper[4698]: I0127 14:30:44.446309 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:44 crc kubenswrapper[4698]: I0127 14:30:44.446318 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:44Z","lastTransitionTime":"2026-01-27T14:30:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:30:44 crc kubenswrapper[4698]: I0127 14:30:44.548934 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:44 crc kubenswrapper[4698]: I0127 14:30:44.548990 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:44 crc kubenswrapper[4698]: I0127 14:30:44.549003 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:44 crc kubenswrapper[4698]: I0127 14:30:44.549018 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:44 crc kubenswrapper[4698]: I0127 14:30:44.549030 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:44Z","lastTransitionTime":"2026-01-27T14:30:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:30:44 crc kubenswrapper[4698]: I0127 14:30:44.651289 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:44 crc kubenswrapper[4698]: I0127 14:30:44.651392 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:44 crc kubenswrapper[4698]: I0127 14:30:44.651411 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:44 crc kubenswrapper[4698]: I0127 14:30:44.651824 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:44 crc kubenswrapper[4698]: I0127 14:30:44.651891 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:44Z","lastTransitionTime":"2026-01-27T14:30:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:30:44 crc kubenswrapper[4698]: I0127 14:30:44.754265 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:44 crc kubenswrapper[4698]: I0127 14:30:44.754303 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:44 crc kubenswrapper[4698]: I0127 14:30:44.754310 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:44 crc kubenswrapper[4698]: I0127 14:30:44.754325 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:44 crc kubenswrapper[4698]: I0127 14:30:44.754334 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:44Z","lastTransitionTime":"2026-01-27T14:30:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:30:44 crc kubenswrapper[4698]: I0127 14:30:44.856265 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:44 crc kubenswrapper[4698]: I0127 14:30:44.856315 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:44 crc kubenswrapper[4698]: I0127 14:30:44.856326 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:44 crc kubenswrapper[4698]: I0127 14:30:44.856342 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:44 crc kubenswrapper[4698]: I0127 14:30:44.856351 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:44Z","lastTransitionTime":"2026-01-27T14:30:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:30:44 crc kubenswrapper[4698]: I0127 14:30:44.961558 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:44 crc kubenswrapper[4698]: I0127 14:30:44.961588 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:44 crc kubenswrapper[4698]: I0127 14:30:44.961596 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:44 crc kubenswrapper[4698]: I0127 14:30:44.961608 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:44 crc kubenswrapper[4698]: I0127 14:30:44.961617 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:44Z","lastTransitionTime":"2026-01-27T14:30:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:30:44 crc kubenswrapper[4698]: I0127 14:30:44.992129 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 14:30:44 crc kubenswrapper[4698]: I0127 14:30:44.992220 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 14:30:44 crc kubenswrapper[4698]: E0127 14:30:44.992301 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 14:30:44 crc kubenswrapper[4698]: E0127 14:30:44.992363 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 14:30:45 crc kubenswrapper[4698]: I0127 14:30:45.005261 4698 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-12 20:22:01.07175378 +0000 UTC Jan 27 14:30:45 crc kubenswrapper[4698]: I0127 14:30:45.008512 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0ce1118e-e5ad-4adb-8d50-c758116b45ec\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dbbe49aa918c711f9ec638a3fb42ef3be38a9ed3bc0cdde639ba8bba11eab4de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9525d1a6372b19bc2f30eb4e8e522833badb6319bac25ecae77b3313acf1b1a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e384a95fdadc32733a7213ba468af64a1a33e7dd08722cbffde3ad0c461d8ba\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9ee5f5f6bb400359196a28fe605c173c38223c8982a991750c91601aeace150\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e780429dcca871712b55832b0cd0b3e78b7343569cfa71a87bbb2be1bd3f129\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T14:29:24Z\\\",\\\"message\\\":\\\"espace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0127 14:29:09.491666 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 14:29:09.493926 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-900123067/tls.crt::/tmp/serving-cert-900123067/tls.key\\\\\\\"\\\\nI0127 14:29:24.319898 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 14:29:24.322681 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 14:29:24.322759 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 14:29:24.322814 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 14:29:24.322847 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 14:29:24.331581 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 14:29:24.331608 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 14:29:24.331613 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 14:29:24.331618 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 14:29:24.331622 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 14:29:24.331626 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 14:29:24.331630 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 14:29:24.331748 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 14:29:24.333757 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T14:29:08Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a526cfa90f2ba7d074aa6007c909e68f72d5e7a4b91d893c955e7460ff208cfb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:08Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e1a3e5a71b3094750c736335cd70f45977566e58702231351d9d6b195a0cb4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5e1a3e5a71b3094750c736335cd70f45977566e58702231351d9d6b195a0cb4b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:29:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:29:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:29:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:30:45Z is after 2025-08-24T17:21:41Z" Jan 27 14:30:45 crc kubenswrapper[4698]: I0127 14:30:45.020047 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"44c70587-abb1-4e02-ab7e-223d57817925\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b1a209c31294456c43022bb9df5fd230415596fc43ecdb2b6349114989c3ad8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://66b4b527d7836ceed419f9b8a1d851047efc8c052c110ac857316a6a73444333\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://94a4d586611f4ca0864567739d1ed225b7c63ebd6e6be9e5c0e385379de2ad01\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://95357819c206b188c1fd6bb937b2a57b480a9a61e95143de444141b16e4963ea\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:29:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:30:45Z is after 2025-08-24T17:21:41Z" Jan 27 14:30:45 crc kubenswrapper[4698]: I0127 14:30:45.034326 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-g2kkn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4e135f0c-0c36-44f4-afeb-06994affb352\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:30:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:30:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89ea9fe8283890c94741924f7a0d219ad6a55833e836517077a72a10f87427d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d17862aabd67b485023ca018499f7224e0bd91ec0aed6bbce5a187f319e3140\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T14:30:16Z\\\",\\\"message\\\":\\\"2026-01-27T14:29:31+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_953276ac-8835-451e-a4b4-8b6397c0df9b\\\\n2026-01-27T14:29:31+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_953276ac-8835-451e-a4b4-8b6397c0df9b to /host/opt/cni/bin/\\\\n2026-01-27T14:29:31Z [verbose] multus-daemon started\\\\n2026-01-27T14:29:31Z [verbose] Readiness 
Indicator file check\\\\n2026-01-27T14:30:16Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T14:29:29Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:30:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97l66\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:29:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-g2kkn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:30:45Z is after 2025-08-24T17:21:41Z" Jan 27 14:30:45 crc kubenswrapper[4698]: I0127 14:30:45.051164 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zpvcj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"709dfdd7-f928-4f0b-8f5a-c356614219cb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6bec8d64d7f0b7a45b57b810dcf1de846a388902b480230742da0a40da4e67b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pftbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5fb54d7728b2438444c924295722867c673c8b82768ed17b6dd2104d404e198e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pftbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:29:41Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-zpvcj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:30:45Z is after 2025-08-24T17:21:41Z" Jan 27 
14:30:45 crc kubenswrapper[4698]: I0127 14:30:45.064803 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:45 crc kubenswrapper[4698]: I0127 14:30:45.064841 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:45 crc kubenswrapper[4698]: I0127 14:30:45.064851 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:45 crc kubenswrapper[4698]: I0127 14:30:45.064867 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:45 crc kubenswrapper[4698]: I0127 14:30:45.064879 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:45Z","lastTransitionTime":"2026-01-27T14:30:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:30:45 crc kubenswrapper[4698]: I0127 14:30:45.068272 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:30:45Z is after 2025-08-24T17:21:41Z" Jan 27 14:30:45 crc kubenswrapper[4698]: I0127 14:30:45.081777 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-g9vj8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2776dfc9-913b-42b0-9cf2-6fea98d83bc9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f471cea0dc934e7c77ac6ead7ec77f8feb7379b50a7bb5255d7100fbea635a3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bk7ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:29:28Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-g9vj8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-01-27T14:30:45Z is after 2025-08-24T17:21:41Z" Jan 27 14:30:45 crc kubenswrapper[4698]: I0127 14:30:45.094930 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-ndrd6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e403fc5-7005-474c-8c75-b7906b481677\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6f9227503b5cf0962f3fb8ece18f54d84ff2cdfe5628ebf0fa0cef7fe5695c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tt2dz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c533539135e25b310541c9f0b12adcf526b4651a2f6272bce12ea9686dd39ac9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tt2dz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:29:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-ndrd6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:30:45Z is after 2025-08-24T17:21:41Z" Jan 27 14:30:45 crc kubenswrapper[4698]: I0127 14:30:45.111197 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-vg6nd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e045926d-2303-47ea-b25d-dc23982427e4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e1bf1729f089f253e8e1259e40ef976b85e433358642b6231f081c494851048\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66snv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c935f517f9ca140d63d1637ef8c01a4a41b94fa72e1086eb6184ae78a6d8da8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c935f517f9ca140d63d1637ef8c01a4a41b94fa72e1086eb6184ae78a6d8da8c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:29:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:29:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66snv\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa061cb18143e07dc72a6b75d78b5959b40a9969030ff21d402863b85eba6f91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa061cb18143e07dc72a6b75d78b5959b40a9969030ff21d402863b85eba6f91\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:29:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:29:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66snv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be40eff60b17b2c010a5e4394fb6843ceb8ee2752781ccbf0030afdd212b81e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be40eff60b17b2c010a5e4394fb6843ceb8ee2752781ccbf0030afdd212b81e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:29:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:29:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66snv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3018d87c9354e8b8d90c59a4a45e982d0b35b89a0b0a9fde703e9452cf5f0024\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3018d87c9354e8b8d90c59a4a45e982d0b35b89a0b0a9fde703e9452cf5f0024\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:29:32Z\\\",\\\"reason\\\":\\\"Completed
\\\",\\\"startedAt\\\":\\\"2026-01-27T14:29:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66snv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de493498bbac49791f6b7409916fbbb62f64bed0def214ea6a06f5182bf7ad3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://de493498bbac49791f6b7409916fbbb62f64bed0def214ea6a06f5182bf7ad3b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:29:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:29:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66snv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://14af7d5dd3526aef22f7e95409ff88c3bbbc29e6dd9dd7c41b9be5bab49bf074\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://14af7d5dd3526aef22f7e95409ff88c3bbbc29e6dd9dd7c41b9be5bab49bf074\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:29:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:29:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66snv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:29:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-vg6nd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:30:45Z is after 
2025-08-24T17:21:41Z" Jan 27 14:30:45 crc kubenswrapper[4698]: I0127 14:30:45.129593 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xmpm6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c59a9d01-79ce-42d9-a41d-39d7d73cb03e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:29Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:29Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d81778ae40167de932debf7266f3c793d77edeace2116220c21cf89e15e4cc98\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r7gpg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea6d17f5861064198539f0b84b559f9d28cfa5decaa89a9059469af5a3e12e09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r7gpg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c
98fd6e3005c979f95419686ec2143aa282fa56d7306ecaff766759e34b68b2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r7gpg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e38585fe9db52e6e7cc5c3a41e23a0bb761977a3b25fd281d86458f422e3791\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r7gpg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b80c023f92c9854f8c16b6b50c10dac58f968938615375019ba2875d9cd69e5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r7gpg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://87e7499ab247dff50eee2a5e113ed36ff00c467e8e1bcdfee45e8835815d4b81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r7gpg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de8c9c2d3589a38d1cf6232c63b2ef2b7e60ba5c9efe357cfa5169dfe0448239\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://de8c9c2d3589a38d1cf6232c63b2ef2b7e60ba5c9efe357cfa5169dfe0448239\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T14:30:27Z\\\",\\\"message\\\":\\\"ice.alpha.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168 service.beta.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168] [{config.openshift.io/v1 ClusterVersion version 9101b518-476b-4eea-8fa6-69b0534e5caa 0xc0078ea847 \\\\u003cnil\\\\u003e}] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:https,Protocol:TCP,Port:9443,TargetPort:{1 0 https},NodePort:0,AppProtocol:nil,},},Selector:map[string]string{k8s-app: control-plane-machine-set-operator,},ClusterIP:10.217.4.41,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.4.41],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,TrafficDistribution:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}\\\\nF0127 14:30:27.745340 6796 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T14:30:27Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-xmpm6_openshift-ovn-kubernetes(c59a9d01-79ce-42d9-a41d-39d7d73cb03e)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r7gpg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0356c9c90e9c8e65b9452361aa486ffe95757bd69803077b32fe7ba0223ae1da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r7gpg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3a80917737e61fdb510dabf332c3bd87fb858d27849c42b77703a8becb66b4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b3a80917737e61fdb510dabf332c3bd87fb858d27849c42b77703a8becb66b4c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:29:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:29:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r7gpg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:29:29Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-xmpm6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:30:45Z is after 2025-08-24T17:21:41Z" Jan 27 14:30:45 crc kubenswrapper[4698]: I0127 14:30:45.139855 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-lpvsw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"621bb20d-2ffa-4e89-b522-d04b4764fcc3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:42Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:42Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gz87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gz87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:29:42Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-lpvsw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:30:45Z is after 2025-08-24T17:21:41Z" Jan 27 14:30:45 crc kubenswrapper[4698]: I0127 14:30:45.149484 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a67da6e9-5dc3-469a-8a1a-a2b287e96281\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b99ac884289d1dcab871d1db10e9992389170de25aeb71d84aaad1348eafd4fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://21e39904f887483a435cadd506be29c1513b2c9dbc144a61549f74f2c93fa6a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8cd43402ec8ba658de9fc9d84d14600829a8ae019aceb606fc2bf781dbe13ddb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c9cbb07f8bc0c4d5171813dd406a52fc3c99985102f65c43e8e04bf8acc21f1b\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c9cbb07f8bc0c4d5171813dd406a52fc3c99985102f65c43e8e04bf8acc21f1b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:29:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:29:06Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:29:05Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:30:45Z is after 2025-08-24T17:21:41Z" Jan 27 14:30:45 crc kubenswrapper[4698]: I0127 14:30:45.163981 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6c951f69-23b8-41c0-8d43-60097686223a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f5f7ffc9337f1ee226951fc2bac9235815704df49855f4ee6c9fe391970df0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cff6cddc7b21de4968fbb9eacccd070d4e8ff2d0514da0a0b878313bb11a5188\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-de
v@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cff6cddc7b21de4968fbb9eacccd070d4e8ff2d0514da0a0b878313bb11a5188\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:29:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:29:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:29:05Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:30:45Z is after 2025-08-24T17:21:41Z" Jan 27 14:30:45 crc kubenswrapper[4698]: I0127 14:30:45.167262 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:45 crc kubenswrapper[4698]: I0127 14:30:45.167356 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:45 crc kubenswrapper[4698]: I0127 14:30:45.167382 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:45 crc kubenswrapper[4698]: I0127 14:30:45.167418 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:45 crc kubenswrapper[4698]: I0127 14:30:45.167454 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:45Z","lastTransitionTime":"2026-01-27T14:30:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:30:45 crc kubenswrapper[4698]: I0127 14:30:45.177134 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae7402df1ba165eb747293c1d23e4ea89c50cf2342bf52db437c11c283f68755\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:30:45Z is after 2025-08-24T17:21:41Z" Jan 27 14:30:45 crc kubenswrapper[4698]: I0127 14:30:45.190040 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:24Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:30:45Z is after 2025-08-24T17:21:41Z" Jan 27 14:30:45 crc kubenswrapper[4698]: I0127 14:30:45.200746 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8dae57de05788973b988fee332168db475b66bb71b4a0570e7d95341dfe84a96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:30:45Z is after 2025-08-24T17:21:41Z" Jan 27 14:30:45 crc kubenswrapper[4698]: I0127 14:30:45.209792 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-flx9b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2c9fcf55-4a50-4a87-937b-975bc7e00bfa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0cb86e0c67991a076cebc33eb390f49c0f4c40332c423936dd8f3c59b9f7474b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-spqpw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:29:31Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-flx9b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:30:45Z is after 2025-08-24T17:21:41Z" Jan 27 14:30:45 crc kubenswrapper[4698]: I0127 14:30:45.229132 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"04b5c10d-5159-43a4-8c36-312efca59cc2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f8af55661c2961a592f176699c05742ca89cc5df26d1b96747403a59970eda5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5b1f6504828ab5deb1c86b048c3e766ba0983cf813b1751a077a3105c21754a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://807d7a009156f31527d166edcbe520f3a479c730f56f9e946f29e49734f72826\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://67a6817e50b9384743f6881c733af45511ee78a
d9a3a7cea4d3e7e4e1c394e73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4cbd77e460eb98eaa68e630886ad37c10e9b1c40828629431652eafbfea2b76b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://98db7f3d3019e28062f57111220875909c51f1644b93f0e7ad4e14575cf3abcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://98db7f3d3019e28062f57111220875909c51f1644b93f0e7ad4e14575cf3abcf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:29:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:29:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://01a79834dc1d2246c3518e1aae6d806f0851840c259b012da305149433627fa2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://01a79834dc1d2246c3518e1aae6d806f0851840c259b012da305149433627fa2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:29:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:29:07Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://34ab310837721b7012a0be773e777ae2b611be1e4143b44548ad3d4f93909a04\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://34ab310837721b7012a0be773e777ae2b611be1e4143b44548ad3d4f93909a04\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:29:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:29:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:29:05Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:30:45Z is after 2025-08-24T17:21:41Z" Jan 27 14:30:45 crc kubenswrapper[4698]: I0127 14:30:45.239673 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:24Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:30:45Z is after 2025-08-24T17:21:41Z" Jan 27 14:30:45 crc kubenswrapper[4698]: I0127 14:30:45.249257 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:29:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://625a1f8c29f9277903045e66df7a55a4d791239b141aba99572db228ea2735ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://314dcaad0f8949a4671b47667f654bdfacfb6b200438cfbd70401455f6e188b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:29:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mount
Path\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:30:45Z is after 2025-08-24T17:21:41Z" Jan 27 14:30:45 crc kubenswrapper[4698]: I0127 14:30:45.270182 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:45 crc kubenswrapper[4698]: I0127 14:30:45.270230 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:45 crc kubenswrapper[4698]: I0127 14:30:45.270239 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:45 crc kubenswrapper[4698]: I0127 14:30:45.270252 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:45 crc kubenswrapper[4698]: I0127 14:30:45.270260 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:45Z","lastTransitionTime":"2026-01-27T14:30:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:30:45 crc kubenswrapper[4698]: I0127 14:30:45.371810 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:45 crc kubenswrapper[4698]: I0127 14:30:45.371854 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:45 crc kubenswrapper[4698]: I0127 14:30:45.371865 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:45 crc kubenswrapper[4698]: I0127 14:30:45.371880 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:45 crc kubenswrapper[4698]: I0127 14:30:45.371891 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:45Z","lastTransitionTime":"2026-01-27T14:30:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:30:45 crc kubenswrapper[4698]: I0127 14:30:45.473468 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:45 crc kubenswrapper[4698]: I0127 14:30:45.473516 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:45 crc kubenswrapper[4698]: I0127 14:30:45.473528 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:45 crc kubenswrapper[4698]: I0127 14:30:45.473545 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:45 crc kubenswrapper[4698]: I0127 14:30:45.473558 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:45Z","lastTransitionTime":"2026-01-27T14:30:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:30:45 crc kubenswrapper[4698]: I0127 14:30:45.575698 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:45 crc kubenswrapper[4698]: I0127 14:30:45.575760 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:45 crc kubenswrapper[4698]: I0127 14:30:45.575774 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:45 crc kubenswrapper[4698]: I0127 14:30:45.575791 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:45 crc kubenswrapper[4698]: I0127 14:30:45.575803 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:45Z","lastTransitionTime":"2026-01-27T14:30:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:30:45 crc kubenswrapper[4698]: I0127 14:30:45.677949 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:45 crc kubenswrapper[4698]: I0127 14:30:45.677994 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:45 crc kubenswrapper[4698]: I0127 14:30:45.678006 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:45 crc kubenswrapper[4698]: I0127 14:30:45.678022 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:45 crc kubenswrapper[4698]: I0127 14:30:45.678034 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:45Z","lastTransitionTime":"2026-01-27T14:30:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:30:45 crc kubenswrapper[4698]: I0127 14:30:45.780883 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:45 crc kubenswrapper[4698]: I0127 14:30:45.780923 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:45 crc kubenswrapper[4698]: I0127 14:30:45.780933 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:45 crc kubenswrapper[4698]: I0127 14:30:45.780948 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:45 crc kubenswrapper[4698]: I0127 14:30:45.780957 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:45Z","lastTransitionTime":"2026-01-27T14:30:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:30:45 crc kubenswrapper[4698]: I0127 14:30:45.883606 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:45 crc kubenswrapper[4698]: I0127 14:30:45.883671 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:45 crc kubenswrapper[4698]: I0127 14:30:45.883682 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:45 crc kubenswrapper[4698]: I0127 14:30:45.883699 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:45 crc kubenswrapper[4698]: I0127 14:30:45.883710 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:45Z","lastTransitionTime":"2026-01-27T14:30:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:30:45 crc kubenswrapper[4698]: I0127 14:30:45.986460 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:45 crc kubenswrapper[4698]: I0127 14:30:45.986515 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:45 crc kubenswrapper[4698]: I0127 14:30:45.986524 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:45 crc kubenswrapper[4698]: I0127 14:30:45.986538 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:45 crc kubenswrapper[4698]: I0127 14:30:45.986547 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:45Z","lastTransitionTime":"2026-01-27T14:30:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:30:45 crc kubenswrapper[4698]: I0127 14:30:45.991434 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lpvsw" Jan 27 14:30:45 crc kubenswrapper[4698]: I0127 14:30:45.991562 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 14:30:45 crc kubenswrapper[4698]: E0127 14:30:45.991669 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lpvsw" podUID="621bb20d-2ffa-4e89-b522-d04b4764fcc3" Jan 27 14:30:45 crc kubenswrapper[4698]: E0127 14:30:45.991737 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 14:30:46 crc kubenswrapper[4698]: I0127 14:30:46.006138 4698 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-26 13:20:00.41473259 +0000 UTC Jan 27 14:30:46 crc kubenswrapper[4698]: I0127 14:30:46.089206 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:46 crc kubenswrapper[4698]: I0127 14:30:46.089267 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:46 crc kubenswrapper[4698]: I0127 14:30:46.089281 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:46 crc kubenswrapper[4698]: I0127 14:30:46.089299 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:46 crc kubenswrapper[4698]: I0127 14:30:46.089312 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:46Z","lastTransitionTime":"2026-01-27T14:30:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:30:46 crc kubenswrapper[4698]: I0127 14:30:46.192051 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:46 crc kubenswrapper[4698]: I0127 14:30:46.192120 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:46 crc kubenswrapper[4698]: I0127 14:30:46.192132 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:46 crc kubenswrapper[4698]: I0127 14:30:46.192147 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:46 crc kubenswrapper[4698]: I0127 14:30:46.192158 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:46Z","lastTransitionTime":"2026-01-27T14:30:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:30:46 crc kubenswrapper[4698]: I0127 14:30:46.294665 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:46 crc kubenswrapper[4698]: I0127 14:30:46.294716 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:46 crc kubenswrapper[4698]: I0127 14:30:46.294726 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:46 crc kubenswrapper[4698]: I0127 14:30:46.294740 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:46 crc kubenswrapper[4698]: I0127 14:30:46.294753 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:46Z","lastTransitionTime":"2026-01-27T14:30:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:30:46 crc kubenswrapper[4698]: I0127 14:30:46.397576 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:46 crc kubenswrapper[4698]: I0127 14:30:46.397611 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:46 crc kubenswrapper[4698]: I0127 14:30:46.397618 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:46 crc kubenswrapper[4698]: I0127 14:30:46.397651 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:46 crc kubenswrapper[4698]: I0127 14:30:46.397671 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:46Z","lastTransitionTime":"2026-01-27T14:30:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:30:46 crc kubenswrapper[4698]: I0127 14:30:46.464536 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/621bb20d-2ffa-4e89-b522-d04b4764fcc3-metrics-certs\") pod \"network-metrics-daemon-lpvsw\" (UID: \"621bb20d-2ffa-4e89-b522-d04b4764fcc3\") " pod="openshift-multus/network-metrics-daemon-lpvsw" Jan 27 14:30:46 crc kubenswrapper[4698]: E0127 14:30:46.464941 4698 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 27 14:30:46 crc kubenswrapper[4698]: E0127 14:30:46.465020 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/621bb20d-2ffa-4e89-b522-d04b4764fcc3-metrics-certs podName:621bb20d-2ffa-4e89-b522-d04b4764fcc3 nodeName:}" failed. No retries permitted until 2026-01-27 14:31:50.465002159 +0000 UTC m=+166.141779624 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/621bb20d-2ffa-4e89-b522-d04b4764fcc3-metrics-certs") pod "network-metrics-daemon-lpvsw" (UID: "621bb20d-2ffa-4e89-b522-d04b4764fcc3") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 27 14:30:46 crc kubenswrapper[4698]: I0127 14:30:46.499923 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:46 crc kubenswrapper[4698]: I0127 14:30:46.499958 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:46 crc kubenswrapper[4698]: I0127 14:30:46.499966 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:46 crc kubenswrapper[4698]: I0127 14:30:46.499978 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:46 crc kubenswrapper[4698]: I0127 14:30:46.499988 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:46Z","lastTransitionTime":"2026-01-27T14:30:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:30:46 crc kubenswrapper[4698]: I0127 14:30:46.601800 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:46 crc kubenswrapper[4698]: I0127 14:30:46.601845 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:46 crc kubenswrapper[4698]: I0127 14:30:46.601857 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:46 crc kubenswrapper[4698]: I0127 14:30:46.601873 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:46 crc kubenswrapper[4698]: I0127 14:30:46.601884 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:46Z","lastTransitionTime":"2026-01-27T14:30:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:30:46 crc kubenswrapper[4698]: I0127 14:30:46.704548 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:46 crc kubenswrapper[4698]: I0127 14:30:46.704602 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:46 crc kubenswrapper[4698]: I0127 14:30:46.704611 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:46 crc kubenswrapper[4698]: I0127 14:30:46.704630 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:46 crc kubenswrapper[4698]: I0127 14:30:46.704675 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:46Z","lastTransitionTime":"2026-01-27T14:30:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:30:46 crc kubenswrapper[4698]: I0127 14:30:46.807423 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:46 crc kubenswrapper[4698]: I0127 14:30:46.807467 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:46 crc kubenswrapper[4698]: I0127 14:30:46.807479 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:46 crc kubenswrapper[4698]: I0127 14:30:46.807497 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:46 crc kubenswrapper[4698]: I0127 14:30:46.807510 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:46Z","lastTransitionTime":"2026-01-27T14:30:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:30:46 crc kubenswrapper[4698]: I0127 14:30:46.909707 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:46 crc kubenswrapper[4698]: I0127 14:30:46.909802 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:46 crc kubenswrapper[4698]: I0127 14:30:46.909855 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:46 crc kubenswrapper[4698]: I0127 14:30:46.909869 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:46 crc kubenswrapper[4698]: I0127 14:30:46.909878 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:46Z","lastTransitionTime":"2026-01-27T14:30:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:30:46 crc kubenswrapper[4698]: I0127 14:30:46.991820 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 14:30:46 crc kubenswrapper[4698]: I0127 14:30:46.992064 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 14:30:46 crc kubenswrapper[4698]: E0127 14:30:46.992179 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 14:30:46 crc kubenswrapper[4698]: E0127 14:30:46.992351 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 14:30:47 crc kubenswrapper[4698]: I0127 14:30:47.006699 4698 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-12 12:38:34.242579653 +0000 UTC Jan 27 14:30:47 crc kubenswrapper[4698]: I0127 14:30:47.012127 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:47 crc kubenswrapper[4698]: I0127 14:30:47.012166 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:47 crc kubenswrapper[4698]: I0127 14:30:47.012178 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:47 crc kubenswrapper[4698]: I0127 14:30:47.012191 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:47 crc kubenswrapper[4698]: I0127 14:30:47.012201 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:47Z","lastTransitionTime":"2026-01-27T14:30:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:30:47 crc kubenswrapper[4698]: I0127 14:30:47.114384 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:47 crc kubenswrapper[4698]: I0127 14:30:47.114427 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:47 crc kubenswrapper[4698]: I0127 14:30:47.114438 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:47 crc kubenswrapper[4698]: I0127 14:30:47.114453 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:47 crc kubenswrapper[4698]: I0127 14:30:47.114464 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:47Z","lastTransitionTime":"2026-01-27T14:30:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:30:47 crc kubenswrapper[4698]: I0127 14:30:47.216732 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:47 crc kubenswrapper[4698]: I0127 14:30:47.216781 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:47 crc kubenswrapper[4698]: I0127 14:30:47.216793 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:47 crc kubenswrapper[4698]: I0127 14:30:47.216810 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:47 crc kubenswrapper[4698]: I0127 14:30:47.216825 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:47Z","lastTransitionTime":"2026-01-27T14:30:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:30:47 crc kubenswrapper[4698]: I0127 14:30:47.319097 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:47 crc kubenswrapper[4698]: I0127 14:30:47.319134 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:47 crc kubenswrapper[4698]: I0127 14:30:47.319151 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:47 crc kubenswrapper[4698]: I0127 14:30:47.319198 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:47 crc kubenswrapper[4698]: I0127 14:30:47.319208 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:47Z","lastTransitionTime":"2026-01-27T14:30:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:30:47 crc kubenswrapper[4698]: I0127 14:30:47.421728 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:47 crc kubenswrapper[4698]: I0127 14:30:47.421771 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:47 crc kubenswrapper[4698]: I0127 14:30:47.421781 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:47 crc kubenswrapper[4698]: I0127 14:30:47.421798 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:47 crc kubenswrapper[4698]: I0127 14:30:47.421810 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:47Z","lastTransitionTime":"2026-01-27T14:30:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:30:47 crc kubenswrapper[4698]: I0127 14:30:47.524708 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:47 crc kubenswrapper[4698]: I0127 14:30:47.524753 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:47 crc kubenswrapper[4698]: I0127 14:30:47.524764 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:47 crc kubenswrapper[4698]: I0127 14:30:47.524780 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:47 crc kubenswrapper[4698]: I0127 14:30:47.524794 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:47Z","lastTransitionTime":"2026-01-27T14:30:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:30:47 crc kubenswrapper[4698]: I0127 14:30:47.628086 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:47 crc kubenswrapper[4698]: I0127 14:30:47.628129 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:47 crc kubenswrapper[4698]: I0127 14:30:47.628139 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:47 crc kubenswrapper[4698]: I0127 14:30:47.628157 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:47 crc kubenswrapper[4698]: I0127 14:30:47.628166 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:47Z","lastTransitionTime":"2026-01-27T14:30:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:30:47 crc kubenswrapper[4698]: I0127 14:30:47.731067 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:47 crc kubenswrapper[4698]: I0127 14:30:47.731126 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:47 crc kubenswrapper[4698]: I0127 14:30:47.731137 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:47 crc kubenswrapper[4698]: I0127 14:30:47.731151 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:47 crc kubenswrapper[4698]: I0127 14:30:47.731164 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:47Z","lastTransitionTime":"2026-01-27T14:30:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:30:47 crc kubenswrapper[4698]: I0127 14:30:47.833963 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:47 crc kubenswrapper[4698]: I0127 14:30:47.834006 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:47 crc kubenswrapper[4698]: I0127 14:30:47.834014 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:47 crc kubenswrapper[4698]: I0127 14:30:47.834027 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:47 crc kubenswrapper[4698]: I0127 14:30:47.834036 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:47Z","lastTransitionTime":"2026-01-27T14:30:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:30:47 crc kubenswrapper[4698]: I0127 14:30:47.936502 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:47 crc kubenswrapper[4698]: I0127 14:30:47.936545 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:47 crc kubenswrapper[4698]: I0127 14:30:47.936556 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:47 crc kubenswrapper[4698]: I0127 14:30:47.936574 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:47 crc kubenswrapper[4698]: I0127 14:30:47.936586 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:47Z","lastTransitionTime":"2026-01-27T14:30:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:30:47 crc kubenswrapper[4698]: I0127 14:30:47.992132 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lpvsw" Jan 27 14:30:47 crc kubenswrapper[4698]: I0127 14:30:47.992175 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 14:30:47 crc kubenswrapper[4698]: E0127 14:30:47.992302 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-lpvsw" podUID="621bb20d-2ffa-4e89-b522-d04b4764fcc3" Jan 27 14:30:47 crc kubenswrapper[4698]: E0127 14:30:47.992518 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 14:30:48 crc kubenswrapper[4698]: I0127 14:30:48.007693 4698 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-13 17:46:17.943130146 +0000 UTC Jan 27 14:30:48 crc kubenswrapper[4698]: I0127 14:30:48.039220 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:48 crc kubenswrapper[4698]: I0127 14:30:48.039288 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:48 crc kubenswrapper[4698]: I0127 14:30:48.039303 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:48 crc kubenswrapper[4698]: I0127 14:30:48.039321 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:48 crc kubenswrapper[4698]: I0127 14:30:48.039335 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:48Z","lastTransitionTime":"2026-01-27T14:30:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:30:48 crc kubenswrapper[4698]: I0127 14:30:48.142015 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:48 crc kubenswrapper[4698]: I0127 14:30:48.142160 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:48 crc kubenswrapper[4698]: I0127 14:30:48.142184 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:48 crc kubenswrapper[4698]: I0127 14:30:48.142199 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:48 crc kubenswrapper[4698]: I0127 14:30:48.142211 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:48Z","lastTransitionTime":"2026-01-27T14:30:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:30:48 crc kubenswrapper[4698]: I0127 14:30:48.244770 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:48 crc kubenswrapper[4698]: I0127 14:30:48.244813 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:48 crc kubenswrapper[4698]: I0127 14:30:48.244824 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:48 crc kubenswrapper[4698]: I0127 14:30:48.244838 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:48 crc kubenswrapper[4698]: I0127 14:30:48.244850 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:48Z","lastTransitionTime":"2026-01-27T14:30:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:30:48 crc kubenswrapper[4698]: I0127 14:30:48.347056 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:48 crc kubenswrapper[4698]: I0127 14:30:48.347100 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:48 crc kubenswrapper[4698]: I0127 14:30:48.347111 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:48 crc kubenswrapper[4698]: I0127 14:30:48.347127 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:48 crc kubenswrapper[4698]: I0127 14:30:48.347139 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:48Z","lastTransitionTime":"2026-01-27T14:30:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:30:48 crc kubenswrapper[4698]: I0127 14:30:48.449097 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:48 crc kubenswrapper[4698]: I0127 14:30:48.449140 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:48 crc kubenswrapper[4698]: I0127 14:30:48.449151 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:48 crc kubenswrapper[4698]: I0127 14:30:48.449166 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:48 crc kubenswrapper[4698]: I0127 14:30:48.449177 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:48Z","lastTransitionTime":"2026-01-27T14:30:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:30:48 crc kubenswrapper[4698]: I0127 14:30:48.551362 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:48 crc kubenswrapper[4698]: I0127 14:30:48.551413 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:48 crc kubenswrapper[4698]: I0127 14:30:48.551421 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:48 crc kubenswrapper[4698]: I0127 14:30:48.551434 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:48 crc kubenswrapper[4698]: I0127 14:30:48.551443 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:48Z","lastTransitionTime":"2026-01-27T14:30:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:30:48 crc kubenswrapper[4698]: I0127 14:30:48.653945 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:48 crc kubenswrapper[4698]: I0127 14:30:48.653986 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:48 crc kubenswrapper[4698]: I0127 14:30:48.653997 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:48 crc kubenswrapper[4698]: I0127 14:30:48.654012 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:48 crc kubenswrapper[4698]: I0127 14:30:48.654023 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:48Z","lastTransitionTime":"2026-01-27T14:30:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:30:48 crc kubenswrapper[4698]: I0127 14:30:48.756321 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:48 crc kubenswrapper[4698]: I0127 14:30:48.756370 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:48 crc kubenswrapper[4698]: I0127 14:30:48.756380 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:48 crc kubenswrapper[4698]: I0127 14:30:48.756393 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:48 crc kubenswrapper[4698]: I0127 14:30:48.756401 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:48Z","lastTransitionTime":"2026-01-27T14:30:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:30:48 crc kubenswrapper[4698]: I0127 14:30:48.859034 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:48 crc kubenswrapper[4698]: I0127 14:30:48.859083 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:48 crc kubenswrapper[4698]: I0127 14:30:48.859100 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:48 crc kubenswrapper[4698]: I0127 14:30:48.859117 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:48 crc kubenswrapper[4698]: I0127 14:30:48.859127 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:48Z","lastTransitionTime":"2026-01-27T14:30:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:30:48 crc kubenswrapper[4698]: I0127 14:30:48.961013 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:48 crc kubenswrapper[4698]: I0127 14:30:48.961054 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:48 crc kubenswrapper[4698]: I0127 14:30:48.961064 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:48 crc kubenswrapper[4698]: I0127 14:30:48.961078 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:48 crc kubenswrapper[4698]: I0127 14:30:48.961090 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:48Z","lastTransitionTime":"2026-01-27T14:30:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:30:48 crc kubenswrapper[4698]: I0127 14:30:48.991326 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 14:30:48 crc kubenswrapper[4698]: I0127 14:30:48.991483 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 14:30:48 crc kubenswrapper[4698]: E0127 14:30:48.991560 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 14:30:48 crc kubenswrapper[4698]: E0127 14:30:48.991963 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 14:30:49 crc kubenswrapper[4698]: I0127 14:30:49.008426 4698 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-29 09:16:04.822988833 +0000 UTC Jan 27 14:30:49 crc kubenswrapper[4698]: I0127 14:30:49.064304 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:49 crc kubenswrapper[4698]: I0127 14:30:49.064382 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:49 crc kubenswrapper[4698]: I0127 14:30:49.064396 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:49 crc kubenswrapper[4698]: I0127 14:30:49.064421 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:49 crc kubenswrapper[4698]: I0127 14:30:49.064439 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:49Z","lastTransitionTime":"2026-01-27T14:30:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:30:49 crc kubenswrapper[4698]: I0127 14:30:49.167455 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:49 crc kubenswrapper[4698]: I0127 14:30:49.167518 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:49 crc kubenswrapper[4698]: I0127 14:30:49.167530 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:49 crc kubenswrapper[4698]: I0127 14:30:49.167547 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:49 crc kubenswrapper[4698]: I0127 14:30:49.167558 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:49Z","lastTransitionTime":"2026-01-27T14:30:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:30:49 crc kubenswrapper[4698]: I0127 14:30:49.270355 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:49 crc kubenswrapper[4698]: I0127 14:30:49.270409 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:49 crc kubenswrapper[4698]: I0127 14:30:49.270420 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:49 crc kubenswrapper[4698]: I0127 14:30:49.270436 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:49 crc kubenswrapper[4698]: I0127 14:30:49.270449 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:49Z","lastTransitionTime":"2026-01-27T14:30:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:30:49 crc kubenswrapper[4698]: I0127 14:30:49.372491 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:49 crc kubenswrapper[4698]: I0127 14:30:49.372542 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:49 crc kubenswrapper[4698]: I0127 14:30:49.372555 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:49 crc kubenswrapper[4698]: I0127 14:30:49.372571 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:49 crc kubenswrapper[4698]: I0127 14:30:49.372582 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:49Z","lastTransitionTime":"2026-01-27T14:30:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:30:49 crc kubenswrapper[4698]: I0127 14:30:49.474724 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:49 crc kubenswrapper[4698]: I0127 14:30:49.474768 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:49 crc kubenswrapper[4698]: I0127 14:30:49.474781 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:49 crc kubenswrapper[4698]: I0127 14:30:49.474799 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:49 crc kubenswrapper[4698]: I0127 14:30:49.474810 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:49Z","lastTransitionTime":"2026-01-27T14:30:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:30:49 crc kubenswrapper[4698]: I0127 14:30:49.576805 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:49 crc kubenswrapper[4698]: I0127 14:30:49.576883 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:49 crc kubenswrapper[4698]: I0127 14:30:49.576895 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:49 crc kubenswrapper[4698]: I0127 14:30:49.576911 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:49 crc kubenswrapper[4698]: I0127 14:30:49.576923 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:49Z","lastTransitionTime":"2026-01-27T14:30:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:30:49 crc kubenswrapper[4698]: I0127 14:30:49.679786 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:49 crc kubenswrapper[4698]: I0127 14:30:49.679845 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:49 crc kubenswrapper[4698]: I0127 14:30:49.679857 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:49 crc kubenswrapper[4698]: I0127 14:30:49.679870 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:49 crc kubenswrapper[4698]: I0127 14:30:49.679881 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:49Z","lastTransitionTime":"2026-01-27T14:30:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:30:49 crc kubenswrapper[4698]: I0127 14:30:49.783673 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:49 crc kubenswrapper[4698]: I0127 14:30:49.783744 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:49 crc kubenswrapper[4698]: I0127 14:30:49.783757 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:49 crc kubenswrapper[4698]: I0127 14:30:49.784520 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:49 crc kubenswrapper[4698]: I0127 14:30:49.784625 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:49Z","lastTransitionTime":"2026-01-27T14:30:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:30:49 crc kubenswrapper[4698]: I0127 14:30:49.887483 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:49 crc kubenswrapper[4698]: I0127 14:30:49.887535 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:49 crc kubenswrapper[4698]: I0127 14:30:49.887583 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:49 crc kubenswrapper[4698]: I0127 14:30:49.887608 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:49 crc kubenswrapper[4698]: I0127 14:30:49.887667 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:49Z","lastTransitionTime":"2026-01-27T14:30:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:30:49 crc kubenswrapper[4698]: I0127 14:30:49.990099 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:49 crc kubenswrapper[4698]: I0127 14:30:49.990146 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:49 crc kubenswrapper[4698]: I0127 14:30:49.990157 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:49 crc kubenswrapper[4698]: I0127 14:30:49.990174 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:49 crc kubenswrapper[4698]: I0127 14:30:49.990185 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:49Z","lastTransitionTime":"2026-01-27T14:30:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:30:49 crc kubenswrapper[4698]: I0127 14:30:49.991249 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lpvsw" Jan 27 14:30:49 crc kubenswrapper[4698]: E0127 14:30:49.991380 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lpvsw" podUID="621bb20d-2ffa-4e89-b522-d04b4764fcc3" Jan 27 14:30:49 crc kubenswrapper[4698]: I0127 14:30:49.991255 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 14:30:49 crc kubenswrapper[4698]: E0127 14:30:49.991471 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 14:30:50 crc kubenswrapper[4698]: I0127 14:30:50.009057 4698 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-08 04:53:32.112540047 +0000 UTC Jan 27 14:30:50 crc kubenswrapper[4698]: I0127 14:30:50.093368 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:50 crc kubenswrapper[4698]: I0127 14:30:50.093411 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:50 crc kubenswrapper[4698]: I0127 14:30:50.093427 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:50 crc kubenswrapper[4698]: I0127 14:30:50.093442 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:50 crc kubenswrapper[4698]: I0127 14:30:50.093453 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:50Z","lastTransitionTime":"2026-01-27T14:30:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:30:50 crc kubenswrapper[4698]: I0127 14:30:50.196818 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:50 crc kubenswrapper[4698]: I0127 14:30:50.196874 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:50 crc kubenswrapper[4698]: I0127 14:30:50.196886 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:50 crc kubenswrapper[4698]: I0127 14:30:50.196906 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:50 crc kubenswrapper[4698]: I0127 14:30:50.196927 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:50Z","lastTransitionTime":"2026-01-27T14:30:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:30:50 crc kubenswrapper[4698]: I0127 14:30:50.299571 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:50 crc kubenswrapper[4698]: I0127 14:30:50.299613 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:50 crc kubenswrapper[4698]: I0127 14:30:50.299626 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:50 crc kubenswrapper[4698]: I0127 14:30:50.299654 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:50 crc kubenswrapper[4698]: I0127 14:30:50.299664 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:50Z","lastTransitionTime":"2026-01-27T14:30:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:30:50 crc kubenswrapper[4698]: I0127 14:30:50.401870 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:50 crc kubenswrapper[4698]: I0127 14:30:50.401928 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:50 crc kubenswrapper[4698]: I0127 14:30:50.401941 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:50 crc kubenswrapper[4698]: I0127 14:30:50.401959 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:50 crc kubenswrapper[4698]: I0127 14:30:50.401973 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:50Z","lastTransitionTime":"2026-01-27T14:30:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:30:50 crc kubenswrapper[4698]: I0127 14:30:50.504611 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:50 crc kubenswrapper[4698]: I0127 14:30:50.504692 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:50 crc kubenswrapper[4698]: I0127 14:30:50.504709 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:50 crc kubenswrapper[4698]: I0127 14:30:50.504727 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:50 crc kubenswrapper[4698]: I0127 14:30:50.504737 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:50Z","lastTransitionTime":"2026-01-27T14:30:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:30:50 crc kubenswrapper[4698]: I0127 14:30:50.607888 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:50 crc kubenswrapper[4698]: I0127 14:30:50.607956 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:50 crc kubenswrapper[4698]: I0127 14:30:50.607968 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:50 crc kubenswrapper[4698]: I0127 14:30:50.607984 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:50 crc kubenswrapper[4698]: I0127 14:30:50.607995 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:50Z","lastTransitionTime":"2026-01-27T14:30:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:30:50 crc kubenswrapper[4698]: I0127 14:30:50.710701 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:50 crc kubenswrapper[4698]: I0127 14:30:50.710741 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:50 crc kubenswrapper[4698]: I0127 14:30:50.710749 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:50 crc kubenswrapper[4698]: I0127 14:30:50.710764 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:50 crc kubenswrapper[4698]: I0127 14:30:50.710775 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:50Z","lastTransitionTime":"2026-01-27T14:30:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:30:50 crc kubenswrapper[4698]: I0127 14:30:50.815005 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:50 crc kubenswrapper[4698]: I0127 14:30:50.815082 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:50 crc kubenswrapper[4698]: I0127 14:30:50.815100 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:50 crc kubenswrapper[4698]: I0127 14:30:50.815120 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:50 crc kubenswrapper[4698]: I0127 14:30:50.815133 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:50Z","lastTransitionTime":"2026-01-27T14:30:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:30:50 crc kubenswrapper[4698]: I0127 14:30:50.918734 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:50 crc kubenswrapper[4698]: I0127 14:30:50.918820 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:50 crc kubenswrapper[4698]: I0127 14:30:50.918832 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:50 crc kubenswrapper[4698]: I0127 14:30:50.918856 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:50 crc kubenswrapper[4698]: I0127 14:30:50.918871 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:50Z","lastTransitionTime":"2026-01-27T14:30:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:30:50 crc kubenswrapper[4698]: I0127 14:30:50.992202 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 14:30:50 crc kubenswrapper[4698]: I0127 14:30:50.992398 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 14:30:50 crc kubenswrapper[4698]: E0127 14:30:50.992741 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 14:30:50 crc kubenswrapper[4698]: E0127 14:30:50.992787 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 14:30:51 crc kubenswrapper[4698]: I0127 14:30:51.009822 4698 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-14 01:14:32.157541533 +0000 UTC Jan 27 14:30:51 crc kubenswrapper[4698]: I0127 14:30:51.020924 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:51 crc kubenswrapper[4698]: I0127 14:30:51.020976 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:51 crc kubenswrapper[4698]: I0127 14:30:51.020985 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:51 crc kubenswrapper[4698]: I0127 14:30:51.020998 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:51 crc kubenswrapper[4698]: I0127 14:30:51.021007 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:51Z","lastTransitionTime":"2026-01-27T14:30:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:30:51 crc kubenswrapper[4698]: I0127 14:30:51.123443 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:51 crc kubenswrapper[4698]: I0127 14:30:51.123488 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:51 crc kubenswrapper[4698]: I0127 14:30:51.123500 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:51 crc kubenswrapper[4698]: I0127 14:30:51.123516 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:51 crc kubenswrapper[4698]: I0127 14:30:51.123529 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:51Z","lastTransitionTime":"2026-01-27T14:30:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:30:51 crc kubenswrapper[4698]: I0127 14:30:51.225494 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:51 crc kubenswrapper[4698]: I0127 14:30:51.225528 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:51 crc kubenswrapper[4698]: I0127 14:30:51.225539 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:51 crc kubenswrapper[4698]: I0127 14:30:51.225555 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:51 crc kubenswrapper[4698]: I0127 14:30:51.225567 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:51Z","lastTransitionTime":"2026-01-27T14:30:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:30:51 crc kubenswrapper[4698]: I0127 14:30:51.328657 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:51 crc kubenswrapper[4698]: I0127 14:30:51.328709 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:51 crc kubenswrapper[4698]: I0127 14:30:51.328741 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:51 crc kubenswrapper[4698]: I0127 14:30:51.328758 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:51 crc kubenswrapper[4698]: I0127 14:30:51.328774 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:51Z","lastTransitionTime":"2026-01-27T14:30:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:30:51 crc kubenswrapper[4698]: I0127 14:30:51.431923 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:51 crc kubenswrapper[4698]: I0127 14:30:51.431980 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:51 crc kubenswrapper[4698]: I0127 14:30:51.431995 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:51 crc kubenswrapper[4698]: I0127 14:30:51.432014 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:51 crc kubenswrapper[4698]: I0127 14:30:51.432027 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:51Z","lastTransitionTime":"2026-01-27T14:30:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:30:51 crc kubenswrapper[4698]: I0127 14:30:51.534788 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:51 crc kubenswrapper[4698]: I0127 14:30:51.534841 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:51 crc kubenswrapper[4698]: I0127 14:30:51.534854 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:51 crc kubenswrapper[4698]: I0127 14:30:51.534873 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:51 crc kubenswrapper[4698]: I0127 14:30:51.534886 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:51Z","lastTransitionTime":"2026-01-27T14:30:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:30:51 crc kubenswrapper[4698]: I0127 14:30:51.637643 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:51 crc kubenswrapper[4698]: I0127 14:30:51.637679 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:51 crc kubenswrapper[4698]: I0127 14:30:51.637712 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:51 crc kubenswrapper[4698]: I0127 14:30:51.637725 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:51 crc kubenswrapper[4698]: I0127 14:30:51.637734 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:51Z","lastTransitionTime":"2026-01-27T14:30:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:30:51 crc kubenswrapper[4698]: I0127 14:30:51.741462 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:51 crc kubenswrapper[4698]: I0127 14:30:51.741502 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:51 crc kubenswrapper[4698]: I0127 14:30:51.741511 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:51 crc kubenswrapper[4698]: I0127 14:30:51.741525 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:51 crc kubenswrapper[4698]: I0127 14:30:51.741535 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:51Z","lastTransitionTime":"2026-01-27T14:30:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:30:51 crc kubenswrapper[4698]: I0127 14:30:51.844334 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:51 crc kubenswrapper[4698]: I0127 14:30:51.844375 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:51 crc kubenswrapper[4698]: I0127 14:30:51.844387 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:51 crc kubenswrapper[4698]: I0127 14:30:51.844402 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:51 crc kubenswrapper[4698]: I0127 14:30:51.844413 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:51Z","lastTransitionTime":"2026-01-27T14:30:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:30:51 crc kubenswrapper[4698]: I0127 14:30:51.947235 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:51 crc kubenswrapper[4698]: I0127 14:30:51.947289 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:51 crc kubenswrapper[4698]: I0127 14:30:51.947299 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:51 crc kubenswrapper[4698]: I0127 14:30:51.947316 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:51 crc kubenswrapper[4698]: I0127 14:30:51.947327 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:51Z","lastTransitionTime":"2026-01-27T14:30:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:30:51 crc kubenswrapper[4698]: I0127 14:30:51.991847 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lpvsw" Jan 27 14:30:51 crc kubenswrapper[4698]: I0127 14:30:51.991940 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 14:30:51 crc kubenswrapper[4698]: E0127 14:30:51.991987 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-lpvsw" podUID="621bb20d-2ffa-4e89-b522-d04b4764fcc3" Jan 27 14:30:51 crc kubenswrapper[4698]: E0127 14:30:51.992101 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 14:30:52 crc kubenswrapper[4698]: I0127 14:30:52.011239 4698 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-09 05:15:46.881262599 +0000 UTC Jan 27 14:30:52 crc kubenswrapper[4698]: I0127 14:30:52.049403 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:52 crc kubenswrapper[4698]: I0127 14:30:52.049697 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:52 crc kubenswrapper[4698]: I0127 14:30:52.049784 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:52 crc kubenswrapper[4698]: I0127 14:30:52.049861 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:52 crc kubenswrapper[4698]: I0127 14:30:52.049967 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:52Z","lastTransitionTime":"2026-01-27T14:30:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:30:52 crc kubenswrapper[4698]: I0127 14:30:52.152315 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:52 crc kubenswrapper[4698]: I0127 14:30:52.152363 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:52 crc kubenswrapper[4698]: I0127 14:30:52.152377 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:52 crc kubenswrapper[4698]: I0127 14:30:52.152396 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:52 crc kubenswrapper[4698]: I0127 14:30:52.152409 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:52Z","lastTransitionTime":"2026-01-27T14:30:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Jan 27 14:30:52 crc kubenswrapper[4698]: I0127 14:30:52.255102 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 14:30:52 crc kubenswrapper[4698]: I0127 14:30:52.255379 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 14:30:52 crc kubenswrapper[4698]: I0127 14:30:52.255454 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 14:30:52 crc kubenswrapper[4698]: I0127 14:30:52.255529 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 14:30:52 crc kubenswrapper[4698]: I0127 14:30:52.255596 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:52Z","lastTransitionTime":"2026-01-27T14:30:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 14:30:52 crc kubenswrapper[4698]: I0127 14:30:52.768911 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 14:30:52 crc kubenswrapper[4698]: I0127 14:30:52.768964 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 14:30:52 crc kubenswrapper[4698]: I0127 14:30:52.768977 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 14:30:52 crc kubenswrapper[4698]: I0127 14:30:52.768994 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 14:30:52 crc kubenswrapper[4698]: I0127 14:30:52.769007 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:52Z","lastTransitionTime":"2026-01-27T14:30:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
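Between the unique events, the same five-line heartbeat (four recorded node events plus a "Node became not ready" condition) repeats roughly every 100 ms with nothing changing but the timestamps. When working with a capture like this, one might collapse the repetition by keying on the condition payload. A small sketch; the input filename journal.log is an assumption, and the line format is taken from the entries above:

```python
# Sketch: collapse the repeating "Node became not ready" heartbeat into
# distinct conditions, printing each (node, reason, message) only once.
import json
import re

PATTERN = re.compile(
    r'setters\.go:603\] "Node became not ready" node="(?P<node>[^"]+)" '
    r'condition=(?P<cond>\{.*?\})'
)

seen = set()
with open("journal.log", encoding="utf-8") as fh:  # assumed saved journal copy
    for line in fh:
        for m in PATTERN.finditer(line):
            cond = json.loads(m.group("cond"))  # the payload is valid JSON
            key = (m.group("node"), cond["reason"], cond["message"])
            if key not in seen:
                seen.add(key)
                print(cond["lastTransitionTime"], m.group("node"), cond["reason"])
```

Each distinct tuple prints once, which reduces minutes of heartbeat spam to the handful of state changes that matter.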
Jan 27 14:30:52 crc kubenswrapper[4698]: I0127 14:30:52.871280 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 14:30:52 crc kubenswrapper[4698]: I0127 14:30:52.871326 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 14:30:52 crc kubenswrapper[4698]: I0127 14:30:52.871343 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 14:30:52 crc kubenswrapper[4698]: I0127 14:30:52.871358 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 14:30:52 crc kubenswrapper[4698]: I0127 14:30:52.871369 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:52Z","lastTransitionTime":"2026-01-27T14:30:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 14:30:52 crc kubenswrapper[4698]: I0127 14:30:52.973380 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 14:30:52 crc kubenswrapper[4698]: I0127 14:30:52.973416 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 14:30:52 crc kubenswrapper[4698]: I0127 14:30:52.973427 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 14:30:52 crc kubenswrapper[4698]: I0127 14:30:52.973442 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 14:30:52 crc kubenswrapper[4698]: I0127 14:30:52.973452 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:52Z","lastTransitionTime":"2026-01-27T14:30:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 14:30:52 crc kubenswrapper[4698]: I0127 14:30:52.991887 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 27 14:30:52 crc kubenswrapper[4698]: E0127 14:30:52.992061 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 27 14:30:52 crc kubenswrapper[4698]: I0127 14:30:52.992127 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 27 14:30:52 crc kubenswrapper[4698]: E0127 14:30:52.992278 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 27 14:30:53 crc kubenswrapper[4698]: I0127 14:30:53.012384 4698 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-14 20:27:54.77515068 +0000 UTC
Jan 27 14:30:53 crc kubenswrapper[4698]: I0127 14:30:53.075867 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 14:30:53 crc kubenswrapper[4698]: I0127 14:30:53.075919 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 14:30:53 crc kubenswrapper[4698]: I0127 14:30:53.075934 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 14:30:53 crc kubenswrapper[4698]: I0127 14:30:53.075952 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 14:30:53 crc kubenswrapper[4698]: I0127 14:30:53.075980 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:53Z","lastTransitionTime":"2026-01-27T14:30:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 14:30:53 crc kubenswrapper[4698]: I0127 14:30:53.302040 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 14:30:53 crc kubenswrapper[4698]: I0127 14:30:53.302110 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 14:30:53 crc kubenswrapper[4698]: I0127 14:30:53.302119 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 14:30:53 crc kubenswrapper[4698]: I0127 14:30:53.302134 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 14:30:53 crc kubenswrapper[4698]: I0127 14:30:53.302147 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:53Z","lastTransitionTime":"2026-01-27T14:30:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"} Jan 27 14:30:53 crc kubenswrapper[4698]: E0127 14:30:53.315738 4698 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T14:30:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T14:30:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T14:30:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T14:30:53Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T14:30:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T14:30:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T14:30:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T14:30:53Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9d78a2be-22ac-47e6-a326-83038cc10e0c\\\",\\\"systemUUID\\\":\\\"3b71cf61-a3fa-4076-a23c-5d695e40fc0d\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:30:53Z is after 2025-08-24T17:21:41Z" Jan 27 14:30:53 crc kubenswrapper[4698]: I0127 14:30:53.319863 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:53 crc kubenswrapper[4698]: I0127 14:30:53.319900 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 27 14:30:53 crc kubenswrapper[4698]: I0127 14:30:53.319908 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:53 crc kubenswrapper[4698]: I0127 14:30:53.319922 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:53 crc kubenswrapper[4698]: I0127 14:30:53.319931 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:53Z","lastTransitionTime":"2026-01-27T14:30:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:30:53 crc kubenswrapper[4698]: E0127 14:30:53.333704 4698 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T14:30:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T14:30:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T14:30:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T14:30:53Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T14:30:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T14:30:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T14:30:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T14:30:53Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9d78a2be-22ac-47e6-a326-83038cc10e0c\\\",\\\"systemUUID\\\":\\\"3b71cf61-a3fa-4076-a23c-5d695e40fc0d\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:30:53Z is after 2025-08-24T17:21:41Z" Jan 27 14:30:53 crc kubenswrapper[4698]: I0127 14:30:53.337825 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:53 crc kubenswrapper[4698]: I0127 14:30:53.338103 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
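The status patch above is well-formed; it dies at the node.network-node-identity.openshift.io webhook, whose serving certificate expired on 2025-08-24 while the node believes it is 2026-01-27. A sketch of how one might reproduce the same verification failure from the node; the host and port come from the failed POST to https://127.0.0.1:9743 in the log, and everything else, including running Python there, is an assumption:

```python
# Sketch: attempt a verified TLS handshake against the webhook endpoint the
# kubelet could not reach. An expired serving certificate fails verification
# just as the kubelet's POST did.
import socket
import ssl

HOST, PORT = "127.0.0.1", 9743  # endpoint from the failed webhook call above

ctx = ssl.create_default_context()
# To match the kubelet's view exactly, point verification at the CA bundle it
# trusts, e.g. ctx.load_verify_locations("/path/to/cluster-ca.crt") (that path
# is hypothetical); with only the system store, the chain may be rejected as
# untrusted before expiry is even reported.
try:
    with socket.create_connection((HOST, PORT), timeout=5) as sock:
        with ctx.wrap_socket(sock, server_hostname=HOST) as tls:
            cert = tls.getpeercert()
            print("handshake ok; certificate notAfter:", cert.get("notAfter"))
except ssl.SSLCertVerificationError as err:
    # Mirrors the log's "x509: certificate has expired or is not yet valid"
    print("verification failed:", err.verify_message)
```

Until that certificate is rotated (or the node's clock agrees with its validity window), every node-status patch fails the same way: the retry that follows carries a byte-identical payload and hits the identical x509 error.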
event="NodeHasNoDiskPressure" Jan 27 14:30:53 crc kubenswrapper[4698]: I0127 14:30:53.338223 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:53 crc kubenswrapper[4698]: I0127 14:30:53.338313 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:53 crc kubenswrapper[4698]: I0127 14:30:53.338388 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:53Z","lastTransitionTime":"2026-01-27T14:30:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:30:53 crc kubenswrapper[4698]: E0127 14:30:53.353383 4698 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T14:30:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T14:30:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T14:30:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T14:30:53Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T14:30:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T14:30:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T14:30:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T14:30:53Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9d78a2be-22ac-47e6-a326-83038cc10e0c\\\",\\\"systemUUID\\\":\\\"3b71cf61-a3fa-4076-a23c-5d695e40fc0d\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:30:53Z is after 2025-08-24T17:21:41Z" Jan 27 14:30:53 crc kubenswrapper[4698]: I0127 14:30:53.358745 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:53 crc kubenswrapper[4698]: I0127 14:30:53.358784 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 27 14:30:53 crc kubenswrapper[4698]: I0127 14:30:53.358795 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:53 crc kubenswrapper[4698]: I0127 14:30:53.358812 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:53 crc kubenswrapper[4698]: I0127 14:30:53.358825 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:53Z","lastTransitionTime":"2026-01-27T14:30:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:30:53 crc kubenswrapper[4698]: E0127 14:30:53.372317 4698 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T14:30:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T14:30:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T14:30:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T14:30:53Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T14:30:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T14:30:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T14:30:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T14:30:53Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9d78a2be-22ac-47e6-a326-83038cc10e0c\\\",\\\"systemUUID\\\":\\\"3b71cf61-a3fa-4076-a23c-5d695e40fc0d\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:30:53Z is after 2025-08-24T17:21:41Z" Jan 27 14:30:53 crc kubenswrapper[4698]: I0127 14:30:53.376043 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:53 crc kubenswrapper[4698]: I0127 14:30:53.376069 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 27 14:30:53 crc kubenswrapper[4698]: I0127 14:30:53.376077 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:53 crc kubenswrapper[4698]: I0127 14:30:53.376089 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:53 crc kubenswrapper[4698]: I0127 14:30:53.376097 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:53Z","lastTransitionTime":"2026-01-27T14:30:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:30:53 crc kubenswrapper[4698]: E0127 14:30:53.387681 4698 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T14:30:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T14:30:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T14:30:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T14:30:53Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T14:30:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T14:30:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T14:30:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T14:30:53Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9d78a2be-22ac-47e6-a326-83038cc10e0c\\\",\\\"systemUUID\\\":\\\"3b71cf61-a3fa-4076-a23c-5d695e40fc0d\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:30:53Z is after 2025-08-24T17:21:41Z" Jan 27 14:30:53 crc kubenswrapper[4698]: E0127 14:30:53.388047 4698 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 27 14:30:53 crc kubenswrapper[4698]: I0127 14:30:53.389720 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
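All of the status-patch retries above fail on the same TLS handshake: the node.network-node-identity.openshift.io webhook at 127.0.0.1:9743 presents a serving certificate whose NotAfter (2025-08-24T17:21:41Z) is long past the node clock (2026-01-27), so the kubelet eventually gives up with "update node status exceeds retry count". The "x509: certificate has expired or is not yet valid" text is Go's standard certificate validity check. A minimal sketch of that check, with a hypothetical PEM path standing in for wherever the webhook's serving certificate actually lives on the node:

```go
// Sketch: reproduce the validity-window test behind the logged
// "x509: certificate has expired or is not yet valid" error using Go's
// crypto/x509, the package that emits that string during TLS verification.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func main() {
	data, err := os.ReadFile("/tmp/webhook-serving.crt") // hypothetical path
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		panic("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	now := time.Now()
	// This is the comparison that fails in the log: current time
	// 2026-01-27T14:30:53Z is after NotAfter 2025-08-24T17:21:41Z.
	if now.Before(cert.NotBefore) || now.After(cert.NotAfter) {
		fmt.Printf("certificate has expired or is not yet valid: current time %s is after %s\n",
			now.UTC().Format(time.RFC3339), cert.NotAfter.UTC().Format(time.RFC3339))
		return
	}
	fmt.Println("certificate valid until", cert.NotAfter.UTC())
}
```

Until that certificate is renewed (or the node clock corrected), every patch attempt will fail identically, which is why the payload never changes between retries.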
event="NodeHasSufficientMemory" Jan 27 14:30:53 crc kubenswrapper[4698]: I0127 14:30:53.389832 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:53 crc kubenswrapper[4698]: I0127 14:30:53.389923 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:53 crc kubenswrapper[4698]: I0127 14:30:53.390017 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:53 crc kubenswrapper[4698]: I0127 14:30:53.390143 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:53Z","lastTransitionTime":"2026-01-27T14:30:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:30:53 crc kubenswrapper[4698]: I0127 14:30:53.492710 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:53 crc kubenswrapper[4698]: I0127 14:30:53.492982 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:53 crc kubenswrapper[4698]: I0127 14:30:53.493127 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:53 crc kubenswrapper[4698]: I0127 14:30:53.493235 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:53 crc kubenswrapper[4698]: I0127 14:30:53.493325 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:53Z","lastTransitionTime":"2026-01-27T14:30:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:30:53 crc kubenswrapper[4698]: I0127 14:30:53.596108 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:53 crc kubenswrapper[4698]: I0127 14:30:53.596140 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:53 crc kubenswrapper[4698]: I0127 14:30:53.596148 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:53 crc kubenswrapper[4698]: I0127 14:30:53.596159 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:53 crc kubenswrapper[4698]: I0127 14:30:53.596167 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:53Z","lastTransitionTime":"2026-01-27T14:30:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:30:53 crc kubenswrapper[4698]: I0127 14:30:53.698882 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:53 crc kubenswrapper[4698]: I0127 14:30:53.698921 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:53 crc kubenswrapper[4698]: I0127 14:30:53.698933 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:53 crc kubenswrapper[4698]: I0127 14:30:53.698951 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:53 crc kubenswrapper[4698]: I0127 14:30:53.698962 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:53Z","lastTransitionTime":"2026-01-27T14:30:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:30:53 crc kubenswrapper[4698]: I0127 14:30:53.801081 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:53 crc kubenswrapper[4698]: I0127 14:30:53.801352 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:53 crc kubenswrapper[4698]: I0127 14:30:53.801571 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:53 crc kubenswrapper[4698]: I0127 14:30:53.801763 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:53 crc kubenswrapper[4698]: I0127 14:30:53.801939 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:53Z","lastTransitionTime":"2026-01-27T14:30:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:30:53 crc kubenswrapper[4698]: I0127 14:30:53.903926 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:53 crc kubenswrapper[4698]: I0127 14:30:53.903975 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:53 crc kubenswrapper[4698]: I0127 14:30:53.903989 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:53 crc kubenswrapper[4698]: I0127 14:30:53.904005 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:53 crc kubenswrapper[4698]: I0127 14:30:53.904018 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:53Z","lastTransitionTime":"2026-01-27T14:30:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 27 14:30:53 crc kubenswrapper[4698]: I0127 14:30:53.991748 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 27 14:30:53 crc kubenswrapper[4698]: E0127 14:30:53.991887 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 27 14:30:53 crc kubenswrapper[4698]: I0127 14:30:53.992015 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lpvsw"
Jan 27 14:30:53 crc kubenswrapper[4698]: E0127 14:30:53.992330 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lpvsw" podUID="621bb20d-2ffa-4e89-b522-d04b4764fcc3"
Jan 27 14:30:54 crc kubenswrapper[4698]: I0127 14:30:54.007452 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 14:30:54 crc kubenswrapper[4698]: I0127 14:30:54.007499 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 14:30:54 crc kubenswrapper[4698]: I0127 14:30:54.007508 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 14:30:54 crc kubenswrapper[4698]: I0127 14:30:54.007525 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
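The "No sandbox for pod" and "Error syncing pod" entries above follow directly from the NetworkReady=false condition: the runtime refuses to create pod sandboxes until a CNI configuration appears in /etc/kubernetes/cni/net.d/. As a rough sketch of the readiness test (the real check lives in the runtime's ocicni/libcni code; this only illustrates the failure mode named in the log):

```go
// Sketch of CNI config discovery: NetworkReady amounts to finding at least
// one usable config file in the directory the kubelet keeps naming.
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	netDir := "/etc/kubernetes/cni/net.d" // directory from the log
	entries, err := os.ReadDir(netDir)
	if err != nil {
		fmt.Println("cannot read CNI config dir:", err)
		return
	}
	var confs []string
	for _, e := range entries {
		switch filepath.Ext(e.Name()) {
		case ".conf", ".conflist", ".json":
			confs = append(confs, e.Name())
		}
	}
	if len(confs) == 0 {
		// The state this node is stuck in:
		fmt.Println("no CNI configuration file in", netDir, "- network plugin not ready")
		return
	}
	fmt.Println("CNI configs found:", confs)
}
```

Once the network provider (here OVN-Kubernetes, whose ovnkube-control-plane pod appears later in the log) writes its config into that directory, NetworkReady flips to true and sandbox creation resumes.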
Jan 27 14:30:54 crc kubenswrapper[4698]: I0127 14:30:54.007535 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:54Z","lastTransitionTime":"2026-01-27T14:30:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 14:30:54 crc kubenswrapper[4698]: I0127 14:30:54.012801 4698 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-11 16:30:23.813071274 +0000 UTC
Jan 27 14:30:54 crc kubenswrapper[4698]: I0127 14:30:54.110656 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 14:30:54 crc kubenswrapper[4698]: I0127 14:30:54.110717 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 14:30:54 crc kubenswrapper[4698]: I0127 14:30:54.110730 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 14:30:54 crc kubenswrapper[4698]: I0127 14:30:54.110752 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 14:30:54 crc kubenswrapper[4698]: I0127 14:30:54.110761 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:54Z","lastTransitionTime":"2026-01-27T14:30:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 14:30:54 crc kubenswrapper[4698]: I0127 14:30:54.213304 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 14:30:54 crc kubenswrapper[4698]: I0127 14:30:54.213344 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 14:30:54 crc kubenswrapper[4698]: I0127 14:30:54.213354 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 14:30:54 crc kubenswrapper[4698]: I0127 14:30:54.213366 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
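Note the certificate_manager line at 14:30:54.012801: the rotation deadline (2025-12-11) is already in the past relative to the node clock (2026-01-27), so the kubelet will attempt to rotate its kubelet-serving certificate immediately, and the next occurrence of this line (at 14:30:55 below) reports a different deadline, consistent with the deadline being re-drawn with fresh jitter on each pass. A sketch of how such a jittered deadline can be derived; the 70-90% window is an assumption for illustration, not the exact client-go policy:

```go
// Sketch of a jittered certificate rotation deadline: pick a random point in
// roughly the last portion of the certificate's validity window, so a fleet of
// kubelets does not rotate all at once. Bounds below are illustrative.
package main

import (
	"fmt"
	"math/rand"
	"time"
)

func rotationDeadline(notBefore, notAfter time.Time) time.Time {
	total := notAfter.Sub(notBefore)
	frac := 0.7 + 0.2*rand.Float64() // random fraction in [0.7, 0.9)
	return notBefore.Add(time.Duration(float64(total) * frac))
}

func main() {
	notAfter, _ := time.Parse(time.RFC3339, "2026-02-24T05:53:03Z") // from the log
	notBefore := notAfter.Add(-365 * 24 * time.Hour)                // assumed 1y lifetime
	deadline := rotationDeadline(notBefore, notAfter)
	now, _ := time.Parse(time.RFC3339, "2026-01-27T14:30:54Z") // node clock in the log
	fmt.Println("rotation deadline:", deadline.UTC())
	if now.After(deadline) {
		fmt.Println("deadline already passed; rotation would be attempted now")
	}
}
```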
Jan 27 14:30:54 crc kubenswrapper[4698]: I0127 14:30:54.213376 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:54Z","lastTransitionTime":"2026-01-27T14:30:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 14:30:54 crc kubenswrapper[4698]: I0127 14:30:54.315797 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 14:30:54 crc kubenswrapper[4698]: I0127 14:30:54.315834 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 14:30:54 crc kubenswrapper[4698]: I0127 14:30:54.315842 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 14:30:54 crc kubenswrapper[4698]: I0127 14:30:54.315902 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 14:30:54 crc kubenswrapper[4698]: I0127 14:30:54.315914 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:54Z","lastTransitionTime":"2026-01-27T14:30:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 14:30:54 crc kubenswrapper[4698]: I0127 14:30:54.417922 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 14:30:54 crc kubenswrapper[4698]: I0127 14:30:54.417961 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 14:30:54 crc kubenswrapper[4698]: I0127 14:30:54.417973 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 14:30:54 crc kubenswrapper[4698]: I0127 14:30:54.417986 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 14:30:54 crc kubenswrapper[4698]: I0127 14:30:54.417995 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:54Z","lastTransitionTime":"2026-01-27T14:30:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 14:30:54 crc kubenswrapper[4698]: I0127 14:30:54.520310 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 14:30:54 crc kubenswrapper[4698]: I0127 14:30:54.520349 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 14:30:54 crc kubenswrapper[4698]: I0127 14:30:54.520362 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 14:30:54 crc kubenswrapper[4698]: I0127 14:30:54.520377 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 14:30:54 crc kubenswrapper[4698]: I0127 14:30:54.520389 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:54Z","lastTransitionTime":"2026-01-27T14:30:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:30:54 crc kubenswrapper[4698]: I0127 14:30:54.623277 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:54 crc kubenswrapper[4698]: I0127 14:30:54.623321 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:54 crc kubenswrapper[4698]: I0127 14:30:54.623330 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:54 crc kubenswrapper[4698]: I0127 14:30:54.623346 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:54 crc kubenswrapper[4698]: I0127 14:30:54.623355 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:54Z","lastTransitionTime":"2026-01-27T14:30:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:30:54 crc kubenswrapper[4698]: I0127 14:30:54.725720 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:54 crc kubenswrapper[4698]: I0127 14:30:54.725752 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:54 crc kubenswrapper[4698]: I0127 14:30:54.725760 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:54 crc kubenswrapper[4698]: I0127 14:30:54.725773 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:54 crc kubenswrapper[4698]: I0127 14:30:54.725781 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:54Z","lastTransitionTime":"2026-01-27T14:30:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:30:54 crc kubenswrapper[4698]: I0127 14:30:54.828654 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:54 crc kubenswrapper[4698]: I0127 14:30:54.828689 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:54 crc kubenswrapper[4698]: I0127 14:30:54.828702 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:54 crc kubenswrapper[4698]: I0127 14:30:54.828716 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:54 crc kubenswrapper[4698]: I0127 14:30:54.828727 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:54Z","lastTransitionTime":"2026-01-27T14:30:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:30:54 crc kubenswrapper[4698]: I0127 14:30:54.931867 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:54 crc kubenswrapper[4698]: I0127 14:30:54.931907 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:54 crc kubenswrapper[4698]: I0127 14:30:54.931921 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:54 crc kubenswrapper[4698]: I0127 14:30:54.931935 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:54 crc kubenswrapper[4698]: I0127 14:30:54.931950 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:54Z","lastTransitionTime":"2026-01-27T14:30:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:30:54 crc kubenswrapper[4698]: I0127 14:30:54.991841 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 14:30:54 crc kubenswrapper[4698]: E0127 14:30:54.991977 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 14:30:54 crc kubenswrapper[4698]: I0127 14:30:54.991988 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 14:30:54 crc kubenswrapper[4698]: E0127 14:30:54.992270 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 14:30:55 crc kubenswrapper[4698]: I0127 14:30:55.013805 4698 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-15 05:07:52.036248929 +0000 UTC Jan 27 14:30:55 crc kubenswrapper[4698]: I0127 14:30:55.014792 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-g2kkn" podStartSLOduration=87.014776396 podStartE2EDuration="1m27.014776396s" podCreationTimestamp="2026-01-27 14:29:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:30:55.014150208 +0000 UTC m=+110.690927693" watchObservedRunningTime="2026-01-27 14:30:55.014776396 +0000 UTC m=+110.691553861" Jan 27 14:30:55 crc kubenswrapper[4698]: I0127 14:30:55.030865 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zpvcj" podStartSLOduration=87.030843795 podStartE2EDuration="1m27.030843795s" podCreationTimestamp="2026-01-27 14:29:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:30:55.030781343 +0000 UTC m=+110.707558828" watchObservedRunningTime="2026-01-27 14:30:55.030843795 +0000 UTC m=+110.707621260" Jan 27 14:30:55 crc kubenswrapper[4698]: I0127 14:30:55.037997 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:55 crc kubenswrapper[4698]: I0127 14:30:55.038039 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:55 crc kubenswrapper[4698]: I0127 14:30:55.038051 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:55 crc kubenswrapper[4698]: I0127 14:30:55.038069 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:55 crc kubenswrapper[4698]: I0127 14:30:55.038080 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:55Z","lastTransitionTime":"2026-01-27T14:30:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:30:55 crc kubenswrapper[4698]: I0127 14:30:55.069287 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=84.0692655 podStartE2EDuration="1m24.0692655s" podCreationTimestamp="2026-01-27 14:29:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:30:55.06892546 +0000 UTC m=+110.745702935" watchObservedRunningTime="2026-01-27 14:30:55.0692655 +0000 UTC m=+110.746042985" Jan 27 14:30:55 crc kubenswrapper[4698]: I0127 14:30:55.069440 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=90.069435085 podStartE2EDuration="1m30.069435085s" podCreationTimestamp="2026-01-27 14:29:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:30:55.054920258 +0000 UTC m=+110.731697773" watchObservedRunningTime="2026-01-27 14:30:55.069435085 +0000 UTC m=+110.746212570" Jan 27 14:30:55 crc kubenswrapper[4698]: I0127 14:30:55.101626 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-ndrd6" podStartSLOduration=87.101605525 podStartE2EDuration="1m27.101605525s" podCreationTimestamp="2026-01-27 14:29:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:30:55.085175875 +0000 UTC m=+110.761953360" watchObservedRunningTime="2026-01-27 14:30:55.101605525 +0000 UTC m=+110.778382990" Jan 27 14:30:55 crc kubenswrapper[4698]: I0127 14:30:55.102053 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-vg6nd" podStartSLOduration=87.102046717 podStartE2EDuration="1m27.102046717s" podCreationTimestamp="2026-01-27 14:29:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:30:55.101244074 +0000 UTC m=+110.778021549" watchObservedRunningTime="2026-01-27 14:30:55.102046717 +0000 UTC m=+110.778824182" Jan 27 14:30:55 crc kubenswrapper[4698]: I0127 14:30:55.141742 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:55 crc kubenswrapper[4698]: I0127 14:30:55.141778 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:55 crc kubenswrapper[4698]: I0127 14:30:55.141788 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:55 crc kubenswrapper[4698]: I0127 14:30:55.141804 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:55 crc kubenswrapper[4698]: I0127 14:30:55.141815 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:55Z","lastTransitionTime":"2026-01-27T14:30:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 27 14:30:55 crc kubenswrapper[4698]: I0127 14:30:55.165857 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-g9vj8" podStartSLOduration=87.165841122 podStartE2EDuration="1m27.165841122s" podCreationTimestamp="2026-01-27 14:29:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:30:55.165613365 +0000 UTC m=+110.842390830" watchObservedRunningTime="2026-01-27 14:30:55.165841122 +0000 UTC m=+110.842618577"
Jan 27 14:30:55 crc kubenswrapper[4698]: I0127 14:30:55.244180 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 14:30:55 crc kubenswrapper[4698]: I0127 14:30:55.244227 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 14:30:55 crc kubenswrapper[4698]: I0127 14:30:55.244240 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 14:30:55 crc kubenswrapper[4698]: I0127 14:30:55.244257 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 14:30:55 crc kubenswrapper[4698]: I0127 14:30:55.244269 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:55Z","lastTransitionTime":"2026-01-27T14:30:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 14:30:55 crc kubenswrapper[4698]: I0127 14:30:55.262272 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-flx9b" podStartSLOduration=87.262250148 podStartE2EDuration="1m27.262250148s" podCreationTimestamp="2026-01-27 14:29:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:30:55.230968593 +0000 UTC m=+110.907746068" watchObservedRunningTime="2026-01-27 14:30:55.262250148 +0000 UTC m=+110.939027613"
Jan 27 14:30:55 crc kubenswrapper[4698]: I0127 14:30:55.262603 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=57.262596199 podStartE2EDuration="57.262596199s" podCreationTimestamp="2026-01-27 14:29:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:30:55.2619504 +0000 UTC m=+110.938727855" watchObservedRunningTime="2026-01-27 14:30:55.262596199 +0000 UTC m=+110.939373664"
Jan 27 14:30:55 crc kubenswrapper[4698]: I0127 14:30:55.277962 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" podStartSLOduration=44.277942768 podStartE2EDuration="44.277942768s" podCreationTimestamp="2026-01-27 14:30:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:30:55.27768251 +0000 UTC m=+110.954459985" watchObservedRunningTime="2026-01-27 14:30:55.277942768 +0000 UTC m=+110.954720233"
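The pod_startup_latency_tracker entries encode simple duration arithmetic: podStartE2EDuration is the observation time minus podCreationTimestamp, and podStartSLOduration excludes image-pull time, which here is zero (firstStartedPulling/lastFinishedPulling are the zero time because the images were already on disk), so the two values coincide. For the multus-g2kkn entry earlier in the log: 14:30:55.014776396 minus 14:29:28 is 87.014776396 s, i.e. 1m27.014776396s. A sketch of the computation, using those timestamps from the log:

```go
// Sketch of the startup-duration arithmetic behind the tracker lines:
// E2E duration = observation time - podCreationTimestamp; with no image
// pull time, the SLO duration is the same value expressed in seconds.
package main

import (
	"fmt"
	"time"
)

func main() {
	created, _ := time.Parse(time.RFC3339, "2026-01-27T14:29:28Z") // podCreationTimestamp
	// Observation time taken from the watchObservedRunningTime in the log:
	observed, _ := time.Parse(time.RFC3339Nano, "2026-01-27T14:30:55.014776396Z")
	e2e := observed.Sub(created)
	fmt.Println("podStartE2EDuration:", e2e)                  // 1m27.014776396s
	fmt.Println("podStartSLOduration (s):", e2e.Seconds())    // 87.014776396
}
```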
Jan 27 14:30:55 crc kubenswrapper[4698]: I0127 14:30:55.319281 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" podStartSLOduration=18.319268804 podStartE2EDuration="18.319268804s" podCreationTimestamp="2026-01-27 14:30:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:30:55.318842172 +0000 UTC m=+110.995619637" watchObservedRunningTime="2026-01-27 14:30:55.319268804 +0000 UTC m=+110.996046269" Jan 27 14:30:55 crc kubenswrapper[4698]: I0127 14:30:55.346732 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:55 crc kubenswrapper[4698]: I0127 14:30:55.347038 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:55 crc kubenswrapper[4698]: I0127 14:30:55.347123 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:55 crc kubenswrapper[4698]: I0127 14:30:55.347218 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:55 crc kubenswrapper[4698]: I0127 14:30:55.347299 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:55Z","lastTransitionTime":"2026-01-27T14:30:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:30:55 crc kubenswrapper[4698]: I0127 14:30:55.450229 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:55 crc kubenswrapper[4698]: I0127 14:30:55.450286 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:55 crc kubenswrapper[4698]: I0127 14:30:55.450299 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:55 crc kubenswrapper[4698]: I0127 14:30:55.450378 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:55 crc kubenswrapper[4698]: I0127 14:30:55.450390 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:55Z","lastTransitionTime":"2026-01-27T14:30:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
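
Note on the repeated "Node became not ready" heartbeats: each entry embeds the node's Ready condition as JSON after condition=, so the status can be tracked mechanically rather than by eye. One way to pull that payload out of a journal line (illustrative sketch; the sample line is abridged from the entries above):

    import json, re

    line = ('Jan 27 14:30:55 crc kubenswrapper[4698]: I0127 14:30:55.450390 4698 '
            'setters.go:603] "Node became not ready" node="crc" condition='
            '{"type":"Ready","status":"False","reason":"KubeletNotReady",'
            '"message":"container runtime network not ready: ..."}')

    cond = json.loads(re.search(r'condition=(\{.*\})', line).group(1))
    print(cond["status"], cond["reason"])  # False KubeletNotReady
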
Jan 27 14:30:55 crc kubenswrapper[4698]: I0127 14:30:55.553321 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:55 crc kubenswrapper[4698]: I0127 14:30:55.553361 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:55 crc kubenswrapper[4698]: I0127 14:30:55.553374 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:55 crc kubenswrapper[4698]: I0127 14:30:55.553390 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:55 crc kubenswrapper[4698]: I0127 14:30:55.553407 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:55Z","lastTransitionTime":"2026-01-27T14:30:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:30:55 crc kubenswrapper[4698]: I0127 14:30:55.655572 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:55 crc kubenswrapper[4698]: I0127 14:30:55.655601 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:55 crc kubenswrapper[4698]: I0127 14:30:55.655609 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:55 crc kubenswrapper[4698]: I0127 14:30:55.655620 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:55 crc kubenswrapper[4698]: I0127 14:30:55.655629 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:55Z","lastTransitionTime":"2026-01-27T14:30:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:30:55 crc kubenswrapper[4698]: I0127 14:30:55.757550 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:55 crc kubenswrapper[4698]: I0127 14:30:55.757611 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:55 crc kubenswrapper[4698]: I0127 14:30:55.757658 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:55 crc kubenswrapper[4698]: I0127 14:30:55.757689 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:55 crc kubenswrapper[4698]: I0127 14:30:55.757705 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:55Z","lastTransitionTime":"2026-01-27T14:30:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:30:55 crc kubenswrapper[4698]: I0127 14:30:55.859938 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:55 crc kubenswrapper[4698]: I0127 14:30:55.859978 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:55 crc kubenswrapper[4698]: I0127 14:30:55.859987 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:55 crc kubenswrapper[4698]: I0127 14:30:55.860000 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:55 crc kubenswrapper[4698]: I0127 14:30:55.860009 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:55Z","lastTransitionTime":"2026-01-27T14:30:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:30:55 crc kubenswrapper[4698]: I0127 14:30:55.962195 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:55 crc kubenswrapper[4698]: I0127 14:30:55.962453 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:55 crc kubenswrapper[4698]: I0127 14:30:55.962533 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:55 crc kubenswrapper[4698]: I0127 14:30:55.962609 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:55 crc kubenswrapper[4698]: I0127 14:30:55.962711 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:55Z","lastTransitionTime":"2026-01-27T14:30:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:30:55 crc kubenswrapper[4698]: I0127 14:30:55.991725 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lpvsw" Jan 27 14:30:55 crc kubenswrapper[4698]: E0127 14:30:55.991874 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lpvsw" podUID="621bb20d-2ffa-4e89-b522-d04b4764fcc3" Jan 27 14:30:55 crc kubenswrapper[4698]: I0127 14:30:55.992482 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 14:30:55 crc kubenswrapper[4698]: E0127 14:30:55.992730 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 14:30:56 crc kubenswrapper[4698]: I0127 14:30:56.014829 4698 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-10 03:11:23.00658975 +0000 UTC Jan 27 14:30:56 crc kubenswrapper[4698]: I0127 14:30:56.065698 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:56 crc kubenswrapper[4698]: I0127 14:30:56.065751 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:56 crc kubenswrapper[4698]: I0127 14:30:56.065761 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:56 crc kubenswrapper[4698]: I0127 14:30:56.065777 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:56 crc kubenswrapper[4698]: I0127 14:30:56.065788 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:56Z","lastTransitionTime":"2026-01-27T14:30:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:30:56 crc kubenswrapper[4698]: I0127 14:30:56.168602 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:56 crc kubenswrapper[4698]: I0127 14:30:56.168657 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:56 crc kubenswrapper[4698]: I0127 14:30:56.168668 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:56 crc kubenswrapper[4698]: I0127 14:30:56.168685 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:56 crc kubenswrapper[4698]: I0127 14:30:56.168696 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:56Z","lastTransitionTime":"2026-01-27T14:30:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:30:56 crc kubenswrapper[4698]: I0127 14:30:56.271391 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:56 crc kubenswrapper[4698]: I0127 14:30:56.271695 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:56 crc kubenswrapper[4698]: I0127 14:30:56.271788 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:56 crc kubenswrapper[4698]: I0127 14:30:56.271903 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:56 crc kubenswrapper[4698]: I0127 14:30:56.272005 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:56Z","lastTransitionTime":"2026-01-27T14:30:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:30:56 crc kubenswrapper[4698]: I0127 14:30:56.374084 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:56 crc kubenswrapper[4698]: I0127 14:30:56.374133 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:56 crc kubenswrapper[4698]: I0127 14:30:56.374145 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:56 crc kubenswrapper[4698]: I0127 14:30:56.374164 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:56 crc kubenswrapper[4698]: I0127 14:30:56.374176 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:56Z","lastTransitionTime":"2026-01-27T14:30:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:30:56 crc kubenswrapper[4698]: I0127 14:30:56.476858 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:56 crc kubenswrapper[4698]: I0127 14:30:56.476903 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:56 crc kubenswrapper[4698]: I0127 14:30:56.476915 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:56 crc kubenswrapper[4698]: I0127 14:30:56.476932 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:56 crc kubenswrapper[4698]: I0127 14:30:56.476944 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:56Z","lastTransitionTime":"2026-01-27T14:30:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:30:56 crc kubenswrapper[4698]: I0127 14:30:56.578706 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:56 crc kubenswrapper[4698]: I0127 14:30:56.578741 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:56 crc kubenswrapper[4698]: I0127 14:30:56.578749 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:56 crc kubenswrapper[4698]: I0127 14:30:56.578762 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:56 crc kubenswrapper[4698]: I0127 14:30:56.578771 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:56Z","lastTransitionTime":"2026-01-27T14:30:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:30:56 crc kubenswrapper[4698]: I0127 14:30:56.681543 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:56 crc kubenswrapper[4698]: I0127 14:30:56.681597 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:56 crc kubenswrapper[4698]: I0127 14:30:56.681612 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:56 crc kubenswrapper[4698]: I0127 14:30:56.681633 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:56 crc kubenswrapper[4698]: I0127 14:30:56.681667 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:56Z","lastTransitionTime":"2026-01-27T14:30:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:30:56 crc kubenswrapper[4698]: I0127 14:30:56.784681 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:56 crc kubenswrapper[4698]: I0127 14:30:56.784756 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:56 crc kubenswrapper[4698]: I0127 14:30:56.784770 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:56 crc kubenswrapper[4698]: I0127 14:30:56.784797 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:56 crc kubenswrapper[4698]: I0127 14:30:56.784814 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:56Z","lastTransitionTime":"2026-01-27T14:30:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:30:56 crc kubenswrapper[4698]: I0127 14:30:56.888107 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:56 crc kubenswrapper[4698]: I0127 14:30:56.888140 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:56 crc kubenswrapper[4698]: I0127 14:30:56.888149 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:56 crc kubenswrapper[4698]: I0127 14:30:56.888161 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:56 crc kubenswrapper[4698]: I0127 14:30:56.888170 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:56Z","lastTransitionTime":"2026-01-27T14:30:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:30:56 crc kubenswrapper[4698]: I0127 14:30:56.991092 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 14:30:56 crc kubenswrapper[4698]: I0127 14:30:56.991209 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:56 crc kubenswrapper[4698]: I0127 14:30:56.991263 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:56 crc kubenswrapper[4698]: I0127 14:30:56.991273 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:56 crc kubenswrapper[4698]: I0127 14:30:56.991292 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:56 crc kubenswrapper[4698]: I0127 14:30:56.991302 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:56Z","lastTransitionTime":"2026-01-27T14:30:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:30:56 crc kubenswrapper[4698]: E0127 14:30:56.991737 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 14:30:56 crc kubenswrapper[4698]: I0127 14:30:56.991748 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 14:30:56 crc kubenswrapper[4698]: E0127 14:30:56.992548 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 14:30:56 crc kubenswrapper[4698]: I0127 14:30:56.993019 4698 scope.go:117] "RemoveContainer" containerID="de8c9c2d3589a38d1cf6232c63b2ef2b7e60ba5c9efe357cfa5169dfe0448239" Jan 27 14:30:56 crc kubenswrapper[4698]: E0127 14:30:56.993252 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-xmpm6_openshift-ovn-kubernetes(c59a9d01-79ce-42d9-a41d-39d7d73cb03e)\"" pod="openshift-ovn-kubernetes/ovnkube-node-xmpm6" podUID="c59a9d01-79ce-42d9-a41d-39d7d73cb03e" Jan 27 14:30:57 crc kubenswrapper[4698]: I0127 14:30:57.015758 4698 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-07 01:33:31.190811583 +0000 UTC Jan 27 14:30:57 crc kubenswrapper[4698]: I0127 14:30:57.094060 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:57 crc kubenswrapper[4698]: I0127 14:30:57.094102 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:57 crc kubenswrapper[4698]: I0127 14:30:57.094111 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:57 crc kubenswrapper[4698]: I0127 14:30:57.094125 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:57 crc kubenswrapper[4698]: I0127 14:30:57.094134 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:57Z","lastTransitionTime":"2026-01-27T14:30:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
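
Note on the CrashLoopBackOff entry above: the "back-off 40s" figure follows the kubelet's container restart back-off, which in the upstream defaults starts at 10s and doubles after each failed restart up to a 5-minute cap, so 40s indicates a third consecutive crash of ovnkube-controller (consistent with the CNI configuration it is expected to write still being absent). A sketch of that schedule (assumed upstream defaults, not read from this node's configuration):

    # Kubelet container-restart back-off: 10s initial delay, doubling, capped at 300s.
    base, cap = 10, 300
    delays, d = [], base
    while d < cap:
        delays.append(d)
        d *= 2
    delays.append(cap)
    print(delays)  # [10, 20, 40, 80, 160, 300] -> "back-off 40s" is the third step
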
Jan 27 14:30:57 crc kubenswrapper[4698]: I0127 14:30:57.196490 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:57 crc kubenswrapper[4698]: I0127 14:30:57.196534 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:57 crc kubenswrapper[4698]: I0127 14:30:57.196545 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:57 crc kubenswrapper[4698]: I0127 14:30:57.196560 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:57 crc kubenswrapper[4698]: I0127 14:30:57.196571 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:57Z","lastTransitionTime":"2026-01-27T14:30:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:30:57 crc kubenswrapper[4698]: I0127 14:30:57.299352 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:57 crc kubenswrapper[4698]: I0127 14:30:57.299389 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:57 crc kubenswrapper[4698]: I0127 14:30:57.299399 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:57 crc kubenswrapper[4698]: I0127 14:30:57.299415 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:57 crc kubenswrapper[4698]: I0127 14:30:57.299428 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:57Z","lastTransitionTime":"2026-01-27T14:30:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:30:57 crc kubenswrapper[4698]: I0127 14:30:57.401555 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:57 crc kubenswrapper[4698]: I0127 14:30:57.401595 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:57 crc kubenswrapper[4698]: I0127 14:30:57.401603 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:57 crc kubenswrapper[4698]: I0127 14:30:57.401616 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:57 crc kubenswrapper[4698]: I0127 14:30:57.401625 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:57Z","lastTransitionTime":"2026-01-27T14:30:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:30:57 crc kubenswrapper[4698]: I0127 14:30:57.503865 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:57 crc kubenswrapper[4698]: I0127 14:30:57.503903 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:57 crc kubenswrapper[4698]: I0127 14:30:57.503913 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:57 crc kubenswrapper[4698]: I0127 14:30:57.503928 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:57 crc kubenswrapper[4698]: I0127 14:30:57.503938 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:57Z","lastTransitionTime":"2026-01-27T14:30:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:30:57 crc kubenswrapper[4698]: I0127 14:30:57.606706 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:57 crc kubenswrapper[4698]: I0127 14:30:57.606755 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:57 crc kubenswrapper[4698]: I0127 14:30:57.606769 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:57 crc kubenswrapper[4698]: I0127 14:30:57.606788 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:57 crc kubenswrapper[4698]: I0127 14:30:57.606801 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:57Z","lastTransitionTime":"2026-01-27T14:30:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:30:57 crc kubenswrapper[4698]: I0127 14:30:57.709555 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:57 crc kubenswrapper[4698]: I0127 14:30:57.709613 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:57 crc kubenswrapper[4698]: I0127 14:30:57.709625 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:57 crc kubenswrapper[4698]: I0127 14:30:57.709658 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:57 crc kubenswrapper[4698]: I0127 14:30:57.709670 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:57Z","lastTransitionTime":"2026-01-27T14:30:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:30:57 crc kubenswrapper[4698]: I0127 14:30:57.812326 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:57 crc kubenswrapper[4698]: I0127 14:30:57.812387 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:57 crc kubenswrapper[4698]: I0127 14:30:57.812399 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:57 crc kubenswrapper[4698]: I0127 14:30:57.812416 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:57 crc kubenswrapper[4698]: I0127 14:30:57.812428 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:57Z","lastTransitionTime":"2026-01-27T14:30:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:30:57 crc kubenswrapper[4698]: I0127 14:30:57.915469 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:57 crc kubenswrapper[4698]: I0127 14:30:57.915514 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:57 crc kubenswrapper[4698]: I0127 14:30:57.915525 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:57 crc kubenswrapper[4698]: I0127 14:30:57.915541 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:57 crc kubenswrapper[4698]: I0127 14:30:57.915553 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:57Z","lastTransitionTime":"2026-01-27T14:30:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:30:57 crc kubenswrapper[4698]: I0127 14:30:57.991255 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lpvsw" Jan 27 14:30:57 crc kubenswrapper[4698]: I0127 14:30:57.991296 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 14:30:57 crc kubenswrapper[4698]: E0127 14:30:57.991413 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-lpvsw" podUID="621bb20d-2ffa-4e89-b522-d04b4764fcc3" Jan 27 14:30:57 crc kubenswrapper[4698]: E0127 14:30:57.991504 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 14:30:58 crc kubenswrapper[4698]: I0127 14:30:58.016010 4698 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-01 04:42:53.951633771 +0000 UTC Jan 27 14:30:58 crc kubenswrapper[4698]: I0127 14:30:58.017998 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:58 crc kubenswrapper[4698]: I0127 14:30:58.018023 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:58 crc kubenswrapper[4698]: I0127 14:30:58.018032 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:58 crc kubenswrapper[4698]: I0127 14:30:58.018046 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:58 crc kubenswrapper[4698]: I0127 14:30:58.018055 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:58Z","lastTransitionTime":"2026-01-27T14:30:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:30:58 crc kubenswrapper[4698]: I0127 14:30:58.120241 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:58 crc kubenswrapper[4698]: I0127 14:30:58.120279 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:58 crc kubenswrapper[4698]: I0127 14:30:58.120288 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:58 crc kubenswrapper[4698]: I0127 14:30:58.120301 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:58 crc kubenswrapper[4698]: I0127 14:30:58.120310 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:58Z","lastTransitionTime":"2026-01-27T14:30:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Jan 27 14:30:58 crc kubenswrapper[4698]: I0127 14:30:58.223002 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:58 crc kubenswrapper[4698]: I0127 14:30:58.223047 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:58 crc kubenswrapper[4698]: I0127 14:30:58.223059 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:58 crc kubenswrapper[4698]: I0127 14:30:58.223074 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:58 crc kubenswrapper[4698]: I0127 14:30:58.223087 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:58Z","lastTransitionTime":"2026-01-27T14:30:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:30:58 crc kubenswrapper[4698]: I0127 14:30:58.325462 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:58 crc kubenswrapper[4698]: I0127 14:30:58.325501 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:58 crc kubenswrapper[4698]: I0127 14:30:58.325511 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:58 crc kubenswrapper[4698]: I0127 14:30:58.325526 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:58 crc kubenswrapper[4698]: I0127 14:30:58.325537 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:58Z","lastTransitionTime":"2026-01-27T14:30:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:30:58 crc kubenswrapper[4698]: I0127 14:30:58.428077 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:58 crc kubenswrapper[4698]: I0127 14:30:58.428118 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:58 crc kubenswrapper[4698]: I0127 14:30:58.428137 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:58 crc kubenswrapper[4698]: I0127 14:30:58.428155 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:58 crc kubenswrapper[4698]: I0127 14:30:58.428165 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:58Z","lastTransitionTime":"2026-01-27T14:30:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:30:58 crc kubenswrapper[4698]: I0127 14:30:58.530494 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:58 crc kubenswrapper[4698]: I0127 14:30:58.530577 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:58 crc kubenswrapper[4698]: I0127 14:30:58.530610 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:58 crc kubenswrapper[4698]: I0127 14:30:58.530778 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:58 crc kubenswrapper[4698]: I0127 14:30:58.530815 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:58Z","lastTransitionTime":"2026-01-27T14:30:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:30:58 crc kubenswrapper[4698]: I0127 14:30:58.633659 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:58 crc kubenswrapper[4698]: I0127 14:30:58.633707 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:58 crc kubenswrapper[4698]: I0127 14:30:58.633720 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:58 crc kubenswrapper[4698]: I0127 14:30:58.633737 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:58 crc kubenswrapper[4698]: I0127 14:30:58.633749 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:58Z","lastTransitionTime":"2026-01-27T14:30:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:30:58 crc kubenswrapper[4698]: I0127 14:30:58.735806 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:58 crc kubenswrapper[4698]: I0127 14:30:58.735844 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:58 crc kubenswrapper[4698]: I0127 14:30:58.735855 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:58 crc kubenswrapper[4698]: I0127 14:30:58.735871 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:58 crc kubenswrapper[4698]: I0127 14:30:58.735882 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:58Z","lastTransitionTime":"2026-01-27T14:30:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:30:58 crc kubenswrapper[4698]: I0127 14:30:58.837736 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:58 crc kubenswrapper[4698]: I0127 14:30:58.837781 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:58 crc kubenswrapper[4698]: I0127 14:30:58.837792 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:58 crc kubenswrapper[4698]: I0127 14:30:58.837807 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:58 crc kubenswrapper[4698]: I0127 14:30:58.837817 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:58Z","lastTransitionTime":"2026-01-27T14:30:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:30:58 crc kubenswrapper[4698]: I0127 14:30:58.939854 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:58 crc kubenswrapper[4698]: I0127 14:30:58.939903 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:58 crc kubenswrapper[4698]: I0127 14:30:58.939911 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:58 crc kubenswrapper[4698]: I0127 14:30:58.939924 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:58 crc kubenswrapper[4698]: I0127 14:30:58.939934 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:58Z","lastTransitionTime":"2026-01-27T14:30:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:30:58 crc kubenswrapper[4698]: I0127 14:30:58.991613 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 14:30:58 crc kubenswrapper[4698]: I0127 14:30:58.991694 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 14:30:58 crc kubenswrapper[4698]: E0127 14:30:58.991805 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 14:30:58 crc kubenswrapper[4698]: E0127 14:30:58.991924 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 14:30:59 crc kubenswrapper[4698]: I0127 14:30:59.017050 4698 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-27 01:03:18.694866974 +0000 UTC Jan 27 14:30:59 crc kubenswrapper[4698]: I0127 14:30:59.042185 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:59 crc kubenswrapper[4698]: I0127 14:30:59.042243 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:59 crc kubenswrapper[4698]: I0127 14:30:59.042253 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:59 crc kubenswrapper[4698]: I0127 14:30:59.042270 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:59 crc kubenswrapper[4698]: I0127 14:30:59.042282 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:59Z","lastTransitionTime":"2026-01-27T14:30:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:30:59 crc kubenswrapper[4698]: I0127 14:30:59.144661 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:59 crc kubenswrapper[4698]: I0127 14:30:59.144693 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:59 crc kubenswrapper[4698]: I0127 14:30:59.144704 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:59 crc kubenswrapper[4698]: I0127 14:30:59.144720 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:59 crc kubenswrapper[4698]: I0127 14:30:59.144732 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:59Z","lastTransitionTime":"2026-01-27T14:30:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:30:59 crc kubenswrapper[4698]: I0127 14:30:59.247275 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:59 crc kubenswrapper[4698]: I0127 14:30:59.247314 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:59 crc kubenswrapper[4698]: I0127 14:30:59.247325 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:59 crc kubenswrapper[4698]: I0127 14:30:59.247341 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:59 crc kubenswrapper[4698]: I0127 14:30:59.247352 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:59Z","lastTransitionTime":"2026-01-27T14:30:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:30:59 crc kubenswrapper[4698]: I0127 14:30:59.349169 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:59 crc kubenswrapper[4698]: I0127 14:30:59.349214 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:59 crc kubenswrapper[4698]: I0127 14:30:59.349223 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:59 crc kubenswrapper[4698]: I0127 14:30:59.349237 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:59 crc kubenswrapper[4698]: I0127 14:30:59.349248 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:59Z","lastTransitionTime":"2026-01-27T14:30:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:30:59 crc kubenswrapper[4698]: I0127 14:30:59.451698 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:59 crc kubenswrapper[4698]: I0127 14:30:59.451733 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:59 crc kubenswrapper[4698]: I0127 14:30:59.451744 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:59 crc kubenswrapper[4698]: I0127 14:30:59.451759 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:59 crc kubenswrapper[4698]: I0127 14:30:59.451772 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:59Z","lastTransitionTime":"2026-01-27T14:30:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:30:59 crc kubenswrapper[4698]: I0127 14:30:59.554698 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:59 crc kubenswrapper[4698]: I0127 14:30:59.555030 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:59 crc kubenswrapper[4698]: I0127 14:30:59.555125 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:59 crc kubenswrapper[4698]: I0127 14:30:59.555209 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:59 crc kubenswrapper[4698]: I0127 14:30:59.555284 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:59Z","lastTransitionTime":"2026-01-27T14:30:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:30:59 crc kubenswrapper[4698]: I0127 14:30:59.657801 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:59 crc kubenswrapper[4698]: I0127 14:30:59.657843 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:59 crc kubenswrapper[4698]: I0127 14:30:59.657853 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:59 crc kubenswrapper[4698]: I0127 14:30:59.657868 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:59 crc kubenswrapper[4698]: I0127 14:30:59.657878 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:59Z","lastTransitionTime":"2026-01-27T14:30:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:30:59 crc kubenswrapper[4698]: I0127 14:30:59.760124 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:30:59 crc kubenswrapper[4698]: I0127 14:30:59.760175 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:30:59 crc kubenswrapper[4698]: I0127 14:30:59.760223 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:30:59 crc kubenswrapper[4698]: I0127 14:30:59.760247 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:30:59 crc kubenswrapper[4698]: I0127 14:30:59.760265 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:30:59Z","lastTransitionTime":"2026-01-27T14:30:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Jan 27 14:30:59 crc kubenswrapper[4698]: I0127 14:30:59.991855 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lpvsw"
Jan 27 14:30:59 crc kubenswrapper[4698]: E0127 14:30:59.991983 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lpvsw" podUID="621bb20d-2ffa-4e89-b522-d04b4764fcc3"
Jan 27 14:30:59 crc kubenswrapper[4698]: I0127 14:30:59.992166 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 27 14:30:59 crc kubenswrapper[4698]: E0127 14:30:59.992215 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 27 14:31:00 crc kubenswrapper[4698]: I0127 14:31:00.017934 4698 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-31 06:29:01.778169635 +0000 UTC
Jan 27 14:31:00 crc kubenswrapper[4698]: I0127 14:31:00.067881 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 14:31:00 crc kubenswrapper[4698]: I0127 14:31:00.067912 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 14:31:00 crc kubenswrapper[4698]: I0127 14:31:00.067921 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 14:31:00 crc kubenswrapper[4698]: I0127 14:31:00.067933 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 14:31:00 crc kubenswrapper[4698]: I0127 14:31:00.067941 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:31:00Z","lastTransitionTime":"2026-01-27T14:31:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 14:31:00 crc kubenswrapper[4698]: I0127 14:31:00.991110 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 27 14:31:00 crc kubenswrapper[4698]: E0127 14:31:00.991222 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 27 14:31:00 crc kubenswrapper[4698]: I0127 14:31:00.991256 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 27 14:31:00 crc kubenswrapper[4698]: E0127 14:31:00.991334 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 14:31:00 crc kubenswrapper[4698]: I0127 14:31:00.991529 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:31:00 crc kubenswrapper[4698]: I0127 14:31:00.991578 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:31:00 crc kubenswrapper[4698]: I0127 14:31:00.991591 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:31:00 crc kubenswrapper[4698]: I0127 14:31:00.991608 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:31:00 crc kubenswrapper[4698]: I0127 14:31:00.991620 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:31:00Z","lastTransitionTime":"2026-01-27T14:31:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:31:01 crc kubenswrapper[4698]: I0127 14:31:01.018155 4698 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-27 18:59:30.289064548 +0000 UTC Jan 27 14:31:01 crc kubenswrapper[4698]: I0127 14:31:01.094226 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:31:01 crc kubenswrapper[4698]: I0127 14:31:01.094256 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:31:01 crc kubenswrapper[4698]: I0127 14:31:01.094266 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:31:01 crc kubenswrapper[4698]: I0127 14:31:01.094279 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:31:01 crc kubenswrapper[4698]: I0127 14:31:01.094293 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:31:01Z","lastTransitionTime":"2026-01-27T14:31:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Jan 27 14:31:01 crc kubenswrapper[4698]: I0127 14:31:01.991584 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lpvsw"
Jan 27 14:31:01 crc kubenswrapper[4698]: I0127 14:31:01.991613 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 27 14:31:01 crc kubenswrapper[4698]: E0127 14:31:01.991864 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 14:31:01 crc kubenswrapper[4698]: E0127 14:31:01.991776 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lpvsw" podUID="621bb20d-2ffa-4e89-b522-d04b4764fcc3" Jan 27 14:31:02 crc kubenswrapper[4698]: I0127 14:31:02.017117 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:31:02 crc kubenswrapper[4698]: I0127 14:31:02.017196 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:31:02 crc kubenswrapper[4698]: I0127 14:31:02.017219 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:31:02 crc kubenswrapper[4698]: I0127 14:31:02.017251 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:31:02 crc kubenswrapper[4698]: I0127 14:31:02.017273 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:31:02Z","lastTransitionTime":"2026-01-27T14:31:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:31:02 crc kubenswrapper[4698]: I0127 14:31:02.019305 4698 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-17 21:10:44.398411991 +0000 UTC Jan 27 14:31:02 crc kubenswrapper[4698]: I0127 14:31:02.119826 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:31:02 crc kubenswrapper[4698]: I0127 14:31:02.119867 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:31:02 crc kubenswrapper[4698]: I0127 14:31:02.119881 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:31:02 crc kubenswrapper[4698]: I0127 14:31:02.119898 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:31:02 crc kubenswrapper[4698]: I0127 14:31:02.119910 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:31:02Z","lastTransitionTime":"2026-01-27T14:31:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Jan 27 14:31:02 crc kubenswrapper[4698]: I0127 14:31:02.991136 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 27 14:31:02 crc kubenswrapper[4698]: I0127 14:31:02.991202 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 27 14:31:02 crc kubenswrapper[4698]: E0127 14:31:02.991350 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 14:31:02 crc kubenswrapper[4698]: E0127 14:31:02.991402 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 14:31:03 crc kubenswrapper[4698]: I0127 14:31:03.019456 4698 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-04 15:57:57.12259686 +0000 UTC Jan 27 14:31:03 crc kubenswrapper[4698]: I0127 14:31:03.040977 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:31:03 crc kubenswrapper[4698]: I0127 14:31:03.041032 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:31:03 crc kubenswrapper[4698]: I0127 14:31:03.041045 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:31:03 crc kubenswrapper[4698]: I0127 14:31:03.041063 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:31:03 crc kubenswrapper[4698]: I0127 14:31:03.041080 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:31:03Z","lastTransitionTime":"2026-01-27T14:31:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:31:03 crc kubenswrapper[4698]: I0127 14:31:03.143411 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:31:03 crc kubenswrapper[4698]: I0127 14:31:03.143449 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:31:03 crc kubenswrapper[4698]: I0127 14:31:03.143464 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:31:03 crc kubenswrapper[4698]: I0127 14:31:03.143482 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:31:03 crc kubenswrapper[4698]: I0127 14:31:03.143492 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:31:03Z","lastTransitionTime":"2026-01-27T14:31:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Jan 27 14:31:03 crc kubenswrapper[4698]: I0127 14:31:03.493593 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-g2kkn_4e135f0c-0c36-44f4-afeb-06994affb352/kube-multus/1.log"
Jan 27 14:31:03 crc kubenswrapper[4698]: I0127 14:31:03.494187 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-g2kkn_4e135f0c-0c36-44f4-afeb-06994affb352/kube-multus/0.log"
Jan 27 14:31:03 crc kubenswrapper[4698]: I0127 14:31:03.494260 4698 generic.go:334] "Generic (PLEG): container finished" podID="4e135f0c-0c36-44f4-afeb-06994affb352" containerID="89ea9fe8283890c94741924f7a0d219ad6a55833e836517077a72a10f87427d9" exitCode=1
Jan 27 14:31:03 crc kubenswrapper[4698]: I0127 14:31:03.494297 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-g2kkn" event={"ID":"4e135f0c-0c36-44f4-afeb-06994affb352","Type":"ContainerDied","Data":"89ea9fe8283890c94741924f7a0d219ad6a55833e836517077a72a10f87427d9"}
Jan 27 14:31:03 crc kubenswrapper[4698]: I0127 14:31:03.494334 4698 scope.go:117] "RemoveContainer" containerID="6d17862aabd67b485023ca018499f7224e0bd91ec0aed6bbce5a187f319e3140"
Jan 27 14:31:03 crc kubenswrapper[4698]: I0127 14:31:03.494783 4698 scope.go:117] "RemoveContainer" containerID="89ea9fe8283890c94741924f7a0d219ad6a55833e836517077a72a10f87427d9"
Jan 27 14:31:03 crc kubenswrapper[4698]: E0127 14:31:03.494983 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-multus pod=multus-g2kkn_openshift-multus(4e135f0c-0c36-44f4-afeb-06994affb352)\"" pod="openshift-multus/multus-g2kkn" podUID="4e135f0c-0c36-44f4-afeb-06994affb352"
Jan 27 14:31:03 crc kubenswrapper[4698]: I0127 14:31:03.637108 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-5c965bbfc6-rpqt7"]
Jan 27 14:31:03 crc kubenswrapper[4698]: I0127 14:31:03.637696 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-rpqt7"
Jan 27 14:31:03 crc kubenswrapper[4698]: I0127 14:31:03.641057 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4"
Jan 27 14:31:03 crc kubenswrapper[4698]: I0127 14:31:03.642012 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt"
Jan 27 14:31:03 crc kubenswrapper[4698]: I0127 14:31:03.642123 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt"
Jan 27 14:31:03 crc kubenswrapper[4698]: I0127 14:31:03.642076 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert"
Jan 27 14:31:03 crc kubenswrapper[4698]: I0127 14:31:03.744098 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/d8c2a5e6-53db-4ace-9a18-12c6b7c3bcd7-service-ca\") pod \"cluster-version-operator-5c965bbfc6-rpqt7\" (UID: \"d8c2a5e6-53db-4ace-9a18-12c6b7c3bcd7\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-rpqt7"
Jan 27 14:31:03 crc kubenswrapper[4698]: I0127 14:31:03.744149 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/d8c2a5e6-53db-4ace-9a18-12c6b7c3bcd7-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-rpqt7\" (UID: \"d8c2a5e6-53db-4ace-9a18-12c6b7c3bcd7\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-rpqt7"
Jan 27 14:31:03 crc kubenswrapper[4698]: I0127 14:31:03.744170 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/d8c2a5e6-53db-4ace-9a18-12c6b7c3bcd7-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-rpqt7\" (UID: \"d8c2a5e6-53db-4ace-9a18-12c6b7c3bcd7\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-rpqt7"
\"kubernetes.io/host-path/d8c2a5e6-53db-4ace-9a18-12c6b7c3bcd7-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-rpqt7\" (UID: \"d8c2a5e6-53db-4ace-9a18-12c6b7c3bcd7\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-rpqt7" Jan 27 14:31:03 crc kubenswrapper[4698]: I0127 14:31:03.744199 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d8c2a5e6-53db-4ace-9a18-12c6b7c3bcd7-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-rpqt7\" (UID: \"d8c2a5e6-53db-4ace-9a18-12c6b7c3bcd7\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-rpqt7" Jan 27 14:31:03 crc kubenswrapper[4698]: I0127 14:31:03.744218 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d8c2a5e6-53db-4ace-9a18-12c6b7c3bcd7-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-rpqt7\" (UID: \"d8c2a5e6-53db-4ace-9a18-12c6b7c3bcd7\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-rpqt7" Jan 27 14:31:03 crc kubenswrapper[4698]: I0127 14:31:03.845438 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d8c2a5e6-53db-4ace-9a18-12c6b7c3bcd7-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-rpqt7\" (UID: \"d8c2a5e6-53db-4ace-9a18-12c6b7c3bcd7\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-rpqt7" Jan 27 14:31:03 crc kubenswrapper[4698]: I0127 14:31:03.845541 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/d8c2a5e6-53db-4ace-9a18-12c6b7c3bcd7-service-ca\") pod \"cluster-version-operator-5c965bbfc6-rpqt7\" (UID: \"d8c2a5e6-53db-4ace-9a18-12c6b7c3bcd7\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-rpqt7" Jan 27 14:31:03 crc kubenswrapper[4698]: I0127 14:31:03.845597 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/d8c2a5e6-53db-4ace-9a18-12c6b7c3bcd7-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-rpqt7\" (UID: \"d8c2a5e6-53db-4ace-9a18-12c6b7c3bcd7\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-rpqt7" Jan 27 14:31:03 crc kubenswrapper[4698]: I0127 14:31:03.845664 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/d8c2a5e6-53db-4ace-9a18-12c6b7c3bcd7-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-rpqt7\" (UID: \"d8c2a5e6-53db-4ace-9a18-12c6b7c3bcd7\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-rpqt7" Jan 27 14:31:03 crc kubenswrapper[4698]: I0127 14:31:03.845704 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d8c2a5e6-53db-4ace-9a18-12c6b7c3bcd7-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-rpqt7\" (UID: \"d8c2a5e6-53db-4ace-9a18-12c6b7c3bcd7\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-rpqt7" Jan 27 14:31:03 crc kubenswrapper[4698]: I0127 14:31:03.845866 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: 
\"kubernetes.io/host-path/d8c2a5e6-53db-4ace-9a18-12c6b7c3bcd7-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-rpqt7\" (UID: \"d8c2a5e6-53db-4ace-9a18-12c6b7c3bcd7\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-rpqt7" Jan 27 14:31:03 crc kubenswrapper[4698]: I0127 14:31:03.846110 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/d8c2a5e6-53db-4ace-9a18-12c6b7c3bcd7-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-rpqt7\" (UID: \"d8c2a5e6-53db-4ace-9a18-12c6b7c3bcd7\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-rpqt7" Jan 27 14:31:03 crc kubenswrapper[4698]: I0127 14:31:03.846699 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/d8c2a5e6-53db-4ace-9a18-12c6b7c3bcd7-service-ca\") pod \"cluster-version-operator-5c965bbfc6-rpqt7\" (UID: \"d8c2a5e6-53db-4ace-9a18-12c6b7c3bcd7\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-rpqt7" Jan 27 14:31:03 crc kubenswrapper[4698]: I0127 14:31:03.851750 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d8c2a5e6-53db-4ace-9a18-12c6b7c3bcd7-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-rpqt7\" (UID: \"d8c2a5e6-53db-4ace-9a18-12c6b7c3bcd7\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-rpqt7" Jan 27 14:31:03 crc kubenswrapper[4698]: I0127 14:31:03.864713 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d8c2a5e6-53db-4ace-9a18-12c6b7c3bcd7-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-rpqt7\" (UID: \"d8c2a5e6-53db-4ace-9a18-12c6b7c3bcd7\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-rpqt7" Jan 27 14:31:03 crc kubenswrapper[4698]: I0127 14:31:03.951174 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-rpqt7" Jan 27 14:31:03 crc kubenswrapper[4698]: I0127 14:31:03.992099 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lpvsw" Jan 27 14:31:03 crc kubenswrapper[4698]: E0127 14:31:03.992238 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lpvsw" podUID="621bb20d-2ffa-4e89-b522-d04b4764fcc3" Jan 27 14:31:03 crc kubenswrapper[4698]: I0127 14:31:03.992116 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 14:31:03 crc kubenswrapper[4698]: E0127 14:31:03.992373 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 14:31:04 crc kubenswrapper[4698]: I0127 14:31:04.019597 4698 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-09 06:18:25.046058352 +0000 UTC Jan 27 14:31:04 crc kubenswrapper[4698]: I0127 14:31:04.019689 4698 certificate_manager.go:356] kubernetes.io/kubelet-serving: Rotating certificates Jan 27 14:31:04 crc kubenswrapper[4698]: I0127 14:31:04.027663 4698 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Jan 27 14:31:04 crc kubenswrapper[4698]: I0127 14:31:04.500139 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-rpqt7" event={"ID":"d8c2a5e6-53db-4ace-9a18-12c6b7c3bcd7","Type":"ContainerStarted","Data":"a2495a48625bc5ad3f9ffd7fc1beb0b09df6a38db4abbefaa6af876313442e89"} Jan 27 14:31:04 crc kubenswrapper[4698]: I0127 14:31:04.500243 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-rpqt7" event={"ID":"d8c2a5e6-53db-4ace-9a18-12c6b7c3bcd7","Type":"ContainerStarted","Data":"1e6f569a8defff0268055e7fd9c6d33f433274aba092b07818ceaa2bdd76b844"} Jan 27 14:31:04 crc kubenswrapper[4698]: I0127 14:31:04.503284 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-g2kkn_4e135f0c-0c36-44f4-afeb-06994affb352/kube-multus/1.log" Jan 27 14:31:04 crc kubenswrapper[4698]: I0127 14:31:04.516723 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-rpqt7" podStartSLOduration=96.516700786 podStartE2EDuration="1m36.516700786s" podCreationTimestamp="2026-01-27 14:29:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:31:04.516169621 +0000 UTC m=+120.192947086" watchObservedRunningTime="2026-01-27 14:31:04.516700786 +0000 UTC m=+120.193478261" Jan 27 14:31:04 crc kubenswrapper[4698]: E0127 14:31:04.986450 4698 kubelet_node_status.go:497] "Node not becoming ready in time after startup" Jan 27 14:31:04 crc kubenswrapper[4698]: I0127 14:31:04.991123 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 14:31:04 crc kubenswrapper[4698]: I0127 14:31:04.991144 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 14:31:04 crc kubenswrapper[4698]: E0127 14:31:04.992682 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 14:31:04 crc kubenswrapper[4698]: E0127 14:31:04.992574 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 14:31:05 crc kubenswrapper[4698]: E0127 14:31:05.078785 4698 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 27 14:31:05 crc kubenswrapper[4698]: I0127 14:31:05.991819 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lpvsw" Jan 27 14:31:05 crc kubenswrapper[4698]: I0127 14:31:05.991835 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 14:31:05 crc kubenswrapper[4698]: E0127 14:31:05.991952 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lpvsw" podUID="621bb20d-2ffa-4e89-b522-d04b4764fcc3" Jan 27 14:31:05 crc kubenswrapper[4698]: E0127 14:31:05.992074 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 14:31:06 crc kubenswrapper[4698]: I0127 14:31:06.992130 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 14:31:06 crc kubenswrapper[4698]: I0127 14:31:06.992238 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 14:31:06 crc kubenswrapper[4698]: E0127 14:31:06.992257 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 14:31:06 crc kubenswrapper[4698]: E0127 14:31:06.992374 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 14:31:07 crc kubenswrapper[4698]: I0127 14:31:07.991815 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lpvsw" Jan 27 14:31:07 crc kubenswrapper[4698]: E0127 14:31:07.991991 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lpvsw" podUID="621bb20d-2ffa-4e89-b522-d04b4764fcc3" Jan 27 14:31:07 crc kubenswrapper[4698]: I0127 14:31:07.991835 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 14:31:07 crc kubenswrapper[4698]: E0127 14:31:07.992252 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 14:31:07 crc kubenswrapper[4698]: I0127 14:31:07.992937 4698 scope.go:117] "RemoveContainer" containerID="de8c9c2d3589a38d1cf6232c63b2ef2b7e60ba5c9efe357cfa5169dfe0448239" Jan 27 14:31:08 crc kubenswrapper[4698]: I0127 14:31:08.517785 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-xmpm6_c59a9d01-79ce-42d9-a41d-39d7d73cb03e/ovnkube-controller/3.log" Jan 27 14:31:08 crc kubenswrapper[4698]: I0127 14:31:08.521252 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xmpm6" event={"ID":"c59a9d01-79ce-42d9-a41d-39d7d73cb03e","Type":"ContainerStarted","Data":"b43e70d356eae3b4cf67c3dae3a670888693ad5e68e6fc6027a2e42f9a1618d7"} Jan 27 14:31:08 crc kubenswrapper[4698]: I0127 14:31:08.521682 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-xmpm6" Jan 27 14:31:08 crc kubenswrapper[4698]: I0127 14:31:08.552431 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-xmpm6" podStartSLOduration=100.552413537 podStartE2EDuration="1m40.552413537s" podCreationTimestamp="2026-01-27 14:29:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:31:08.551624675 +0000 UTC m=+124.228402160" watchObservedRunningTime="2026-01-27 14:31:08.552413537 +0000 UTC m=+124.229191002" Jan 27 14:31:08 crc kubenswrapper[4698]: I0127 14:31:08.771783 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-lpvsw"] Jan 27 14:31:08 crc kubenswrapper[4698]: I0127 14:31:08.771877 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-lpvsw" Jan 27 14:31:08 crc kubenswrapper[4698]: E0127 14:31:08.772030 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lpvsw" podUID="621bb20d-2ffa-4e89-b522-d04b4764fcc3" Jan 27 14:31:08 crc kubenswrapper[4698]: I0127 14:31:08.992138 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 14:31:08 crc kubenswrapper[4698]: I0127 14:31:08.992143 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 14:31:08 crc kubenswrapper[4698]: E0127 14:31:08.992341 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 14:31:08 crc kubenswrapper[4698]: E0127 14:31:08.992496 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 14:31:09 crc kubenswrapper[4698]: I0127 14:31:09.991561 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 14:31:09 crc kubenswrapper[4698]: I0127 14:31:09.991600 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lpvsw" Jan 27 14:31:09 crc kubenswrapper[4698]: E0127 14:31:09.991764 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 14:31:09 crc kubenswrapper[4698]: E0127 14:31:09.991861 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lpvsw" podUID="621bb20d-2ffa-4e89-b522-d04b4764fcc3" Jan 27 14:31:10 crc kubenswrapper[4698]: E0127 14:31:10.081119 4698 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" Jan 27 14:31:10 crc kubenswrapper[4698]: I0127 14:31:10.992246 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 14:31:10 crc kubenswrapper[4698]: I0127 14:31:10.992316 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 14:31:10 crc kubenswrapper[4698]: E0127 14:31:10.992423 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 14:31:10 crc kubenswrapper[4698]: E0127 14:31:10.992533 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 14:31:11 crc kubenswrapper[4698]: I0127 14:31:11.992063 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lpvsw" Jan 27 14:31:11 crc kubenswrapper[4698]: I0127 14:31:11.992132 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 14:31:11 crc kubenswrapper[4698]: E0127 14:31:11.992218 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lpvsw" podUID="621bb20d-2ffa-4e89-b522-d04b4764fcc3" Jan 27 14:31:11 crc kubenswrapper[4698]: E0127 14:31:11.992584 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 14:31:12 crc kubenswrapper[4698]: I0127 14:31:12.992886 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 14:31:12 crc kubenswrapper[4698]: E0127 14:31:12.993494 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 14:31:12 crc kubenswrapper[4698]: I0127 14:31:12.992885 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 14:31:12 crc kubenswrapper[4698]: E0127 14:31:12.993746 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 14:31:13 crc kubenswrapper[4698]: I0127 14:31:13.991769 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lpvsw" Jan 27 14:31:13 crc kubenswrapper[4698]: I0127 14:31:13.991883 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 14:31:13 crc kubenswrapper[4698]: E0127 14:31:13.991922 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lpvsw" podUID="621bb20d-2ffa-4e89-b522-d04b4764fcc3" Jan 27 14:31:13 crc kubenswrapper[4698]: E0127 14:31:13.992103 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 14:31:14 crc kubenswrapper[4698]: I0127 14:31:14.991847 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 14:31:14 crc kubenswrapper[4698]: I0127 14:31:14.993246 4698 scope.go:117] "RemoveContainer" containerID="89ea9fe8283890c94741924f7a0d219ad6a55833e836517077a72a10f87427d9" Jan 27 14:31:14 crc kubenswrapper[4698]: I0127 14:31:14.993290 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 14:31:14 crc kubenswrapper[4698]: E0127 14:31:14.993373 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 14:31:14 crc kubenswrapper[4698]: E0127 14:31:14.993538 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 14:31:15 crc kubenswrapper[4698]: E0127 14:31:15.081526 4698 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 27 14:31:15 crc kubenswrapper[4698]: I0127 14:31:15.542732 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-g2kkn_4e135f0c-0c36-44f4-afeb-06994affb352/kube-multus/1.log" Jan 27 14:31:15 crc kubenswrapper[4698]: I0127 14:31:15.542797 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-g2kkn" event={"ID":"4e135f0c-0c36-44f4-afeb-06994affb352","Type":"ContainerStarted","Data":"91a0ad962cfd3e8dd9cfc25516b20509e0465ea2c094eaabc513521dcb809be2"} Jan 27 14:31:15 crc kubenswrapper[4698]: I0127 14:31:15.992149 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lpvsw" Jan 27 14:31:15 crc kubenswrapper[4698]: I0127 14:31:15.992198 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 14:31:15 crc kubenswrapper[4698]: E0127 14:31:15.992297 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lpvsw" podUID="621bb20d-2ffa-4e89-b522-d04b4764fcc3" Jan 27 14:31:15 crc kubenswrapper[4698]: E0127 14:31:15.992425 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 14:31:16 crc kubenswrapper[4698]: I0127 14:31:16.991599 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 14:31:16 crc kubenswrapper[4698]: E0127 14:31:16.991826 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 14:31:16 crc kubenswrapper[4698]: I0127 14:31:16.991966 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 14:31:16 crc kubenswrapper[4698]: E0127 14:31:16.992216 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 14:31:17 crc kubenswrapper[4698]: I0127 14:31:17.991788 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lpvsw" Jan 27 14:31:17 crc kubenswrapper[4698]: I0127 14:31:17.991803 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 14:31:17 crc kubenswrapper[4698]: E0127 14:31:17.991929 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lpvsw" podUID="621bb20d-2ffa-4e89-b522-d04b4764fcc3" Jan 27 14:31:17 crc kubenswrapper[4698]: E0127 14:31:17.992047 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 14:31:18 crc kubenswrapper[4698]: I0127 14:31:18.991757 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 14:31:18 crc kubenswrapper[4698]: E0127 14:31:18.992266 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 14:31:18 crc kubenswrapper[4698]: I0127 14:31:18.992823 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 14:31:18 crc kubenswrapper[4698]: E0127 14:31:18.993340 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 14:31:19 crc kubenswrapper[4698]: I0127 14:31:19.991322 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lpvsw" Jan 27 14:31:19 crc kubenswrapper[4698]: I0127 14:31:19.991445 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 14:31:19 crc kubenswrapper[4698]: E0127 14:31:19.991512 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lpvsw" podUID="621bb20d-2ffa-4e89-b522-d04b4764fcc3" Jan 27 14:31:19 crc kubenswrapper[4698]: E0127 14:31:19.991706 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 14:31:20 crc kubenswrapper[4698]: I0127 14:31:20.992060 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 14:31:20 crc kubenswrapper[4698]: I0127 14:31:20.992125 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 14:31:20 crc kubenswrapper[4698]: I0127 14:31:20.995204 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Jan 27 14:31:20 crc kubenswrapper[4698]: I0127 14:31:20.995204 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Jan 27 14:31:21 crc kubenswrapper[4698]: I0127 14:31:21.991599 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lpvsw" Jan 27 14:31:21 crc kubenswrapper[4698]: I0127 14:31:21.991679 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 14:31:21 crc kubenswrapper[4698]: I0127 14:31:21.994441 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Jan 27 14:31:21 crc kubenswrapper[4698]: I0127 14:31:21.994499 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Jan 27 14:31:21 crc kubenswrapper[4698]: I0127 14:31:21.994673 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Jan 27 14:31:21 crc kubenswrapper[4698]: I0127 14:31:21.994676 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Jan 27 14:31:24 crc kubenswrapper[4698]: I0127 14:31:24.166245 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeReady" Jan 27 14:31:24 crc kubenswrapper[4698]: I0127 14:31:24.208397 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-z5f9l"] Jan 27 14:31:24 crc kubenswrapper[4698]: I0127 14:31:24.209113 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-hmtx5"] Jan 27 14:31:24 crc kubenswrapper[4698]: I0127 14:31:24.209454 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-hmtx5" Jan 27 14:31:24 crc kubenswrapper[4698]: I0127 14:31:24.209709 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-cj2hq"] Jan 27 14:31:24 crc kubenswrapper[4698]: I0127 14:31:24.209911 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-z5f9l" Jan 27 14:31:24 crc kubenswrapper[4698]: I0127 14:31:24.210254 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-cj2hq" Jan 27 14:31:24 crc kubenswrapper[4698]: I0127 14:31:24.214787 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-x7rj5"] Jan 27 14:31:24 crc kubenswrapper[4698]: I0127 14:31:24.215013 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Jan 27 14:31:24 crc kubenswrapper[4698]: I0127 14:31:24.215241 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-qtjcb"] Jan 27 14:31:24 crc kubenswrapper[4698]: I0127 14:31:24.215308 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Jan 27 14:31:24 crc kubenswrapper[4698]: I0127 14:31:24.220377 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Jan 27 14:31:24 crc kubenswrapper[4698]: I0127 14:31:24.220472 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-x7rj5" Jan 27 14:31:24 crc kubenswrapper[4698]: I0127 14:31:24.222190 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 27 14:31:24 crc kubenswrapper[4698]: I0127 14:31:24.225724 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 27 14:31:24 crc kubenswrapper[4698]: I0127 14:31:24.225740 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Jan 27 14:31:24 crc kubenswrapper[4698]: I0127 14:31:24.222476 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Jan 27 14:31:24 crc kubenswrapper[4698]: I0127 14:31:24.224844 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 27 14:31:24 crc kubenswrapper[4698]: I0127 14:31:24.225269 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Jan 27 14:31:24 crc kubenswrapper[4698]: I0127 14:31:24.225437 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Jan 27 14:31:24 crc kubenswrapper[4698]: I0127 14:31:24.225438 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Jan 27 14:31:24 crc kubenswrapper[4698]: I0127 14:31:24.237198 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Jan 27 14:31:24 crc kubenswrapper[4698]: I0127 14:31:24.237384 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Jan 27 14:31:24 crc kubenswrapper[4698]: I0127 14:31:24.238067 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-qtjcb" Jan 27 14:31:24 crc kubenswrapper[4698]: I0127 14:31:24.239707 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Jan 27 14:31:24 crc kubenswrapper[4698]: I0127 14:31:24.239767 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Jan 27 14:31:24 crc kubenswrapper[4698]: I0127 14:31:24.240101 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 27 14:31:24 crc kubenswrapper[4698]: I0127 14:31:24.240305 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Jan 27 14:31:24 crc kubenswrapper[4698]: I0127 14:31:24.240314 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Jan 27 14:31:24 crc kubenswrapper[4698]: I0127 14:31:24.240729 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-h864m"] Jan 27 14:31:24 crc kubenswrapper[4698]: I0127 14:31:24.241144 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-7954f5f757-bdrpp"] Jan 27 14:31:24 crc kubenswrapper[4698]: I0127 14:31:24.241412 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-qpzns"] Jan 27 14:31:24 crc kubenswrapper[4698]: I0127 14:31:24.241563 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Jan 27 14:31:24 crc kubenswrapper[4698]: I0127 14:31:24.241902 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-qpzns" Jan 27 14:31:24 crc kubenswrapper[4698]: I0127 14:31:24.242003 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 27 14:31:24 crc kubenswrapper[4698]: I0127 14:31:24.242217 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-h864m" Jan 27 14:31:24 crc kubenswrapper[4698]: I0127 14:31:24.242493 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-bdrpp" Jan 27 14:31:24 crc kubenswrapper[4698]: I0127 14:31:24.243601 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-56656f9798-cfwgg"] Jan 27 14:31:24 crc kubenswrapper[4698]: I0127 14:31:24.244158 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-cfwgg" Jan 27 14:31:24 crc kubenswrapper[4698]: I0127 14:31:24.245057 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 27 14:31:24 crc kubenswrapper[4698]: I0127 14:31:24.248834 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-6xmqh"] Jan 27 14:31:24 crc kubenswrapper[4698]: I0127 14:31:24.249679 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-m8slw"] Jan 27 14:31:24 crc kubenswrapper[4698]: I0127 14:31:24.250401 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-m8slw" Jan 27 14:31:24 crc kubenswrapper[4698]: I0127 14:31:24.251050 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-6xmqh" Jan 27 14:31:24 crc kubenswrapper[4698]: I0127 14:31:24.251217 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-d9s2k"] Jan 27 14:31:24 crc kubenswrapper[4698]: I0127 14:31:24.251837 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-6kdpx"] Jan 27 14:31:24 crc kubenswrapper[4698]: I0127 14:31:24.252001 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Jan 27 14:31:24 crc kubenswrapper[4698]: I0127 14:31:24.252330 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-6kdpx" Jan 27 14:31:24 crc kubenswrapper[4698]: I0127 14:31:24.252334 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-d9s2k" Jan 27 14:31:24 crc kubenswrapper[4698]: I0127 14:31:24.252702 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-p5fgd"] Jan 27 14:31:24 crc kubenswrapper[4698]: I0127 14:31:24.253321 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-p5fgd" Jan 27 14:31:24 crc kubenswrapper[4698]: I0127 14:31:24.253490 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-dhnh8"] Jan 27 14:31:24 crc kubenswrapper[4698]: I0127 14:31:24.254578 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Jan 27 14:31:24 crc kubenswrapper[4698]: I0127 14:31:24.255427 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Jan 27 14:31:24 crc kubenswrapper[4698]: I0127 14:31:24.255629 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Jan 27 14:31:24 crc kubenswrapper[4698]: I0127 14:31:24.255854 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-dhnh8" Jan 27 14:31:24 crc kubenswrapper[4698]: I0127 14:31:24.255908 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Jan 27 14:31:24 crc kubenswrapper[4698]: I0127 14:31:24.258917 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-f9d7485db-cvnrn"] Jan 27 14:31:24 crc kubenswrapper[4698]: I0127 14:31:24.259416 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Jan 27 14:31:24 crc kubenswrapper[4698]: I0127 14:31:24.259718 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Jan 27 14:31:24 crc kubenswrapper[4698]: I0127 14:31:24.259743 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-58897d9998-bz9jw"] Jan 27 14:31:24 crc kubenswrapper[4698]: I0127 14:31:24.259976 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Jan 27 14:31:24 crc kubenswrapper[4698]: I0127 14:31:24.261007 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Jan 27 14:31:24 crc kubenswrapper[4698]: I0127 14:31:24.261352 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Jan 27 14:31:24 crc kubenswrapper[4698]: I0127 14:31:24.261491 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-cvnrn" Jan 27 14:31:24 crc kubenswrapper[4698]: I0127 14:31:24.261549 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Jan 27 14:31:24 crc kubenswrapper[4698]: I0127 14:31:24.262148 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Jan 27 14:31:24 crc kubenswrapper[4698]: I0127 14:31:24.262252 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-bz9jw" Jan 27 14:31:24 crc kubenswrapper[4698]: I0127 14:31:24.262860 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Jan 27 14:31:24 crc kubenswrapper[4698]: I0127 14:31:24.263188 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-z5cmj"] Jan 27 14:31:24 crc kubenswrapper[4698]: I0127 14:31:24.263228 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Jan 27 14:31:24 crc kubenswrapper[4698]: I0127 14:31:24.263419 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Jan 27 14:31:24 crc kubenswrapper[4698]: I0127 14:31:24.263573 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Jan 27 14:31:24 crc kubenswrapper[4698]: I0127 14:31:24.263626 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Jan 27 14:31:24 crc kubenswrapper[4698]: I0127 14:31:24.263579 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Jan 27 14:31:24 crc kubenswrapper[4698]: I0127 14:31:24.263850 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Jan 27 14:31:24 crc kubenswrapper[4698]: I0127 14:31:24.263912 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Jan 27 14:31:24 crc kubenswrapper[4698]: I0127 14:31:24.264025 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-z5cmj" Jan 27 14:31:24 crc kubenswrapper[4698]: I0127 14:31:24.264060 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Jan 27 14:31:24 crc kubenswrapper[4698]: I0127 14:31:24.264303 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Jan 27 14:31:24 crc kubenswrapper[4698]: I0127 14:31:24.264390 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Jan 27 14:31:24 crc kubenswrapper[4698]: I0127 14:31:24.265902 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 27 14:31:24 crc kubenswrapper[4698]: I0127 14:31:24.271680 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Jan 27 14:31:24 crc kubenswrapper[4698]: I0127 14:31:24.271977 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Jan 27 14:31:24 crc kubenswrapper[4698]: I0127 14:31:24.272258 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Jan 27 14:31:24 crc kubenswrapper[4698]: I0127 14:31:24.272415 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Jan 27 14:31:24 crc kubenswrapper[4698]: I0127 14:31:24.272583 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Jan 27 14:31:24 crc kubenswrapper[4698]: I0127 14:31:24.272868 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Jan 27 14:31:24 crc kubenswrapper[4698]: I0127 14:31:24.273055 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Jan 27 14:31:24 crc kubenswrapper[4698]: I0127 14:31:24.273212 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Jan 27 14:31:24 crc kubenswrapper[4698]: I0127 14:31:24.273612 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Jan 27 14:31:24 crc kubenswrapper[4698]: I0127 14:31:24.273965 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Jan 27 14:31:24 crc kubenswrapper[4698]: I0127 14:31:24.290143 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Jan 27 14:31:24 crc kubenswrapper[4698]: I0127 14:31:24.291113 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Jan 27 14:31:24 crc kubenswrapper[4698]: I0127 14:31:24.291350 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Jan 27 14:31:24 crc kubenswrapper[4698]: I0127 14:31:24.291573 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Jan 27 14:31:24 crc kubenswrapper[4698]: I0127 14:31:24.291830 4698 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Jan 27 14:31:24 crc kubenswrapper[4698]: I0127 14:31:24.292370 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Jan 27 14:31:24 crc kubenswrapper[4698]: I0127 14:31:24.293059 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Jan 27 14:31:24 crc kubenswrapper[4698]: I0127 14:31:24.293725 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-vz5fp"] Jan 27 14:31:24 crc kubenswrapper[4698]: I0127 14:31:24.294599 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-qb472"] Jan 27 14:31:24 crc kubenswrapper[4698]: I0127 14:31:24.295442 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-d7vnt"] Jan 27 14:31:24 crc kubenswrapper[4698]: I0127 14:31:24.295821 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Jan 27 14:31:24 crc kubenswrapper[4698]: I0127 14:31:24.295916 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 27 14:31:24 crc kubenswrapper[4698]: I0127 14:31:24.296081 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-d7vnt" Jan 27 14:31:24 crc kubenswrapper[4698]: I0127 14:31:24.296294 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 27 14:31:24 crc kubenswrapper[4698]: I0127 14:31:24.296421 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-vz5fp"
Jan 27 14:31:24 crc kubenswrapper[4698]: I0127 14:31:24.296540 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt"
Jan 27 14:31:24 crc kubenswrapper[4698]: I0127 14:31:24.297356 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config"
Jan 27 14:31:24 crc kubenswrapper[4698]: I0127 14:31:24.297657 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca"
Jan 27 14:31:24 crc kubenswrapper[4698]: I0127 14:31:24.298218 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt"
Jan 27 14:31:24 crc kubenswrapper[4698]: I0127 14:31:24.298293 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert"
Jan 27 14:31:24 crc kubenswrapper[4698]: I0127 14:31:24.298420 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config"
Jan 27 14:31:24 crc kubenswrapper[4698]: I0127 14:31:24.298766 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt"
Jan 27 14:31:24 crc kubenswrapper[4698]: I0127 14:31:24.298809 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj"
Jan 27 14:31:24 crc kubenswrapper[4698]: I0127 14:31:24.298844 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt"
Jan 27 14:31:24 crc kubenswrapper[4698]: I0127 14:31:24.301940 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-qb472"
Jan 27 14:31:24 crc kubenswrapper[4698]: I0127 14:31:24.302819 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt"
Jan 27 14:31:24 crc kubenswrapper[4698]: I0127 14:31:24.303161 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert"
Jan 27 14:31:24 crc kubenswrapper[4698]: I0127 14:31:24.305251 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4"
Jan 27 14:31:24 crc kubenswrapper[4698]: I0127 14:31:24.309873 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls"
Jan 27 14:31:24 crc kubenswrapper[4698]: I0127 14:31:24.311951 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert"
Jan 27 14:31:24 crc kubenswrapper[4698]: I0127 14:31:24.312187 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt"
Jan 27 14:31:24 crc kubenswrapper[4698]: I0127 14:31:24.312320 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls"
Jan 27 14:31:24 crc kubenswrapper[4698]: I0127 14:31:24.312534 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls"
Jan 27 14:31:24 crc kubenswrapper[4698]: I0127 14:31:24.312653 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client"
Jan 27 14:31:24 crc kubenswrapper[4698]: I0127 14:31:24.312831 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert"
Jan 27 14:31:24 crc kubenswrapper[4698]: I0127 14:31:24.314126 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5"
Jan 27 14:31:24 crc kubenswrapper[4698]: I0127 14:31:24.314367 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn"
Jan 27 14:31:24 crc kubenswrapper[4698]: I0127 14:31:24.315378 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w"
Jan 27 14:31:24 crc kubenswrapper[4698]: I0127 14:31:24.324187 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2"
Jan 27 14:31:24 crc kubenswrapper[4698]: I0127 14:31:24.324815 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx"
Jan 27 14:31:24 crc kubenswrapper[4698]: I0127 14:31:24.324955 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z"
Jan 27 14:31:24 crc kubenswrapper[4698]: I0127 14:31:24.325340 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls"
Jan 27 14:31:24 crc kubenswrapper[4698]: I0127 14:31:24.325616 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt"
Jan 27 14:31:24 crc kubenswrapper[4698]: I0127 14:31:24.326336 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls"
Jan 27 14:31:24 crc kubenswrapper[4698]: I0127 14:31:24.326893 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt"
Jan 27 14:31:24 crc kubenswrapper[4698]: I0127 14:31:24.327072 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-mjdln"]
Jan 27 14:31:24 crc kubenswrapper[4698]: I0127 14:31:24.327734 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-mjdln"
Jan 27 14:31:24 crc kubenswrapper[4698]: I0127 14:31:24.328086 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-5444994796-8bg4r"]
Jan 27 14:31:24 crc kubenswrapper[4698]: I0127 14:31:24.329819 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert"
Jan 27 14:31:24 crc kubenswrapper[4698]: I0127 14:31:24.333285 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-h864m"]
Jan 27 14:31:24 crc kubenswrapper[4698]: I0127 14:31:24.333438 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-8bg4r"
Jan 27 14:31:24 crc kubenswrapper[4698]: I0127 14:31:24.334653 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca"
Jan 27 14:31:24 crc kubenswrapper[4698]: I0127 14:31:24.340740 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-z5f9l"]
Jan 27 14:31:24 crc kubenswrapper[4698]: I0127 14:31:24.340835 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-hmtx5"]
Jan 27 14:31:24 crc kubenswrapper[4698]: I0127 14:31:24.345022 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6zxfx\" (UniqueName: \"kubernetes.io/projected/7a699460-e5aa-401d-b2c4-003604099924-kube-api-access-6zxfx\") pod \"oauth-openshift-558db77b4-x7rj5\" (UID: \"7a699460-e5aa-401d-b2c4-003604099924\") " pod="openshift-authentication/oauth-openshift-558db77b4-x7rj5"
Jan 27 14:31:24 crc kubenswrapper[4698]: I0127 14:31:24.345068 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dfe15858-3d24-48fb-b534-a4b484e027e3-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-hmtx5\" (UID: \"dfe15858-3d24-48fb-b534-a4b484e027e3\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-hmtx5"
Jan 27 14:31:24 crc kubenswrapper[4698]: I0127 14:31:24.345097 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/608093bb-ab9f-47bf-bf66-938266244574-encryption-config\") pod \"apiserver-7bbb656c7d-qtjcb\" (UID: \"608093bb-ab9f-47bf-bf66-938266244574\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-qtjcb"
Jan 27 14:31:24 crc kubenswrapper[4698]: I0127 14:31:24.345122 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/f57848ff-da41-4c6a-9586-c57676b73c90-node-pullsecrets\") pod \"apiserver-76f77b778f-z5f9l\" (UID: \"f57848ff-da41-4c6a-9586-c57676b73c90\") " pod="openshift-apiserver/apiserver-76f77b778f-z5f9l"
\"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/f57848ff-da41-4c6a-9586-c57676b73c90-node-pullsecrets\") pod \"apiserver-76f77b778f-z5f9l\" (UID: \"f57848ff-da41-4c6a-9586-c57676b73c90\") " pod="openshift-apiserver/apiserver-76f77b778f-z5f9l" Jan 27 14:31:24 crc kubenswrapper[4698]: I0127 14:31:24.345149 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/7a699460-e5aa-401d-b2c4-003604099924-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-x7rj5\" (UID: \"7a699460-e5aa-401d-b2c4-003604099924\") " pod="openshift-authentication/oauth-openshift-558db77b4-x7rj5" Jan 27 14:31:24 crc kubenswrapper[4698]: I0127 14:31:24.345174 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/7a699460-e5aa-401d-b2c4-003604099924-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-x7rj5\" (UID: \"7a699460-e5aa-401d-b2c4-003604099924\") " pod="openshift-authentication/oauth-openshift-558db77b4-x7rj5" Jan 27 14:31:24 crc kubenswrapper[4698]: I0127 14:31:24.345197 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f57848ff-da41-4c6a-9586-c57676b73c90-serving-cert\") pod \"apiserver-76f77b778f-z5f9l\" (UID: \"f57848ff-da41-4c6a-9586-c57676b73c90\") " pod="openshift-apiserver/apiserver-76f77b778f-z5f9l" Jan 27 14:31:24 crc kubenswrapper[4698]: I0127 14:31:24.345220 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/77a18531-ffc7-42d9-bba7-78d72b032c39-config\") pod \"machine-api-operator-5694c8668f-qpzns\" (UID: \"77a18531-ffc7-42d9-bba7-78d72b032c39\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-qpzns" Jan 27 14:31:24 crc kubenswrapper[4698]: I0127 14:31:24.345257 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/7a699460-e5aa-401d-b2c4-003604099924-audit-policies\") pod \"oauth-openshift-558db77b4-x7rj5\" (UID: \"7a699460-e5aa-401d-b2c4-003604099924\") " pod="openshift-authentication/oauth-openshift-558db77b4-x7rj5" Jan 27 14:31:24 crc kubenswrapper[4698]: I0127 14:31:24.345278 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/f57848ff-da41-4c6a-9586-c57676b73c90-etcd-client\") pod \"apiserver-76f77b778f-z5f9l\" (UID: \"f57848ff-da41-4c6a-9586-c57676b73c90\") " pod="openshift-apiserver/apiserver-76f77b778f-z5f9l" Jan 27 14:31:24 crc kubenswrapper[4698]: I0127 14:31:24.345302 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/7a699460-e5aa-401d-b2c4-003604099924-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-x7rj5\" (UID: \"7a699460-e5aa-401d-b2c4-003604099924\") " pod="openshift-authentication/oauth-openshift-558db77b4-x7rj5" Jan 27 14:31:24 crc kubenswrapper[4698]: I0127 14:31:24.345323 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: 
\"kubernetes.io/configmap/f57848ff-da41-4c6a-9586-c57676b73c90-image-import-ca\") pod \"apiserver-76f77b778f-z5f9l\" (UID: \"f57848ff-da41-4c6a-9586-c57676b73c90\") " pod="openshift-apiserver/apiserver-76f77b778f-z5f9l" Jan 27 14:31:24 crc kubenswrapper[4698]: I0127 14:31:24.345346 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/7a699460-e5aa-401d-b2c4-003604099924-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-x7rj5\" (UID: \"7a699460-e5aa-401d-b2c4-003604099924\") " pod="openshift-authentication/oauth-openshift-558db77b4-x7rj5" Jan 27 14:31:24 crc kubenswrapper[4698]: I0127 14:31:24.345377 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/7a699460-e5aa-401d-b2c4-003604099924-audit-dir\") pod \"oauth-openshift-558db77b4-x7rj5\" (UID: \"7a699460-e5aa-401d-b2c4-003604099924\") " pod="openshift-authentication/oauth-openshift-558db77b4-x7rj5" Jan 27 14:31:24 crc kubenswrapper[4698]: I0127 14:31:24.345399 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/7a699460-e5aa-401d-b2c4-003604099924-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-x7rj5\" (UID: \"7a699460-e5aa-401d-b2c4-003604099924\") " pod="openshift-authentication/oauth-openshift-558db77b4-x7rj5" Jan 27 14:31:24 crc kubenswrapper[4698]: I0127 14:31:24.345423 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/608093bb-ab9f-47bf-bf66-938266244574-serving-cert\") pod \"apiserver-7bbb656c7d-qtjcb\" (UID: \"608093bb-ab9f-47bf-bf66-938266244574\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-qtjcb" Jan 27 14:31:24 crc kubenswrapper[4698]: I0127 14:31:24.345446 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2zptc\" (UniqueName: \"kubernetes.io/projected/64b274f6-5293-4c0e-a51a-dca8518c5a40-kube-api-access-2zptc\") pod \"downloads-7954f5f757-bdrpp\" (UID: \"64b274f6-5293-4c0e-a51a-dca8518c5a40\") " pod="openshift-console/downloads-7954f5f757-bdrpp" Jan 27 14:31:24 crc kubenswrapper[4698]: I0127 14:31:24.345470 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/77a18531-ffc7-42d9-bba7-78d72b032c39-images\") pod \"machine-api-operator-5694c8668f-qpzns\" (UID: \"77a18531-ffc7-42d9-bba7-78d72b032c39\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-qpzns" Jan 27 14:31:24 crc kubenswrapper[4698]: I0127 14:31:24.345491 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dfe15858-3d24-48fb-b534-a4b484e027e3-config\") pod \"openshift-apiserver-operator-796bbdcf4f-hmtx5\" (UID: \"dfe15858-3d24-48fb-b534-a4b484e027e3\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-hmtx5" Jan 27 14:31:24 crc kubenswrapper[4698]: I0127 14:31:24.345527 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: 
\"kubernetes.io/configmap/608093bb-ab9f-47bf-bf66-938266244574-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-qtjcb\" (UID: \"608093bb-ab9f-47bf-bf66-938266244574\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-qtjcb" Jan 27 14:31:24 crc kubenswrapper[4698]: I0127 14:31:24.345550 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9d8c543e-8120-4d1f-b76c-5be16d35bf1d-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-h864m\" (UID: \"9d8c543e-8120-4d1f-b76c-5be16d35bf1d\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-h864m" Jan 27 14:31:24 crc kubenswrapper[4698]: I0127 14:31:24.345571 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f57848ff-da41-4c6a-9586-c57676b73c90-trusted-ca-bundle\") pod \"apiserver-76f77b778f-z5f9l\" (UID: \"f57848ff-da41-4c6a-9586-c57676b73c90\") " pod="openshift-apiserver/apiserver-76f77b778f-z5f9l" Jan 27 14:31:24 crc kubenswrapper[4698]: I0127 14:31:24.345591 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Jan 27 14:31:24 crc kubenswrapper[4698]: I0127 14:31:24.345594 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f57848ff-da41-4c6a-9586-c57676b73c90-config\") pod \"apiserver-76f77b778f-z5f9l\" (UID: \"f57848ff-da41-4c6a-9586-c57676b73c90\") " pod="openshift-apiserver/apiserver-76f77b778f-z5f9l" Jan 27 14:31:24 crc kubenswrapper[4698]: I0127 14:31:24.345732 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/f57848ff-da41-4c6a-9586-c57676b73c90-audit\") pod \"apiserver-76f77b778f-z5f9l\" (UID: \"f57848ff-da41-4c6a-9586-c57676b73c90\") " pod="openshift-apiserver/apiserver-76f77b778f-z5f9l" Jan 27 14:31:24 crc kubenswrapper[4698]: I0127 14:31:24.345750 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3d3d75e2-1fec-4458-9cb7-3472250b0b49-config\") pod \"controller-manager-879f6c89f-cj2hq\" (UID: \"3d3d75e2-1fec-4458-9cb7-3472250b0b49\") " pod="openshift-controller-manager/controller-manager-879f6c89f-cj2hq" Jan 27 14:31:24 crc kubenswrapper[4698]: I0127 14:31:24.345767 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/7a699460-e5aa-401d-b2c4-003604099924-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-x7rj5\" (UID: \"7a699460-e5aa-401d-b2c4-003604099924\") " pod="openshift-authentication/oauth-openshift-558db77b4-x7rj5" Jan 27 14:31:24 crc kubenswrapper[4698]: I0127 14:31:24.345783 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/7a699460-e5aa-401d-b2c4-003604099924-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-x7rj5\" (UID: \"7a699460-e5aa-401d-b2c4-003604099924\") " pod="openshift-authentication/oauth-openshift-558db77b4-x7rj5" Jan 27 14:31:24 crc kubenswrapper[4698]: I0127 14:31:24.345798 4698 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/3d3d75e2-1fec-4458-9cb7-3472250b0b49-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-cj2hq\" (UID: \"3d3d75e2-1fec-4458-9cb7-3472250b0b49\") " pod="openshift-controller-manager/controller-manager-879f6c89f-cj2hq" Jan 27 14:31:24 crc kubenswrapper[4698]: I0127 14:31:24.345814 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9d8c543e-8120-4d1f-b76c-5be16d35bf1d-service-ca-bundle\") pod \"authentication-operator-69f744f599-h864m\" (UID: \"9d8c543e-8120-4d1f-b76c-5be16d35bf1d\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-h864m" Jan 27 14:31:24 crc kubenswrapper[4698]: I0127 14:31:24.345831 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3d3d75e2-1fec-4458-9cb7-3472250b0b49-client-ca\") pod \"controller-manager-879f6c89f-cj2hq\" (UID: \"3d3d75e2-1fec-4458-9cb7-3472250b0b49\") " pod="openshift-controller-manager/controller-manager-879f6c89f-cj2hq" Jan 27 14:31:24 crc kubenswrapper[4698]: I0127 14:31:24.345867 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/608093bb-ab9f-47bf-bf66-938266244574-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-qtjcb\" (UID: \"608093bb-ab9f-47bf-bf66-938266244574\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-qtjcb" Jan 27 14:31:24 crc kubenswrapper[4698]: I0127 14:31:24.345887 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/608093bb-ab9f-47bf-bf66-938266244574-audit-policies\") pod \"apiserver-7bbb656c7d-qtjcb\" (UID: \"608093bb-ab9f-47bf-bf66-938266244574\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-qtjcb" Jan 27 14:31:24 crc kubenswrapper[4698]: I0127 14:31:24.345918 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d8c543e-8120-4d1f-b76c-5be16d35bf1d-serving-cert\") pod \"authentication-operator-69f744f599-h864m\" (UID: \"9d8c543e-8120-4d1f-b76c-5be16d35bf1d\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-h864m" Jan 27 14:31:24 crc kubenswrapper[4698]: I0127 14:31:24.345938 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/f57848ff-da41-4c6a-9586-c57676b73c90-etcd-serving-ca\") pod \"apiserver-76f77b778f-z5f9l\" (UID: \"f57848ff-da41-4c6a-9586-c57676b73c90\") " pod="openshift-apiserver/apiserver-76f77b778f-z5f9l" Jan 27 14:31:24 crc kubenswrapper[4698]: I0127 14:31:24.345958 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/7a699460-e5aa-401d-b2c4-003604099924-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-x7rj5\" (UID: \"7a699460-e5aa-401d-b2c4-003604099924\") " pod="openshift-authentication/oauth-openshift-558db77b4-x7rj5" Jan 27 14:31:24 crc kubenswrapper[4698]: I0127 14:31:24.345978 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/7a699460-e5aa-401d-b2c4-003604099924-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-x7rj5\" (UID: \"7a699460-e5aa-401d-b2c4-003604099924\") " pod="openshift-authentication/oauth-openshift-558db77b4-x7rj5" Jan 27 14:31:24 crc kubenswrapper[4698]: I0127 14:31:24.345996 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/77a18531-ffc7-42d9-bba7-78d72b032c39-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-qpzns\" (UID: \"77a18531-ffc7-42d9-bba7-78d72b032c39\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-qpzns" Jan 27 14:31:24 crc kubenswrapper[4698]: I0127 14:31:24.346015 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d8c543e-8120-4d1f-b76c-5be16d35bf1d-config\") pod \"authentication-operator-69f744f599-h864m\" (UID: \"9d8c543e-8120-4d1f-b76c-5be16d35bf1d\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-h864m" Jan 27 14:31:24 crc kubenswrapper[4698]: I0127 14:31:24.346030 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3d3d75e2-1fec-4458-9cb7-3472250b0b49-serving-cert\") pod \"controller-manager-879f6c89f-cj2hq\" (UID: \"3d3d75e2-1fec-4458-9cb7-3472250b0b49\") " pod="openshift-controller-manager/controller-manager-879f6c89f-cj2hq" Jan 27 14:31:24 crc kubenswrapper[4698]: I0127 14:31:24.346045 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/608093bb-ab9f-47bf-bf66-938266244574-etcd-client\") pod \"apiserver-7bbb656c7d-qtjcb\" (UID: \"608093bb-ab9f-47bf-bf66-938266244574\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-qtjcb" Jan 27 14:31:24 crc kubenswrapper[4698]: I0127 14:31:24.346059 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vgtwz\" (UniqueName: \"kubernetes.io/projected/3d3d75e2-1fec-4458-9cb7-3472250b0b49-kube-api-access-vgtwz\") pod \"controller-manager-879f6c89f-cj2hq\" (UID: \"3d3d75e2-1fec-4458-9cb7-3472250b0b49\") " pod="openshift-controller-manager/controller-manager-879f6c89f-cj2hq" Jan 27 14:31:24 crc kubenswrapper[4698]: I0127 14:31:24.346075 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/608093bb-ab9f-47bf-bf66-938266244574-audit-dir\") pod \"apiserver-7bbb656c7d-qtjcb\" (UID: \"608093bb-ab9f-47bf-bf66-938266244574\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-qtjcb" Jan 27 14:31:24 crc kubenswrapper[4698]: I0127 14:31:24.346090 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x6cbm\" (UniqueName: \"kubernetes.io/projected/f57848ff-da41-4c6a-9586-c57676b73c90-kube-api-access-x6cbm\") pod \"apiserver-76f77b778f-z5f9l\" (UID: \"f57848ff-da41-4c6a-9586-c57676b73c90\") " pod="openshift-apiserver/apiserver-76f77b778f-z5f9l" Jan 27 14:31:24 crc kubenswrapper[4698]: I0127 14:31:24.346107 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kq56w\" (UniqueName: 
\"kubernetes.io/projected/dfe15858-3d24-48fb-b534-a4b484e027e3-kube-api-access-kq56w\") pod \"openshift-apiserver-operator-796bbdcf4f-hmtx5\" (UID: \"dfe15858-3d24-48fb-b534-a4b484e027e3\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-hmtx5" Jan 27 14:31:24 crc kubenswrapper[4698]: I0127 14:31:24.346122 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fq8b9\" (UniqueName: \"kubernetes.io/projected/9d8c543e-8120-4d1f-b76c-5be16d35bf1d-kube-api-access-fq8b9\") pod \"authentication-operator-69f744f599-h864m\" (UID: \"9d8c543e-8120-4d1f-b76c-5be16d35bf1d\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-h864m" Jan 27 14:31:24 crc kubenswrapper[4698]: I0127 14:31:24.346152 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7a699460-e5aa-401d-b2c4-003604099924-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-x7rj5\" (UID: \"7a699460-e5aa-401d-b2c4-003604099924\") " pod="openshift-authentication/oauth-openshift-558db77b4-x7rj5" Jan 27 14:31:24 crc kubenswrapper[4698]: I0127 14:31:24.346180 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f57848ff-da41-4c6a-9586-c57676b73c90-audit-dir\") pod \"apiserver-76f77b778f-z5f9l\" (UID: \"f57848ff-da41-4c6a-9586-c57676b73c90\") " pod="openshift-apiserver/apiserver-76f77b778f-z5f9l" Jan 27 14:31:24 crc kubenswrapper[4698]: I0127 14:31:24.346203 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mqgfc\" (UniqueName: \"kubernetes.io/projected/608093bb-ab9f-47bf-bf66-938266244574-kube-api-access-mqgfc\") pod \"apiserver-7bbb656c7d-qtjcb\" (UID: \"608093bb-ab9f-47bf-bf66-938266244574\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-qtjcb" Jan 27 14:31:24 crc kubenswrapper[4698]: I0127 14:31:24.346222 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/7a699460-e5aa-401d-b2c4-003604099924-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-x7rj5\" (UID: \"7a699460-e5aa-401d-b2c4-003604099924\") " pod="openshift-authentication/oauth-openshift-558db77b4-x7rj5" Jan 27 14:31:24 crc kubenswrapper[4698]: I0127 14:31:24.346239 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fvv4t\" (UniqueName: \"kubernetes.io/projected/77a18531-ffc7-42d9-bba7-78d72b032c39-kube-api-access-fvv4t\") pod \"machine-api-operator-5694c8668f-qpzns\" (UID: \"77a18531-ffc7-42d9-bba7-78d72b032c39\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-qpzns" Jan 27 14:31:24 crc kubenswrapper[4698]: I0127 14:31:24.346256 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/f57848ff-da41-4c6a-9586-c57676b73c90-encryption-config\") pod \"apiserver-76f77b778f-z5f9l\" (UID: \"f57848ff-da41-4c6a-9586-c57676b73c90\") " pod="openshift-apiserver/apiserver-76f77b778f-z5f9l" Jan 27 14:31:24 crc kubenswrapper[4698]: I0127 14:31:24.346596 4698 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-qfxjx"] Jan 27 14:31:24 crc kubenswrapper[4698]: I0127 14:31:24.347058 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-pcr8w"] Jan 27 14:31:24 crc kubenswrapper[4698]: I0127 14:31:24.347375 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-pcr8w" Jan 27 14:31:24 crc kubenswrapper[4698]: I0127 14:31:24.347573 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-qfxjx" Jan 27 14:31:24 crc kubenswrapper[4698]: I0127 14:31:24.348079 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Jan 27 14:31:24 crc kubenswrapper[4698]: I0127 14:31:24.351327 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-c49dc"] Jan 27 14:31:24 crc kubenswrapper[4698]: I0127 14:31:24.354447 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-qtjcb"] Jan 27 14:31:24 crc kubenswrapper[4698]: I0127 14:31:24.354533 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-cj2hq"] Jan 27 14:31:24 crc kubenswrapper[4698]: I0127 14:31:24.354548 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-mswzh"] Jan 27 14:31:24 crc kubenswrapper[4698]: I0127 14:31:24.355183 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-c49dc" Jan 27 14:31:24 crc kubenswrapper[4698]: I0127 14:31:24.355543 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-mswzh" Jan 27 14:31:24 crc kubenswrapper[4698]: I0127 14:31:24.364969 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Jan 27 14:31:24 crc kubenswrapper[4698]: I0127 14:31:24.368066 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Jan 27 14:31:24 crc kubenswrapper[4698]: I0127 14:31:24.374918 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-p4z9k"] Jan 27 14:31:24 crc kubenswrapper[4698]: I0127 14:31:24.375775 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Jan 27 14:31:24 crc kubenswrapper[4698]: I0127 14:31:24.376174 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-p4z9k" Jan 27 14:31:24 crc kubenswrapper[4698]: I0127 14:31:24.378302 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-wcmrv"] Jan 27 14:31:24 crc kubenswrapper[4698]: I0127 14:31:24.379655 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Jan 27 14:31:24 crc kubenswrapper[4698]: I0127 14:31:24.380075 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Jan 27 14:31:24 crc kubenswrapper[4698]: I0127 14:31:24.382354 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Jan 27 14:31:24 crc kubenswrapper[4698]: I0127 14:31:24.383450 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-hnfnv"] Jan 27 14:31:24 crc kubenswrapper[4698]: I0127 14:31:24.386202 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-wcmrv" Jan 27 14:31:24 crc kubenswrapper[4698]: I0127 14:31:24.386895 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-vrtzk"] Jan 27 14:31:24 crc kubenswrapper[4698]: I0127 14:31:24.386905 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-hnfnv" Jan 27 14:31:24 crc kubenswrapper[4698]: I0127 14:31:24.392046 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Jan 27 14:31:24 crc kubenswrapper[4698]: I0127 14:31:24.393109 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-kwgll"] Jan 27 14:31:24 crc kubenswrapper[4698]: I0127 14:31:24.396780 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Jan 27 14:31:24 crc kubenswrapper[4698]: I0127 14:31:24.399756 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-kwgll" Jan 27 14:31:24 crc kubenswrapper[4698]: I0127 14:31:24.412696 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-h8hj7"] Jan 27 14:31:24 crc kubenswrapper[4698]: I0127 14:31:24.414185 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-h8hj7" Jan 27 14:31:24 crc kubenswrapper[4698]: I0127 14:31:24.420939 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-b8t54"] Jan 27 14:31:24 crc kubenswrapper[4698]: I0127 14:31:24.427924 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-vrtzk" Jan 27 14:31:24 crc kubenswrapper[4698]: I0127 14:31:24.455821 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dfe15858-3d24-48fb-b534-a4b484e027e3-config\") pod \"openshift-apiserver-operator-796bbdcf4f-hmtx5\" (UID: \"dfe15858-3d24-48fb-b534-a4b484e027e3\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-hmtx5" Jan 27 14:31:24 crc kubenswrapper[4698]: I0127 14:31:24.455902 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xcjzv\" (UniqueName: \"kubernetes.io/projected/f7ee1b04-33fb-452c-917e-ea08b3f489a4-kube-api-access-xcjzv\") pod \"cluster-image-registry-operator-dc59b4c8b-d9s2k\" (UID: \"f7ee1b04-33fb-452c-917e-ea08b3f489a4\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-d9s2k" Jan 27 14:31:24 crc kubenswrapper[4698]: I0127 14:31:24.455911 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Jan 27 14:31:24 crc kubenswrapper[4698]: I0127 14:31:24.455965 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/608093bb-ab9f-47bf-bf66-938266244574-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-qtjcb\" (UID: \"608093bb-ab9f-47bf-bf66-938266244574\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-qtjcb" Jan 27 14:31:24 crc kubenswrapper[4698]: I0127 14:31:24.455994 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9d8c543e-8120-4d1f-b76c-5be16d35bf1d-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-h864m\" (UID: \"9d8c543e-8120-4d1f-b76c-5be16d35bf1d\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-h864m" Jan 27 14:31:24 crc kubenswrapper[4698]: I0127 14:31:24.456042 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f57848ff-da41-4c6a-9586-c57676b73c90-trusted-ca-bundle\") pod \"apiserver-76f77b778f-z5f9l\" (UID: \"f57848ff-da41-4c6a-9586-c57676b73c90\") " pod="openshift-apiserver/apiserver-76f77b778f-z5f9l" Jan 27 14:31:24 crc kubenswrapper[4698]: I0127 14:31:24.456083 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f57848ff-da41-4c6a-9586-c57676b73c90-config\") pod \"apiserver-76f77b778f-z5f9l\" (UID: \"f57848ff-da41-4c6a-9586-c57676b73c90\") " pod="openshift-apiserver/apiserver-76f77b778f-z5f9l" Jan 27 14:31:24 crc kubenswrapper[4698]: I0127 14:31:24.456141 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/f57848ff-da41-4c6a-9586-c57676b73c90-audit\") pod \"apiserver-76f77b778f-z5f9l\" (UID: \"f57848ff-da41-4c6a-9586-c57676b73c90\") " pod="openshift-apiserver/apiserver-76f77b778f-z5f9l" Jan 27 14:31:24 crc kubenswrapper[4698]: I0127 14:31:24.456163 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3d3d75e2-1fec-4458-9cb7-3472250b0b49-config\") pod \"controller-manager-879f6c89f-cj2hq\" (UID: \"3d3d75e2-1fec-4458-9cb7-3472250b0b49\") " 
pod="openshift-controller-manager/controller-manager-879f6c89f-cj2hq" Jan 27 14:31:24 crc kubenswrapper[4698]: I0127 14:31:24.456169 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Jan 27 14:31:24 crc kubenswrapper[4698]: I0127 14:31:24.456228 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f7ee1b04-33fb-452c-917e-ea08b3f489a4-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-d9s2k\" (UID: \"f7ee1b04-33fb-452c-917e-ea08b3f489a4\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-d9s2k" Jan 27 14:31:24 crc kubenswrapper[4698]: I0127 14:31:24.456252 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/7a699460-e5aa-401d-b2c4-003604099924-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-x7rj5\" (UID: \"7a699460-e5aa-401d-b2c4-003604099924\") " pod="openshift-authentication/oauth-openshift-558db77b4-x7rj5" Jan 27 14:31:24 crc kubenswrapper[4698]: I0127 14:31:24.456310 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/7a699460-e5aa-401d-b2c4-003604099924-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-x7rj5\" (UID: \"7a699460-e5aa-401d-b2c4-003604099924\") " pod="openshift-authentication/oauth-openshift-558db77b4-x7rj5" Jan 27 14:31:24 crc kubenswrapper[4698]: I0127 14:31:24.456344 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/3d3d75e2-1fec-4458-9cb7-3472250b0b49-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-cj2hq\" (UID: \"3d3d75e2-1fec-4458-9cb7-3472250b0b49\") " pod="openshift-controller-manager/controller-manager-879f6c89f-cj2hq" Jan 27 14:31:24 crc kubenswrapper[4698]: I0127 14:31:24.456390 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9d8c543e-8120-4d1f-b76c-5be16d35bf1d-service-ca-bundle\") pod \"authentication-operator-69f744f599-h864m\" (UID: \"9d8c543e-8120-4d1f-b76c-5be16d35bf1d\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-h864m" Jan 27 14:31:24 crc kubenswrapper[4698]: I0127 14:31:24.456412 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3d3d75e2-1fec-4458-9cb7-3472250b0b49-client-ca\") pod \"controller-manager-879f6c89f-cj2hq\" (UID: \"3d3d75e2-1fec-4458-9cb7-3472250b0b49\") " pod="openshift-controller-manager/controller-manager-879f6c89f-cj2hq" Jan 27 14:31:24 crc kubenswrapper[4698]: I0127 14:31:24.456434 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/f7ee1b04-33fb-452c-917e-ea08b3f489a4-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-d9s2k\" (UID: \"f7ee1b04-33fb-452c-917e-ea08b3f489a4\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-d9s2k" Jan 27 14:31:24 crc kubenswrapper[4698]: I0127 14:31:24.456494 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: 
\"kubernetes.io/configmap/608093bb-ab9f-47bf-bf66-938266244574-audit-policies\") pod \"apiserver-7bbb656c7d-qtjcb\" (UID: \"608093bb-ab9f-47bf-bf66-938266244574\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-qtjcb" Jan 27 14:31:24 crc kubenswrapper[4698]: I0127 14:31:24.456549 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/608093bb-ab9f-47bf-bf66-938266244574-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-qtjcb\" (UID: \"608093bb-ab9f-47bf-bf66-938266244574\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-qtjcb" Jan 27 14:31:24 crc kubenswrapper[4698]: I0127 14:31:24.456588 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/7a699460-e5aa-401d-b2c4-003604099924-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-x7rj5\" (UID: \"7a699460-e5aa-401d-b2c4-003604099924\") " pod="openshift-authentication/oauth-openshift-558db77b4-x7rj5" Jan 27 14:31:24 crc kubenswrapper[4698]: I0127 14:31:24.456667 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/7a699460-e5aa-401d-b2c4-003604099924-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-x7rj5\" (UID: \"7a699460-e5aa-401d-b2c4-003604099924\") " pod="openshift-authentication/oauth-openshift-558db77b4-x7rj5" Jan 27 14:31:24 crc kubenswrapper[4698]: I0127 14:31:24.456695 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/77a18531-ffc7-42d9-bba7-78d72b032c39-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-qpzns\" (UID: \"77a18531-ffc7-42d9-bba7-78d72b032c39\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-qpzns" Jan 27 14:31:24 crc kubenswrapper[4698]: I0127 14:31:24.456741 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d8c543e-8120-4d1f-b76c-5be16d35bf1d-serving-cert\") pod \"authentication-operator-69f744f599-h864m\" (UID: \"9d8c543e-8120-4d1f-b76c-5be16d35bf1d\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-h864m" Jan 27 14:31:24 crc kubenswrapper[4698]: I0127 14:31:24.456771 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/f57848ff-da41-4c6a-9586-c57676b73c90-etcd-serving-ca\") pod \"apiserver-76f77b778f-z5f9l\" (UID: \"f57848ff-da41-4c6a-9586-c57676b73c90\") " pod="openshift-apiserver/apiserver-76f77b778f-z5f9l" Jan 27 14:31:24 crc kubenswrapper[4698]: I0127 14:31:24.456818 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d8c543e-8120-4d1f-b76c-5be16d35bf1d-config\") pod \"authentication-operator-69f744f599-h864m\" (UID: \"9d8c543e-8120-4d1f-b76c-5be16d35bf1d\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-h864m" Jan 27 14:31:24 crc kubenswrapper[4698]: I0127 14:31:24.456848 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3d3d75e2-1fec-4458-9cb7-3472250b0b49-serving-cert\") pod \"controller-manager-879f6c89f-cj2hq\" (UID: \"3d3d75e2-1fec-4458-9cb7-3472250b0b49\") " 
pod="openshift-controller-manager/controller-manager-879f6c89f-cj2hq" Jan 27 14:31:24 crc kubenswrapper[4698]: I0127 14:31:24.456872 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/608093bb-ab9f-47bf-bf66-938266244574-etcd-client\") pod \"apiserver-7bbb656c7d-qtjcb\" (UID: \"608093bb-ab9f-47bf-bf66-938266244574\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-qtjcb" Jan 27 14:31:24 crc kubenswrapper[4698]: I0127 14:31:24.456921 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/608093bb-ab9f-47bf-bf66-938266244574-audit-dir\") pod \"apiserver-7bbb656c7d-qtjcb\" (UID: \"608093bb-ab9f-47bf-bf66-938266244574\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-qtjcb" Jan 27 14:31:24 crc kubenswrapper[4698]: I0127 14:31:24.456950 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x6cbm\" (UniqueName: \"kubernetes.io/projected/f57848ff-da41-4c6a-9586-c57676b73c90-kube-api-access-x6cbm\") pod \"apiserver-76f77b778f-z5f9l\" (UID: \"f57848ff-da41-4c6a-9586-c57676b73c90\") " pod="openshift-apiserver/apiserver-76f77b778f-z5f9l" Jan 27 14:31:24 crc kubenswrapper[4698]: I0127 14:31:24.457666 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vgtwz\" (UniqueName: \"kubernetes.io/projected/3d3d75e2-1fec-4458-9cb7-3472250b0b49-kube-api-access-vgtwz\") pod \"controller-manager-879f6c89f-cj2hq\" (UID: \"3d3d75e2-1fec-4458-9cb7-3472250b0b49\") " pod="openshift-controller-manager/controller-manager-879f6c89f-cj2hq" Jan 27 14:31:24 crc kubenswrapper[4698]: I0127 14:31:24.457702 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kq56w\" (UniqueName: \"kubernetes.io/projected/dfe15858-3d24-48fb-b534-a4b484e027e3-kube-api-access-kq56w\") pod \"openshift-apiserver-operator-796bbdcf4f-hmtx5\" (UID: \"dfe15858-3d24-48fb-b534-a4b484e027e3\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-hmtx5" Jan 27 14:31:24 crc kubenswrapper[4698]: I0127 14:31:24.457755 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fq8b9\" (UniqueName: \"kubernetes.io/projected/9d8c543e-8120-4d1f-b76c-5be16d35bf1d-kube-api-access-fq8b9\") pod \"authentication-operator-69f744f599-h864m\" (UID: \"9d8c543e-8120-4d1f-b76c-5be16d35bf1d\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-h864m" Jan 27 14:31:24 crc kubenswrapper[4698]: I0127 14:31:24.457797 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7a699460-e5aa-401d-b2c4-003604099924-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-x7rj5\" (UID: \"7a699460-e5aa-401d-b2c4-003604099924\") " pod="openshift-authentication/oauth-openshift-558db77b4-x7rj5" Jan 27 14:31:24 crc kubenswrapper[4698]: I0127 14:31:24.457912 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f57848ff-da41-4c6a-9586-c57676b73c90-audit-dir\") pod \"apiserver-76f77b778f-z5f9l\" (UID: \"f57848ff-da41-4c6a-9586-c57676b73c90\") " pod="openshift-apiserver/apiserver-76f77b778f-z5f9l" Jan 27 14:31:24 crc kubenswrapper[4698]: I0127 14:31:24.457945 4698 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-mqgfc\" (UniqueName: \"kubernetes.io/projected/608093bb-ab9f-47bf-bf66-938266244574-kube-api-access-mqgfc\") pod \"apiserver-7bbb656c7d-qtjcb\" (UID: \"608093bb-ab9f-47bf-bf66-938266244574\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-qtjcb" Jan 27 14:31:24 crc kubenswrapper[4698]: I0127 14:31:24.457993 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/7a699460-e5aa-401d-b2c4-003604099924-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-x7rj5\" (UID: \"7a699460-e5aa-401d-b2c4-003604099924\") " pod="openshift-authentication/oauth-openshift-558db77b4-x7rj5" Jan 27 14:31:24 crc kubenswrapper[4698]: I0127 14:31:24.458025 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fvv4t\" (UniqueName: \"kubernetes.io/projected/77a18531-ffc7-42d9-bba7-78d72b032c39-kube-api-access-fvv4t\") pod \"machine-api-operator-5694c8668f-qpzns\" (UID: \"77a18531-ffc7-42d9-bba7-78d72b032c39\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-qpzns" Jan 27 14:31:24 crc kubenswrapper[4698]: I0127 14:31:24.458075 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/f57848ff-da41-4c6a-9586-c57676b73c90-encryption-config\") pod \"apiserver-76f77b778f-z5f9l\" (UID: \"f57848ff-da41-4c6a-9586-c57676b73c90\") " pod="openshift-apiserver/apiserver-76f77b778f-z5f9l" Jan 27 14:31:24 crc kubenswrapper[4698]: I0127 14:31:24.458110 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6zxfx\" (UniqueName: \"kubernetes.io/projected/7a699460-e5aa-401d-b2c4-003604099924-kube-api-access-6zxfx\") pod \"oauth-openshift-558db77b4-x7rj5\" (UID: \"7a699460-e5aa-401d-b2c4-003604099924\") " pod="openshift-authentication/oauth-openshift-558db77b4-x7rj5" Jan 27 14:31:24 crc kubenswrapper[4698]: I0127 14:31:24.458153 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dfe15858-3d24-48fb-b534-a4b484e027e3-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-hmtx5\" (UID: \"dfe15858-3d24-48fb-b534-a4b484e027e3\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-hmtx5" Jan 27 14:31:24 crc kubenswrapper[4698]: I0127 14:31:24.458179 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/608093bb-ab9f-47bf-bf66-938266244574-encryption-config\") pod \"apiserver-7bbb656c7d-qtjcb\" (UID: \"608093bb-ab9f-47bf-bf66-938266244574\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-qtjcb" Jan 27 14:31:24 crc kubenswrapper[4698]: I0127 14:31:24.458221 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/f57848ff-da41-4c6a-9586-c57676b73c90-node-pullsecrets\") pod \"apiserver-76f77b778f-z5f9l\" (UID: \"f57848ff-da41-4c6a-9586-c57676b73c90\") " pod="openshift-apiserver/apiserver-76f77b778f-z5f9l" Jan 27 14:31:24 crc kubenswrapper[4698]: I0127 14:31:24.458249 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: 
\"kubernetes.io/secret/7a699460-e5aa-401d-b2c4-003604099924-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-x7rj5\" (UID: \"7a699460-e5aa-401d-b2c4-003604099924\") " pod="openshift-authentication/oauth-openshift-558db77b4-x7rj5" Jan 27 14:31:24 crc kubenswrapper[4698]: I0127 14:31:24.458274 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/7a699460-e5aa-401d-b2c4-003604099924-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-x7rj5\" (UID: \"7a699460-e5aa-401d-b2c4-003604099924\") " pod="openshift-authentication/oauth-openshift-558db77b4-x7rj5" Jan 27 14:31:24 crc kubenswrapper[4698]: I0127 14:31:24.458322 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f57848ff-da41-4c6a-9586-c57676b73c90-serving-cert\") pod \"apiserver-76f77b778f-z5f9l\" (UID: \"f57848ff-da41-4c6a-9586-c57676b73c90\") " pod="openshift-apiserver/apiserver-76f77b778f-z5f9l" Jan 27 14:31:24 crc kubenswrapper[4698]: I0127 14:31:24.458349 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/77a18531-ffc7-42d9-bba7-78d72b032c39-config\") pod \"machine-api-operator-5694c8668f-qpzns\" (UID: \"77a18531-ffc7-42d9-bba7-78d72b032c39\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-qpzns" Jan 27 14:31:24 crc kubenswrapper[4698]: I0127 14:31:24.458408 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/7a699460-e5aa-401d-b2c4-003604099924-audit-policies\") pod \"oauth-openshift-558db77b4-x7rj5\" (UID: \"7a699460-e5aa-401d-b2c4-003604099924\") " pod="openshift-authentication/oauth-openshift-558db77b4-x7rj5" Jan 27 14:31:24 crc kubenswrapper[4698]: I0127 14:31:24.458436 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/f57848ff-da41-4c6a-9586-c57676b73c90-etcd-client\") pod \"apiserver-76f77b778f-z5f9l\" (UID: \"f57848ff-da41-4c6a-9586-c57676b73c90\") " pod="openshift-apiserver/apiserver-76f77b778f-z5f9l" Jan 27 14:31:24 crc kubenswrapper[4698]: I0127 14:31:24.458614 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/7a699460-e5aa-401d-b2c4-003604099924-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-x7rj5\" (UID: \"7a699460-e5aa-401d-b2c4-003604099924\") " pod="openshift-authentication/oauth-openshift-558db77b4-x7rj5" Jan 27 14:31:24 crc kubenswrapper[4698]: I0127 14:31:24.458689 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/f57848ff-da41-4c6a-9586-c57676b73c90-image-import-ca\") pod \"apiserver-76f77b778f-z5f9l\" (UID: \"f57848ff-da41-4c6a-9586-c57676b73c90\") " pod="openshift-apiserver/apiserver-76f77b778f-z5f9l" Jan 27 14:31:24 crc kubenswrapper[4698]: I0127 14:31:24.458781 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/f7ee1b04-33fb-452c-917e-ea08b3f489a4-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-d9s2k\" (UID: \"f7ee1b04-33fb-452c-917e-ea08b3f489a4\") " 
pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-d9s2k" Jan 27 14:31:24 crc kubenswrapper[4698]: I0127 14:31:24.458843 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/7a699460-e5aa-401d-b2c4-003604099924-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-x7rj5\" (UID: \"7a699460-e5aa-401d-b2c4-003604099924\") " pod="openshift-authentication/oauth-openshift-558db77b4-x7rj5" Jan 27 14:31:24 crc kubenswrapper[4698]: I0127 14:31:24.458876 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/7a699460-e5aa-401d-b2c4-003604099924-audit-dir\") pod \"oauth-openshift-558db77b4-x7rj5\" (UID: \"7a699460-e5aa-401d-b2c4-003604099924\") " pod="openshift-authentication/oauth-openshift-558db77b4-x7rj5" Jan 27 14:31:24 crc kubenswrapper[4698]: I0127 14:31:24.458928 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/7a699460-e5aa-401d-b2c4-003604099924-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-x7rj5\" (UID: \"7a699460-e5aa-401d-b2c4-003604099924\") " pod="openshift-authentication/oauth-openshift-558db77b4-x7rj5" Jan 27 14:31:24 crc kubenswrapper[4698]: I0127 14:31:24.458959 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/608093bb-ab9f-47bf-bf66-938266244574-serving-cert\") pod \"apiserver-7bbb656c7d-qtjcb\" (UID: \"608093bb-ab9f-47bf-bf66-938266244574\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-qtjcb" Jan 27 14:31:24 crc kubenswrapper[4698]: I0127 14:31:24.459005 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2zptc\" (UniqueName: \"kubernetes.io/projected/64b274f6-5293-4c0e-a51a-dca8518c5a40-kube-api-access-2zptc\") pod \"downloads-7954f5f757-bdrpp\" (UID: \"64b274f6-5293-4c0e-a51a-dca8518c5a40\") " pod="openshift-console/downloads-7954f5f757-bdrpp" Jan 27 14:31:24 crc kubenswrapper[4698]: I0127 14:31:24.459035 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/77a18531-ffc7-42d9-bba7-78d72b032c39-images\") pod \"machine-api-operator-5694c8668f-qpzns\" (UID: \"77a18531-ffc7-42d9-bba7-78d72b032c39\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-qpzns" Jan 27 14:31:24 crc kubenswrapper[4698]: I0127 14:31:24.457148 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-c5862"] Jan 27 14:31:24 crc kubenswrapper[4698]: I0127 14:31:24.459467 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Jan 27 14:31:24 crc kubenswrapper[4698]: I0127 14:31:24.459944 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-g4htw"] Jan 27 14:31:24 crc kubenswrapper[4698]: I0127 14:31:24.460435 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-g4htw" Jan 27 14:31:24 crc kubenswrapper[4698]: I0127 14:31:24.460623 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/77a18531-ffc7-42d9-bba7-78d72b032c39-images\") pod \"machine-api-operator-5694c8668f-qpzns\" (UID: \"77a18531-ffc7-42d9-bba7-78d72b032c39\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-qpzns" Jan 27 14:31:24 crc kubenswrapper[4698]: I0127 14:31:24.457233 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-b8t54" Jan 27 14:31:24 crc kubenswrapper[4698]: I0127 14:31:24.460862 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29492070-z2sp4"] Jan 27 14:31:24 crc kubenswrapper[4698]: I0127 14:31:24.461467 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dfe15858-3d24-48fb-b534-a4b484e027e3-config\") pod \"openshift-apiserver-operator-796bbdcf4f-hmtx5\" (UID: \"dfe15858-3d24-48fb-b534-a4b484e027e3\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-hmtx5" Jan 27 14:31:24 crc kubenswrapper[4698]: I0127 14:31:24.461710 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29492070-z2sp4" Jan 27 14:31:24 crc kubenswrapper[4698]: I0127 14:31:24.462241 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/608093bb-ab9f-47bf-bf66-938266244574-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-qtjcb\" (UID: \"608093bb-ab9f-47bf-bf66-938266244574\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-qtjcb" Jan 27 14:31:24 crc kubenswrapper[4698]: I0127 14:31:24.462765 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7a699460-e5aa-401d-b2c4-003604099924-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-x7rj5\" (UID: \"7a699460-e5aa-401d-b2c4-003604099924\") " pod="openshift-authentication/oauth-openshift-558db77b4-x7rj5" Jan 27 14:31:24 crc kubenswrapper[4698]: I0127 14:31:24.463420 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9d8c543e-8120-4d1f-b76c-5be16d35bf1d-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-h864m\" (UID: \"9d8c543e-8120-4d1f-b76c-5be16d35bf1d\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-h864m" Jan 27 14:31:24 crc kubenswrapper[4698]: I0127 14:31:24.463720 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/7a699460-e5aa-401d-b2c4-003604099924-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-x7rj5\" (UID: \"7a699460-e5aa-401d-b2c4-003604099924\") " pod="openshift-authentication/oauth-openshift-558db77b4-x7rj5" Jan 27 14:31:24 crc kubenswrapper[4698]: I0127 14:31:24.462862 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f57848ff-da41-4c6a-9586-c57676b73c90-audit-dir\") pod \"apiserver-76f77b778f-z5f9l\" (UID: 
\"f57848ff-da41-4c6a-9586-c57676b73c90\") " pod="openshift-apiserver/apiserver-76f77b778f-z5f9l" Jan 27 14:31:24 crc kubenswrapper[4698]: I0127 14:31:24.464714 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f57848ff-da41-4c6a-9586-c57676b73c90-trusted-ca-bundle\") pod \"apiserver-76f77b778f-z5f9l\" (UID: \"f57848ff-da41-4c6a-9586-c57676b73c90\") " pod="openshift-apiserver/apiserver-76f77b778f-z5f9l" Jan 27 14:31:24 crc kubenswrapper[4698]: I0127 14:31:24.464785 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-hwgzv"] Jan 27 14:31:24 crc kubenswrapper[4698]: I0127 14:31:24.464907 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/7a699460-e5aa-401d-b2c4-003604099924-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-x7rj5\" (UID: \"7a699460-e5aa-401d-b2c4-003604099924\") " pod="openshift-authentication/oauth-openshift-558db77b4-x7rj5" Jan 27 14:31:24 crc kubenswrapper[4698]: I0127 14:31:24.465551 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-qw9xb"] Jan 27 14:31:24 crc kubenswrapper[4698]: I0127 14:31:24.465779 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f57848ff-da41-4c6a-9586-c57676b73c90-config\") pod \"apiserver-76f77b778f-z5f9l\" (UID: \"f57848ff-da41-4c6a-9586-c57676b73c90\") " pod="openshift-apiserver/apiserver-76f77b778f-z5f9l" Jan 27 14:31:24 crc kubenswrapper[4698]: I0127 14:31:24.465939 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3d3d75e2-1fec-4458-9cb7-3472250b0b49-client-ca\") pod \"controller-manager-879f6c89f-cj2hq\" (UID: \"3d3d75e2-1fec-4458-9cb7-3472250b0b49\") " pod="openshift-controller-manager/controller-manager-879f6c89f-cj2hq" Jan 27 14:31:24 crc kubenswrapper[4698]: I0127 14:31:24.466417 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/f57848ff-da41-4c6a-9586-c57676b73c90-audit\") pod \"apiserver-76f77b778f-z5f9l\" (UID: \"f57848ff-da41-4c6a-9586-c57676b73c90\") " pod="openshift-apiserver/apiserver-76f77b778f-z5f9l" Jan 27 14:31:24 crc kubenswrapper[4698]: I0127 14:31:24.466615 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-qw9xb" Jan 27 14:31:24 crc kubenswrapper[4698]: I0127 14:31:24.466792 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-bdrpp"] Jan 27 14:31:24 crc kubenswrapper[4698]: I0127 14:31:24.467089 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/3d3d75e2-1fec-4458-9cb7-3472250b0b49-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-cj2hq\" (UID: \"3d3d75e2-1fec-4458-9cb7-3472250b0b49\") " pod="openshift-controller-manager/controller-manager-879f6c89f-cj2hq" Jan 27 14:31:24 crc kubenswrapper[4698]: I0127 14:31:24.468385 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-6xmqh"] Jan 27 14:31:24 crc kubenswrapper[4698]: I0127 14:31:24.468940 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/608093bb-ab9f-47bf-bf66-938266244574-audit-policies\") pod \"apiserver-7bbb656c7d-qtjcb\" (UID: \"608093bb-ab9f-47bf-bf66-938266244574\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-qtjcb" Jan 27 14:31:24 crc kubenswrapper[4698]: I0127 14:31:24.469136 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/608093bb-ab9f-47bf-bf66-938266244574-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-qtjcb\" (UID: \"608093bb-ab9f-47bf-bf66-938266244574\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-qtjcb" Jan 27 14:31:24 crc kubenswrapper[4698]: I0127 14:31:24.469148 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-hwgzv" Jan 27 14:31:24 crc kubenswrapper[4698]: I0127 14:31:24.469210 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-m8slw"] Jan 27 14:31:24 crc kubenswrapper[4698]: I0127 14:31:24.469722 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-c5862" Jan 27 14:31:24 crc kubenswrapper[4698]: I0127 14:31:24.469748 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9d8c543e-8120-4d1f-b76c-5be16d35bf1d-service-ca-bundle\") pod \"authentication-operator-69f744f599-h864m\" (UID: \"9d8c543e-8120-4d1f-b76c-5be16d35bf1d\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-h864m" Jan 27 14:31:24 crc kubenswrapper[4698]: I0127 14:31:24.470031 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/608093bb-ab9f-47bf-bf66-938266244574-audit-dir\") pod \"apiserver-7bbb656c7d-qtjcb\" (UID: \"608093bb-ab9f-47bf-bf66-938266244574\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-qtjcb" Jan 27 14:31:24 crc kubenswrapper[4698]: I0127 14:31:24.470830 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/7a699460-e5aa-401d-b2c4-003604099924-audit-policies\") pod \"oauth-openshift-558db77b4-x7rj5\" (UID: \"7a699460-e5aa-401d-b2c4-003604099924\") " pod="openshift-authentication/oauth-openshift-558db77b4-x7rj5" Jan 27 14:31:24 crc kubenswrapper[4698]: I0127 14:31:24.470929 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/f57848ff-da41-4c6a-9586-c57676b73c90-node-pullsecrets\") pod \"apiserver-76f77b778f-z5f9l\" (UID: \"f57848ff-da41-4c6a-9586-c57676b73c90\") " pod="openshift-apiserver/apiserver-76f77b778f-z5f9l" Jan 27 14:31:24 crc kubenswrapper[4698]: I0127 14:31:24.470957 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-x7rj5"] Jan 27 14:31:24 crc kubenswrapper[4698]: I0127 14:31:24.471471 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Jan 27 14:31:24 crc kubenswrapper[4698]: I0127 14:31:24.471826 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/f57848ff-da41-4c6a-9586-c57676b73c90-etcd-serving-ca\") pod \"apiserver-76f77b778f-z5f9l\" (UID: \"f57848ff-da41-4c6a-9586-c57676b73c90\") " pod="openshift-apiserver/apiserver-76f77b778f-z5f9l" Jan 27 14:31:24 crc kubenswrapper[4698]: I0127 14:31:24.472280 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d8c543e-8120-4d1f-b76c-5be16d35bf1d-config\") pod \"authentication-operator-69f744f599-h864m\" (UID: \"9d8c543e-8120-4d1f-b76c-5be16d35bf1d\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-h864m" Jan 27 14:31:24 crc kubenswrapper[4698]: I0127 14:31:24.472536 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Jan 27 14:31:24 crc kubenswrapper[4698]: I0127 14:31:24.473014 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3d3d75e2-1fec-4458-9cb7-3472250b0b49-config\") pod \"controller-manager-879f6c89f-cj2hq\" (UID: \"3d3d75e2-1fec-4458-9cb7-3472250b0b49\") " pod="openshift-controller-manager/controller-manager-879f6c89f-cj2hq" Jan 27 14:31:24 crc kubenswrapper[4698]: I0127 14:31:24.473849 4698 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/608093bb-ab9f-47bf-bf66-938266244574-etcd-client\") pod \"apiserver-7bbb656c7d-qtjcb\" (UID: \"608093bb-ab9f-47bf-bf66-938266244574\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-qtjcb" Jan 27 14:31:24 crc kubenswrapper[4698]: I0127 14:31:24.473977 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/7a699460-e5aa-401d-b2c4-003604099924-audit-dir\") pod \"oauth-openshift-558db77b4-x7rj5\" (UID: \"7a699460-e5aa-401d-b2c4-003604099924\") " pod="openshift-authentication/oauth-openshift-558db77b4-x7rj5" Jan 27 14:31:24 crc kubenswrapper[4698]: I0127 14:31:24.474479 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-dhnh8"] Jan 27 14:31:24 crc kubenswrapper[4698]: I0127 14:31:24.475019 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/7a699460-e5aa-401d-b2c4-003604099924-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-x7rj5\" (UID: \"7a699460-e5aa-401d-b2c4-003604099924\") " pod="openshift-authentication/oauth-openshift-558db77b4-x7rj5" Jan 27 14:31:24 crc kubenswrapper[4698]: I0127 14:31:24.475046 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/77a18531-ffc7-42d9-bba7-78d72b032c39-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-qpzns\" (UID: \"77a18531-ffc7-42d9-bba7-78d72b032c39\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-qpzns" Jan 27 14:31:24 crc kubenswrapper[4698]: I0127 14:31:24.475810 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-qpzns"] Jan 27 14:31:24 crc kubenswrapper[4698]: I0127 14:31:24.476703 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dfe15858-3d24-48fb-b534-a4b484e027e3-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-hmtx5\" (UID: \"dfe15858-3d24-48fb-b534-a4b484e027e3\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-hmtx5" Jan 27 14:31:24 crc kubenswrapper[4698]: I0127 14:31:24.476730 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3d3d75e2-1fec-4458-9cb7-3472250b0b49-serving-cert\") pod \"controller-manager-879f6c89f-cj2hq\" (UID: \"3d3d75e2-1fec-4458-9cb7-3472250b0b49\") " pod="openshift-controller-manager/controller-manager-879f6c89f-cj2hq" Jan 27 14:31:24 crc kubenswrapper[4698]: I0127 14:31:24.477971 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d8c543e-8120-4d1f-b76c-5be16d35bf1d-serving-cert\") pod \"authentication-operator-69f744f599-h864m\" (UID: \"9d8c543e-8120-4d1f-b76c-5be16d35bf1d\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-h864m" Jan 27 14:31:24 crc kubenswrapper[4698]: I0127 14:31:24.478055 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/7a699460-e5aa-401d-b2c4-003604099924-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-x7rj5\" (UID: \"7a699460-e5aa-401d-b2c4-003604099924\") " 
pod="openshift-authentication/oauth-openshift-558db77b4-x7rj5" Jan 27 14:31:24 crc kubenswrapper[4698]: I0127 14:31:24.478311 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/7a699460-e5aa-401d-b2c4-003604099924-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-x7rj5\" (UID: \"7a699460-e5aa-401d-b2c4-003604099924\") " pod="openshift-authentication/oauth-openshift-558db77b4-x7rj5" Jan 27 14:31:24 crc kubenswrapper[4698]: I0127 14:31:24.479149 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/77a18531-ffc7-42d9-bba7-78d72b032c39-config\") pod \"machine-api-operator-5694c8668f-qpzns\" (UID: \"77a18531-ffc7-42d9-bba7-78d72b032c39\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-qpzns" Jan 27 14:31:24 crc kubenswrapper[4698]: I0127 14:31:24.479202 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/f57848ff-da41-4c6a-9586-c57676b73c90-image-import-ca\") pod \"apiserver-76f77b778f-z5f9l\" (UID: \"f57848ff-da41-4c6a-9586-c57676b73c90\") " pod="openshift-apiserver/apiserver-76f77b778f-z5f9l" Jan 27 14:31:24 crc kubenswrapper[4698]: I0127 14:31:24.479364 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-p5fgd"] Jan 27 14:31:24 crc kubenswrapper[4698]: I0127 14:31:24.479488 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-d9s2k"] Jan 27 14:31:24 crc kubenswrapper[4698]: I0127 14:31:24.480497 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-6kdpx"] Jan 27 14:31:24 crc kubenswrapper[4698]: I0127 14:31:24.480052 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/f57848ff-da41-4c6a-9586-c57676b73c90-encryption-config\") pod \"apiserver-76f77b778f-z5f9l\" (UID: \"f57848ff-da41-4c6a-9586-c57676b73c90\") " pod="openshift-apiserver/apiserver-76f77b778f-z5f9l" Jan 27 14:31:24 crc kubenswrapper[4698]: I0127 14:31:24.479444 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/7a699460-e5aa-401d-b2c4-003604099924-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-x7rj5\" (UID: \"7a699460-e5aa-401d-b2c4-003604099924\") " pod="openshift-authentication/oauth-openshift-558db77b4-x7rj5" Jan 27 14:31:24 crc kubenswrapper[4698]: I0127 14:31:24.481562 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/608093bb-ab9f-47bf-bf66-938266244574-serving-cert\") pod \"apiserver-7bbb656c7d-qtjcb\" (UID: \"608093bb-ab9f-47bf-bf66-938266244574\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-qtjcb" Jan 27 14:31:24 crc kubenswrapper[4698]: I0127 14:31:24.482050 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/7a699460-e5aa-401d-b2c4-003604099924-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-x7rj5\" (UID: \"7a699460-e5aa-401d-b2c4-003604099924\") " pod="openshift-authentication/oauth-openshift-558db77b4-x7rj5" Jan 27 14:31:24 crc kubenswrapper[4698]: 
I0127 14:31:24.483097 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-cvnrn"] Jan 27 14:31:24 crc kubenswrapper[4698]: I0127 14:31:24.484036 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-d7vnt"] Jan 27 14:31:24 crc kubenswrapper[4698]: I0127 14:31:24.484030 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/7a699460-e5aa-401d-b2c4-003604099924-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-x7rj5\" (UID: \"7a699460-e5aa-401d-b2c4-003604099924\") " pod="openshift-authentication/oauth-openshift-558db77b4-x7rj5" Jan 27 14:31:24 crc kubenswrapper[4698]: I0127 14:31:24.485463 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-wcmrv"] Jan 27 14:31:24 crc kubenswrapper[4698]: I0127 14:31:24.486126 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/7a699460-e5aa-401d-b2c4-003604099924-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-x7rj5\" (UID: \"7a699460-e5aa-401d-b2c4-003604099924\") " pod="openshift-authentication/oauth-openshift-558db77b4-x7rj5" Jan 27 14:31:24 crc kubenswrapper[4698]: I0127 14:31:24.486321 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/608093bb-ab9f-47bf-bf66-938266244574-encryption-config\") pod \"apiserver-7bbb656c7d-qtjcb\" (UID: \"608093bb-ab9f-47bf-bf66-938266244574\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-qtjcb" Jan 27 14:31:24 crc kubenswrapper[4698]: I0127 14:31:24.486787 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f57848ff-da41-4c6a-9586-c57676b73c90-serving-cert\") pod \"apiserver-76f77b778f-z5f9l\" (UID: \"f57848ff-da41-4c6a-9586-c57676b73c90\") " pod="openshift-apiserver/apiserver-76f77b778f-z5f9l" Jan 27 14:31:24 crc kubenswrapper[4698]: I0127 14:31:24.489123 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/7a699460-e5aa-401d-b2c4-003604099924-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-x7rj5\" (UID: \"7a699460-e5aa-401d-b2c4-003604099924\") " pod="openshift-authentication/oauth-openshift-558db77b4-x7rj5" Jan 27 14:31:24 crc kubenswrapper[4698]: I0127 14:31:24.490149 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-qb472"] Jan 27 14:31:24 crc kubenswrapper[4698]: I0127 14:31:24.490727 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-p4z9k"] Jan 27 14:31:24 crc kubenswrapper[4698]: I0127 14:31:24.493534 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Jan 27 14:31:24 crc kubenswrapper[4698]: I0127 14:31:24.493552 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-g4htw"] Jan 27 14:31:24 crc kubenswrapper[4698]: I0127 14:31:24.493920 4698 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-dns/dns-default-msjx7"] Jan 27 14:31:24 crc kubenswrapper[4698]: I0127 14:31:24.494692 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-msjx7" Jan 27 14:31:24 crc kubenswrapper[4698]: I0127 14:31:24.497687 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-z5cmj"] Jan 27 14:31:24 crc kubenswrapper[4698]: I0127 14:31:24.498403 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-pcr8w"] Jan 27 14:31:24 crc kubenswrapper[4698]: I0127 14:31:24.499541 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-vz5fp"] Jan 27 14:31:24 crc kubenswrapper[4698]: I0127 14:31:24.500061 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/f57848ff-da41-4c6a-9586-c57676b73c90-etcd-client\") pod \"apiserver-76f77b778f-z5f9l\" (UID: \"f57848ff-da41-4c6a-9586-c57676b73c90\") " pod="openshift-apiserver/apiserver-76f77b778f-z5f9l" Jan 27 14:31:24 crc kubenswrapper[4698]: I0127 14:31:24.501996 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-bz9jw"] Jan 27 14:31:24 crc kubenswrapper[4698]: I0127 14:31:24.503866 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-qfxjx"] Jan 27 14:31:24 crc kubenswrapper[4698]: I0127 14:31:24.504935 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-c5862"] Jan 27 14:31:24 crc kubenswrapper[4698]: I0127 14:31:24.506744 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-zft44"] Jan 27 14:31:24 crc kubenswrapper[4698]: I0127 14:31:24.507842 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-zft44" Jan 27 14:31:24 crc kubenswrapper[4698]: I0127 14:31:24.508463 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-hnfnv"] Jan 27 14:31:24 crc kubenswrapper[4698]: I0127 14:31:24.509809 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-mjdln"] Jan 27 14:31:24 crc kubenswrapper[4698]: I0127 14:31:24.511212 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-kwgll"] Jan 27 14:31:24 crc kubenswrapper[4698]: I0127 14:31:24.512558 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-mswzh"] Jan 27 14:31:24 crc kubenswrapper[4698]: I0127 14:31:24.513968 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-qw9xb"] Jan 27 14:31:24 crc kubenswrapper[4698]: I0127 14:31:24.514540 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Jan 27 14:31:24 crc kubenswrapper[4698]: I0127 14:31:24.515118 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-b8t54"] Jan 27 14:31:24 crc kubenswrapper[4698]: I0127 14:31:24.517069 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-c49dc"] Jan 27 14:31:24 crc kubenswrapper[4698]: I0127 14:31:24.517227 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-hwgzv"] Jan 27 14:31:24 crc kubenswrapper[4698]: I0127 14:31:24.518617 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-h8hj7"] Jan 27 14:31:24 crc kubenswrapper[4698]: I0127 14:31:24.523068 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-zft44"] Jan 27 14:31:24 crc kubenswrapper[4698]: I0127 14:31:24.526016 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29492070-z2sp4"] Jan 27 14:31:24 crc kubenswrapper[4698]: I0127 14:31:24.530227 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-msjx7"] Jan 27 14:31:24 crc kubenswrapper[4698]: I0127 14:31:24.533107 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Jan 27 14:31:24 crc kubenswrapper[4698]: I0127 14:31:24.535819 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-vrtzk"] Jan 27 14:31:24 crc kubenswrapper[4698]: I0127 14:31:24.538295 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-pwzb6"] Jan 27 14:31:24 crc kubenswrapper[4698]: I0127 14:31:24.538856 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-pwzb6" Jan 27 14:31:24 crc kubenswrapper[4698]: I0127 14:31:24.552473 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Jan 27 14:31:24 crc kubenswrapper[4698]: I0127 14:31:24.559574 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/f7ee1b04-33fb-452c-917e-ea08b3f489a4-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-d9s2k\" (UID: \"f7ee1b04-33fb-452c-917e-ea08b3f489a4\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-d9s2k" Jan 27 14:31:24 crc kubenswrapper[4698]: I0127 14:31:24.559626 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xcjzv\" (UniqueName: \"kubernetes.io/projected/f7ee1b04-33fb-452c-917e-ea08b3f489a4-kube-api-access-xcjzv\") pod \"cluster-image-registry-operator-dc59b4c8b-d9s2k\" (UID: \"f7ee1b04-33fb-452c-917e-ea08b3f489a4\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-d9s2k" Jan 27 14:31:24 crc kubenswrapper[4698]: I0127 14:31:24.559666 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f7ee1b04-33fb-452c-917e-ea08b3f489a4-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-d9s2k\" (UID: \"f7ee1b04-33fb-452c-917e-ea08b3f489a4\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-d9s2k" Jan 27 14:31:24 crc kubenswrapper[4698]: I0127 14:31:24.559692 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/f7ee1b04-33fb-452c-917e-ea08b3f489a4-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-d9s2k\" (UID: \"f7ee1b04-33fb-452c-917e-ea08b3f489a4\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-d9s2k" Jan 27 14:31:24 crc kubenswrapper[4698]: I0127 14:31:24.560824 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f7ee1b04-33fb-452c-917e-ea08b3f489a4-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-d9s2k\" (UID: \"f7ee1b04-33fb-452c-917e-ea08b3f489a4\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-d9s2k" Jan 27 14:31:24 crc kubenswrapper[4698]: I0127 14:31:24.562974 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/f7ee1b04-33fb-452c-917e-ea08b3f489a4-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-d9s2k\" (UID: \"f7ee1b04-33fb-452c-917e-ea08b3f489a4\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-d9s2k" Jan 27 14:31:24 crc kubenswrapper[4698]: I0127 14:31:24.572785 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Jan 27 14:31:24 crc kubenswrapper[4698]: I0127 14:31:24.591956 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Jan 27 14:31:24 crc kubenswrapper[4698]: I0127 14:31:24.612759 4698 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-image-registry"/"installation-pull-secrets" Jan 27 14:31:24 crc kubenswrapper[4698]: I0127 14:31:24.632200 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Jan 27 14:31:24 crc kubenswrapper[4698]: I0127 14:31:24.652921 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Jan 27 14:31:24 crc kubenswrapper[4698]: I0127 14:31:24.672757 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Jan 27 14:31:24 crc kubenswrapper[4698]: I0127 14:31:24.692525 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Jan 27 14:31:24 crc kubenswrapper[4698]: I0127 14:31:24.712800 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Jan 27 14:31:24 crc kubenswrapper[4698]: I0127 14:31:24.732450 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Jan 27 14:31:24 crc kubenswrapper[4698]: I0127 14:31:24.752209 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Jan 27 14:31:24 crc kubenswrapper[4698]: I0127 14:31:24.772470 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Jan 27 14:31:24 crc kubenswrapper[4698]: I0127 14:31:24.797949 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Jan 27 14:31:24 crc kubenswrapper[4698]: I0127 14:31:24.813052 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Jan 27 14:31:24 crc kubenswrapper[4698]: I0127 14:31:24.832196 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Jan 27 14:31:24 crc kubenswrapper[4698]: I0127 14:31:24.852819 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Jan 27 14:31:24 crc kubenswrapper[4698]: I0127 14:31:24.873503 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Jan 27 14:31:24 crc kubenswrapper[4698]: I0127 14:31:24.892896 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Jan 27 14:31:24 crc kubenswrapper[4698]: I0127 14:31:24.932146 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Jan 27 14:31:24 crc kubenswrapper[4698]: I0127 14:31:24.953397 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Jan 27 14:31:24 crc kubenswrapper[4698]: I0127 14:31:24.972817 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Jan 27 14:31:24 crc kubenswrapper[4698]: I0127 14:31:24.993031 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Jan 27 14:31:25 crc 
kubenswrapper[4698]: I0127 14:31:25.012594 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Jan 27 14:31:25 crc kubenswrapper[4698]: I0127 14:31:25.032718 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Jan 27 14:31:25 crc kubenswrapper[4698]: I0127 14:31:25.053015 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Jan 27 14:31:25 crc kubenswrapper[4698]: I0127 14:31:25.092313 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Jan 27 14:31:25 crc kubenswrapper[4698]: I0127 14:31:25.113257 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Jan 27 14:31:25 crc kubenswrapper[4698]: I0127 14:31:25.131863 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Jan 27 14:31:25 crc kubenswrapper[4698]: I0127 14:31:25.152484 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Jan 27 14:31:25 crc kubenswrapper[4698]: I0127 14:31:25.173336 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Jan 27 14:31:25 crc kubenswrapper[4698]: I0127 14:31:25.192898 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Jan 27 14:31:25 crc kubenswrapper[4698]: I0127 14:31:25.213177 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Jan 27 14:31:25 crc kubenswrapper[4698]: I0127 14:31:25.232609 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Jan 27 14:31:25 crc kubenswrapper[4698]: I0127 14:31:25.252801 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Jan 27 14:31:25 crc kubenswrapper[4698]: I0127 14:31:25.272045 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Jan 27 14:31:25 crc kubenswrapper[4698]: I0127 14:31:25.292287 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Jan 27 14:31:25 crc kubenswrapper[4698]: I0127 14:31:25.311997 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Jan 27 14:31:25 crc kubenswrapper[4698]: I0127 14:31:25.333298 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Jan 27 14:31:25 crc kubenswrapper[4698]: I0127 14:31:25.352356 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Jan 27 14:31:25 crc kubenswrapper[4698]: I0127 14:31:25.372736 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Jan 27 14:31:25 crc kubenswrapper[4698]: I0127 
14:31:25.392430 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Jan 27 14:31:25 crc kubenswrapper[4698]: I0127 14:31:25.411174 4698 request.go:700] Waited for 1.014927056s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-storage-version-migrator-operator/configmaps?fieldSelector=metadata.name%3Dconfig&limit=500&resourceVersion=0 Jan 27 14:31:25 crc kubenswrapper[4698]: I0127 14:31:25.413164 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Jan 27 14:31:25 crc kubenswrapper[4698]: I0127 14:31:25.432808 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Jan 27 14:31:25 crc kubenswrapper[4698]: I0127 14:31:25.452992 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Jan 27 14:31:25 crc kubenswrapper[4698]: I0127 14:31:25.472336 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Jan 27 14:31:25 crc kubenswrapper[4698]: I0127 14:31:25.492167 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Jan 27 14:31:25 crc kubenswrapper[4698]: I0127 14:31:25.518118 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Jan 27 14:31:25 crc kubenswrapper[4698]: I0127 14:31:25.533238 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Jan 27 14:31:25 crc kubenswrapper[4698]: I0127 14:31:25.554247 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Jan 27 14:31:25 crc kubenswrapper[4698]: I0127 14:31:25.572573 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Jan 27 14:31:25 crc kubenswrapper[4698]: I0127 14:31:25.592018 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Jan 27 14:31:25 crc kubenswrapper[4698]: I0127 14:31:25.612280 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Jan 27 14:31:25 crc kubenswrapper[4698]: I0127 14:31:25.632404 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Jan 27 14:31:25 crc kubenswrapper[4698]: I0127 14:31:25.652721 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Jan 27 14:31:25 crc kubenswrapper[4698]: I0127 14:31:25.672081 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Jan 27 14:31:25 crc kubenswrapper[4698]: I0127 14:31:25.692969 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Jan 27 14:31:25 crc kubenswrapper[4698]: I0127 14:31:25.728026 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vgtwz\" 
(UniqueName: \"kubernetes.io/projected/3d3d75e2-1fec-4458-9cb7-3472250b0b49-kube-api-access-vgtwz\") pod \"controller-manager-879f6c89f-cj2hq\" (UID: \"3d3d75e2-1fec-4458-9cb7-3472250b0b49\") " pod="openshift-controller-manager/controller-manager-879f6c89f-cj2hq" Jan 27 14:31:25 crc kubenswrapper[4698]: I0127 14:31:25.732996 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Jan 27 14:31:25 crc kubenswrapper[4698]: I0127 14:31:25.766486 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kq56w\" (UniqueName: \"kubernetes.io/projected/dfe15858-3d24-48fb-b534-a4b484e027e3-kube-api-access-kq56w\") pod \"openshift-apiserver-operator-796bbdcf4f-hmtx5\" (UID: \"dfe15858-3d24-48fb-b534-a4b484e027e3\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-hmtx5" Jan 27 14:31:25 crc kubenswrapper[4698]: I0127 14:31:25.788320 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fq8b9\" (UniqueName: \"kubernetes.io/projected/9d8c543e-8120-4d1f-b76c-5be16d35bf1d-kube-api-access-fq8b9\") pod \"authentication-operator-69f744f599-h864m\" (UID: \"9d8c543e-8120-4d1f-b76c-5be16d35bf1d\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-h864m" Jan 27 14:31:25 crc kubenswrapper[4698]: I0127 14:31:25.792829 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 27 14:31:25 crc kubenswrapper[4698]: I0127 14:31:25.812990 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 27 14:31:25 crc kubenswrapper[4698]: I0127 14:31:25.847724 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mqgfc\" (UniqueName: \"kubernetes.io/projected/608093bb-ab9f-47bf-bf66-938266244574-kube-api-access-mqgfc\") pod \"apiserver-7bbb656c7d-qtjcb\" (UID: \"608093bb-ab9f-47bf-bf66-938266244574\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-qtjcb" Jan 27 14:31:25 crc kubenswrapper[4698]: I0127 14:31:25.852162 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Jan 27 14:31:25 crc kubenswrapper[4698]: I0127 14:31:25.872261 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-cj2hq" Jan 27 14:31:25 crc kubenswrapper[4698]: I0127 14:31:25.872305 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Jan 27 14:31:25 crc kubenswrapper[4698]: I0127 14:31:25.892856 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Jan 27 14:31:25 crc kubenswrapper[4698]: I0127 14:31:25.912659 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Jan 27 14:31:25 crc kubenswrapper[4698]: I0127 14:31:25.933803 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Jan 27 14:31:25 crc kubenswrapper[4698]: I0127 14:31:25.962303 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-qtjcb" Jan 27 14:31:25 crc kubenswrapper[4698]: I0127 14:31:25.964094 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Jan 27 14:31:25 crc kubenswrapper[4698]: I0127 14:31:25.973987 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-h864m" Jan 27 14:31:25 crc kubenswrapper[4698]: I0127 14:31:25.976118 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Jan 27 14:31:25 crc kubenswrapper[4698]: I0127 14:31:25.994935 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Jan 27 14:31:26 crc kubenswrapper[4698]: I0127 14:31:26.013032 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Jan 27 14:31:26 crc kubenswrapper[4698]: I0127 14:31:26.050163 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x6cbm\" (UniqueName: \"kubernetes.io/projected/f57848ff-da41-4c6a-9586-c57676b73c90-kube-api-access-x6cbm\") pod \"apiserver-76f77b778f-z5f9l\" (UID: \"f57848ff-da41-4c6a-9586-c57676b73c90\") " pod="openshift-apiserver/apiserver-76f77b778f-z5f9l" Jan 27 14:31:26 crc kubenswrapper[4698]: I0127 14:31:26.050750 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-hmtx5" Jan 27 14:31:26 crc kubenswrapper[4698]: I0127 14:31:26.052089 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Jan 27 14:31:26 crc kubenswrapper[4698]: I0127 14:31:26.072917 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Jan 27 14:31:26 crc kubenswrapper[4698]: I0127 14:31:26.093092 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Jan 27 14:31:26 crc kubenswrapper[4698]: I0127 14:31:26.114920 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Jan 27 14:31:26 crc kubenswrapper[4698]: I0127 14:31:26.132916 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Jan 27 14:31:26 crc kubenswrapper[4698]: I0127 14:31:26.155263 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-z5f9l" Jan 27 14:31:26 crc kubenswrapper[4698]: I0127 14:31:26.165655 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fvv4t\" (UniqueName: \"kubernetes.io/projected/77a18531-ffc7-42d9-bba7-78d72b032c39-kube-api-access-fvv4t\") pod \"machine-api-operator-5694c8668f-qpzns\" (UID: \"77a18531-ffc7-42d9-bba7-78d72b032c39\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-qpzns" Jan 27 14:31:26 crc kubenswrapper[4698]: I0127 14:31:26.187670 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6zxfx\" (UniqueName: \"kubernetes.io/projected/7a699460-e5aa-401d-b2c4-003604099924-kube-api-access-6zxfx\") pod \"oauth-openshift-558db77b4-x7rj5\" (UID: \"7a699460-e5aa-401d-b2c4-003604099924\") " pod="openshift-authentication/oauth-openshift-558db77b4-x7rj5" Jan 27 14:31:26 crc kubenswrapper[4698]: I0127 14:31:26.207151 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2zptc\" (UniqueName: \"kubernetes.io/projected/64b274f6-5293-4c0e-a51a-dca8518c5a40-kube-api-access-2zptc\") pod \"downloads-7954f5f757-bdrpp\" (UID: \"64b274f6-5293-4c0e-a51a-dca8518c5a40\") " pod="openshift-console/downloads-7954f5f757-bdrpp" Jan 27 14:31:26 crc kubenswrapper[4698]: I0127 14:31:26.212068 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Jan 27 14:31:26 crc kubenswrapper[4698]: I0127 14:31:26.232483 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Jan 27 14:31:26 crc kubenswrapper[4698]: I0127 14:31:26.252824 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Jan 27 14:31:26 crc kubenswrapper[4698]: I0127 14:31:26.272339 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-qpzns" Jan 27 14:31:26 crc kubenswrapper[4698]: I0127 14:31:26.273219 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Jan 27 14:31:26 crc kubenswrapper[4698]: I0127 14:31:26.293073 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Jan 27 14:31:26 crc kubenswrapper[4698]: I0127 14:31:26.313938 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Jan 27 14:31:26 crc kubenswrapper[4698]: I0127 14:31:26.330786 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-cj2hq"] Jan 27 14:31:26 crc kubenswrapper[4698]: I0127 14:31:26.331848 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-hmtx5"] Jan 27 14:31:26 crc kubenswrapper[4698]: I0127 14:31:26.333798 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Jan 27 14:31:26 crc kubenswrapper[4698]: I0127 14:31:26.337607 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-qtjcb"] Jan 27 14:31:26 crc kubenswrapper[4698]: I0127 14:31:26.338675 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-h864m"] Jan 27 14:31:26 crc kubenswrapper[4698]: W0127 14:31:26.345800 4698 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3d3d75e2_1fec_4458_9cb7_3472250b0b49.slice/crio-6a552a249acc3a977702c02a05385405da38caf9193983b37be9595f8841853e WatchSource:0}: Error finding container 6a552a249acc3a977702c02a05385405da38caf9193983b37be9595f8841853e: Status 404 returned error can't find the container with id 6a552a249acc3a977702c02a05385405da38caf9193983b37be9595f8841853e Jan 27 14:31:26 crc kubenswrapper[4698]: I0127 14:31:26.355001 4698 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Jan 27 14:31:26 crc kubenswrapper[4698]: I0127 14:31:26.358590 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-z5f9l"] Jan 27 14:31:26 crc kubenswrapper[4698]: I0127 14:31:26.372713 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Jan 27 14:31:26 crc kubenswrapper[4698]: I0127 14:31:26.393817 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Jan 27 14:31:26 crc kubenswrapper[4698]: I0127 14:31:26.414607 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Jan 27 14:31:26 crc kubenswrapper[4698]: I0127 14:31:26.429385 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/downloads-7954f5f757-bdrpp" Jan 27 14:31:26 crc kubenswrapper[4698]: I0127 14:31:26.430740 4698 request.go:700] Waited for 1.870863935s due to client-side throttling, not priority and fairness, request: POST:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/serviceaccounts/cluster-image-registry-operator/token Jan 27 14:31:26 crc kubenswrapper[4698]: I0127 14:31:26.448215 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-qpzns"] Jan 27 14:31:26 crc kubenswrapper[4698]: I0127 14:31:26.449675 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xcjzv\" (UniqueName: \"kubernetes.io/projected/f7ee1b04-33fb-452c-917e-ea08b3f489a4-kube-api-access-xcjzv\") pod \"cluster-image-registry-operator-dc59b4c8b-d9s2k\" (UID: \"f7ee1b04-33fb-452c-917e-ea08b3f489a4\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-d9s2k" Jan 27 14:31:26 crc kubenswrapper[4698]: I0127 14:31:26.467711 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/f7ee1b04-33fb-452c-917e-ea08b3f489a4-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-d9s2k\" (UID: \"f7ee1b04-33fb-452c-917e-ea08b3f489a4\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-d9s2k" Jan 27 14:31:26 crc kubenswrapper[4698]: I0127 14:31:26.482753 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-x7rj5" Jan 27 14:31:26 crc kubenswrapper[4698]: I0127 14:31:26.584427 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c8964e6d-30b9-4402-b132-105cb5a1695b-config\") pod \"route-controller-manager-6576b87f9c-p5fgd\" (UID: \"c8964e6d-30b9-4402-b132-105cb5a1695b\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-p5fgd" Jan 27 14:31:26 crc kubenswrapper[4698]: I0127 14:31:26.584480 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2qhq7\" (UniqueName: \"kubernetes.io/projected/12b42d9a-df65-4a89-8961-1fa7f9b8a14b-kube-api-access-2qhq7\") pod \"console-f9d7485db-cvnrn\" (UID: \"12b42d9a-df65-4a89-8961-1fa7f9b8a14b\") " pod="openshift-console/console-f9d7485db-cvnrn" Jan 27 14:31:26 crc kubenswrapper[4698]: I0127 14:31:26.584520 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/fb1d6ca9-58c3-4d0f-9b6f-e9dad08632b8-bound-sa-token\") pod \"image-registry-697d97f7c8-vz5fp\" (UID: \"fb1d6ca9-58c3-4d0f-9b6f-e9dad08632b8\") " pod="openshift-image-registry/image-registry-697d97f7c8-vz5fp" Jan 27 14:31:26 crc kubenswrapper[4698]: I0127 14:31:26.584544 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/79cd2a60-54ce-46a5-96cd-53bb078fa804-default-certificate\") pod \"router-default-5444994796-8bg4r\" (UID: \"79cd2a60-54ce-46a5-96cd-53bb078fa804\") " pod="openshift-ingress/router-default-5444994796-8bg4r" Jan 27 14:31:26 crc kubenswrapper[4698]: I0127 14:31:26.584583 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5f4a113f-a4c9-423f-8d34-fb05c1f776af-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-mjdln\" (UID: \"5f4a113f-a4c9-423f-8d34-fb05c1f776af\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-mjdln" Jan 27 14:31:26 crc kubenswrapper[4698]: I0127 14:31:26.584608 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b9797e-289f-45f3-8707-bc899a687aa1-metrics-tls\") pod \"ingress-operator-5b745b69d9-qb472\" (UID: \"96b9797e-289f-45f3-8707-bc899a687aa1\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-qb472" Jan 27 14:31:26 crc kubenswrapper[4698]: I0127 14:31:26.584658 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/fb1d6ca9-58c3-4d0f-9b6f-e9dad08632b8-trusted-ca\") pod \"image-registry-697d97f7c8-vz5fp\" (UID: \"fb1d6ca9-58c3-4d0f-9b6f-e9dad08632b8\") " pod="openshift-image-registry/image-registry-697d97f7c8-vz5fp" Jan 27 14:31:26 crc kubenswrapper[4698]: I0127 14:31:26.584706 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/fb1d6ca9-58c3-4d0f-9b6f-e9dad08632b8-ca-trust-extracted\") pod \"image-registry-697d97f7c8-vz5fp\" (UID: \"fb1d6ca9-58c3-4d0f-9b6f-e9dad08632b8\") " pod="openshift-image-registry/image-registry-697d97f7c8-vz5fp" Jan 27 14:31:26 crc kubenswrapper[4698]: I0127 14:31:26.584731 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/79cd2a60-54ce-46a5-96cd-53bb078fa804-stats-auth\") pod \"router-default-5444994796-8bg4r\" (UID: \"79cd2a60-54ce-46a5-96cd-53bb078fa804\") " pod="openshift-ingress/router-default-5444994796-8bg4r" Jan 27 14:31:26 crc kubenswrapper[4698]: I0127 14:31:26.584753 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/12b42d9a-df65-4a89-8961-1fa7f9b8a14b-oauth-serving-cert\") pod \"console-f9d7485db-cvnrn\" (UID: \"12b42d9a-df65-4a89-8961-1fa7f9b8a14b\") " pod="openshift-console/console-f9d7485db-cvnrn" Jan 27 14:31:26 crc kubenswrapper[4698]: I0127 14:31:26.584975 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/fb1d6ca9-58c3-4d0f-9b6f-e9dad08632b8-registry-tls\") pod \"image-registry-697d97f7c8-vz5fp\" (UID: \"fb1d6ca9-58c3-4d0f-9b6f-e9dad08632b8\") " pod="openshift-image-registry/image-registry-697d97f7c8-vz5fp" Jan 27 14:31:26 crc kubenswrapper[4698]: I0127 14:31:26.585028 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f4023d55-2b87-419a-be3e-bab987ba0841-serving-cert\") pod \"openshift-config-operator-7777fb866f-m8slw\" (UID: \"f4023d55-2b87-419a-be3e-bab987ba0841\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-m8slw" Jan 27 14:31:26 crc kubenswrapper[4698]: I0127 14:31:26.585428 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: 
\"kubernetes.io/secret/7b8fe528-a188-48bd-8555-6dd2798122fe-machine-approver-tls\") pod \"machine-approver-56656f9798-cfwgg\" (UID: \"7b8fe528-a188-48bd-8555-6dd2798122fe\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-cfwgg" Jan 27 14:31:26 crc kubenswrapper[4698]: I0127 14:31:26.585482 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7b8fe528-a188-48bd-8555-6dd2798122fe-config\") pod \"machine-approver-56656f9798-cfwgg\" (UID: \"7b8fe528-a188-48bd-8555-6dd2798122fe\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-cfwgg" Jan 27 14:31:26 crc kubenswrapper[4698]: I0127 14:31:26.585702 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/88376a00-d5b2-4d08-ae81-097d8134df27-serving-cert\") pod \"etcd-operator-b45778765-6xmqh\" (UID: \"88376a00-d5b2-4d08-ae81-097d8134df27\") " pod="openshift-etcd-operator/etcd-operator-b45778765-6xmqh" Jan 27 14:31:26 crc kubenswrapper[4698]: I0127 14:31:26.585821 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/13b718e9-6bf1-4e81-91ce-feea3116fd97-config\") pod \"console-operator-58897d9998-bz9jw\" (UID: \"13b718e9-6bf1-4e81-91ce-feea3116fd97\") " pod="openshift-console-operator/console-operator-58897d9998-bz9jw" Jan 27 14:31:26 crc kubenswrapper[4698]: I0127 14:31:26.585855 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5f4a113f-a4c9-423f-8d34-fb05c1f776af-config\") pod \"kube-controller-manager-operator-78b949d7b-mjdln\" (UID: \"5f4a113f-a4c9-423f-8d34-fb05c1f776af\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-mjdln" Jan 27 14:31:26 crc kubenswrapper[4698]: I0127 14:31:26.586514 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/12b42d9a-df65-4a89-8961-1fa7f9b8a14b-service-ca\") pod \"console-f9d7485db-cvnrn\" (UID: \"12b42d9a-df65-4a89-8961-1fa7f9b8a14b\") " pod="openshift-console/console-f9d7485db-cvnrn" Jan 27 14:31:26 crc kubenswrapper[4698]: I0127 14:31:26.586563 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/12b42d9a-df65-4a89-8961-1fa7f9b8a14b-trusted-ca-bundle\") pod \"console-f9d7485db-cvnrn\" (UID: \"12b42d9a-df65-4a89-8961-1fa7f9b8a14b\") " pod="openshift-console/console-f9d7485db-cvnrn" Jan 27 14:31:26 crc kubenswrapper[4698]: I0127 14:31:26.586613 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vz5fp\" (UID: \"fb1d6ca9-58c3-4d0f-9b6f-e9dad08632b8\") " pod="openshift-image-registry/image-registry-697d97f7c8-vz5fp" Jan 27 14:31:26 crc kubenswrapper[4698]: I0127 14:31:26.586711 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/12b42d9a-df65-4a89-8961-1fa7f9b8a14b-console-config\") pod \"console-f9d7485db-cvnrn\" (UID: 
\"12b42d9a-df65-4a89-8961-1fa7f9b8a14b\") " pod="openshift-console/console-f9d7485db-cvnrn" Jan 27 14:31:26 crc kubenswrapper[4698]: I0127 14:31:26.586777 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4bsfb\" (UniqueName: \"kubernetes.io/projected/557a3dbe-140e-4e30-bad7-f2c7e828d446-kube-api-access-4bsfb\") pod \"cluster-samples-operator-665b6dd947-6kdpx\" (UID: \"557a3dbe-140e-4e30-bad7-f2c7e828d446\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-6kdpx" Jan 27 14:31:26 crc kubenswrapper[4698]: I0127 14:31:26.586827 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/88376a00-d5b2-4d08-ae81-097d8134df27-etcd-ca\") pod \"etcd-operator-b45778765-6xmqh\" (UID: \"88376a00-d5b2-4d08-ae81-097d8134df27\") " pod="openshift-etcd-operator/etcd-operator-b45778765-6xmqh" Jan 27 14:31:26 crc kubenswrapper[4698]: I0127 14:31:26.586852 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-blptb\" (UniqueName: \"kubernetes.io/projected/f4023d55-2b87-419a-be3e-bab987ba0841-kube-api-access-blptb\") pod \"openshift-config-operator-7777fb866f-m8slw\" (UID: \"f4023d55-2b87-419a-be3e-bab987ba0841\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-m8slw" Jan 27 14:31:26 crc kubenswrapper[4698]: I0127 14:31:26.586875 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/88376a00-d5b2-4d08-ae81-097d8134df27-etcd-service-ca\") pod \"etcd-operator-b45778765-6xmqh\" (UID: \"88376a00-d5b2-4d08-ae81-097d8134df27\") " pod="openshift-etcd-operator/etcd-operator-b45778765-6xmqh" Jan 27 14:31:26 crc kubenswrapper[4698]: I0127 14:31:26.586895 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/88376a00-d5b2-4d08-ae81-097d8134df27-etcd-client\") pod \"etcd-operator-b45778765-6xmqh\" (UID: \"88376a00-d5b2-4d08-ae81-097d8134df27\") " pod="openshift-etcd-operator/etcd-operator-b45778765-6xmqh" Jan 27 14:31:26 crc kubenswrapper[4698]: I0127 14:31:26.586915 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/79cd2a60-54ce-46a5-96cd-53bb078fa804-service-ca-bundle\") pod \"router-default-5444994796-8bg4r\" (UID: \"79cd2a60-54ce-46a5-96cd-53bb078fa804\") " pod="openshift-ingress/router-default-5444994796-8bg4r" Jan 27 14:31:26 crc kubenswrapper[4698]: I0127 14:31:26.586948 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/79cd2a60-54ce-46a5-96cd-53bb078fa804-metrics-certs\") pod \"router-default-5444994796-8bg4r\" (UID: \"79cd2a60-54ce-46a5-96cd-53bb078fa804\") " pod="openshift-ingress/router-default-5444994796-8bg4r" Jan 27 14:31:26 crc kubenswrapper[4698]: I0127 14:31:26.586986 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/78829bfe-d678-496f-8bf5-28b5008758f0-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-d7vnt\" (UID: \"78829bfe-d678-496f-8bf5-28b5008758f0\") " 
pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-d7vnt" Jan 27 14:31:26 crc kubenswrapper[4698]: I0127 14:31:26.587011 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/fb1d6ca9-58c3-4d0f-9b6f-e9dad08632b8-registry-certificates\") pod \"image-registry-697d97f7c8-vz5fp\" (UID: \"fb1d6ca9-58c3-4d0f-9b6f-e9dad08632b8\") " pod="openshift-image-registry/image-registry-697d97f7c8-vz5fp" Jan 27 14:31:26 crc kubenswrapper[4698]: I0127 14:31:26.587032 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5f4a113f-a4c9-423f-8d34-fb05c1f776af-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-mjdln\" (UID: \"5f4a113f-a4c9-423f-8d34-fb05c1f776af\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-mjdln" Jan 27 14:31:26 crc kubenswrapper[4698]: I0127 14:31:26.587065 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/13b718e9-6bf1-4e81-91ce-feea3116fd97-trusted-ca\") pod \"console-operator-58897d9998-bz9jw\" (UID: \"13b718e9-6bf1-4e81-91ce-feea3116fd97\") " pod="openshift-console-operator/console-operator-58897d9998-bz9jw" Jan 27 14:31:26 crc kubenswrapper[4698]: I0127 14:31:26.587116 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/fb1d6ca9-58c3-4d0f-9b6f-e9dad08632b8-installation-pull-secrets\") pod \"image-registry-697d97f7c8-vz5fp\" (UID: \"fb1d6ca9-58c3-4d0f-9b6f-e9dad08632b8\") " pod="openshift-image-registry/image-registry-697d97f7c8-vz5fp" Jan 27 14:31:26 crc kubenswrapper[4698]: I0127 14:31:26.587138 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c8964e6d-30b9-4402-b132-105cb5a1695b-client-ca\") pod \"route-controller-manager-6576b87f9c-p5fgd\" (UID: \"c8964e6d-30b9-4402-b132-105cb5a1695b\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-p5fgd" Jan 27 14:31:26 crc kubenswrapper[4698]: I0127 14:31:26.587161 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/13b718e9-6bf1-4e81-91ce-feea3116fd97-serving-cert\") pod \"console-operator-58897d9998-bz9jw\" (UID: \"13b718e9-6bf1-4e81-91ce-feea3116fd97\") " pod="openshift-console-operator/console-operator-58897d9998-bz9jw" Jan 27 14:31:26 crc kubenswrapper[4698]: I0127 14:31:26.587299 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wlrzd\" (UniqueName: \"kubernetes.io/projected/0a93337a-414f-4a6b-9cdb-4cb56092851a-kube-api-access-wlrzd\") pod \"dns-operator-744455d44c-dhnh8\" (UID: \"0a93337a-414f-4a6b-9cdb-4cb56092851a\") " pod="openshift-dns-operator/dns-operator-744455d44c-dhnh8" Jan 27 14:31:26 crc kubenswrapper[4698]: I0127 14:31:26.587338 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/78829bfe-d678-496f-8bf5-28b5008758f0-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-d7vnt\" (UID: \"78829bfe-d678-496f-8bf5-28b5008758f0\") " 
pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-d7vnt" Jan 27 14:31:26 crc kubenswrapper[4698]: I0127 14:31:26.587390 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5vgk6\" (UniqueName: \"kubernetes.io/projected/13b718e9-6bf1-4e81-91ce-feea3116fd97-kube-api-access-5vgk6\") pod \"console-operator-58897d9998-bz9jw\" (UID: \"13b718e9-6bf1-4e81-91ce-feea3116fd97\") " pod="openshift-console-operator/console-operator-58897d9998-bz9jw" Jan 27 14:31:26 crc kubenswrapper[4698]: I0127 14:31:26.587457 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/78829bfe-d678-496f-8bf5-28b5008758f0-config\") pod \"kube-apiserver-operator-766d6c64bb-d7vnt\" (UID: \"78829bfe-d678-496f-8bf5-28b5008758f0\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-d7vnt" Jan 27 14:31:26 crc kubenswrapper[4698]: I0127 14:31:26.587500 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/96b9797e-289f-45f3-8707-bc899a687aa1-bound-sa-token\") pod \"ingress-operator-5b745b69d9-qb472\" (UID: \"96b9797e-289f-45f3-8707-bc899a687aa1\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-qb472" Jan 27 14:31:26 crc kubenswrapper[4698]: I0127 14:31:26.587541 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/f4023d55-2b87-419a-be3e-bab987ba0841-available-featuregates\") pod \"openshift-config-operator-7777fb866f-m8slw\" (UID: \"f4023d55-2b87-419a-be3e-bab987ba0841\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-m8slw" Jan 27 14:31:26 crc kubenswrapper[4698]: I0127 14:31:26.587563 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g92hb\" (UniqueName: \"kubernetes.io/projected/88376a00-d5b2-4d08-ae81-097d8134df27-kube-api-access-g92hb\") pod \"etcd-operator-b45778765-6xmqh\" (UID: \"88376a00-d5b2-4d08-ae81-097d8134df27\") " pod="openshift-etcd-operator/etcd-operator-b45778765-6xmqh" Jan 27 14:31:26 crc kubenswrapper[4698]: I0127 14:31:26.587587 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3f2c6212-a0a5-4d50-aa1c-63fc63296dab-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-z5cmj\" (UID: \"3f2c6212-a0a5-4d50-aa1c-63fc63296dab\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-z5cmj" Jan 27 14:31:26 crc kubenswrapper[4698]: I0127 14:31:26.587611 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/96b9797e-289f-45f3-8707-bc899a687aa1-trusted-ca\") pod \"ingress-operator-5b745b69d9-qb472\" (UID: \"96b9797e-289f-45f3-8707-bc899a687aa1\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-qb472" Jan 27 14:31:26 crc kubenswrapper[4698]: I0127 14:31:26.587653 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/12b42d9a-df65-4a89-8961-1fa7f9b8a14b-console-oauth-config\") pod \"console-f9d7485db-cvnrn\" (UID: 
\"12b42d9a-df65-4a89-8961-1fa7f9b8a14b\") " pod="openshift-console/console-f9d7485db-cvnrn" Jan 27 14:31:26 crc kubenswrapper[4698]: I0127 14:31:26.587675 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3f2c6212-a0a5-4d50-aa1c-63fc63296dab-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-z5cmj\" (UID: \"3f2c6212-a0a5-4d50-aa1c-63fc63296dab\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-z5cmj" Jan 27 14:31:26 crc kubenswrapper[4698]: I0127 14:31:26.587740 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6jcjq\" (UniqueName: \"kubernetes.io/projected/96b9797e-289f-45f3-8707-bc899a687aa1-kube-api-access-6jcjq\") pod \"ingress-operator-5b745b69d9-qb472\" (UID: \"96b9797e-289f-45f3-8707-bc899a687aa1\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-qb472" Jan 27 14:31:26 crc kubenswrapper[4698]: I0127 14:31:26.587763 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4k5ck\" (UniqueName: \"kubernetes.io/projected/3f2c6212-a0a5-4d50-aa1c-63fc63296dab-kube-api-access-4k5ck\") pod \"openshift-controller-manager-operator-756b6f6bc6-z5cmj\" (UID: \"3f2c6212-a0a5-4d50-aa1c-63fc63296dab\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-z5cmj" Jan 27 14:31:26 crc kubenswrapper[4698]: I0127 14:31:26.587861 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/12b42d9a-df65-4a89-8961-1fa7f9b8a14b-console-serving-cert\") pod \"console-f9d7485db-cvnrn\" (UID: \"12b42d9a-df65-4a89-8961-1fa7f9b8a14b\") " pod="openshift-console/console-f9d7485db-cvnrn" Jan 27 14:31:26 crc kubenswrapper[4698]: I0127 14:31:26.587899 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gfwth\" (UniqueName: \"kubernetes.io/projected/fb1d6ca9-58c3-4d0f-9b6f-e9dad08632b8-kube-api-access-gfwth\") pod \"image-registry-697d97f7c8-vz5fp\" (UID: \"fb1d6ca9-58c3-4d0f-9b6f-e9dad08632b8\") " pod="openshift-image-registry/image-registry-697d97f7c8-vz5fp" Jan 27 14:31:26 crc kubenswrapper[4698]: I0127 14:31:26.587923 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/88376a00-d5b2-4d08-ae81-097d8134df27-config\") pod \"etcd-operator-b45778765-6xmqh\" (UID: \"88376a00-d5b2-4d08-ae81-097d8134df27\") " pod="openshift-etcd-operator/etcd-operator-b45778765-6xmqh" Jan 27 14:31:26 crc kubenswrapper[4698]: I0127 14:31:26.587966 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/0a93337a-414f-4a6b-9cdb-4cb56092851a-metrics-tls\") pod \"dns-operator-744455d44c-dhnh8\" (UID: \"0a93337a-414f-4a6b-9cdb-4cb56092851a\") " pod="openshift-dns-operator/dns-operator-744455d44c-dhnh8" Jan 27 14:31:26 crc kubenswrapper[4698]: I0127 14:31:26.587989 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n6622\" (UniqueName: \"kubernetes.io/projected/c8964e6d-30b9-4402-b132-105cb5a1695b-kube-api-access-n6622\") pod \"route-controller-manager-6576b87f9c-p5fgd\" (UID: 
\"c8964e6d-30b9-4402-b132-105cb5a1695b\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-p5fgd" Jan 27 14:31:26 crc kubenswrapper[4698]: I0127 14:31:26.588062 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8rmg9\" (UniqueName: \"kubernetes.io/projected/79cd2a60-54ce-46a5-96cd-53bb078fa804-kube-api-access-8rmg9\") pod \"router-default-5444994796-8bg4r\" (UID: \"79cd2a60-54ce-46a5-96cd-53bb078fa804\") " pod="openshift-ingress/router-default-5444994796-8bg4r" Jan 27 14:31:26 crc kubenswrapper[4698]: I0127 14:31:26.588097 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c8964e6d-30b9-4402-b132-105cb5a1695b-serving-cert\") pod \"route-controller-manager-6576b87f9c-p5fgd\" (UID: \"c8964e6d-30b9-4402-b132-105cb5a1695b\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-p5fgd" Jan 27 14:31:26 crc kubenswrapper[4698]: I0127 14:31:26.588121 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tsz8b\" (UniqueName: \"kubernetes.io/projected/7b8fe528-a188-48bd-8555-6dd2798122fe-kube-api-access-tsz8b\") pod \"machine-approver-56656f9798-cfwgg\" (UID: \"7b8fe528-a188-48bd-8555-6dd2798122fe\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-cfwgg" Jan 27 14:31:26 crc kubenswrapper[4698]: I0127 14:31:26.588156 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/557a3dbe-140e-4e30-bad7-f2c7e828d446-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-6kdpx\" (UID: \"557a3dbe-140e-4e30-bad7-f2c7e828d446\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-6kdpx" Jan 27 14:31:26 crc kubenswrapper[4698]: I0127 14:31:26.588217 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/7b8fe528-a188-48bd-8555-6dd2798122fe-auth-proxy-config\") pod \"machine-approver-56656f9798-cfwgg\" (UID: \"7b8fe528-a188-48bd-8555-6dd2798122fe\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-cfwgg" Jan 27 14:31:26 crc kubenswrapper[4698]: E0127 14:31:26.588705 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 14:31:27.088684376 +0000 UTC m=+142.765462031 (durationBeforeRetry 500ms). 
Jan 27 14:31:26 crc kubenswrapper[4698]: I0127 14:31:26.593431 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-qpzns" event={"ID":"77a18531-ffc7-42d9-bba7-78d72b032c39","Type":"ContainerStarted","Data":"51d0301c5ca889db45d40150fb8fae33f0072971fcce77ede694e10b9657ed7d"}
Jan 27 14:31:26 crc kubenswrapper[4698]: I0127 14:31:26.595556 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-z5f9l" event={"ID":"f57848ff-da41-4c6a-9586-c57676b73c90","Type":"ContainerStarted","Data":"6cefc728332530671d1427848a15e477eaacc56286e33c6be921bba13901427e"}
Jan 27 14:31:26 crc kubenswrapper[4698]: I0127 14:31:26.596791 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-hmtx5" event={"ID":"dfe15858-3d24-48fb-b534-a4b484e027e3","Type":"ContainerStarted","Data":"cd18dbd2ffdf398af601658ddfb078739ff2b3eb35837482f3a45ceffc4d7c5b"}
Jan 27 14:31:26 crc kubenswrapper[4698]: I0127 14:31:26.598653 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-h864m" event={"ID":"9d8c543e-8120-4d1f-b76c-5be16d35bf1d","Type":"ContainerStarted","Data":"24329442a889222e0c571effa7b95470db5e801f23d4b80e064171b256c0b568"}
Jan 27 14:31:26 crc kubenswrapper[4698]: I0127 14:31:26.599610 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-qtjcb" event={"ID":"608093bb-ab9f-47bf-bf66-938266244574","Type":"ContainerStarted","Data":"1253658ef1635f9b07a14b54aef5e2e61ba235c789a99cf5edf29f659b1cce44"}
Jan 27 14:31:26 crc kubenswrapper[4698]: I0127 14:31:26.600800 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-cj2hq" event={"ID":"3d3d75e2-1fec-4458-9cb7-3472250b0b49","Type":"ContainerStarted","Data":"6a552a249acc3a977702c02a05385405da38caf9193983b37be9595f8841853e"}
Jan 27 14:31:26 crc kubenswrapper[4698]: I0127 14:31:26.682304 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-d9s2k"
Jan 27 14:31:26 crc kubenswrapper[4698]: I0127 14:31:26.690037 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " 
Jan 27 14:31:26 crc kubenswrapper[4698]: E0127 14:31:26.690347 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 14:31:27.190297613 +0000 UTC m=+142.867075078 (durationBeforeRetry 500ms).
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
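Both failures share one root cause: the CSI plugin looked up the driver name kubevirt.io.hostpath-provisioner and it was not yet in kubelet's registered-driver list, which blocks MountDevice for the new image-registry pod (fb1d6ca9-...) and TearDown for the old one (8f668bae-...) alike. Registration normally happens once the driver pod itself is running (csi-hostpathplugin-zft44, whose volumes kubelet starts attaching just below) and drops a registration socket into kubelet's plugin registry. A quick hedged check of which drivers have registered, assuming the default /var/lib/kubelet root directory:

```go
// Sketch: list the registration sockets under kubelet's plugin registry.
// /var/lib/kubelet/plugins_registry is the conventional path; adjust if
// the kubelet runs with a custom --root-dir. A registered driver typically
// appears as <driver-name>-reg.sock, e.g. one named after
// kubevirt.io.hostpath-provisioner once registration succeeds.
package main

import (
	"fmt"
	"os"
)

func main() {
	entries, err := os.ReadDir("/var/lib/kubelet/plugins_registry")
	if err != nil {
		fmt.Fprintln(os.Stderr, "read plugin registry:", err)
		os.Exit(1)
	}
	for _, e := range entries {
		fmt.Println(e.Name())
	}
}
```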
Jan 27 14:31:26 crc kubenswrapper[4698]: I0127 14:31:26.690393 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/12b42d9a-df65-4a89-8961-1fa7f9b8a14b-service-ca\") pod \"console-f9d7485db-cvnrn\" (UID: \"12b42d9a-df65-4a89-8961-1fa7f9b8a14b\") " pod="openshift-console/console-f9d7485db-cvnrn"
Jan 27 14:31:26 crc kubenswrapper[4698]: I0127 14:31:26.690436 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/12b42d9a-df65-4a89-8961-1fa7f9b8a14b-trusted-ca-bundle\") pod \"console-f9d7485db-cvnrn\" (UID: \"12b42d9a-df65-4a89-8961-1fa7f9b8a14b\") " pod="openshift-console/console-f9d7485db-cvnrn"
Jan 27 14:31:26 crc kubenswrapper[4698]: I0127 14:31:26.690463 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/d758ca6f-86ae-44b7-bfcb-8e7f2e2205e8-csi-data-dir\") pod \"csi-hostpathplugin-zft44\" (UID: \"d758ca6f-86ae-44b7-bfcb-8e7f2e2205e8\") " pod="hostpath-provisioner/csi-hostpathplugin-zft44"
Jan 27 14:31:26 crc kubenswrapper[4698]: I0127 14:31:26.690494 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/12b42d9a-df65-4a89-8961-1fa7f9b8a14b-console-config\") pod \"console-f9d7485db-cvnrn\" (UID: \"12b42d9a-df65-4a89-8961-1fa7f9b8a14b\") " pod="openshift-console/console-f9d7485db-cvnrn"
Jan 27 14:31:26 crc kubenswrapper[4698]: I0127 14:31:26.690519 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/83d47a27-37a3-420c-af6f-e02bcd53ec1a-signing-cabundle\") pod \"service-ca-9c57cc56f-c5862\" (UID: \"83d47a27-37a3-420c-af6f-e02bcd53ec1a\") " pod="openshift-service-ca/service-ca-9c57cc56f-c5862"
Jan 27 14:31:26 crc kubenswrapper[4698]: I0127 14:31:26.690543 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f8de0774-06e2-4438-b5ef-ad70f998b22c-config\") pod \"service-ca-operator-777779d784-h8hj7\" (UID: \"f8de0774-06e2-4438-b5ef-ad70f998b22c\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-h8hj7"
Jan 27 14:31:26 crc kubenswrapper[4698]: I0127 14:31:26.690570 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/88376a00-d5b2-4d08-ae81-097d8134df27-etcd-ca\") pod \"etcd-operator-b45778765-6xmqh\" (UID: \"88376a00-d5b2-4d08-ae81-097d8134df27\") " pod="openshift-etcd-operator/etcd-operator-b45778765-6xmqh"
Jan 27 14:31:26 crc kubenswrapper[4698]: I0127 14:31:26.690595 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/88376a00-d5b2-4d08-ae81-097d8134df27-etcd-service-ca\") pod \"etcd-operator-b45778765-6xmqh\" 
(UID: \"88376a00-d5b2-4d08-ae81-097d8134df27\") " pod="openshift-etcd-operator/etcd-operator-b45778765-6xmqh" Jan 27 14:31:26 crc kubenswrapper[4698]: I0127 14:31:26.690621 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/88376a00-d5b2-4d08-ae81-097d8134df27-etcd-client\") pod \"etcd-operator-b45778765-6xmqh\" (UID: \"88376a00-d5b2-4d08-ae81-097d8134df27\") " pod="openshift-etcd-operator/etcd-operator-b45778765-6xmqh" Jan 27 14:31:26 crc kubenswrapper[4698]: I0127 14:31:26.690662 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/83d47a27-37a3-420c-af6f-e02bcd53ec1a-signing-key\") pod \"service-ca-9c57cc56f-c5862\" (UID: \"83d47a27-37a3-420c-af6f-e02bcd53ec1a\") " pod="openshift-service-ca/service-ca-9c57cc56f-c5862" Jan 27 14:31:26 crc kubenswrapper[4698]: I0127 14:31:26.690686 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gr9bc\" (UniqueName: \"kubernetes.io/projected/83d47a27-37a3-420c-af6f-e02bcd53ec1a-kube-api-access-gr9bc\") pod \"service-ca-9c57cc56f-c5862\" (UID: \"83d47a27-37a3-420c-af6f-e02bcd53ec1a\") " pod="openshift-service-ca/service-ca-9c57cc56f-c5862" Jan 27 14:31:26 crc kubenswrapper[4698]: I0127 14:31:26.690710 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vrrrp\" (UniqueName: \"kubernetes.io/projected/f8de0774-06e2-4438-b5ef-ad70f998b22c-kube-api-access-vrrrp\") pod \"service-ca-operator-777779d784-h8hj7\" (UID: \"f8de0774-06e2-4438-b5ef-ad70f998b22c\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-h8hj7" Jan 27 14:31:26 crc kubenswrapper[4698]: I0127 14:31:26.690735 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7c8g4\" (UniqueName: \"kubernetes.io/projected/97edac57-d351-482c-919b-d12bce71f637-kube-api-access-7c8g4\") pod \"migrator-59844c95c7-b8t54\" (UID: \"97edac57-d351-482c-919b-d12bce71f637\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-b8t54" Jan 27 14:31:26 crc kubenswrapper[4698]: I0127 14:31:26.690760 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/79cd2a60-54ce-46a5-96cd-53bb078fa804-metrics-certs\") pod \"router-default-5444994796-8bg4r\" (UID: \"79cd2a60-54ce-46a5-96cd-53bb078fa804\") " pod="openshift-ingress/router-default-5444994796-8bg4r" Jan 27 14:31:26 crc kubenswrapper[4698]: I0127 14:31:26.692291 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/88376a00-d5b2-4d08-ae81-097d8134df27-etcd-ca\") pod \"etcd-operator-b45778765-6xmqh\" (UID: \"88376a00-d5b2-4d08-ae81-097d8134df27\") " pod="openshift-etcd-operator/etcd-operator-b45778765-6xmqh" Jan 27 14:31:26 crc kubenswrapper[4698]: I0127 14:31:26.692438 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/12b42d9a-df65-4a89-8961-1fa7f9b8a14b-service-ca\") pod \"console-f9d7485db-cvnrn\" (UID: \"12b42d9a-df65-4a89-8961-1fa7f9b8a14b\") " pod="openshift-console/console-f9d7485db-cvnrn" Jan 27 14:31:26 crc kubenswrapper[4698]: I0127 14:31:26.692522 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" 
(UniqueName: \"kubernetes.io/configmap/12b42d9a-df65-4a89-8961-1fa7f9b8a14b-console-config\") pod \"console-f9d7485db-cvnrn\" (UID: \"12b42d9a-df65-4a89-8961-1fa7f9b8a14b\") " pod="openshift-console/console-f9d7485db-cvnrn" Jan 27 14:31:26 crc kubenswrapper[4698]: I0127 14:31:26.692532 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/88376a00-d5b2-4d08-ae81-097d8134df27-etcd-service-ca\") pod \"etcd-operator-b45778765-6xmqh\" (UID: \"88376a00-d5b2-4d08-ae81-097d8134df27\") " pod="openshift-etcd-operator/etcd-operator-b45778765-6xmqh" Jan 27 14:31:26 crc kubenswrapper[4698]: I0127 14:31:26.692696 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/12b42d9a-df65-4a89-8961-1fa7f9b8a14b-trusted-ca-bundle\") pod \"console-f9d7485db-cvnrn\" (UID: \"12b42d9a-df65-4a89-8961-1fa7f9b8a14b\") " pod="openshift-console/console-f9d7485db-cvnrn" Jan 27 14:31:26 crc kubenswrapper[4698]: I0127 14:31:26.690837 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3a66b6ff-8485-46ca-8a12-ca7a75b63596-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-hnfnv\" (UID: \"3a66b6ff-8485-46ca-8a12-ca7a75b63596\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-hnfnv" Jan 27 14:31:26 crc kubenswrapper[4698]: I0127 14:31:26.695649 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/fb1d6ca9-58c3-4d0f-9b6f-e9dad08632b8-registry-certificates\") pod \"image-registry-697d97f7c8-vz5fp\" (UID: \"fb1d6ca9-58c3-4d0f-9b6f-e9dad08632b8\") " pod="openshift-image-registry/image-registry-697d97f7c8-vz5fp" Jan 27 14:31:26 crc kubenswrapper[4698]: I0127 14:31:26.695685 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5f4a113f-a4c9-423f-8d34-fb05c1f776af-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-mjdln\" (UID: \"5f4a113f-a4c9-423f-8d34-fb05c1f776af\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-mjdln" Jan 27 14:31:26 crc kubenswrapper[4698]: I0127 14:31:26.695729 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dq2vl\" (UniqueName: \"kubernetes.io/projected/d758ca6f-86ae-44b7-bfcb-8e7f2e2205e8-kube-api-access-dq2vl\") pod \"csi-hostpathplugin-zft44\" (UID: \"d758ca6f-86ae-44b7-bfcb-8e7f2e2205e8\") " pod="hostpath-provisioner/csi-hostpathplugin-zft44" Jan 27 14:31:26 crc kubenswrapper[4698]: I0127 14:31:26.695758 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/13b718e9-6bf1-4e81-91ce-feea3116fd97-trusted-ca\") pod \"console-operator-58897d9998-bz9jw\" (UID: \"13b718e9-6bf1-4e81-91ce-feea3116fd97\") " pod="openshift-console-operator/console-operator-58897d9998-bz9jw" Jan 27 14:31:26 crc kubenswrapper[4698]: I0127 14:31:26.695771 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/88376a00-d5b2-4d08-ae81-097d8134df27-etcd-client\") pod \"etcd-operator-b45778765-6xmqh\" (UID: \"88376a00-d5b2-4d08-ae81-097d8134df27\") " 
pod="openshift-etcd-operator/etcd-operator-b45778765-6xmqh" Jan 27 14:31:26 crc kubenswrapper[4698]: I0127 14:31:26.695785 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/fb1d6ca9-58c3-4d0f-9b6f-e9dad08632b8-installation-pull-secrets\") pod \"image-registry-697d97f7c8-vz5fp\" (UID: \"fb1d6ca9-58c3-4d0f-9b6f-e9dad08632b8\") " pod="openshift-image-registry/image-registry-697d97f7c8-vz5fp" Jan 27 14:31:26 crc kubenswrapper[4698]: I0127 14:31:26.695810 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c8964e6d-30b9-4402-b132-105cb5a1695b-client-ca\") pod \"route-controller-manager-6576b87f9c-p5fgd\" (UID: \"c8964e6d-30b9-4402-b132-105cb5a1695b\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-p5fgd" Jan 27 14:31:26 crc kubenswrapper[4698]: I0127 14:31:26.695835 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/13b718e9-6bf1-4e81-91ce-feea3116fd97-serving-cert\") pod \"console-operator-58897d9998-bz9jw\" (UID: \"13b718e9-6bf1-4e81-91ce-feea3116fd97\") " pod="openshift-console-operator/console-operator-58897d9998-bz9jw" Jan 27 14:31:26 crc kubenswrapper[4698]: I0127 14:31:26.695852 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/912888aa-9826-4be1-a96a-315508a84cf9-srv-cert\") pod \"catalog-operator-68c6474976-c49dc\" (UID: \"912888aa-9826-4be1-a96a-315508a84cf9\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-c49dc" Jan 27 14:31:26 crc kubenswrapper[4698]: I0127 14:31:26.695872 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w98kp\" (UniqueName: \"kubernetes.io/projected/537d845d-d98b-4168-b87b-d0231602f4e9-kube-api-access-w98kp\") pod \"marketplace-operator-79b997595-kwgll\" (UID: \"537d845d-d98b-4168-b87b-d0231602f4e9\") " pod="openshift-marketplace/marketplace-operator-79b997595-kwgll" Jan 27 14:31:26 crc kubenswrapper[4698]: I0127 14:31:26.695891 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f8de0774-06e2-4438-b5ef-ad70f998b22c-serving-cert\") pod \"service-ca-operator-777779d784-h8hj7\" (UID: \"f8de0774-06e2-4438-b5ef-ad70f998b22c\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-h8hj7" Jan 27 14:31:26 crc kubenswrapper[4698]: I0127 14:31:26.695916 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wlrzd\" (UniqueName: \"kubernetes.io/projected/0a93337a-414f-4a6b-9cdb-4cb56092851a-kube-api-access-wlrzd\") pod \"dns-operator-744455d44c-dhnh8\" (UID: \"0a93337a-414f-4a6b-9cdb-4cb56092851a\") " pod="openshift-dns-operator/dns-operator-744455d44c-dhnh8" Jan 27 14:31:26 crc kubenswrapper[4698]: I0127 14:31:26.695935 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/78829bfe-d678-496f-8bf5-28b5008758f0-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-d7vnt\" (UID: \"78829bfe-d678-496f-8bf5-28b5008758f0\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-d7vnt" Jan 27 14:31:26 crc kubenswrapper[4698]: I0127 
14:31:26.695953 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5vgk6\" (UniqueName: \"kubernetes.io/projected/13b718e9-6bf1-4e81-91ce-feea3116fd97-kube-api-access-5vgk6\") pod \"console-operator-58897d9998-bz9jw\" (UID: \"13b718e9-6bf1-4e81-91ce-feea3116fd97\") " pod="openshift-console-operator/console-operator-58897d9998-bz9jw" Jan 27 14:31:26 crc kubenswrapper[4698]: I0127 14:31:26.695977 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/78829bfe-d678-496f-8bf5-28b5008758f0-config\") pod \"kube-apiserver-operator-766d6c64bb-d7vnt\" (UID: \"78829bfe-d678-496f-8bf5-28b5008758f0\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-d7vnt" Jan 27 14:31:26 crc kubenswrapper[4698]: I0127 14:31:26.695997 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g92hb\" (UniqueName: \"kubernetes.io/projected/88376a00-d5b2-4d08-ae81-097d8134df27-kube-api-access-g92hb\") pod \"etcd-operator-b45778765-6xmqh\" (UID: \"88376a00-d5b2-4d08-ae81-097d8134df27\") " pod="openshift-etcd-operator/etcd-operator-b45778765-6xmqh" Jan 27 14:31:26 crc kubenswrapper[4698]: I0127 14:31:26.696019 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/96b9797e-289f-45f3-8707-bc899a687aa1-trusted-ca\") pod \"ingress-operator-5b745b69d9-qb472\" (UID: \"96b9797e-289f-45f3-8707-bc899a687aa1\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-qb472" Jan 27 14:31:26 crc kubenswrapper[4698]: I0127 14:31:26.696035 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3f2c6212-a0a5-4d50-aa1c-63fc63296dab-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-z5cmj\" (UID: \"3f2c6212-a0a5-4d50-aa1c-63fc63296dab\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-z5cmj" Jan 27 14:31:26 crc kubenswrapper[4698]: I0127 14:31:26.696057 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4k5ck\" (UniqueName: \"kubernetes.io/projected/3f2c6212-a0a5-4d50-aa1c-63fc63296dab-kube-api-access-4k5ck\") pod \"openshift-controller-manager-operator-756b6f6bc6-z5cmj\" (UID: \"3f2c6212-a0a5-4d50-aa1c-63fc63296dab\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-z5cmj" Jan 27 14:31:26 crc kubenswrapper[4698]: I0127 14:31:26.696075 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9hbdc\" (UniqueName: \"kubernetes.io/projected/b744b41b-da1c-44d2-a538-e1d8bfe5c144-kube-api-access-9hbdc\") pod \"multus-admission-controller-857f4d67dd-qw9xb\" (UID: \"b744b41b-da1c-44d2-a538-e1d8bfe5c144\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-qw9xb" Jan 27 14:31:26 crc kubenswrapper[4698]: I0127 14:31:26.696078 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/79cd2a60-54ce-46a5-96cd-53bb078fa804-metrics-certs\") pod \"router-default-5444994796-8bg4r\" (UID: \"79cd2a60-54ce-46a5-96cd-53bb078fa804\") " pod="openshift-ingress/router-default-5444994796-8bg4r" Jan 27 14:31:26 crc kubenswrapper[4698]: I0127 14:31:26.696103 4698 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/48753b5b-f7f1-468a-982f-c7defe92fdcd-cert\") pod \"ingress-canary-hwgzv\" (UID: \"48753b5b-f7f1-468a-982f-c7defe92fdcd\") " pod="openshift-ingress-canary/ingress-canary-hwgzv" Jan 27 14:31:26 crc kubenswrapper[4698]: I0127 14:31:26.696123 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/27a960e1-9cd1-41a9-ac06-0ac66ecb12f1-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-g4htw\" (UID: \"27a960e1-9cd1-41a9-ac06-0ac66ecb12f1\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-g4htw" Jan 27 14:31:26 crc kubenswrapper[4698]: I0127 14:31:26.696152 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/bfd27e63-9504-4961-90d1-8c0056be6f31-proxy-tls\") pod \"machine-config-operator-74547568cd-wcmrv\" (UID: \"bfd27e63-9504-4961-90d1-8c0056be6f31\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-wcmrv" Jan 27 14:31:26 crc kubenswrapper[4698]: I0127 14:31:26.696173 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/12b42d9a-df65-4a89-8961-1fa7f9b8a14b-console-serving-cert\") pod \"console-f9d7485db-cvnrn\" (UID: \"12b42d9a-df65-4a89-8961-1fa7f9b8a14b\") " pod="openshift-console/console-f9d7485db-cvnrn" Jan 27 14:31:26 crc kubenswrapper[4698]: I0127 14:31:26.696195 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/88376a00-d5b2-4d08-ae81-097d8134df27-config\") pod \"etcd-operator-b45778765-6xmqh\" (UID: \"88376a00-d5b2-4d08-ae81-097d8134df27\") " pod="openshift-etcd-operator/etcd-operator-b45778765-6xmqh" Jan 27 14:31:26 crc kubenswrapper[4698]: I0127 14:31:26.696216 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/d758ca6f-86ae-44b7-bfcb-8e7f2e2205e8-mountpoint-dir\") pod \"csi-hostpathplugin-zft44\" (UID: \"d758ca6f-86ae-44b7-bfcb-8e7f2e2205e8\") " pod="hostpath-provisioner/csi-hostpathplugin-zft44" Jan 27 14:31:26 crc kubenswrapper[4698]: I0127 14:31:26.696237 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/bfd27e63-9504-4961-90d1-8c0056be6f31-images\") pod \"machine-config-operator-74547568cd-wcmrv\" (UID: \"bfd27e63-9504-4961-90d1-8c0056be6f31\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-wcmrv" Jan 27 14:31:26 crc kubenswrapper[4698]: I0127 14:31:26.696276 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/0a93337a-414f-4a6b-9cdb-4cb56092851a-metrics-tls\") pod \"dns-operator-744455d44c-dhnh8\" (UID: \"0a93337a-414f-4a6b-9cdb-4cb56092851a\") " pod="openshift-dns-operator/dns-operator-744455d44c-dhnh8" Jan 27 14:31:26 crc kubenswrapper[4698]: I0127 14:31:26.696298 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/4bc4ed9d-df5c-4eee-82d1-d8c68e7bb3ff-secret-volume\") pod \"collect-profiles-29492070-z2sp4\" (UID: 
\"4bc4ed9d-df5c-4eee-82d1-d8c68e7bb3ff\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492070-z2sp4" Jan 27 14:31:26 crc kubenswrapper[4698]: I0127 14:31:26.696320 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tsz8b\" (UniqueName: \"kubernetes.io/projected/7b8fe528-a188-48bd-8555-6dd2798122fe-kube-api-access-tsz8b\") pod \"machine-approver-56656f9798-cfwgg\" (UID: \"7b8fe528-a188-48bd-8555-6dd2798122fe\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-cfwgg" Jan 27 14:31:26 crc kubenswrapper[4698]: I0127 14:31:26.696351 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/557a3dbe-140e-4e30-bad7-f2c7e828d446-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-6kdpx\" (UID: \"557a3dbe-140e-4e30-bad7-f2c7e828d446\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-6kdpx" Jan 27 14:31:26 crc kubenswrapper[4698]: I0127 14:31:26.696409 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/7b8fe528-a188-48bd-8555-6dd2798122fe-auth-proxy-config\") pod \"machine-approver-56656f9798-cfwgg\" (UID: \"7b8fe528-a188-48bd-8555-6dd2798122fe\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-cfwgg" Jan 27 14:31:26 crc kubenswrapper[4698]: I0127 14:31:26.696448 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/d758ca6f-86ae-44b7-bfcb-8e7f2e2205e8-registration-dir\") pod \"csi-hostpathplugin-zft44\" (UID: \"d758ca6f-86ae-44b7-bfcb-8e7f2e2205e8\") " pod="hostpath-provisioner/csi-hostpathplugin-zft44" Jan 27 14:31:26 crc kubenswrapper[4698]: I0127 14:31:26.696488 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/fb1d6ca9-58c3-4d0f-9b6f-e9dad08632b8-bound-sa-token\") pod \"image-registry-697d97f7c8-vz5fp\" (UID: \"fb1d6ca9-58c3-4d0f-9b6f-e9dad08632b8\") " pod="openshift-image-registry/image-registry-697d97f7c8-vz5fp" Jan 27 14:31:26 crc kubenswrapper[4698]: I0127 14:31:26.696530 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/27a960e1-9cd1-41a9-ac06-0ac66ecb12f1-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-g4htw\" (UID: \"27a960e1-9cd1-41a9-ac06-0ac66ecb12f1\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-g4htw" Jan 27 14:31:26 crc kubenswrapper[4698]: I0127 14:31:26.696596 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b9797e-289f-45f3-8707-bc899a687aa1-metrics-tls\") pod \"ingress-operator-5b745b69d9-qb472\" (UID: \"96b9797e-289f-45f3-8707-bc899a687aa1\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-qb472" Jan 27 14:31:26 crc kubenswrapper[4698]: I0127 14:31:26.696626 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/ff0c6e82-2e72-4776-b801-2cf427b72696-tmpfs\") pod \"packageserver-d55dfcdfc-pcr8w\" (UID: \"ff0c6e82-2e72-4776-b801-2cf427b72696\") " 
pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-pcr8w" Jan 27 14:31:26 crc kubenswrapper[4698]: I0127 14:31:26.696767 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k25d8\" (UniqueName: \"kubernetes.io/projected/48753b5b-f7f1-468a-982f-c7defe92fdcd-kube-api-access-k25d8\") pod \"ingress-canary-hwgzv\" (UID: \"48753b5b-f7f1-468a-982f-c7defe92fdcd\") " pod="openshift-ingress-canary/ingress-canary-hwgzv" Jan 27 14:31:26 crc kubenswrapper[4698]: I0127 14:31:26.696828 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/ff0c6e82-2e72-4776-b801-2cf427b72696-apiservice-cert\") pod \"packageserver-d55dfcdfc-pcr8w\" (UID: \"ff0c6e82-2e72-4776-b801-2cf427b72696\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-pcr8w" Jan 27 14:31:26 crc kubenswrapper[4698]: I0127 14:31:26.696891 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/7b8fe528-a188-48bd-8555-6dd2798122fe-machine-approver-tls\") pod \"machine-approver-56656f9798-cfwgg\" (UID: \"7b8fe528-a188-48bd-8555-6dd2798122fe\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-cfwgg" Jan 27 14:31:26 crc kubenswrapper[4698]: I0127 14:31:26.696934 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7b8fe528-a188-48bd-8555-6dd2798122fe-config\") pod \"machine-approver-56656f9798-cfwgg\" (UID: \"7b8fe528-a188-48bd-8555-6dd2798122fe\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-cfwgg" Jan 27 14:31:26 crc kubenswrapper[4698]: I0127 14:31:26.696979 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/13b718e9-6bf1-4e81-91ce-feea3116fd97-config\") pod \"console-operator-58897d9998-bz9jw\" (UID: \"13b718e9-6bf1-4e81-91ce-feea3116fd97\") " pod="openshift-console-operator/console-operator-58897d9998-bz9jw" Jan 27 14:31:26 crc kubenswrapper[4698]: I0127 14:31:26.697027 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gn9pc\" (UniqueName: \"kubernetes.io/projected/a11f89bd-147f-4b21-b83f-6b86727ecc2e-kube-api-access-gn9pc\") pod \"olm-operator-6b444d44fb-qfxjx\" (UID: \"a11f89bd-147f-4b21-b83f-6b86727ecc2e\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-qfxjx" Jan 27 14:31:26 crc kubenswrapper[4698]: I0127 14:31:26.697080 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vz5fp\" (UID: \"fb1d6ca9-58c3-4d0f-9b6f-e9dad08632b8\") " pod="openshift-image-registry/image-registry-697d97f7c8-vz5fp" Jan 27 14:31:26 crc kubenswrapper[4698]: I0127 14:31:26.697115 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ff0c6e82-2e72-4776-b801-2cf427b72696-webhook-cert\") pod \"packageserver-d55dfcdfc-pcr8w\" (UID: \"ff0c6e82-2e72-4776-b801-2cf427b72696\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-pcr8w" Jan 27 14:31:26 crc kubenswrapper[4698]: I0127 
14:31:26.697158 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xh9qq\" (UniqueName: \"kubernetes.io/projected/3a66b6ff-8485-46ca-8a12-ca7a75b63596-kube-api-access-xh9qq\") pod \"kube-storage-version-migrator-operator-b67b599dd-hnfnv\" (UID: \"3a66b6ff-8485-46ca-8a12-ca7a75b63596\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-hnfnv" Jan 27 14:31:26 crc kubenswrapper[4698]: I0127 14:31:26.697218 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4bsfb\" (UniqueName: \"kubernetes.io/projected/557a3dbe-140e-4e30-bad7-f2c7e828d446-kube-api-access-4bsfb\") pod \"cluster-samples-operator-665b6dd947-6kdpx\" (UID: \"557a3dbe-140e-4e30-bad7-f2c7e828d446\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-6kdpx" Jan 27 14:31:26 crc kubenswrapper[4698]: I0127 14:31:26.697245 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qhlrb\" (UniqueName: \"kubernetes.io/projected/4bc4ed9d-df5c-4eee-82d1-d8c68e7bb3ff-kube-api-access-qhlrb\") pod \"collect-profiles-29492070-z2sp4\" (UID: \"4bc4ed9d-df5c-4eee-82d1-d8c68e7bb3ff\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492070-z2sp4" Jan 27 14:31:26 crc kubenswrapper[4698]: I0127 14:31:26.697273 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-blptb\" (UniqueName: \"kubernetes.io/projected/f4023d55-2b87-419a-be3e-bab987ba0841-kube-api-access-blptb\") pod \"openshift-config-operator-7777fb866f-m8slw\" (UID: \"f4023d55-2b87-419a-be3e-bab987ba0841\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-m8slw" Jan 27 14:31:26 crc kubenswrapper[4698]: I0127 14:31:26.697299 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/79cd2a60-54ce-46a5-96cd-53bb078fa804-service-ca-bundle\") pod \"router-default-5444994796-8bg4r\" (UID: \"79cd2a60-54ce-46a5-96cd-53bb078fa804\") " pod="openshift-ingress/router-default-5444994796-8bg4r" Jan 27 14:31:26 crc kubenswrapper[4698]: I0127 14:31:26.697323 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/912888aa-9826-4be1-a96a-315508a84cf9-profile-collector-cert\") pod \"catalog-operator-68c6474976-c49dc\" (UID: \"912888aa-9826-4be1-a96a-315508a84cf9\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-c49dc" Jan 27 14:31:26 crc kubenswrapper[4698]: I0127 14:31:26.697335 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/fb1d6ca9-58c3-4d0f-9b6f-e9dad08632b8-registry-certificates\") pod \"image-registry-697d97f7c8-vz5fp\" (UID: \"fb1d6ca9-58c3-4d0f-9b6f-e9dad08632b8\") " pod="openshift-image-registry/image-registry-697d97f7c8-vz5fp" Jan 27 14:31:26 crc kubenswrapper[4698]: I0127 14:31:26.697346 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/37564592-b4e8-47fd-8b7f-b1d26254efa0-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-p4z9k\" (UID: \"37564592-b4e8-47fd-8b7f-b1d26254efa0\") " 
pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-p4z9k" Jan 27 14:31:26 crc kubenswrapper[4698]: I0127 14:31:26.697415 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/78829bfe-d678-496f-8bf5-28b5008758f0-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-d7vnt\" (UID: \"78829bfe-d678-496f-8bf5-28b5008758f0\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-d7vnt" Jan 27 14:31:26 crc kubenswrapper[4698]: I0127 14:31:26.697448 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/019c0321-025d-4bf5-a48c-fd0e707b797c-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-mswzh\" (UID: \"019c0321-025d-4bf5-a48c-fd0e707b797c\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-mswzh" Jan 27 14:31:26 crc kubenswrapper[4698]: I0127 14:31:26.697479 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zlt9d\" (UniqueName: \"kubernetes.io/projected/8b795163-b78c-4a56-9181-3243d6684eed-kube-api-access-zlt9d\") pod \"dns-default-msjx7\" (UID: \"8b795163-b78c-4a56-9181-3243d6684eed\") " pod="openshift-dns/dns-default-msjx7" Jan 27 14:31:26 crc kubenswrapper[4698]: I0127 14:31:26.697505 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/a11f89bd-147f-4b21-b83f-6b86727ecc2e-srv-cert\") pod \"olm-operator-6b444d44fb-qfxjx\" (UID: \"a11f89bd-147f-4b21-b83f-6b86727ecc2e\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-qfxjx" Jan 27 14:31:26 crc kubenswrapper[4698]: I0127 14:31:26.697531 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/27510e33-967a-4675-b5bf-afd141421399-proxy-tls\") pod \"machine-config-controller-84d6567774-vrtzk\" (UID: \"27510e33-967a-4675-b5bf-afd141421399\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-vrtzk" Jan 27 14:31:26 crc kubenswrapper[4698]: I0127 14:31:26.697560 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/537d845d-d98b-4168-b87b-d0231602f4e9-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-kwgll\" (UID: \"537d845d-d98b-4168-b87b-d0231602f4e9\") " pod="openshift-marketplace/marketplace-operator-79b997595-kwgll" Jan 27 14:31:26 crc kubenswrapper[4698]: I0127 14:31:26.697593 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/537d845d-d98b-4168-b87b-d0231602f4e9-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-kwgll\" (UID: \"537d845d-d98b-4168-b87b-d0231602f4e9\") " pod="openshift-marketplace/marketplace-operator-79b997595-kwgll" Jan 27 14:31:26 crc kubenswrapper[4698]: I0127 14:31:26.697622 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xlktk\" (UniqueName: \"kubernetes.io/projected/019c0321-025d-4bf5-a48c-fd0e707b797c-kube-api-access-xlktk\") pod 
\"control-plane-machine-set-operator-78cbb6b69f-mswzh\" (UID: \"019c0321-025d-4bf5-a48c-fd0e707b797c\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-mswzh" Jan 27 14:31:26 crc kubenswrapper[4698]: I0127 14:31:26.697675 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/96b9797e-289f-45f3-8707-bc899a687aa1-bound-sa-token\") pod \"ingress-operator-5b745b69d9-qb472\" (UID: \"96b9797e-289f-45f3-8707-bc899a687aa1\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-qb472" Jan 27 14:31:26 crc kubenswrapper[4698]: I0127 14:31:26.697701 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/d758ca6f-86ae-44b7-bfcb-8e7f2e2205e8-plugins-dir\") pod \"csi-hostpathplugin-zft44\" (UID: \"d758ca6f-86ae-44b7-bfcb-8e7f2e2205e8\") " pod="hostpath-provisioner/csi-hostpathplugin-zft44" Jan 27 14:31:26 crc kubenswrapper[4698]: I0127 14:31:26.697730 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/f4023d55-2b87-419a-be3e-bab987ba0841-available-featuregates\") pod \"openshift-config-operator-7777fb866f-m8slw\" (UID: \"f4023d55-2b87-419a-be3e-bab987ba0841\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-m8slw" Jan 27 14:31:26 crc kubenswrapper[4698]: I0127 14:31:26.697757 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3f2c6212-a0a5-4d50-aa1c-63fc63296dab-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-z5cmj\" (UID: \"3f2c6212-a0a5-4d50-aa1c-63fc63296dab\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-z5cmj" Jan 27 14:31:26 crc kubenswrapper[4698]: I0127 14:31:26.697782 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/27a960e1-9cd1-41a9-ac06-0ac66ecb12f1-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-g4htw\" (UID: \"27a960e1-9cd1-41a9-ac06-0ac66ecb12f1\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-g4htw" Jan 27 14:31:26 crc kubenswrapper[4698]: I0127 14:31:26.697808 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/12b42d9a-df65-4a89-8961-1fa7f9b8a14b-console-oauth-config\") pod \"console-f9d7485db-cvnrn\" (UID: \"12b42d9a-df65-4a89-8961-1fa7f9b8a14b\") " pod="openshift-console/console-f9d7485db-cvnrn" Jan 27 14:31:26 crc kubenswrapper[4698]: I0127 14:31:26.697831 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/27510e33-967a-4675-b5bf-afd141421399-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-vrtzk\" (UID: \"27510e33-967a-4675-b5bf-afd141421399\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-vrtzk" Jan 27 14:31:26 crc kubenswrapper[4698]: I0127 14:31:26.697881 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6jcjq\" (UniqueName: \"kubernetes.io/projected/96b9797e-289f-45f3-8707-bc899a687aa1-kube-api-access-6jcjq\") pod \"ingress-operator-5b745b69d9-qb472\" 
(UID: \"96b9797e-289f-45f3-8707-bc899a687aa1\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-qb472" Jan 27 14:31:26 crc kubenswrapper[4698]: I0127 14:31:26.697908 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gfwth\" (UniqueName: \"kubernetes.io/projected/fb1d6ca9-58c3-4d0f-9b6f-e9dad08632b8-kube-api-access-gfwth\") pod \"image-registry-697d97f7c8-vz5fp\" (UID: \"fb1d6ca9-58c3-4d0f-9b6f-e9dad08632b8\") " pod="openshift-image-registry/image-registry-697d97f7c8-vz5fp" Jan 27 14:31:26 crc kubenswrapper[4698]: I0127 14:31:26.697932 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/d758ca6f-86ae-44b7-bfcb-8e7f2e2205e8-socket-dir\") pod \"csi-hostpathplugin-zft44\" (UID: \"d758ca6f-86ae-44b7-bfcb-8e7f2e2205e8\") " pod="hostpath-provisioner/csi-hostpathplugin-zft44" Jan 27 14:31:26 crc kubenswrapper[4698]: I0127 14:31:26.697957 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/b744b41b-da1c-44d2-a538-e1d8bfe5c144-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-qw9xb\" (UID: \"b744b41b-da1c-44d2-a538-e1d8bfe5c144\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-qw9xb" Jan 27 14:31:26 crc kubenswrapper[4698]: I0127 14:31:26.697985 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n6622\" (UniqueName: \"kubernetes.io/projected/c8964e6d-30b9-4402-b132-105cb5a1695b-kube-api-access-n6622\") pod \"route-controller-manager-6576b87f9c-p5fgd\" (UID: \"c8964e6d-30b9-4402-b132-105cb5a1695b\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-p5fgd" Jan 27 14:31:26 crc kubenswrapper[4698]: I0127 14:31:26.698002 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/88376a00-d5b2-4d08-ae81-097d8134df27-config\") pod \"etcd-operator-b45778765-6xmqh\" (UID: \"88376a00-d5b2-4d08-ae81-097d8134df27\") " pod="openshift-etcd-operator/etcd-operator-b45778765-6xmqh" Jan 27 14:31:26 crc kubenswrapper[4698]: I0127 14:31:26.698008 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/bfd27e63-9504-4961-90d1-8c0056be6f31-auth-proxy-config\") pod \"machine-config-operator-74547568cd-wcmrv\" (UID: \"bfd27e63-9504-4961-90d1-8c0056be6f31\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-wcmrv" Jan 27 14:31:26 crc kubenswrapper[4698]: I0127 14:31:26.698082 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c8964e6d-30b9-4402-b132-105cb5a1695b-serving-cert\") pod \"route-controller-manager-6576b87f9c-p5fgd\" (UID: \"c8964e6d-30b9-4402-b132-105cb5a1695b\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-p5fgd" Jan 27 14:31:26 crc kubenswrapper[4698]: I0127 14:31:26.698111 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8rmg9\" (UniqueName: \"kubernetes.io/projected/79cd2a60-54ce-46a5-96cd-53bb078fa804-kube-api-access-8rmg9\") pod \"router-default-5444994796-8bg4r\" (UID: \"79cd2a60-54ce-46a5-96cd-53bb078fa804\") " pod="openshift-ingress/router-default-5444994796-8bg4r" Jan 27 14:31:26 crc 
kubenswrapper[4698]: I0127 14:31:26.698166 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8b795163-b78c-4a56-9181-3243d6684eed-config-volume\") pod \"dns-default-msjx7\" (UID: \"8b795163-b78c-4a56-9181-3243d6684eed\") " pod="openshift-dns/dns-default-msjx7" Jan 27 14:31:26 crc kubenswrapper[4698]: I0127 14:31:26.698192 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v65g7\" (UniqueName: \"kubernetes.io/projected/37564592-b4e8-47fd-8b7f-b1d26254efa0-kube-api-access-v65g7\") pod \"package-server-manager-789f6589d5-p4z9k\" (UID: \"37564592-b4e8-47fd-8b7f-b1d26254efa0\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-p4z9k" Jan 27 14:31:26 crc kubenswrapper[4698]: I0127 14:31:26.698226 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xdx78\" (UniqueName: \"kubernetes.io/projected/682a4abd-5c9f-4b58-8090-9c78f10d3577-kube-api-access-xdx78\") pod \"machine-config-server-pwzb6\" (UID: \"682a4abd-5c9f-4b58-8090-9c78f10d3577\") " pod="openshift-machine-config-operator/machine-config-server-pwzb6" Jan 27 14:31:26 crc kubenswrapper[4698]: I0127 14:31:26.698256 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c8964e6d-30b9-4402-b132-105cb5a1695b-config\") pod \"route-controller-manager-6576b87f9c-p5fgd\" (UID: \"c8964e6d-30b9-4402-b132-105cb5a1695b\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-p5fgd" Jan 27 14:31:26 crc kubenswrapper[4698]: I0127 14:31:26.698550 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2qhq7\" (UniqueName: \"kubernetes.io/projected/12b42d9a-df65-4a89-8961-1fa7f9b8a14b-kube-api-access-2qhq7\") pod \"console-f9d7485db-cvnrn\" (UID: \"12b42d9a-df65-4a89-8961-1fa7f9b8a14b\") " pod="openshift-console/console-f9d7485db-cvnrn" Jan 27 14:31:26 crc kubenswrapper[4698]: I0127 14:31:26.698574 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/a11f89bd-147f-4b21-b83f-6b86727ecc2e-profile-collector-cert\") pod \"olm-operator-6b444d44fb-qfxjx\" (UID: \"a11f89bd-147f-4b21-b83f-6b86727ecc2e\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-qfxjx" Jan 27 14:31:26 crc kubenswrapper[4698]: I0127 14:31:26.698602 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5f4a113f-a4c9-423f-8d34-fb05c1f776af-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-mjdln\" (UID: \"5f4a113f-a4c9-423f-8d34-fb05c1f776af\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-mjdln" Jan 27 14:31:26 crc kubenswrapper[4698]: I0127 14:31:26.698628 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/79cd2a60-54ce-46a5-96cd-53bb078fa804-default-certificate\") pod \"router-default-5444994796-8bg4r\" (UID: \"79cd2a60-54ce-46a5-96cd-53bb078fa804\") " pod="openshift-ingress/router-default-5444994796-8bg4r" Jan 27 14:31:26 crc kubenswrapper[4698]: I0127 14:31:26.698676 4698 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/682a4abd-5c9f-4b58-8090-9c78f10d3577-node-bootstrap-token\") pod \"machine-config-server-pwzb6\" (UID: \"682a4abd-5c9f-4b58-8090-9c78f10d3577\") " pod="openshift-machine-config-operator/machine-config-server-pwzb6" Jan 27 14:31:26 crc kubenswrapper[4698]: I0127 14:31:26.698703 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-99hxp\" (UniqueName: \"kubernetes.io/projected/27510e33-967a-4675-b5bf-afd141421399-kube-api-access-99hxp\") pod \"machine-config-controller-84d6567774-vrtzk\" (UID: \"27510e33-967a-4675-b5bf-afd141421399\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-vrtzk" Jan 27 14:31:26 crc kubenswrapper[4698]: I0127 14:31:26.698729 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/fb1d6ca9-58c3-4d0f-9b6f-e9dad08632b8-trusted-ca\") pod \"image-registry-697d97f7c8-vz5fp\" (UID: \"fb1d6ca9-58c3-4d0f-9b6f-e9dad08632b8\") " pod="openshift-image-registry/image-registry-697d97f7c8-vz5fp" Jan 27 14:31:26 crc kubenswrapper[4698]: I0127 14:31:26.698758 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/fb1d6ca9-58c3-4d0f-9b6f-e9dad08632b8-ca-trust-extracted\") pod \"image-registry-697d97f7c8-vz5fp\" (UID: \"fb1d6ca9-58c3-4d0f-9b6f-e9dad08632b8\") " pod="openshift-image-registry/image-registry-697d97f7c8-vz5fp" Jan 27 14:31:26 crc kubenswrapper[4698]: I0127 14:31:26.698784 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/79cd2a60-54ce-46a5-96cd-53bb078fa804-stats-auth\") pod \"router-default-5444994796-8bg4r\" (UID: \"79cd2a60-54ce-46a5-96cd-53bb078fa804\") " pod="openshift-ingress/router-default-5444994796-8bg4r" Jan 27 14:31:26 crc kubenswrapper[4698]: I0127 14:31:26.698809 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z7wc7\" (UniqueName: \"kubernetes.io/projected/ff0c6e82-2e72-4776-b801-2cf427b72696-kube-api-access-z7wc7\") pod \"packageserver-d55dfcdfc-pcr8w\" (UID: \"ff0c6e82-2e72-4776-b801-2cf427b72696\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-pcr8w" Jan 27 14:31:26 crc kubenswrapper[4698]: I0127 14:31:26.698837 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/12b42d9a-df65-4a89-8961-1fa7f9b8a14b-oauth-serving-cert\") pod \"console-f9d7485db-cvnrn\" (UID: \"12b42d9a-df65-4a89-8961-1fa7f9b8a14b\") " pod="openshift-console/console-f9d7485db-cvnrn" Jan 27 14:31:26 crc kubenswrapper[4698]: I0127 14:31:26.698859 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x7m9j\" (UniqueName: \"kubernetes.io/projected/bfd27e63-9504-4961-90d1-8c0056be6f31-kube-api-access-x7m9j\") pod \"machine-config-operator-74547568cd-wcmrv\" (UID: \"bfd27e63-9504-4961-90d1-8c0056be6f31\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-wcmrv" Jan 27 14:31:26 crc kubenswrapper[4698]: I0127 14:31:26.698885 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: 
\"kubernetes.io/projected/fb1d6ca9-58c3-4d0f-9b6f-e9dad08632b8-registry-tls\") pod \"image-registry-697d97f7c8-vz5fp\" (UID: \"fb1d6ca9-58c3-4d0f-9b6f-e9dad08632b8\") " pod="openshift-image-registry/image-registry-697d97f7c8-vz5fp" Jan 27 14:31:26 crc kubenswrapper[4698]: I0127 14:31:26.698908 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f4023d55-2b87-419a-be3e-bab987ba0841-serving-cert\") pod \"openshift-config-operator-7777fb866f-m8slw\" (UID: \"f4023d55-2b87-419a-be3e-bab987ba0841\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-m8slw" Jan 27 14:31:26 crc kubenswrapper[4698]: I0127 14:31:26.698937 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/682a4abd-5c9f-4b58-8090-9c78f10d3577-certs\") pod \"machine-config-server-pwzb6\" (UID: \"682a4abd-5c9f-4b58-8090-9c78f10d3577\") " pod="openshift-machine-config-operator/machine-config-server-pwzb6" Jan 27 14:31:26 crc kubenswrapper[4698]: I0127 14:31:26.699181 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/13b718e9-6bf1-4e81-91ce-feea3116fd97-trusted-ca\") pod \"console-operator-58897d9998-bz9jw\" (UID: \"13b718e9-6bf1-4e81-91ce-feea3116fd97\") " pod="openshift-console-operator/console-operator-58897d9998-bz9jw" Jan 27 14:31:26 crc kubenswrapper[4698]: I0127 14:31:26.699340 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/f4023d55-2b87-419a-be3e-bab987ba0841-available-featuregates\") pod \"openshift-config-operator-7777fb866f-m8slw\" (UID: \"f4023d55-2b87-419a-be3e-bab987ba0841\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-m8slw" Jan 27 14:31:26 crc kubenswrapper[4698]: I0127 14:31:26.700541 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/79cd2a60-54ce-46a5-96cd-53bb078fa804-service-ca-bundle\") pod \"router-default-5444994796-8bg4r\" (UID: \"79cd2a60-54ce-46a5-96cd-53bb078fa804\") " pod="openshift-ingress/router-default-5444994796-8bg4r" Jan 27 14:31:26 crc kubenswrapper[4698]: I0127 14:31:26.700660 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3f2c6212-a0a5-4d50-aa1c-63fc63296dab-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-z5cmj\" (UID: \"3f2c6212-a0a5-4d50-aa1c-63fc63296dab\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-z5cmj" Jan 27 14:31:26 crc kubenswrapper[4698]: I0127 14:31:26.703956 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/8b795163-b78c-4a56-9181-3243d6684eed-metrics-tls\") pod \"dns-default-msjx7\" (UID: \"8b795163-b78c-4a56-9181-3243d6684eed\") " pod="openshift-dns/dns-default-msjx7" Jan 27 14:31:26 crc kubenswrapper[4698]: I0127 14:31:26.704002 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4bc4ed9d-df5c-4eee-82d1-d8c68e7bb3ff-config-volume\") pod \"collect-profiles-29492070-z2sp4\" (UID: \"4bc4ed9d-df5c-4eee-82d1-d8c68e7bb3ff\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29492070-z2sp4" Jan 27 14:31:26 crc kubenswrapper[4698]: I0127 14:31:26.704058 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3a66b6ff-8485-46ca-8a12-ca7a75b63596-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-hnfnv\" (UID: \"3a66b6ff-8485-46ca-8a12-ca7a75b63596\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-hnfnv" Jan 27 14:31:26 crc kubenswrapper[4698]: I0127 14:31:26.704098 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/88376a00-d5b2-4d08-ae81-097d8134df27-serving-cert\") pod \"etcd-operator-b45778765-6xmqh\" (UID: \"88376a00-d5b2-4d08-ae81-097d8134df27\") " pod="openshift-etcd-operator/etcd-operator-b45778765-6xmqh" Jan 27 14:31:26 crc kubenswrapper[4698]: I0127 14:31:26.704128 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5f4a113f-a4c9-423f-8d34-fb05c1f776af-config\") pod \"kube-controller-manager-operator-78b949d7b-mjdln\" (UID: \"5f4a113f-a4c9-423f-8d34-fb05c1f776af\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-mjdln" Jan 27 14:31:26 crc kubenswrapper[4698]: I0127 14:31:26.704161 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pxl57\" (UniqueName: \"kubernetes.io/projected/912888aa-9826-4be1-a96a-315508a84cf9-kube-api-access-pxl57\") pod \"catalog-operator-68c6474976-c49dc\" (UID: \"912888aa-9826-4be1-a96a-315508a84cf9\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-c49dc" Jan 27 14:31:26 crc kubenswrapper[4698]: I0127 14:31:26.704230 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c8964e6d-30b9-4402-b132-105cb5a1695b-config\") pod \"route-controller-manager-6576b87f9c-p5fgd\" (UID: \"c8964e6d-30b9-4402-b132-105cb5a1695b\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-p5fgd" Jan 27 14:31:26 crc kubenswrapper[4698]: I0127 14:31:26.704382 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/7b8fe528-a188-48bd-8555-6dd2798122fe-auth-proxy-config\") pod \"machine-approver-56656f9798-cfwgg\" (UID: \"7b8fe528-a188-48bd-8555-6dd2798122fe\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-cfwgg" Jan 27 14:31:26 crc kubenswrapper[4698]: I0127 14:31:26.705339 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/fb1d6ca9-58c3-4d0f-9b6f-e9dad08632b8-installation-pull-secrets\") pod \"image-registry-697d97f7c8-vz5fp\" (UID: \"fb1d6ca9-58c3-4d0f-9b6f-e9dad08632b8\") " pod="openshift-image-registry/image-registry-697d97f7c8-vz5fp" Jan 27 14:31:26 crc kubenswrapper[4698]: I0127 14:31:26.705554 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/fb1d6ca9-58c3-4d0f-9b6f-e9dad08632b8-ca-trust-extracted\") pod \"image-registry-697d97f7c8-vz5fp\" (UID: \"fb1d6ca9-58c3-4d0f-9b6f-e9dad08632b8\") " pod="openshift-image-registry/image-registry-697d97f7c8-vz5fp" Jan 27 14:31:26 crc 
kubenswrapper[4698]: I0127 14:31:26.705749 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c8964e6d-30b9-4402-b132-105cb5a1695b-client-ca\") pod \"route-controller-manager-6576b87f9c-p5fgd\" (UID: \"c8964e6d-30b9-4402-b132-105cb5a1695b\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-p5fgd" Jan 27 14:31:26 crc kubenswrapper[4698]: I0127 14:31:26.705881 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/12b42d9a-df65-4a89-8961-1fa7f9b8a14b-console-serving-cert\") pod \"console-f9d7485db-cvnrn\" (UID: \"12b42d9a-df65-4a89-8961-1fa7f9b8a14b\") " pod="openshift-console/console-f9d7485db-cvnrn" Jan 27 14:31:26 crc kubenswrapper[4698]: I0127 14:31:26.706355 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/13b718e9-6bf1-4e81-91ce-feea3116fd97-serving-cert\") pod \"console-operator-58897d9998-bz9jw\" (UID: \"13b718e9-6bf1-4e81-91ce-feea3116fd97\") " pod="openshift-console-operator/console-operator-58897d9998-bz9jw" Jan 27 14:31:26 crc kubenswrapper[4698]: I0127 14:31:26.706707 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/78829bfe-d678-496f-8bf5-28b5008758f0-config\") pod \"kube-apiserver-operator-766d6c64bb-d7vnt\" (UID: \"78829bfe-d678-496f-8bf5-28b5008758f0\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-d7vnt" Jan 27 14:31:26 crc kubenswrapper[4698]: I0127 14:31:26.707053 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5f4a113f-a4c9-423f-8d34-fb05c1f776af-config\") pod \"kube-controller-manager-operator-78b949d7b-mjdln\" (UID: \"5f4a113f-a4c9-423f-8d34-fb05c1f776af\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-mjdln" Jan 27 14:31:26 crc kubenswrapper[4698]: I0127 14:31:26.708175 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/12b42d9a-df65-4a89-8961-1fa7f9b8a14b-oauth-serving-cert\") pod \"console-f9d7485db-cvnrn\" (UID: \"12b42d9a-df65-4a89-8961-1fa7f9b8a14b\") " pod="openshift-console/console-f9d7485db-cvnrn" Jan 27 14:31:26 crc kubenswrapper[4698]: I0127 14:31:26.708584 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/fb1d6ca9-58c3-4d0f-9b6f-e9dad08632b8-registry-tls\") pod \"image-registry-697d97f7c8-vz5fp\" (UID: \"fb1d6ca9-58c3-4d0f-9b6f-e9dad08632b8\") " pod="openshift-image-registry/image-registry-697d97f7c8-vz5fp" Jan 27 14:31:26 crc kubenswrapper[4698]: I0127 14:31:26.708751 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c8964e6d-30b9-4402-b132-105cb5a1695b-serving-cert\") pod \"route-controller-manager-6576b87f9c-p5fgd\" (UID: \"c8964e6d-30b9-4402-b132-105cb5a1695b\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-p5fgd" Jan 27 14:31:26 crc kubenswrapper[4698]: I0127 14:31:26.709092 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/78829bfe-d678-496f-8bf5-28b5008758f0-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-d7vnt\" (UID: 
\"78829bfe-d678-496f-8bf5-28b5008758f0\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-d7vnt" Jan 27 14:31:26 crc kubenswrapper[4698]: E0127 14:31:26.709201 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 14:31:27.209183987 +0000 UTC m=+142.885961452 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vz5fp" (UID: "fb1d6ca9-58c3-4d0f-9b6f-e9dad08632b8") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 14:31:26 crc kubenswrapper[4698]: I0127 14:31:26.710242 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/79cd2a60-54ce-46a5-96cd-53bb078fa804-default-certificate\") pod \"router-default-5444994796-8bg4r\" (UID: \"79cd2a60-54ce-46a5-96cd-53bb078fa804\") " pod="openshift-ingress/router-default-5444994796-8bg4r" Jan 27 14:31:26 crc kubenswrapper[4698]: I0127 14:31:26.710236 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7b8fe528-a188-48bd-8555-6dd2798122fe-config\") pod \"machine-approver-56656f9798-cfwgg\" (UID: \"7b8fe528-a188-48bd-8555-6dd2798122fe\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-cfwgg" Jan 27 14:31:26 crc kubenswrapper[4698]: I0127 14:31:26.710741 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3f2c6212-a0a5-4d50-aa1c-63fc63296dab-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-z5cmj\" (UID: \"3f2c6212-a0a5-4d50-aa1c-63fc63296dab\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-z5cmj" Jan 27 14:31:26 crc kubenswrapper[4698]: I0127 14:31:26.711316 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/12b42d9a-df65-4a89-8961-1fa7f9b8a14b-console-oauth-config\") pod \"console-f9d7485db-cvnrn\" (UID: \"12b42d9a-df65-4a89-8961-1fa7f9b8a14b\") " pod="openshift-console/console-f9d7485db-cvnrn" Jan 27 14:31:26 crc kubenswrapper[4698]: I0127 14:31:26.711468 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/0a93337a-414f-4a6b-9cdb-4cb56092851a-metrics-tls\") pod \"dns-operator-744455d44c-dhnh8\" (UID: \"0a93337a-414f-4a6b-9cdb-4cb56092851a\") " pod="openshift-dns-operator/dns-operator-744455d44c-dhnh8" Jan 27 14:31:26 crc kubenswrapper[4698]: I0127 14:31:26.712964 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/557a3dbe-140e-4e30-bad7-f2c7e828d446-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-6kdpx\" (UID: \"557a3dbe-140e-4e30-bad7-f2c7e828d446\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-6kdpx" Jan 27 14:31:26 crc kubenswrapper[4698]: I0127 14:31:26.713495 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/7b8fe528-a188-48bd-8555-6dd2798122fe-machine-approver-tls\") pod \"machine-approver-56656f9798-cfwgg\" (UID: \"7b8fe528-a188-48bd-8555-6dd2798122fe\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-cfwgg" Jan 27 14:31:26 crc kubenswrapper[4698]: I0127 14:31:26.714038 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/88376a00-d5b2-4d08-ae81-097d8134df27-serving-cert\") pod \"etcd-operator-b45778765-6xmqh\" (UID: \"88376a00-d5b2-4d08-ae81-097d8134df27\") " pod="openshift-etcd-operator/etcd-operator-b45778765-6xmqh" Jan 27 14:31:26 crc kubenswrapper[4698]: I0127 14:31:26.714591 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/79cd2a60-54ce-46a5-96cd-53bb078fa804-stats-auth\") pod \"router-default-5444994796-8bg4r\" (UID: \"79cd2a60-54ce-46a5-96cd-53bb078fa804\") " pod="openshift-ingress/router-default-5444994796-8bg4r" Jan 27 14:31:26 crc kubenswrapper[4698]: I0127 14:31:26.716261 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/fb1d6ca9-58c3-4d0f-9b6f-e9dad08632b8-trusted-ca\") pod \"image-registry-697d97f7c8-vz5fp\" (UID: \"fb1d6ca9-58c3-4d0f-9b6f-e9dad08632b8\") " pod="openshift-image-registry/image-registry-697d97f7c8-vz5fp" Jan 27 14:31:26 crc kubenswrapper[4698]: I0127 14:31:26.716596 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f4023d55-2b87-419a-be3e-bab987ba0841-serving-cert\") pod \"openshift-config-operator-7777fb866f-m8slw\" (UID: \"f4023d55-2b87-419a-be3e-bab987ba0841\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-m8slw" Jan 27 14:31:26 crc kubenswrapper[4698]: I0127 14:31:26.728542 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wlrzd\" (UniqueName: \"kubernetes.io/projected/0a93337a-414f-4a6b-9cdb-4cb56092851a-kube-api-access-wlrzd\") pod \"dns-operator-744455d44c-dhnh8\" (UID: \"0a93337a-414f-4a6b-9cdb-4cb56092851a\") " pod="openshift-dns-operator/dns-operator-744455d44c-dhnh8" Jan 27 14:31:26 crc kubenswrapper[4698]: I0127 14:31:26.751443 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/78829bfe-d678-496f-8bf5-28b5008758f0-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-d7vnt\" (UID: \"78829bfe-d678-496f-8bf5-28b5008758f0\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-d7vnt" Jan 27 14:31:26 crc kubenswrapper[4698]: I0127 14:31:26.800412 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/13b718e9-6bf1-4e81-91ce-feea3116fd97-config\") pod \"console-operator-58897d9998-bz9jw\" (UID: \"13b718e9-6bf1-4e81-91ce-feea3116fd97\") " pod="openshift-console-operator/console-operator-58897d9998-bz9jw" Jan 27 14:31:26 crc kubenswrapper[4698]: I0127 14:31:26.801441 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/96b9797e-289f-45f3-8707-bc899a687aa1-trusted-ca\") pod \"ingress-operator-5b745b69d9-qb472\" (UID: \"96b9797e-289f-45f3-8707-bc899a687aa1\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-qb472" Jan 27 14:31:26 crc kubenswrapper[4698]: I0127 
14:31:26.801461 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6jcjq\" (UniqueName: \"kubernetes.io/projected/96b9797e-289f-45f3-8707-bc899a687aa1-kube-api-access-6jcjq\") pod \"ingress-operator-5b745b69d9-qb472\" (UID: \"96b9797e-289f-45f3-8707-bc899a687aa1\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-qb472" Jan 27 14:31:26 crc kubenswrapper[4698]: I0127 14:31:26.801447 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b9797e-289f-45f3-8707-bc899a687aa1-metrics-tls\") pod \"ingress-operator-5b745b69d9-qb472\" (UID: \"96b9797e-289f-45f3-8707-bc899a687aa1\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-qb472" Jan 27 14:31:26 crc kubenswrapper[4698]: I0127 14:31:26.804227 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5f4a113f-a4c9-423f-8d34-fb05c1f776af-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-mjdln\" (UID: \"5f4a113f-a4c9-423f-8d34-fb05c1f776af\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-mjdln" Jan 27 14:31:26 crc kubenswrapper[4698]: I0127 14:31:26.806018 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 14:31:26 crc kubenswrapper[4698]: I0127 14:31:26.806181 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9hbdc\" (UniqueName: \"kubernetes.io/projected/b744b41b-da1c-44d2-a538-e1d8bfe5c144-kube-api-access-9hbdc\") pod \"multus-admission-controller-857f4d67dd-qw9xb\" (UID: \"b744b41b-da1c-44d2-a538-e1d8bfe5c144\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-qw9xb" Jan 27 14:31:26 crc kubenswrapper[4698]: I0127 14:31:26.806222 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/48753b5b-f7f1-468a-982f-c7defe92fdcd-cert\") pod \"ingress-canary-hwgzv\" (UID: \"48753b5b-f7f1-468a-982f-c7defe92fdcd\") " pod="openshift-ingress-canary/ingress-canary-hwgzv" Jan 27 14:31:26 crc kubenswrapper[4698]: I0127 14:31:26.806258 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/27a960e1-9cd1-41a9-ac06-0ac66ecb12f1-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-g4htw\" (UID: \"27a960e1-9cd1-41a9-ac06-0ac66ecb12f1\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-g4htw" Jan 27 14:31:26 crc kubenswrapper[4698]: I0127 14:31:26.806280 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/bfd27e63-9504-4961-90d1-8c0056be6f31-proxy-tls\") pod \"machine-config-operator-74547568cd-wcmrv\" (UID: \"bfd27e63-9504-4961-90d1-8c0056be6f31\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-wcmrv" Jan 27 14:31:26 crc kubenswrapper[4698]: I0127 14:31:26.806304 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/d758ca6f-86ae-44b7-bfcb-8e7f2e2205e8-mountpoint-dir\") 
pod \"csi-hostpathplugin-zft44\" (UID: \"d758ca6f-86ae-44b7-bfcb-8e7f2e2205e8\") " pod="hostpath-provisioner/csi-hostpathplugin-zft44" Jan 27 14:31:26 crc kubenswrapper[4698]: I0127 14:31:26.806326 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/bfd27e63-9504-4961-90d1-8c0056be6f31-images\") pod \"machine-config-operator-74547568cd-wcmrv\" (UID: \"bfd27e63-9504-4961-90d1-8c0056be6f31\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-wcmrv" Jan 27 14:31:26 crc kubenswrapper[4698]: I0127 14:31:26.806364 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/4bc4ed9d-df5c-4eee-82d1-d8c68e7bb3ff-secret-volume\") pod \"collect-profiles-29492070-z2sp4\" (UID: \"4bc4ed9d-df5c-4eee-82d1-d8c68e7bb3ff\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492070-z2sp4" Jan 27 14:31:26 crc kubenswrapper[4698]: I0127 14:31:26.806398 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/d758ca6f-86ae-44b7-bfcb-8e7f2e2205e8-registration-dir\") pod \"csi-hostpathplugin-zft44\" (UID: \"d758ca6f-86ae-44b7-bfcb-8e7f2e2205e8\") " pod="hostpath-provisioner/csi-hostpathplugin-zft44" Jan 27 14:31:26 crc kubenswrapper[4698]: I0127 14:31:26.806429 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/27a960e1-9cd1-41a9-ac06-0ac66ecb12f1-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-g4htw\" (UID: \"27a960e1-9cd1-41a9-ac06-0ac66ecb12f1\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-g4htw" Jan 27 14:31:26 crc kubenswrapper[4698]: I0127 14:31:26.806454 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/ff0c6e82-2e72-4776-b801-2cf427b72696-tmpfs\") pod \"packageserver-d55dfcdfc-pcr8w\" (UID: \"ff0c6e82-2e72-4776-b801-2cf427b72696\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-pcr8w" Jan 27 14:31:26 crc kubenswrapper[4698]: I0127 14:31:26.806477 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k25d8\" (UniqueName: \"kubernetes.io/projected/48753b5b-f7f1-468a-982f-c7defe92fdcd-kube-api-access-k25d8\") pod \"ingress-canary-hwgzv\" (UID: \"48753b5b-f7f1-468a-982f-c7defe92fdcd\") " pod="openshift-ingress-canary/ingress-canary-hwgzv" Jan 27 14:31:26 crc kubenswrapper[4698]: I0127 14:31:26.806511 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/ff0c6e82-2e72-4776-b801-2cf427b72696-apiservice-cert\") pod \"packageserver-d55dfcdfc-pcr8w\" (UID: \"ff0c6e82-2e72-4776-b801-2cf427b72696\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-pcr8w" Jan 27 14:31:26 crc kubenswrapper[4698]: I0127 14:31:26.806533 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gn9pc\" (UniqueName: \"kubernetes.io/projected/a11f89bd-147f-4b21-b83f-6b86727ecc2e-kube-api-access-gn9pc\") pod \"olm-operator-6b444d44fb-qfxjx\" (UID: \"a11f89bd-147f-4b21-b83f-6b86727ecc2e\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-qfxjx" Jan 27 14:31:26 crc kubenswrapper[4698]: I0127 14:31:26.806558 4698 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ff0c6e82-2e72-4776-b801-2cf427b72696-webhook-cert\") pod \"packageserver-d55dfcdfc-pcr8w\" (UID: \"ff0c6e82-2e72-4776-b801-2cf427b72696\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-pcr8w" Jan 27 14:31:26 crc kubenswrapper[4698]: I0127 14:31:26.806583 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xh9qq\" (UniqueName: \"kubernetes.io/projected/3a66b6ff-8485-46ca-8a12-ca7a75b63596-kube-api-access-xh9qq\") pod \"kube-storage-version-migrator-operator-b67b599dd-hnfnv\" (UID: \"3a66b6ff-8485-46ca-8a12-ca7a75b63596\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-hnfnv" Jan 27 14:31:26 crc kubenswrapper[4698]: I0127 14:31:26.806606 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qhlrb\" (UniqueName: \"kubernetes.io/projected/4bc4ed9d-df5c-4eee-82d1-d8c68e7bb3ff-kube-api-access-qhlrb\") pod \"collect-profiles-29492070-z2sp4\" (UID: \"4bc4ed9d-df5c-4eee-82d1-d8c68e7bb3ff\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492070-z2sp4" Jan 27 14:31:26 crc kubenswrapper[4698]: I0127 14:31:26.806622 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/912888aa-9826-4be1-a96a-315508a84cf9-profile-collector-cert\") pod \"catalog-operator-68c6474976-c49dc\" (UID: \"912888aa-9826-4be1-a96a-315508a84cf9\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-c49dc" Jan 27 14:31:26 crc kubenswrapper[4698]: I0127 14:31:26.806656 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/37564592-b4e8-47fd-8b7f-b1d26254efa0-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-p4z9k\" (UID: \"37564592-b4e8-47fd-8b7f-b1d26254efa0\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-p4z9k" Jan 27 14:31:26 crc kubenswrapper[4698]: I0127 14:31:26.806688 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/019c0321-025d-4bf5-a48c-fd0e707b797c-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-mswzh\" (UID: \"019c0321-025d-4bf5-a48c-fd0e707b797c\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-mswzh" Jan 27 14:31:26 crc kubenswrapper[4698]: I0127 14:31:26.806705 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/27510e33-967a-4675-b5bf-afd141421399-proxy-tls\") pod \"machine-config-controller-84d6567774-vrtzk\" (UID: \"27510e33-967a-4675-b5bf-afd141421399\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-vrtzk" Jan 27 14:31:26 crc kubenswrapper[4698]: I0127 14:31:26.806721 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/537d845d-d98b-4168-b87b-d0231602f4e9-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-kwgll\" (UID: \"537d845d-d98b-4168-b87b-d0231602f4e9\") " pod="openshift-marketplace/marketplace-operator-79b997595-kwgll" Jan 27 
14:31:26 crc kubenswrapper[4698]: I0127 14:31:26.806739 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zlt9d\" (UniqueName: \"kubernetes.io/projected/8b795163-b78c-4a56-9181-3243d6684eed-kube-api-access-zlt9d\") pod \"dns-default-msjx7\" (UID: \"8b795163-b78c-4a56-9181-3243d6684eed\") " pod="openshift-dns/dns-default-msjx7" Jan 27 14:31:26 crc kubenswrapper[4698]: I0127 14:31:26.806755 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/a11f89bd-147f-4b21-b83f-6b86727ecc2e-srv-cert\") pod \"olm-operator-6b444d44fb-qfxjx\" (UID: \"a11f89bd-147f-4b21-b83f-6b86727ecc2e\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-qfxjx" Jan 27 14:31:26 crc kubenswrapper[4698]: I0127 14:31:26.806775 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/537d845d-d98b-4168-b87b-d0231602f4e9-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-kwgll\" (UID: \"537d845d-d98b-4168-b87b-d0231602f4e9\") " pod="openshift-marketplace/marketplace-operator-79b997595-kwgll" Jan 27 14:31:26 crc kubenswrapper[4698]: I0127 14:31:26.806794 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xlktk\" (UniqueName: \"kubernetes.io/projected/019c0321-025d-4bf5-a48c-fd0e707b797c-kube-api-access-xlktk\") pod \"control-plane-machine-set-operator-78cbb6b69f-mswzh\" (UID: \"019c0321-025d-4bf5-a48c-fd0e707b797c\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-mswzh" Jan 27 14:31:26 crc kubenswrapper[4698]: I0127 14:31:26.806825 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/d758ca6f-86ae-44b7-bfcb-8e7f2e2205e8-plugins-dir\") pod \"csi-hostpathplugin-zft44\" (UID: \"d758ca6f-86ae-44b7-bfcb-8e7f2e2205e8\") " pod="hostpath-provisioner/csi-hostpathplugin-zft44" Jan 27 14:31:26 crc kubenswrapper[4698]: I0127 14:31:26.806842 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/27a960e1-9cd1-41a9-ac06-0ac66ecb12f1-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-g4htw\" (UID: \"27a960e1-9cd1-41a9-ac06-0ac66ecb12f1\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-g4htw" Jan 27 14:31:26 crc kubenswrapper[4698]: I0127 14:31:26.806861 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/27510e33-967a-4675-b5bf-afd141421399-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-vrtzk\" (UID: \"27510e33-967a-4675-b5bf-afd141421399\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-vrtzk" Jan 27 14:31:26 crc kubenswrapper[4698]: I0127 14:31:26.806885 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/d758ca6f-86ae-44b7-bfcb-8e7f2e2205e8-socket-dir\") pod \"csi-hostpathplugin-zft44\" (UID: \"d758ca6f-86ae-44b7-bfcb-8e7f2e2205e8\") " pod="hostpath-provisioner/csi-hostpathplugin-zft44" Jan 27 14:31:26 crc kubenswrapper[4698]: I0127 14:31:26.806902 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: 
\"kubernetes.io/secret/b744b41b-da1c-44d2-a538-e1d8bfe5c144-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-qw9xb\" (UID: \"b744b41b-da1c-44d2-a538-e1d8bfe5c144\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-qw9xb" Jan 27 14:31:26 crc kubenswrapper[4698]: I0127 14:31:26.806931 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/bfd27e63-9504-4961-90d1-8c0056be6f31-auth-proxy-config\") pod \"machine-config-operator-74547568cd-wcmrv\" (UID: \"bfd27e63-9504-4961-90d1-8c0056be6f31\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-wcmrv" Jan 27 14:31:26 crc kubenswrapper[4698]: I0127 14:31:26.806950 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8b795163-b78c-4a56-9181-3243d6684eed-config-volume\") pod \"dns-default-msjx7\" (UID: \"8b795163-b78c-4a56-9181-3243d6684eed\") " pod="openshift-dns/dns-default-msjx7" Jan 27 14:31:26 crc kubenswrapper[4698]: I0127 14:31:26.806967 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v65g7\" (UniqueName: \"kubernetes.io/projected/37564592-b4e8-47fd-8b7f-b1d26254efa0-kube-api-access-v65g7\") pod \"package-server-manager-789f6589d5-p4z9k\" (UID: \"37564592-b4e8-47fd-8b7f-b1d26254efa0\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-p4z9k" Jan 27 14:31:26 crc kubenswrapper[4698]: I0127 14:31:26.806998 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xdx78\" (UniqueName: \"kubernetes.io/projected/682a4abd-5c9f-4b58-8090-9c78f10d3577-kube-api-access-xdx78\") pod \"machine-config-server-pwzb6\" (UID: \"682a4abd-5c9f-4b58-8090-9c78f10d3577\") " pod="openshift-machine-config-operator/machine-config-server-pwzb6" Jan 27 14:31:26 crc kubenswrapper[4698]: I0127 14:31:26.807028 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/a11f89bd-147f-4b21-b83f-6b86727ecc2e-profile-collector-cert\") pod \"olm-operator-6b444d44fb-qfxjx\" (UID: \"a11f89bd-147f-4b21-b83f-6b86727ecc2e\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-qfxjx" Jan 27 14:31:26 crc kubenswrapper[4698]: I0127 14:31:26.807047 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-99hxp\" (UniqueName: \"kubernetes.io/projected/27510e33-967a-4675-b5bf-afd141421399-kube-api-access-99hxp\") pod \"machine-config-controller-84d6567774-vrtzk\" (UID: \"27510e33-967a-4675-b5bf-afd141421399\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-vrtzk" Jan 27 14:31:26 crc kubenswrapper[4698]: I0127 14:31:26.807067 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/682a4abd-5c9f-4b58-8090-9c78f10d3577-node-bootstrap-token\") pod \"machine-config-server-pwzb6\" (UID: \"682a4abd-5c9f-4b58-8090-9c78f10d3577\") " pod="openshift-machine-config-operator/machine-config-server-pwzb6" Jan 27 14:31:26 crc kubenswrapper[4698]: I0127 14:31:26.807084 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z7wc7\" (UniqueName: \"kubernetes.io/projected/ff0c6e82-2e72-4776-b801-2cf427b72696-kube-api-access-z7wc7\") pod \"packageserver-d55dfcdfc-pcr8w\" 
(UID: \"ff0c6e82-2e72-4776-b801-2cf427b72696\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-pcr8w" Jan 27 14:31:26 crc kubenswrapper[4698]: I0127 14:31:26.807101 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x7m9j\" (UniqueName: \"kubernetes.io/projected/bfd27e63-9504-4961-90d1-8c0056be6f31-kube-api-access-x7m9j\") pod \"machine-config-operator-74547568cd-wcmrv\" (UID: \"bfd27e63-9504-4961-90d1-8c0056be6f31\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-wcmrv" Jan 27 14:31:26 crc kubenswrapper[4698]: I0127 14:31:26.807117 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/682a4abd-5c9f-4b58-8090-9c78f10d3577-certs\") pod \"machine-config-server-pwzb6\" (UID: \"682a4abd-5c9f-4b58-8090-9c78f10d3577\") " pod="openshift-machine-config-operator/machine-config-server-pwzb6" Jan 27 14:31:26 crc kubenswrapper[4698]: I0127 14:31:26.807133 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3a66b6ff-8485-46ca-8a12-ca7a75b63596-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-hnfnv\" (UID: \"3a66b6ff-8485-46ca-8a12-ca7a75b63596\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-hnfnv" Jan 27 14:31:26 crc kubenswrapper[4698]: I0127 14:31:26.807151 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/8b795163-b78c-4a56-9181-3243d6684eed-metrics-tls\") pod \"dns-default-msjx7\" (UID: \"8b795163-b78c-4a56-9181-3243d6684eed\") " pod="openshift-dns/dns-default-msjx7" Jan 27 14:31:26 crc kubenswrapper[4698]: I0127 14:31:26.807165 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4bc4ed9d-df5c-4eee-82d1-d8c68e7bb3ff-config-volume\") pod \"collect-profiles-29492070-z2sp4\" (UID: \"4bc4ed9d-df5c-4eee-82d1-d8c68e7bb3ff\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492070-z2sp4" Jan 27 14:31:26 crc kubenswrapper[4698]: I0127 14:31:26.807184 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pxl57\" (UniqueName: \"kubernetes.io/projected/912888aa-9826-4be1-a96a-315508a84cf9-kube-api-access-pxl57\") pod \"catalog-operator-68c6474976-c49dc\" (UID: \"912888aa-9826-4be1-a96a-315508a84cf9\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-c49dc" Jan 27 14:31:26 crc kubenswrapper[4698]: I0127 14:31:26.807205 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/d758ca6f-86ae-44b7-bfcb-8e7f2e2205e8-csi-data-dir\") pod \"csi-hostpathplugin-zft44\" (UID: \"d758ca6f-86ae-44b7-bfcb-8e7f2e2205e8\") " pod="hostpath-provisioner/csi-hostpathplugin-zft44" Jan 27 14:31:26 crc kubenswrapper[4698]: I0127 14:31:26.807222 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/83d47a27-37a3-420c-af6f-e02bcd53ec1a-signing-cabundle\") pod \"service-ca-9c57cc56f-c5862\" (UID: \"83d47a27-37a3-420c-af6f-e02bcd53ec1a\") " pod="openshift-service-ca/service-ca-9c57cc56f-c5862" Jan 27 14:31:26 crc kubenswrapper[4698]: I0127 14:31:26.807237 4698 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f8de0774-06e2-4438-b5ef-ad70f998b22c-config\") pod \"service-ca-operator-777779d784-h8hj7\" (UID: \"f8de0774-06e2-4438-b5ef-ad70f998b22c\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-h8hj7" Jan 27 14:31:26 crc kubenswrapper[4698]: I0127 14:31:26.807255 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/83d47a27-37a3-420c-af6f-e02bcd53ec1a-signing-key\") pod \"service-ca-9c57cc56f-c5862\" (UID: \"83d47a27-37a3-420c-af6f-e02bcd53ec1a\") " pod="openshift-service-ca/service-ca-9c57cc56f-c5862" Jan 27 14:31:26 crc kubenswrapper[4698]: I0127 14:31:26.807277 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gr9bc\" (UniqueName: \"kubernetes.io/projected/83d47a27-37a3-420c-af6f-e02bcd53ec1a-kube-api-access-gr9bc\") pod \"service-ca-9c57cc56f-c5862\" (UID: \"83d47a27-37a3-420c-af6f-e02bcd53ec1a\") " pod="openshift-service-ca/service-ca-9c57cc56f-c5862" Jan 27 14:31:26 crc kubenswrapper[4698]: I0127 14:31:26.807296 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vrrrp\" (UniqueName: \"kubernetes.io/projected/f8de0774-06e2-4438-b5ef-ad70f998b22c-kube-api-access-vrrrp\") pod \"service-ca-operator-777779d784-h8hj7\" (UID: \"f8de0774-06e2-4438-b5ef-ad70f998b22c\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-h8hj7" Jan 27 14:31:26 crc kubenswrapper[4698]: I0127 14:31:26.807320 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7c8g4\" (UniqueName: \"kubernetes.io/projected/97edac57-d351-482c-919b-d12bce71f637-kube-api-access-7c8g4\") pod \"migrator-59844c95c7-b8t54\" (UID: \"97edac57-d351-482c-919b-d12bce71f637\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-b8t54" Jan 27 14:31:26 crc kubenswrapper[4698]: I0127 14:31:26.807341 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3a66b6ff-8485-46ca-8a12-ca7a75b63596-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-hnfnv\" (UID: \"3a66b6ff-8485-46ca-8a12-ca7a75b63596\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-hnfnv" Jan 27 14:31:26 crc kubenswrapper[4698]: I0127 14:31:26.807363 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dq2vl\" (UniqueName: \"kubernetes.io/projected/d758ca6f-86ae-44b7-bfcb-8e7f2e2205e8-kube-api-access-dq2vl\") pod \"csi-hostpathplugin-zft44\" (UID: \"d758ca6f-86ae-44b7-bfcb-8e7f2e2205e8\") " pod="hostpath-provisioner/csi-hostpathplugin-zft44" Jan 27 14:31:26 crc kubenswrapper[4698]: I0127 14:31:26.807387 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/912888aa-9826-4be1-a96a-315508a84cf9-srv-cert\") pod \"catalog-operator-68c6474976-c49dc\" (UID: \"912888aa-9826-4be1-a96a-315508a84cf9\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-c49dc" Jan 27 14:31:26 crc kubenswrapper[4698]: I0127 14:31:26.807403 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w98kp\" (UniqueName: \"kubernetes.io/projected/537d845d-d98b-4168-b87b-d0231602f4e9-kube-api-access-w98kp\") pod 
\"marketplace-operator-79b997595-kwgll\" (UID: \"537d845d-d98b-4168-b87b-d0231602f4e9\") " pod="openshift-marketplace/marketplace-operator-79b997595-kwgll" Jan 27 14:31:26 crc kubenswrapper[4698]: I0127 14:31:26.807419 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f8de0774-06e2-4438-b5ef-ad70f998b22c-serving-cert\") pod \"service-ca-operator-777779d784-h8hj7\" (UID: \"f8de0774-06e2-4438-b5ef-ad70f998b22c\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-h8hj7" Jan 27 14:31:26 crc kubenswrapper[4698]: E0127 14:31:26.808297 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 14:31:27.308268538 +0000 UTC m=+142.985046203 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 14:31:26 crc kubenswrapper[4698]: I0127 14:31:26.808810 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3a66b6ff-8485-46ca-8a12-ca7a75b63596-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-hnfnv\" (UID: \"3a66b6ff-8485-46ca-8a12-ca7a75b63596\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-hnfnv" Jan 27 14:31:26 crc kubenswrapper[4698]: I0127 14:31:26.808966 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/27510e33-967a-4675-b5bf-afd141421399-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-vrtzk\" (UID: \"27510e33-967a-4675-b5bf-afd141421399\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-vrtzk" Jan 27 14:31:26 crc kubenswrapper[4698]: I0127 14:31:26.809287 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/d758ca6f-86ae-44b7-bfcb-8e7f2e2205e8-socket-dir\") pod \"csi-hostpathplugin-zft44\" (UID: \"d758ca6f-86ae-44b7-bfcb-8e7f2e2205e8\") " pod="hostpath-provisioner/csi-hostpathplugin-zft44" Jan 27 14:31:26 crc kubenswrapper[4698]: I0127 14:31:26.810661 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/27a960e1-9cd1-41a9-ac06-0ac66ecb12f1-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-g4htw\" (UID: \"27a960e1-9cd1-41a9-ac06-0ac66ecb12f1\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-g4htw" Jan 27 14:31:26 crc kubenswrapper[4698]: I0127 14:31:26.813295 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n6622\" (UniqueName: \"kubernetes.io/projected/c8964e6d-30b9-4402-b132-105cb5a1695b-kube-api-access-n6622\") pod \"route-controller-manager-6576b87f9c-p5fgd\" (UID: \"c8964e6d-30b9-4402-b132-105cb5a1695b\") " 
pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-p5fgd" Jan 27 14:31:26 crc kubenswrapper[4698]: I0127 14:31:26.814205 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/bfd27e63-9504-4961-90d1-8c0056be6f31-auth-proxy-config\") pod \"machine-config-operator-74547568cd-wcmrv\" (UID: \"bfd27e63-9504-4961-90d1-8c0056be6f31\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-wcmrv" Jan 27 14:31:26 crc kubenswrapper[4698]: I0127 14:31:26.817365 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4bc4ed9d-df5c-4eee-82d1-d8c68e7bb3ff-config-volume\") pod \"collect-profiles-29492070-z2sp4\" (UID: \"4bc4ed9d-df5c-4eee-82d1-d8c68e7bb3ff\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492070-z2sp4" Jan 27 14:31:26 crc kubenswrapper[4698]: I0127 14:31:26.818242 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/682a4abd-5c9f-4b58-8090-9c78f10d3577-node-bootstrap-token\") pod \"machine-config-server-pwzb6\" (UID: \"682a4abd-5c9f-4b58-8090-9c78f10d3577\") " pod="openshift-machine-config-operator/machine-config-server-pwzb6" Jan 27 14:31:26 crc kubenswrapper[4698]: I0127 14:31:26.818974 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/a11f89bd-147f-4b21-b83f-6b86727ecc2e-srv-cert\") pod \"olm-operator-6b444d44fb-qfxjx\" (UID: \"a11f89bd-147f-4b21-b83f-6b86727ecc2e\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-qfxjx" Jan 27 14:31:26 crc kubenswrapper[4698]: I0127 14:31:26.820077 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8b795163-b78c-4a56-9181-3243d6684eed-config-volume\") pod \"dns-default-msjx7\" (UID: \"8b795163-b78c-4a56-9181-3243d6684eed\") " pod="openshift-dns/dns-default-msjx7" Jan 27 14:31:26 crc kubenswrapper[4698]: I0127 14:31:26.820455 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/b744b41b-da1c-44d2-a538-e1d8bfe5c144-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-qw9xb\" (UID: \"b744b41b-da1c-44d2-a538-e1d8bfe5c144\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-qw9xb" Jan 27 14:31:26 crc kubenswrapper[4698]: I0127 14:31:26.820949 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/8b795163-b78c-4a56-9181-3243d6684eed-metrics-tls\") pod \"dns-default-msjx7\" (UID: \"8b795163-b78c-4a56-9181-3243d6684eed\") " pod="openshift-dns/dns-default-msjx7" Jan 27 14:31:26 crc kubenswrapper[4698]: I0127 14:31:26.821206 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/a11f89bd-147f-4b21-b83f-6b86727ecc2e-profile-collector-cert\") pod \"olm-operator-6b444d44fb-qfxjx\" (UID: \"a11f89bd-147f-4b21-b83f-6b86727ecc2e\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-qfxjx" Jan 27 14:31:26 crc kubenswrapper[4698]: I0127 14:31:26.821221 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/d758ca6f-86ae-44b7-bfcb-8e7f2e2205e8-csi-data-dir\") pod 
\"csi-hostpathplugin-zft44\" (UID: \"d758ca6f-86ae-44b7-bfcb-8e7f2e2205e8\") " pod="hostpath-provisioner/csi-hostpathplugin-zft44" Jan 27 14:31:26 crc kubenswrapper[4698]: I0127 14:31:26.821262 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/912888aa-9826-4be1-a96a-315508a84cf9-profile-collector-cert\") pod \"catalog-operator-68c6474976-c49dc\" (UID: \"912888aa-9826-4be1-a96a-315508a84cf9\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-c49dc" Jan 27 14:31:26 crc kubenswrapper[4698]: I0127 14:31:26.821270 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/d758ca6f-86ae-44b7-bfcb-8e7f2e2205e8-plugins-dir\") pod \"csi-hostpathplugin-zft44\" (UID: \"d758ca6f-86ae-44b7-bfcb-8e7f2e2205e8\") " pod="hostpath-provisioner/csi-hostpathplugin-zft44" Jan 27 14:31:26 crc kubenswrapper[4698]: I0127 14:31:26.821711 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/ff0c6e82-2e72-4776-b801-2cf427b72696-tmpfs\") pod \"packageserver-d55dfcdfc-pcr8w\" (UID: \"ff0c6e82-2e72-4776-b801-2cf427b72696\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-pcr8w" Jan 27 14:31:26 crc kubenswrapper[4698]: I0127 14:31:26.821790 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/d758ca6f-86ae-44b7-bfcb-8e7f2e2205e8-registration-dir\") pod \"csi-hostpathplugin-zft44\" (UID: \"d758ca6f-86ae-44b7-bfcb-8e7f2e2205e8\") " pod="hostpath-provisioner/csi-hostpathplugin-zft44" Jan 27 14:31:26 crc kubenswrapper[4698]: I0127 14:31:26.822015 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/d758ca6f-86ae-44b7-bfcb-8e7f2e2205e8-mountpoint-dir\") pod \"csi-hostpathplugin-zft44\" (UID: \"d758ca6f-86ae-44b7-bfcb-8e7f2e2205e8\") " pod="hostpath-provisioner/csi-hostpathplugin-zft44" Jan 27 14:31:26 crc kubenswrapper[4698]: I0127 14:31:26.822340 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/83d47a27-37a3-420c-af6f-e02bcd53ec1a-signing-cabundle\") pod \"service-ca-9c57cc56f-c5862\" (UID: \"83d47a27-37a3-420c-af6f-e02bcd53ec1a\") " pod="openshift-service-ca/service-ca-9c57cc56f-c5862" Jan 27 14:31:26 crc kubenswrapper[4698]: I0127 14:31:26.822391 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/537d845d-d98b-4168-b87b-d0231602f4e9-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-kwgll\" (UID: \"537d845d-d98b-4168-b87b-d0231602f4e9\") " pod="openshift-marketplace/marketplace-operator-79b997595-kwgll" Jan 27 14:31:26 crc kubenswrapper[4698]: I0127 14:31:26.823963 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gfwth\" (UniqueName: \"kubernetes.io/projected/fb1d6ca9-58c3-4d0f-9b6f-e9dad08632b8-kube-api-access-gfwth\") pod \"image-registry-697d97f7c8-vz5fp\" (UID: \"fb1d6ca9-58c3-4d0f-9b6f-e9dad08632b8\") " pod="openshift-image-registry/image-registry-697d97f7c8-vz5fp" Jan 27 14:31:26 crc kubenswrapper[4698]: I0127 14:31:26.824145 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: 
\"kubernetes.io/configmap/bfd27e63-9504-4961-90d1-8c0056be6f31-images\") pod \"machine-config-operator-74547568cd-wcmrv\" (UID: \"bfd27e63-9504-4961-90d1-8c0056be6f31\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-wcmrv" Jan 27 14:31:26 crc kubenswrapper[4698]: I0127 14:31:26.825171 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f8de0774-06e2-4438-b5ef-ad70f998b22c-config\") pod \"service-ca-operator-777779d784-h8hj7\" (UID: \"f8de0774-06e2-4438-b5ef-ad70f998b22c\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-h8hj7" Jan 27 14:31:26 crc kubenswrapper[4698]: I0127 14:31:26.825308 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3a66b6ff-8485-46ca-8a12-ca7a75b63596-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-hnfnv\" (UID: \"3a66b6ff-8485-46ca-8a12-ca7a75b63596\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-hnfnv" Jan 27 14:31:26 crc kubenswrapper[4698]: I0127 14:31:26.826127 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/48753b5b-f7f1-468a-982f-c7defe92fdcd-cert\") pod \"ingress-canary-hwgzv\" (UID: \"48753b5b-f7f1-468a-982f-c7defe92fdcd\") " pod="openshift-ingress-canary/ingress-canary-hwgzv" Jan 27 14:31:26 crc kubenswrapper[4698]: I0127 14:31:26.827082 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/83d47a27-37a3-420c-af6f-e02bcd53ec1a-signing-key\") pod \"service-ca-9c57cc56f-c5862\" (UID: \"83d47a27-37a3-420c-af6f-e02bcd53ec1a\") " pod="openshift-service-ca/service-ca-9c57cc56f-c5862" Jan 27 14:31:26 crc kubenswrapper[4698]: I0127 14:31:26.828470 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/537d845d-d98b-4168-b87b-d0231602f4e9-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-kwgll\" (UID: \"537d845d-d98b-4168-b87b-d0231602f4e9\") " pod="openshift-marketplace/marketplace-operator-79b997595-kwgll" Jan 27 14:31:26 crc kubenswrapper[4698]: I0127 14:31:26.833762 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/27a960e1-9cd1-41a9-ac06-0ac66ecb12f1-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-g4htw\" (UID: \"27a960e1-9cd1-41a9-ac06-0ac66ecb12f1\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-g4htw" Jan 27 14:31:26 crc kubenswrapper[4698]: I0127 14:31:26.834246 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/27510e33-967a-4675-b5bf-afd141421399-proxy-tls\") pod \"machine-config-controller-84d6567774-vrtzk\" (UID: \"27510e33-967a-4675-b5bf-afd141421399\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-vrtzk" Jan 27 14:31:26 crc kubenswrapper[4698]: I0127 14:31:26.836479 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/37564592-b4e8-47fd-8b7f-b1d26254efa0-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-p4z9k\" (UID: \"37564592-b4e8-47fd-8b7f-b1d26254efa0\") " 
pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-p4z9k" Jan 27 14:31:26 crc kubenswrapper[4698]: I0127 14:31:26.837038 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f8de0774-06e2-4438-b5ef-ad70f998b22c-serving-cert\") pod \"service-ca-operator-777779d784-h8hj7\" (UID: \"f8de0774-06e2-4438-b5ef-ad70f998b22c\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-h8hj7" Jan 27 14:31:26 crc kubenswrapper[4698]: I0127 14:31:26.837124 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/4bc4ed9d-df5c-4eee-82d1-d8c68e7bb3ff-secret-volume\") pod \"collect-profiles-29492070-z2sp4\" (UID: \"4bc4ed9d-df5c-4eee-82d1-d8c68e7bb3ff\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492070-z2sp4" Jan 27 14:31:26 crc kubenswrapper[4698]: I0127 14:31:26.838012 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/019c0321-025d-4bf5-a48c-fd0e707b797c-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-mswzh\" (UID: \"019c0321-025d-4bf5-a48c-fd0e707b797c\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-mswzh" Jan 27 14:31:26 crc kubenswrapper[4698]: I0127 14:31:26.840407 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/912888aa-9826-4be1-a96a-315508a84cf9-srv-cert\") pod \"catalog-operator-68c6474976-c49dc\" (UID: \"912888aa-9826-4be1-a96a-315508a84cf9\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-c49dc" Jan 27 14:31:26 crc kubenswrapper[4698]: I0127 14:31:26.844319 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ff0c6e82-2e72-4776-b801-2cf427b72696-webhook-cert\") pod \"packageserver-d55dfcdfc-pcr8w\" (UID: \"ff0c6e82-2e72-4776-b801-2cf427b72696\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-pcr8w" Jan 27 14:31:26 crc kubenswrapper[4698]: I0127 14:31:26.845123 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/ff0c6e82-2e72-4776-b801-2cf427b72696-apiservice-cert\") pod \"packageserver-d55dfcdfc-pcr8w\" (UID: \"ff0c6e82-2e72-4776-b801-2cf427b72696\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-pcr8w" Jan 27 14:31:26 crc kubenswrapper[4698]: I0127 14:31:26.847435 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-x7rj5"] Jan 27 14:31:26 crc kubenswrapper[4698]: I0127 14:31:26.848441 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/bfd27e63-9504-4961-90d1-8c0056be6f31-proxy-tls\") pod \"machine-config-operator-74547568cd-wcmrv\" (UID: \"bfd27e63-9504-4961-90d1-8c0056be6f31\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-wcmrv" Jan 27 14:31:26 crc kubenswrapper[4698]: I0127 14:31:26.853179 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8rmg9\" (UniqueName: \"kubernetes.io/projected/79cd2a60-54ce-46a5-96cd-53bb078fa804-kube-api-access-8rmg9\") pod \"router-default-5444994796-8bg4r\" (UID: \"79cd2a60-54ce-46a5-96cd-53bb078fa804\") " 
pod="openshift-ingress/router-default-5444994796-8bg4r" Jan 27 14:31:26 crc kubenswrapper[4698]: I0127 14:31:26.854471 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/682a4abd-5c9f-4b58-8090-9c78f10d3577-certs\") pod \"machine-config-server-pwzb6\" (UID: \"682a4abd-5c9f-4b58-8090-9c78f10d3577\") " pod="openshift-machine-config-operator/machine-config-server-pwzb6" Jan 27 14:31:26 crc kubenswrapper[4698]: I0127 14:31:26.868124 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tsz8b\" (UniqueName: \"kubernetes.io/projected/7b8fe528-a188-48bd-8555-6dd2798122fe-kube-api-access-tsz8b\") pod \"machine-approver-56656f9798-cfwgg\" (UID: \"7b8fe528-a188-48bd-8555-6dd2798122fe\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-cfwgg" Jan 27 14:31:26 crc kubenswrapper[4698]: I0127 14:31:26.887930 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/fb1d6ca9-58c3-4d0f-9b6f-e9dad08632b8-bound-sa-token\") pod \"image-registry-697d97f7c8-vz5fp\" (UID: \"fb1d6ca9-58c3-4d0f-9b6f-e9dad08632b8\") " pod="openshift-image-registry/image-registry-697d97f7c8-vz5fp" Jan 27 14:31:26 crc kubenswrapper[4698]: I0127 14:31:26.909079 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vz5fp\" (UID: \"fb1d6ca9-58c3-4d0f-9b6f-e9dad08632b8\") " pod="openshift-image-registry/image-registry-697d97f7c8-vz5fp" Jan 27 14:31:26 crc kubenswrapper[4698]: E0127 14:31:26.909503 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 14:31:27.409480814 +0000 UTC m=+143.086258339 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vz5fp" (UID: "fb1d6ca9-58c3-4d0f-9b6f-e9dad08632b8") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 14:31:26 crc kubenswrapper[4698]: I0127 14:31:26.910126 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2qhq7\" (UniqueName: \"kubernetes.io/projected/12b42d9a-df65-4a89-8961-1fa7f9b8a14b-kube-api-access-2qhq7\") pod \"console-f9d7485db-cvnrn\" (UID: \"12b42d9a-df65-4a89-8961-1fa7f9b8a14b\") " pod="openshift-console/console-f9d7485db-cvnrn" Jan 27 14:31:26 crc kubenswrapper[4698]: I0127 14:31:26.926508 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5f4a113f-a4c9-423f-8d34-fb05c1f776af-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-mjdln\" (UID: \"5f4a113f-a4c9-423f-8d34-fb05c1f776af\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-mjdln" Jan 27 14:31:26 crc kubenswrapper[4698]: I0127 14:31:26.926767 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4k5ck\" (UniqueName: \"kubernetes.io/projected/3f2c6212-a0a5-4d50-aa1c-63fc63296dab-kube-api-access-4k5ck\") pod \"openshift-controller-manager-operator-756b6f6bc6-z5cmj\" (UID: \"3f2c6212-a0a5-4d50-aa1c-63fc63296dab\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-z5cmj" Jan 27 14:31:26 crc kubenswrapper[4698]: I0127 14:31:26.930460 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-cfwgg" Jan 27 14:31:26 crc kubenswrapper[4698]: I0127 14:31:26.949608 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5vgk6\" (UniqueName: \"kubernetes.io/projected/13b718e9-6bf1-4e81-91ce-feea3116fd97-kube-api-access-5vgk6\") pod \"console-operator-58897d9998-bz9jw\" (UID: \"13b718e9-6bf1-4e81-91ce-feea3116fd97\") " pod="openshift-console-operator/console-operator-58897d9998-bz9jw" Jan 27 14:31:26 crc kubenswrapper[4698]: I0127 14:31:26.972207 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g92hb\" (UniqueName: \"kubernetes.io/projected/88376a00-d5b2-4d08-ae81-097d8134df27-kube-api-access-g92hb\") pod \"etcd-operator-b45778765-6xmqh\" (UID: \"88376a00-d5b2-4d08-ae81-097d8134df27\") " pod="openshift-etcd-operator/etcd-operator-b45778765-6xmqh" Jan 27 14:31:26 crc kubenswrapper[4698]: I0127 14:31:26.989546 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-p5fgd" Jan 27 14:31:26 crc kubenswrapper[4698]: I0127 14:31:26.990209 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-bdrpp"] Jan 27 14:31:26 crc kubenswrapper[4698]: I0127 14:31:26.997995 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-dhnh8" Jan 27 14:31:27 crc kubenswrapper[4698]: I0127 14:31:27.004568 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-cvnrn" Jan 27 14:31:27 crc kubenswrapper[4698]: I0127 14:31:27.010030 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 14:31:27 crc kubenswrapper[4698]: E0127 14:31:27.010424 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 14:31:27.510408163 +0000 UTC m=+143.187185628 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 14:31:27 crc kubenswrapper[4698]: I0127 14:31:27.011039 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/96b9797e-289f-45f3-8707-bc899a687aa1-bound-sa-token\") pod \"ingress-operator-5b745b69d9-qb472\" (UID: \"96b9797e-289f-45f3-8707-bc899a687aa1\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-qb472" Jan 27 14:31:27 crc kubenswrapper[4698]: I0127 14:31:27.011895 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-bz9jw" Jan 27 14:31:27 crc kubenswrapper[4698]: I0127 14:31:27.020239 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-z5cmj" Jan 27 14:31:27 crc kubenswrapper[4698]: I0127 14:31:27.028442 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-d7vnt" Jan 27 14:31:27 crc kubenswrapper[4698]: I0127 14:31:27.030232 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4bsfb\" (UniqueName: \"kubernetes.io/projected/557a3dbe-140e-4e30-bad7-f2c7e828d446-kube-api-access-4bsfb\") pod \"cluster-samples-operator-665b6dd947-6kdpx\" (UID: \"557a3dbe-140e-4e30-bad7-f2c7e828d446\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-6kdpx" Jan 27 14:31:27 crc kubenswrapper[4698]: I0127 14:31:27.044513 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-d9s2k"] Jan 27 14:31:27 crc kubenswrapper[4698]: I0127 14:31:27.044626 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-qb472" Jan 27 14:31:27 crc kubenswrapper[4698]: I0127 14:31:27.052487 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-blptb\" (UniqueName: \"kubernetes.io/projected/f4023d55-2b87-419a-be3e-bab987ba0841-kube-api-access-blptb\") pod \"openshift-config-operator-7777fb866f-m8slw\" (UID: \"f4023d55-2b87-419a-be3e-bab987ba0841\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-m8slw" Jan 27 14:31:27 crc kubenswrapper[4698]: I0127 14:31:27.052868 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-mjdln" Jan 27 14:31:27 crc kubenswrapper[4698]: I0127 14:31:27.060956 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-8bg4r" Jan 27 14:31:27 crc kubenswrapper[4698]: W0127 14:31:27.066673 4698 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf7ee1b04_33fb_452c_917e_ea08b3f489a4.slice/crio-cbe70d98c15855b9b5807597c29b2e3ffa23196873235f54f80c14e3e32366c1 WatchSource:0}: Error finding container cbe70d98c15855b9b5807597c29b2e3ffa23196873235f54f80c14e3e32366c1: Status 404 returned error can't find the container with id cbe70d98c15855b9b5807597c29b2e3ffa23196873235f54f80c14e3e32366c1 Jan 27 14:31:27 crc kubenswrapper[4698]: I0127 14:31:27.071857 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xdx78\" (UniqueName: \"kubernetes.io/projected/682a4abd-5c9f-4b58-8090-9c78f10d3577-kube-api-access-xdx78\") pod \"machine-config-server-pwzb6\" (UID: \"682a4abd-5c9f-4b58-8090-9c78f10d3577\") " pod="openshift-machine-config-operator/machine-config-server-pwzb6" Jan 27 14:31:27 crc kubenswrapper[4698]: I0127 14:31:27.093709 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gn9pc\" (UniqueName: \"kubernetes.io/projected/a11f89bd-147f-4b21-b83f-6b86727ecc2e-kube-api-access-gn9pc\") pod \"olm-operator-6b444d44fb-qfxjx\" (UID: \"a11f89bd-147f-4b21-b83f-6b86727ecc2e\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-qfxjx" Jan 27 14:31:27 crc kubenswrapper[4698]: I0127 14:31:27.111705 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vz5fp\" (UID: \"fb1d6ca9-58c3-4d0f-9b6f-e9dad08632b8\") " pod="openshift-image-registry/image-registry-697d97f7c8-vz5fp" Jan 27 14:31:27 crc kubenswrapper[4698]: E0127 14:31:27.112166 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 14:31:27.612093422 +0000 UTC m=+143.288870957 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vz5fp" (UID: "fb1d6ca9-58c3-4d0f-9b6f-e9dad08632b8") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 14:31:27 crc kubenswrapper[4698]: I0127 14:31:27.131366 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x7m9j\" (UniqueName: \"kubernetes.io/projected/bfd27e63-9504-4961-90d1-8c0056be6f31-kube-api-access-x7m9j\") pod \"machine-config-operator-74547568cd-wcmrv\" (UID: \"bfd27e63-9504-4961-90d1-8c0056be6f31\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-wcmrv" Jan 27 14:31:27 crc kubenswrapper[4698]: I0127 14:31:27.133363 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9hbdc\" (UniqueName: \"kubernetes.io/projected/b744b41b-da1c-44d2-a538-e1d8bfe5c144-kube-api-access-9hbdc\") pod \"multus-admission-controller-857f4d67dd-qw9xb\" (UID: \"b744b41b-da1c-44d2-a538-e1d8bfe5c144\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-qw9xb" Jan 27 14:31:27 crc kubenswrapper[4698]: I0127 14:31:27.154419 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z7wc7\" (UniqueName: \"kubernetes.io/projected/ff0c6e82-2e72-4776-b801-2cf427b72696-kube-api-access-z7wc7\") pod \"packageserver-d55dfcdfc-pcr8w\" (UID: \"ff0c6e82-2e72-4776-b801-2cf427b72696\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-pcr8w" Jan 27 14:31:27 crc kubenswrapper[4698]: I0127 14:31:27.168444 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-qw9xb" Jan 27 14:31:27 crc kubenswrapper[4698]: I0127 14:31:27.175528 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pxl57\" (UniqueName: \"kubernetes.io/projected/912888aa-9826-4be1-a96a-315508a84cf9-kube-api-access-pxl57\") pod \"catalog-operator-68c6474976-c49dc\" (UID: \"912888aa-9826-4be1-a96a-315508a84cf9\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-c49dc" Jan 27 14:31:27 crc kubenswrapper[4698]: I0127 14:31:27.210106 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v65g7\" (UniqueName: \"kubernetes.io/projected/37564592-b4e8-47fd-8b7f-b1d26254efa0-kube-api-access-v65g7\") pod \"package-server-manager-789f6589d5-p4z9k\" (UID: \"37564592-b4e8-47fd-8b7f-b1d26254efa0\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-p4z9k" Jan 27 14:31:27 crc kubenswrapper[4698]: I0127 14:31:27.217549 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 14:31:27 crc kubenswrapper[4698]: E0127 14:31:27.218421 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-01-27 14:31:27.718379151 +0000 UTC m=+143.395156766 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 14:31:27 crc kubenswrapper[4698]: I0127 14:31:27.221257 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xh9qq\" (UniqueName: \"kubernetes.io/projected/3a66b6ff-8485-46ca-8a12-ca7a75b63596-kube-api-access-xh9qq\") pod \"kube-storage-version-migrator-operator-b67b599dd-hnfnv\" (UID: \"3a66b6ff-8485-46ca-8a12-ca7a75b63596\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-hnfnv" Jan 27 14:31:27 crc kubenswrapper[4698]: I0127 14:31:27.236859 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-pwzb6" Jan 27 14:31:27 crc kubenswrapper[4698]: I0127 14:31:27.239751 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-m8slw" Jan 27 14:31:27 crc kubenswrapper[4698]: I0127 14:31:27.241160 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qhlrb\" (UniqueName: \"kubernetes.io/projected/4bc4ed9d-df5c-4eee-82d1-d8c68e7bb3ff-kube-api-access-qhlrb\") pod \"collect-profiles-29492070-z2sp4\" (UID: \"4bc4ed9d-df5c-4eee-82d1-d8c68e7bb3ff\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492070-z2sp4" Jan 27 14:31:27 crc kubenswrapper[4698]: I0127 14:31:27.241909 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-dhnh8"] Jan 27 14:31:27 crc kubenswrapper[4698]: I0127 14:31:27.249192 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-99hxp\" (UniqueName: \"kubernetes.io/projected/27510e33-967a-4675-b5bf-afd141421399-kube-api-access-99hxp\") pod \"machine-config-controller-84d6567774-vrtzk\" (UID: \"27510e33-967a-4675-b5bf-afd141421399\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-vrtzk" Jan 27 14:31:27 crc kubenswrapper[4698]: I0127 14:31:27.251766 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-6xmqh" Jan 27 14:31:27 crc kubenswrapper[4698]: I0127 14:31:27.258905 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-6kdpx" Jan 27 14:31:27 crc kubenswrapper[4698]: I0127 14:31:27.280502 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/27a960e1-9cd1-41a9-ac06-0ac66ecb12f1-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-g4htw\" (UID: \"27a960e1-9cd1-41a9-ac06-0ac66ecb12f1\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-g4htw" Jan 27 14:31:27 crc kubenswrapper[4698]: I0127 14:31:27.303015 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zlt9d\" (UniqueName: \"kubernetes.io/projected/8b795163-b78c-4a56-9181-3243d6684eed-kube-api-access-zlt9d\") pod \"dns-default-msjx7\" (UID: \"8b795163-b78c-4a56-9181-3243d6684eed\") " pod="openshift-dns/dns-default-msjx7" Jan 27 14:31:27 crc kubenswrapper[4698]: I0127 14:31:27.328752 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vz5fp\" (UID: \"fb1d6ca9-58c3-4d0f-9b6f-e9dad08632b8\") " pod="openshift-image-registry/image-registry-697d97f7c8-vz5fp" Jan 27 14:31:27 crc kubenswrapper[4698]: E0127 14:31:27.329264 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 14:31:27.829249649 +0000 UTC m=+143.506027124 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vz5fp" (UID: "fb1d6ca9-58c3-4d0f-9b6f-e9dad08632b8") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 14:31:27 crc kubenswrapper[4698]: I0127 14:31:27.360464 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7c8g4\" (UniqueName: \"kubernetes.io/projected/97edac57-d351-482c-919b-d12bce71f637-kube-api-access-7c8g4\") pod \"migrator-59844c95c7-b8t54\" (UID: \"97edac57-d351-482c-919b-d12bce71f637\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-b8t54" Jan 27 14:31:27 crc kubenswrapper[4698]: I0127 14:31:27.361241 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gr9bc\" (UniqueName: \"kubernetes.io/projected/83d47a27-37a3-420c-af6f-e02bcd53ec1a-kube-api-access-gr9bc\") pod \"service-ca-9c57cc56f-c5862\" (UID: \"83d47a27-37a3-420c-af6f-e02bcd53ec1a\") " pod="openshift-service-ca/service-ca-9c57cc56f-c5862" Jan 27 14:31:27 crc kubenswrapper[4698]: I0127 14:31:27.361626 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dq2vl\" (UniqueName: \"kubernetes.io/projected/d758ca6f-86ae-44b7-bfcb-8e7f2e2205e8-kube-api-access-dq2vl\") pod \"csi-hostpathplugin-zft44\" (UID: \"d758ca6f-86ae-44b7-bfcb-8e7f2e2205e8\") " pod="hostpath-provisioner/csi-hostpathplugin-zft44" Jan 27 14:31:27 crc kubenswrapper[4698]: I0127 14:31:27.367888 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-pcr8w" Jan 27 14:31:27 crc kubenswrapper[4698]: I0127 14:31:27.371307 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vrrrp\" (UniqueName: \"kubernetes.io/projected/f8de0774-06e2-4438-b5ef-ad70f998b22c-kube-api-access-vrrrp\") pod \"service-ca-operator-777779d784-h8hj7\" (UID: \"f8de0774-06e2-4438-b5ef-ad70f998b22c\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-h8hj7" Jan 27 14:31:27 crc kubenswrapper[4698]: I0127 14:31:27.378079 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-qfxjx" Jan 27 14:31:27 crc kubenswrapper[4698]: I0127 14:31:27.383230 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-c49dc" Jan 27 14:31:27 crc kubenswrapper[4698]: I0127 14:31:27.394425 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k25d8\" (UniqueName: \"kubernetes.io/projected/48753b5b-f7f1-468a-982f-c7defe92fdcd-kube-api-access-k25d8\") pod \"ingress-canary-hwgzv\" (UID: \"48753b5b-f7f1-468a-982f-c7defe92fdcd\") " pod="openshift-ingress-canary/ingress-canary-hwgzv" Jan 27 14:31:27 crc kubenswrapper[4698]: I0127 14:31:27.396819 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-p4z9k" Jan 27 14:31:27 crc kubenswrapper[4698]: I0127 14:31:27.410081 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-wcmrv" Jan 27 14:31:27 crc kubenswrapper[4698]: I0127 14:31:27.415028 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w98kp\" (UniqueName: \"kubernetes.io/projected/537d845d-d98b-4168-b87b-d0231602f4e9-kube-api-access-w98kp\") pod \"marketplace-operator-79b997595-kwgll\" (UID: \"537d845d-d98b-4168-b87b-d0231602f4e9\") " pod="openshift-marketplace/marketplace-operator-79b997595-kwgll" Jan 27 14:31:27 crc kubenswrapper[4698]: I0127 14:31:27.416209 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-hnfnv" Jan 27 14:31:27 crc kubenswrapper[4698]: I0127 14:31:27.423678 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-kwgll" Jan 27 14:31:27 crc kubenswrapper[4698]: I0127 14:31:27.429827 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 14:31:27 crc kubenswrapper[4698]: E0127 14:31:27.430324 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 14:31:27.930300621 +0000 UTC m=+143.607078086 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 14:31:27 crc kubenswrapper[4698]: I0127 14:31:27.431741 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xlktk\" (UniqueName: \"kubernetes.io/projected/019c0321-025d-4bf5-a48c-fd0e707b797c-kube-api-access-xlktk\") pod \"control-plane-machine-set-operator-78cbb6b69f-mswzh\" (UID: \"019c0321-025d-4bf5-a48c-fd0e707b797c\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-mswzh" Jan 27 14:31:27 crc kubenswrapper[4698]: I0127 14:31:27.432033 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-h8hj7" Jan 27 14:31:27 crc kubenswrapper[4698]: I0127 14:31:27.438445 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-vrtzk" Jan 27 14:31:27 crc kubenswrapper[4698]: I0127 14:31:27.447799 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-g4htw" Jan 27 14:31:27 crc kubenswrapper[4698]: I0127 14:31:27.455234 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-b8t54" Jan 27 14:31:27 crc kubenswrapper[4698]: I0127 14:31:27.465867 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29492070-z2sp4" Jan 27 14:31:27 crc kubenswrapper[4698]: I0127 14:31:27.476228 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-hwgzv" Jan 27 14:31:27 crc kubenswrapper[4698]: I0127 14:31:27.483941 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-c5862" Jan 27 14:31:27 crc kubenswrapper[4698]: I0127 14:31:27.491795 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-msjx7" Jan 27 14:31:27 crc kubenswrapper[4698]: I0127 14:31:27.494578 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-z5cmj"] Jan 27 14:31:27 crc kubenswrapper[4698]: I0127 14:31:27.527087 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-zft44" Jan 27 14:31:27 crc kubenswrapper[4698]: I0127 14:31:27.531409 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vz5fp\" (UID: \"fb1d6ca9-58c3-4d0f-9b6f-e9dad08632b8\") " pod="openshift-image-registry/image-registry-697d97f7c8-vz5fp" Jan 27 14:31:27 crc kubenswrapper[4698]: E0127 14:31:27.531785 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 14:31:28.031774285 +0000 UTC m=+143.708551750 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vz5fp" (UID: "fb1d6ca9-58c3-4d0f-9b6f-e9dad08632b8") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 14:31:27 crc kubenswrapper[4698]: I0127 14:31:27.544491 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-qw9xb"] Jan 27 14:31:27 crc kubenswrapper[4698]: I0127 14:31:27.589212 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-p5fgd"] Jan 27 14:31:27 crc kubenswrapper[4698]: I0127 14:31:27.606970 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-bz9jw"] Jan 27 14:31:27 crc kubenswrapper[4698]: I0127 14:31:27.611782 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-cvnrn"] Jan 27 14:31:27 crc kubenswrapper[4698]: I0127 14:31:27.611865 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-qpzns" event={"ID":"77a18531-ffc7-42d9-bba7-78d72b032c39","Type":"ContainerStarted","Data":"e13f22afb31fbd0830967a03ed4791773d06a81e32b14fc05577a94b5348ff22"} Jan 27 14:31:27 crc kubenswrapper[4698]: I0127 14:31:27.613157 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-cj2hq" event={"ID":"3d3d75e2-1fec-4458-9cb7-3472250b0b49","Type":"ContainerStarted","Data":"333c74fc68d1eb86a37b571c39e30f2380edb16ccb8cb54acf8336f12fc0f43e"} Jan 27 14:31:27 crc kubenswrapper[4698]: I0127 14:31:27.614044 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-h864m" event={"ID":"9d8c543e-8120-4d1f-b76c-5be16d35bf1d","Type":"ContainerStarted","Data":"4cc273610444d40fae2e4e5195d8ce76b84238bc052e1a6d5287e52194e69914"} Jan 27 14:31:27 crc kubenswrapper[4698]: I0127 14:31:27.614610 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-8bg4r" event={"ID":"79cd2a60-54ce-46a5-96cd-53bb078fa804","Type":"ContainerStarted","Data":"df58f43e4e5a92a759610ec5d0e547e66cac5d63e48536aeeb2fec1bb390090f"} Jan 27 14:31:27 crc kubenswrapper[4698]: I0127 14:31:27.615247 4698 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-cfwgg" event={"ID":"7b8fe528-a188-48bd-8555-6dd2798122fe","Type":"ContainerStarted","Data":"fd25b3d70cba4f6ed91353cec12dfac929d5ac85a34697521d26c6f6d765c8ea"} Jan 27 14:31:27 crc kubenswrapper[4698]: I0127 14:31:27.618185 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-x7rj5" event={"ID":"7a699460-e5aa-401d-b2c4-003604099924","Type":"ContainerStarted","Data":"1fd4755c4a0ab20d0cd1d5db99b985ee96ab005a5ff4026c22647173f90cfd55"} Jan 27 14:31:27 crc kubenswrapper[4698]: I0127 14:31:27.620109 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-bdrpp" event={"ID":"64b274f6-5293-4c0e-a51a-dca8518c5a40","Type":"ContainerStarted","Data":"6abb4fd8ec3460640556110287e6217daa855a2f9a96678499526c907e6348fd"} Jan 27 14:31:27 crc kubenswrapper[4698]: I0127 14:31:27.623819 4698 generic.go:334] "Generic (PLEG): container finished" podID="f57848ff-da41-4c6a-9586-c57676b73c90" containerID="5121e5bfd0f3acf9b8027529c1f8069b43ba5ea6bbf83d7ff3530e0b1c4646aa" exitCode=0 Jan 27 14:31:27 crc kubenswrapper[4698]: I0127 14:31:27.623879 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-z5f9l" event={"ID":"f57848ff-da41-4c6a-9586-c57676b73c90","Type":"ContainerDied","Data":"5121e5bfd0f3acf9b8027529c1f8069b43ba5ea6bbf83d7ff3530e0b1c4646aa"} Jan 27 14:31:27 crc kubenswrapper[4698]: I0127 14:31:27.626148 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-hmtx5" event={"ID":"dfe15858-3d24-48fb-b534-a4b484e027e3","Type":"ContainerStarted","Data":"f0fae7c50cc3a5dbfd8d549170804de5619ff7802074a4e68be2d3d861de4730"} Jan 27 14:31:27 crc kubenswrapper[4698]: I0127 14:31:27.627054 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-dhnh8" event={"ID":"0a93337a-414f-4a6b-9cdb-4cb56092851a","Type":"ContainerStarted","Data":"98632aae007e9691668996801c718c9fbbd4f22945c30e0d9ddcfc094aa30176"} Jan 27 14:31:27 crc kubenswrapper[4698]: I0127 14:31:27.629069 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-d9s2k" event={"ID":"f7ee1b04-33fb-452c-917e-ea08b3f489a4","Type":"ContainerStarted","Data":"cbe70d98c15855b9b5807597c29b2e3ffa23196873235f54f80c14e3e32366c1"} Jan 27 14:31:27 crc kubenswrapper[4698]: I0127 14:31:27.630005 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-pwzb6" event={"ID":"682a4abd-5c9f-4b58-8090-9c78f10d3577","Type":"ContainerStarted","Data":"2f5f240390d4cec83a653721183f0f03bbca526ae6c62feaf07dd45ec4602df7"} Jan 27 14:31:27 crc kubenswrapper[4698]: I0127 14:31:27.632213 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 14:31:27 crc kubenswrapper[4698]: E0127 14:31:27.632394 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-01-27 14:31:28.132362055 +0000 UTC m=+143.809139530 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 14:31:27 crc kubenswrapper[4698]: I0127 14:31:27.632719 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vz5fp\" (UID: \"fb1d6ca9-58c3-4d0f-9b6f-e9dad08632b8\") " pod="openshift-image-registry/image-registry-697d97f7c8-vz5fp" Jan 27 14:31:27 crc kubenswrapper[4698]: E0127 14:31:27.633223 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 14:31:28.133206607 +0000 UTC m=+143.809984152 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vz5fp" (UID: "fb1d6ca9-58c3-4d0f-9b6f-e9dad08632b8") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 14:31:27 crc kubenswrapper[4698]: I0127 14:31:27.670978 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-d7vnt"] Jan 27 14:31:27 crc kubenswrapper[4698]: I0127 14:31:27.691445 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-mswzh" Jan 27 14:31:27 crc kubenswrapper[4698]: I0127 14:31:27.733670 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 14:31:27 crc kubenswrapper[4698]: E0127 14:31:27.734205 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 14:31:28.234184877 +0000 UTC m=+143.910962342 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 14:31:27 crc kubenswrapper[4698]: I0127 14:31:27.816478 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-mjdln"] Jan 27 14:31:27 crc kubenswrapper[4698]: I0127 14:31:27.826344 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-qb472"] Jan 27 14:31:27 crc kubenswrapper[4698]: I0127 14:31:27.835837 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vz5fp\" (UID: \"fb1d6ca9-58c3-4d0f-9b6f-e9dad08632b8\") " pod="openshift-image-registry/image-registry-697d97f7c8-vz5fp" Jan 27 14:31:27 crc kubenswrapper[4698]: E0127 14:31:27.836182 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 14:31:28.336168813 +0000 UTC m=+144.012946278 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vz5fp" (UID: "fb1d6ca9-58c3-4d0f-9b6f-e9dad08632b8") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 14:31:27 crc kubenswrapper[4698]: I0127 14:31:27.864747 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-6xmqh"] Jan 27 14:31:27 crc kubenswrapper[4698]: I0127 14:31:27.879168 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-m8slw"] Jan 27 14:31:27 crc kubenswrapper[4698]: I0127 14:31:27.883700 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-6kdpx"] Jan 27 14:31:27 crc kubenswrapper[4698]: W0127 14:31:27.914244 4698 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod96b9797e_289f_45f3_8707_bc899a687aa1.slice/crio-74695ed1f884aa25c5163238f97ddb1a4b4b1bab7ad6d2b3a3967ca5f60eef30 WatchSource:0}: Error finding container 74695ed1f884aa25c5163238f97ddb1a4b4b1bab7ad6d2b3a3967ca5f60eef30: Status 404 returned error can't find the container with id 74695ed1f884aa25c5163238f97ddb1a4b4b1bab7ad6d2b3a3967ca5f60eef30 Jan 27 14:31:27 crc kubenswrapper[4698]: I0127 14:31:27.937086 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod 
\"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 14:31:27 crc kubenswrapper[4698]: E0127 14:31:27.937308 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 14:31:28.437276337 +0000 UTC m=+144.114053812 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 14:31:27 crc kubenswrapper[4698]: I0127 14:31:27.939013 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vz5fp\" (UID: \"fb1d6ca9-58c3-4d0f-9b6f-e9dad08632b8\") " pod="openshift-image-registry/image-registry-697d97f7c8-vz5fp" Jan 27 14:31:27 crc kubenswrapper[4698]: E0127 14:31:27.939369 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 14:31:28.439352661 +0000 UTC m=+144.116130126 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vz5fp" (UID: "fb1d6ca9-58c3-4d0f-9b6f-e9dad08632b8") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 14:31:27 crc kubenswrapper[4698]: W0127 14:31:27.972878 4698 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod88376a00_d5b2_4d08_ae81_097d8134df27.slice/crio-db7597eca314aa952eb14438e4b26e7511930b296f650704f86e183da4027765 WatchSource:0}: Error finding container db7597eca314aa952eb14438e4b26e7511930b296f650704f86e183da4027765: Status 404 returned error can't find the container with id db7597eca314aa952eb14438e4b26e7511930b296f650704f86e183da4027765 Jan 27 14:31:28 crc kubenswrapper[4698]: I0127 14:31:28.040247 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 14:31:28 crc kubenswrapper[4698]: E0127 14:31:28.040450 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 14:31:28.540425684 +0000 UTC m=+144.217203159 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 14:31:28 crc kubenswrapper[4698]: I0127 14:31:28.040524 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vz5fp\" (UID: \"fb1d6ca9-58c3-4d0f-9b6f-e9dad08632b8\") " pod="openshift-image-registry/image-registry-697d97f7c8-vz5fp" Jan 27 14:31:28 crc kubenswrapper[4698]: E0127 14:31:28.040957 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 14:31:28.540946527 +0000 UTC m=+144.217724002 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vz5fp" (UID: "fb1d6ca9-58c3-4d0f-9b6f-e9dad08632b8") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 14:31:28 crc kubenswrapper[4698]: I0127 14:31:28.141235 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 14:31:28 crc kubenswrapper[4698]: E0127 14:31:28.141566 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 14:31:28.641538927 +0000 UTC m=+144.318316412 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 14:31:28 crc kubenswrapper[4698]: I0127 14:31:28.143585 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-g4htw"] Jan 27 14:31:28 crc kubenswrapper[4698]: W0127 14:31:28.154339 4698 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod27a960e1_9cd1_41a9_ac06_0ac66ecb12f1.slice/crio-3f59e04dccd452c78797de2145bc837867288a9f60e90051d1bc0cc437705a1b WatchSource:0}: Error finding container 3f59e04dccd452c78797de2145bc837867288a9f60e90051d1bc0cc437705a1b: Status 404 returned error can't find the container with id 3f59e04dccd452c78797de2145bc837867288a9f60e90051d1bc0cc437705a1b Jan 27 14:31:28 crc kubenswrapper[4698]: I0127 14:31:28.169348 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-pcr8w"] Jan 27 14:31:28 crc kubenswrapper[4698]: I0127 14:31:28.244760 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vz5fp\" (UID: \"fb1d6ca9-58c3-4d0f-9b6f-e9dad08632b8\") " pod="openshift-image-registry/image-registry-697d97f7c8-vz5fp" Jan 27 14:31:28 crc kubenswrapper[4698]: E0127 14:31:28.245116 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 14:31:28.745100915 +0000 UTC m=+144.421878380 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vz5fp" (UID: "fb1d6ca9-58c3-4d0f-9b6f-e9dad08632b8") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 14:31:28 crc kubenswrapper[4698]: I0127 14:31:28.349066 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 14:31:28 crc kubenswrapper[4698]: E0127 14:31:28.349517 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 14:31:28.849500235 +0000 UTC m=+144.526277700 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 14:31:28 crc kubenswrapper[4698]: I0127 14:31:28.451709 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vz5fp\" (UID: \"fb1d6ca9-58c3-4d0f-9b6f-e9dad08632b8\") " pod="openshift-image-registry/image-registry-697d97f7c8-vz5fp" Jan 27 14:31:28 crc kubenswrapper[4698]: E0127 14:31:28.452127 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 14:31:28.952106308 +0000 UTC m=+144.628883783 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vz5fp" (UID: "fb1d6ca9-58c3-4d0f-9b6f-e9dad08632b8") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 14:31:28 crc kubenswrapper[4698]: I0127 14:31:28.463182 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-p4z9k"] Jan 27 14:31:28 crc kubenswrapper[4698]: I0127 14:31:28.554164 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 14:31:28 crc kubenswrapper[4698]: E0127 14:31:28.554595 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 14:31:29.054575787 +0000 UTC m=+144.731353252 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 14:31:28 crc kubenswrapper[4698]: I0127 14:31:28.563447 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29492070-z2sp4"] Jan 27 14:31:28 crc kubenswrapper[4698]: I0127 14:31:28.643882 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-z5cmj" event={"ID":"3f2c6212-a0a5-4d50-aa1c-63fc63296dab","Type":"ContainerStarted","Data":"95791def16bdc606a8100a911252f8daa855748aba3d679ca44f5e996326a33f"} Jan 27 14:31:28 crc kubenswrapper[4698]: I0127 14:31:28.648531 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-bdrpp" event={"ID":"64b274f6-5293-4c0e-a51a-dca8518c5a40","Type":"ContainerStarted","Data":"95b352ab240f637939137882dfdfe55e57ef9176f53c2f7301858e50c6dcfdae"} Jan 27 14:31:28 crc kubenswrapper[4698]: I0127 14:31:28.651840 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-bdrpp" Jan 27 14:31:28 crc kubenswrapper[4698]: I0127 14:31:28.653545 4698 patch_prober.go:28] interesting pod/downloads-7954f5f757-bdrpp container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.10:8080/\": dial tcp 10.217.0.10:8080: connect: connection refused" start-of-body= Jan 27 14:31:28 crc kubenswrapper[4698]: I0127 14:31:28.653600 4698 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-bdrpp" podUID="64b274f6-5293-4c0e-a51a-dca8518c5a40" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.10:8080/\": dial tcp 10.217.0.10:8080: connect: connection refused" Jan 27 14:31:28 crc kubenswrapper[4698]: I0127 14:31:28.656822 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vz5fp\" (UID: \"fb1d6ca9-58c3-4d0f-9b6f-e9dad08632b8\") " pod="openshift-image-registry/image-registry-697d97f7c8-vz5fp" Jan 27 14:31:28 crc kubenswrapper[4698]: E0127 14:31:28.657330 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 14:31:29.157314434 +0000 UTC m=+144.834091899 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vz5fp" (UID: "fb1d6ca9-58c3-4d0f-9b6f-e9dad08632b8") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 14:31:28 crc kubenswrapper[4698]: I0127 14:31:28.661718 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-x7rj5" event={"ID":"7a699460-e5aa-401d-b2c4-003604099924","Type":"ContainerStarted","Data":"8c3c3a78cdf63a26bd61f5e6d1b218a4f221b84cbb9fe65bc04ccb1ea3958bb2"} Jan 27 14:31:28 crc kubenswrapper[4698]: I0127 14:31:28.662313 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-558db77b4-x7rj5" Jan 27 14:31:28 crc kubenswrapper[4698]: I0127 14:31:28.672926 4698 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-x7rj5 container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.15:6443/healthz\": dial tcp 10.217.0.15:6443: connect: connection refused" start-of-body= Jan 27 14:31:28 crc kubenswrapper[4698]: I0127 14:31:28.673003 4698 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-x7rj5" podUID="7a699460-e5aa-401d-b2c4-003604099924" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.15:6443/healthz\": dial tcp 10.217.0.15:6443: connect: connection refused" Jan 27 14:31:28 crc kubenswrapper[4698]: I0127 14:31:28.675971 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-6kdpx" event={"ID":"557a3dbe-140e-4e30-bad7-f2c7e828d446","Type":"ContainerStarted","Data":"f5c1b0e3794ea69b1aa8a20d1b833974623d98a458c625ef1605782c7145d41b"} Jan 27 14:31:28 crc kubenswrapper[4698]: I0127 14:31:28.679011 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-mjdln" event={"ID":"5f4a113f-a4c9-423f-8d34-fb05c1f776af","Type":"ContainerStarted","Data":"7a21a9d052e9e8000546605415340c05adef43c413b7905b4def06b454f3bc7b"} Jan 27 14:31:28 crc kubenswrapper[4698]: I0127 14:31:28.684044 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-qw9xb" event={"ID":"b744b41b-da1c-44d2-a538-e1d8bfe5c144","Type":"ContainerStarted","Data":"ff096a1becb2c7194f31df340cc47a4cd48769fd1f6170710d0948fc0fc5db8c"} Jan 27 14:31:28 crc kubenswrapper[4698]: I0127 14:31:28.688939 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-qb472" event={"ID":"96b9797e-289f-45f3-8707-bc899a687aa1","Type":"ContainerStarted","Data":"74695ed1f884aa25c5163238f97ddb1a4b4b1bab7ad6d2b3a3967ca5f60eef30"} Jan 27 14:31:28 crc kubenswrapper[4698]: I0127 14:31:28.702592 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-bz9jw" event={"ID":"13b718e9-6bf1-4e81-91ce-feea3116fd97","Type":"ContainerStarted","Data":"7d2d974dcace9ef5d176a90c94105bd35e3d4ad0da651f4aaa7607c39b0d4606"} Jan 27 14:31:28 crc kubenswrapper[4698]: I0127 14:31:28.708391 4698 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-console/console-f9d7485db-cvnrn" event={"ID":"12b42d9a-df65-4a89-8961-1fa7f9b8a14b","Type":"ContainerStarted","Data":"00f26de89736c3a679b65fcf1748b241a924eb949e5a0c11bdda3aee6a834f9f"} Jan 27 14:31:28 crc kubenswrapper[4698]: I0127 14:31:28.710924 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-g4htw" event={"ID":"27a960e1-9cd1-41a9-ac06-0ac66ecb12f1","Type":"ContainerStarted","Data":"3f59e04dccd452c78797de2145bc837867288a9f60e90051d1bc0cc437705a1b"} Jan 27 14:31:28 crc kubenswrapper[4698]: W0127 14:31:28.737526 4698 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4bc4ed9d_df5c_4eee_82d1_d8c68e7bb3ff.slice/crio-f7319e6841cd87ac2083787f601b57c0d1d48fe7f93d4dd7d26bc597a34aaf06 WatchSource:0}: Error finding container f7319e6841cd87ac2083787f601b57c0d1d48fe7f93d4dd7d26bc597a34aaf06: Status 404 returned error can't find the container with id f7319e6841cd87ac2083787f601b57c0d1d48fe7f93d4dd7d26bc597a34aaf06 Jan 27 14:31:28 crc kubenswrapper[4698]: I0127 14:31:28.758908 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 14:31:28 crc kubenswrapper[4698]: E0127 14:31:28.759924 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 14:31:29.259907816 +0000 UTC m=+144.936685281 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 14:31:28 crc kubenswrapper[4698]: I0127 14:31:28.819253 4698 generic.go:334] "Generic (PLEG): container finished" podID="608093bb-ab9f-47bf-bf66-938266244574" containerID="5d16279b658ddda9b5db7d8308dded7abad46e7eb0365ea25b49453c7f5200cb" exitCode=0 Jan 27 14:31:28 crc kubenswrapper[4698]: I0127 14:31:28.819587 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-qtjcb" event={"ID":"608093bb-ab9f-47bf-bf66-938266244574","Type":"ContainerDied","Data":"5d16279b658ddda9b5db7d8308dded7abad46e7eb0365ea25b49453c7f5200cb"} Jan 27 14:31:28 crc kubenswrapper[4698]: I0127 14:31:28.842195 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-8bg4r" event={"ID":"79cd2a60-54ce-46a5-96cd-53bb078fa804","Type":"ContainerStarted","Data":"588452826b3dbb85c2a6ac5af6d3fd08f1965e0c320a2ccd3f2ccb320117b499"} Jan 27 14:31:28 crc kubenswrapper[4698]: I0127 14:31:28.849667 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-cfwgg" event={"ID":"7b8fe528-a188-48bd-8555-6dd2798122fe","Type":"ContainerStarted","Data":"cc5f9948f51a2df2cb06084ea84c07f59f6a4303836bcb5ccfa6c5f62fa5ddf3"} Jan 27 14:31:28 crc kubenswrapper[4698]: I0127 14:31:28.852221 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-h8hj7"] Jan 27 14:31:28 crc kubenswrapper[4698]: I0127 14:31:28.860226 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vz5fp\" (UID: \"fb1d6ca9-58c3-4d0f-9b6f-e9dad08632b8\") " pod="openshift-image-registry/image-registry-697d97f7c8-vz5fp" Jan 27 14:31:28 crc kubenswrapper[4698]: E0127 14:31:28.861101 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 14:31:29.361090801 +0000 UTC m=+145.037868266 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vz5fp" (UID: "fb1d6ca9-58c3-4d0f-9b6f-e9dad08632b8") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 14:31:28 crc kubenswrapper[4698]: I0127 14:31:28.866077 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-vrtzk"] Jan 27 14:31:28 crc kubenswrapper[4698]: I0127 14:31:28.875284 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-m8slw" event={"ID":"f4023d55-2b87-419a-be3e-bab987ba0841","Type":"ContainerStarted","Data":"88e216b1407d71e6eaefa84e1bd9729aba7be9ce26bd79aad802e1eab07a1189"} Jan 27 14:31:28 crc kubenswrapper[4698]: I0127 14:31:28.894954 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-dhnh8" event={"ID":"0a93337a-414f-4a6b-9cdb-4cb56092851a","Type":"ContainerStarted","Data":"48837c935202735aa8d24a9e45a84e7d16fd649cd5b8a0a719c696e467329653"} Jan 27 14:31:28 crc kubenswrapper[4698]: I0127 14:31:28.898892 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-wcmrv"] Jan 27 14:31:28 crc kubenswrapper[4698]: I0127 14:31:28.899976 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-p5fgd" event={"ID":"c8964e6d-30b9-4402-b132-105cb5a1695b","Type":"ContainerStarted","Data":"789fe8d9a233baa12ea3c30f508a67bbb920e477a703502073875796a0389bbc"} Jan 27 14:31:28 crc kubenswrapper[4698]: I0127 14:31:28.900537 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-p5fgd" Jan 27 14:31:28 crc kubenswrapper[4698]: I0127 14:31:28.901258 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-p4z9k" event={"ID":"37564592-b4e8-47fd-8b7f-b1d26254efa0","Type":"ContainerStarted","Data":"e49b2eeb1cad81d03b38386e88f688725e5000ae61419e8f644a3729f3a6ea39"} Jan 27 14:31:28 crc kubenswrapper[4698]: I0127 14:31:28.902977 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-pcr8w" event={"ID":"ff0c6e82-2e72-4776-b801-2cf427b72696","Type":"ContainerStarted","Data":"49187ac08e0a32843605f2116470cddd868b185f8b3cf0c63105ee7060f2d2f8"} Jan 27 14:31:28 crc kubenswrapper[4698]: I0127 14:31:28.904488 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-b8t54"] Jan 27 14:31:28 crc kubenswrapper[4698]: I0127 14:31:28.905111 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-6xmqh" event={"ID":"88376a00-d5b2-4d08-ae81-097d8134df27","Type":"ContainerStarted","Data":"db7597eca314aa952eb14438e4b26e7511930b296f650704f86e183da4027765"} Jan 27 14:31:28 crc kubenswrapper[4698]: I0127 14:31:28.908084 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-d9s2k" 
event={"ID":"f7ee1b04-33fb-452c-917e-ea08b3f489a4","Type":"ContainerStarted","Data":"4a632c25e90ea869a5a480b56520814371c44ddad8f24da7877c1301ace46189"} Jan 27 14:31:28 crc kubenswrapper[4698]: I0127 14:31:28.912564 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-d7vnt" event={"ID":"78829bfe-d678-496f-8bf5-28b5008758f0","Type":"ContainerStarted","Data":"7ef4b91d450f907ebcee16aa3df5a613f5e6af2c4fb620a8bae75e04b7486e41"} Jan 27 14:31:28 crc kubenswrapper[4698]: I0127 14:31:28.914020 4698 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-p5fgd container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.13:8443/healthz\": dial tcp 10.217.0.13:8443: connect: connection refused" start-of-body= Jan 27 14:31:28 crc kubenswrapper[4698]: I0127 14:31:28.914054 4698 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-p5fgd" podUID="c8964e6d-30b9-4402-b132-105cb5a1695b" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.13:8443/healthz\": dial tcp 10.217.0.13:8443: connect: connection refused" Jan 27 14:31:28 crc kubenswrapper[4698]: I0127 14:31:28.914964 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-879f6c89f-cj2hq" Jan 27 14:31:28 crc kubenswrapper[4698]: I0127 14:31:28.922844 4698 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-cj2hq container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.7:8443/healthz\": dial tcp 10.217.0.7:8443: connect: connection refused" start-of-body= Jan 27 14:31:28 crc kubenswrapper[4698]: I0127 14:31:28.922918 4698 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-cj2hq" podUID="3d3d75e2-1fec-4458-9cb7-3472250b0b49" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.7:8443/healthz\": dial tcp 10.217.0.7:8443: connect: connection refused" Jan 27 14:31:28 crc kubenswrapper[4698]: I0127 14:31:28.945448 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-558db77b4-x7rj5" podStartSLOduration=120.945421186 podStartE2EDuration="2m0.945421186s" podCreationTimestamp="2026-01-27 14:29:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:31:28.907943317 +0000 UTC m=+144.584720782" watchObservedRunningTime="2026-01-27 14:31:28.945421186 +0000 UTC m=+144.622198651" Jan 27 14:31:28 crc kubenswrapper[4698]: I0127 14:31:28.945643 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-hnfnv"] Jan 27 14:31:28 crc kubenswrapper[4698]: I0127 14:31:28.964682 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 14:31:28 crc kubenswrapper[4698]: E0127 14:31:28.964998 4698 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 14:31:29.464964637 +0000 UTC m=+145.141742102 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 14:31:28 crc kubenswrapper[4698]: I0127 14:31:28.965084 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vz5fp\" (UID: \"fb1d6ca9-58c3-4d0f-9b6f-e9dad08632b8\") " pod="openshift-image-registry/image-registry-697d97f7c8-vz5fp" Jan 27 14:31:28 crc kubenswrapper[4698]: E0127 14:31:28.965941 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 14:31:29.465928052 +0000 UTC m=+145.142705567 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vz5fp" (UID: "fb1d6ca9-58c3-4d0f-9b6f-e9dad08632b8") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 14:31:28 crc kubenswrapper[4698]: I0127 14:31:28.973591 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-zft44"] Jan 27 14:31:28 crc kubenswrapper[4698]: I0127 14:31:28.974616 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-7954f5f757-bdrpp" podStartSLOduration=120.974592529 podStartE2EDuration="2m0.974592529s" podCreationTimestamp="2026-01-27 14:29:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:31:28.9543703 +0000 UTC m=+144.631147765" watchObservedRunningTime="2026-01-27 14:31:28.974592529 +0000 UTC m=+144.651370024" Jan 27 14:31:29 crc kubenswrapper[4698]: I0127 14:31:29.002615 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-5444994796-8bg4r" podStartSLOduration=121.002594371 podStartE2EDuration="2m1.002594371s" podCreationTimestamp="2026-01-27 14:29:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:31:28.998389251 +0000 UTC m=+144.675166736" watchObservedRunningTime="2026-01-27 14:31:29.002594371 +0000 UTC m=+144.679371836" Jan 27 14:31:29 crc kubenswrapper[4698]: I0127 14:31:29.017473 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-qfxjx"] Jan 27 14:31:29 crc kubenswrapper[4698]: I0127 14:31:29.017511 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-kwgll"] Jan 27 14:31:29 crc kubenswrapper[4698]: I0127 14:31:29.067088 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 14:31:29 crc kubenswrapper[4698]: E0127 14:31:29.067564 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 14:31:29.567539939 +0000 UTC m=+145.244317404 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 14:31:29 crc kubenswrapper[4698]: I0127 14:31:29.067595 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5444994796-8bg4r" Jan 27 14:31:29 crc kubenswrapper[4698]: I0127 14:31:29.071612 4698 patch_prober.go:28] interesting pod/router-default-5444994796-8bg4r container/router namespace/openshift-ingress: Startup probe status=failure output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" start-of-body= Jan 27 14:31:29 crc kubenswrapper[4698]: I0127 14:31:29.071690 4698 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-8bg4r" podUID="79cd2a60-54ce-46a5-96cd-53bb078fa804" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" Jan 27 14:31:29 crc kubenswrapper[4698]: I0127 14:31:29.071812 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-879f6c89f-cj2hq" podStartSLOduration=121.071794351 podStartE2EDuration="2m1.071794351s" podCreationTimestamp="2026-01-27 14:29:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:31:29.062477787 +0000 UTC m=+144.739255262" watchObservedRunningTime="2026-01-27 14:31:29.071794351 +0000 UTC m=+144.748571816" Jan 27 14:31:29 crc kubenswrapper[4698]: I0127 14:31:29.112021 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-p5fgd" podStartSLOduration=121.111986341 podStartE2EDuration="2m1.111986341s" podCreationTimestamp="2026-01-27 14:29:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:31:29.102885653 +0000 UTC m=+144.779663158" watchObservedRunningTime="2026-01-27 
14:31:29.111986341 +0000 UTC m=+144.788763806" Jan 27 14:31:29 crc kubenswrapper[4698]: I0127 14:31:29.124103 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-msjx7"] Jan 27 14:31:29 crc kubenswrapper[4698]: I0127 14:31:29.131752 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-c49dc"] Jan 27 14:31:29 crc kubenswrapper[4698]: I0127 14:31:29.145090 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-c5862"] Jan 27 14:31:29 crc kubenswrapper[4698]: I0127 14:31:29.147233 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-hmtx5" podStartSLOduration=121.147213483 podStartE2EDuration="2m1.147213483s" podCreationTimestamp="2026-01-27 14:29:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:31:29.143766902 +0000 UTC m=+144.820544367" watchObservedRunningTime="2026-01-27 14:31:29.147213483 +0000 UTC m=+144.823990948" Jan 27 14:31:29 crc kubenswrapper[4698]: I0127 14:31:29.168245 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vz5fp\" (UID: \"fb1d6ca9-58c3-4d0f-9b6f-e9dad08632b8\") " pod="openshift-image-registry/image-registry-697d97f7c8-vz5fp" Jan 27 14:31:29 crc kubenswrapper[4698]: E0127 14:31:29.168672 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 14:31:29.668660403 +0000 UTC m=+145.345437868 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vz5fp" (UID: "fb1d6ca9-58c3-4d0f-9b6f-e9dad08632b8") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 14:31:29 crc kubenswrapper[4698]: W0127 14:31:29.179107 4698 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod912888aa_9826_4be1_a96a_315508a84cf9.slice/crio-5a432ca989c687fffadc93e4840e51aaaa581bf4cf573d3117534ea572a49de8 WatchSource:0}: Error finding container 5a432ca989c687fffadc93e4840e51aaaa581bf4cf573d3117534ea572a49de8: Status 404 returned error can't find the container with id 5a432ca989c687fffadc93e4840e51aaaa581bf4cf573d3117534ea572a49de8 Jan 27 14:31:29 crc kubenswrapper[4698]: I0127 14:31:29.190467 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-69f744f599-h864m" podStartSLOduration=121.190411091 podStartE2EDuration="2m1.190411091s" podCreationTimestamp="2026-01-27 14:29:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:31:29.189534259 +0000 UTC m=+144.866311724" watchObservedRunningTime="2026-01-27 14:31:29.190411091 +0000 UTC m=+144.867188556" Jan 27 14:31:29 crc kubenswrapper[4698]: W0127 14:31:29.191119 4698 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod83d47a27_37a3_420c_af6f_e02bcd53ec1a.slice/crio-e6cacb7c7497d4e7a55fdebd62b803e00444d785120378ef42921d1c7bde9efe WatchSource:0}: Error finding container e6cacb7c7497d4e7a55fdebd62b803e00444d785120378ef42921d1c7bde9efe: Status 404 returned error can't find the container with id e6cacb7c7497d4e7a55fdebd62b803e00444d785120378ef42921d1c7bde9efe Jan 27 14:31:29 crc kubenswrapper[4698]: I0127 14:31:29.202747 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-hwgzv"] Jan 27 14:31:29 crc kubenswrapper[4698]: W0127 14:31:29.214239 4698 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8b795163_b78c_4a56_9181_3243d6684eed.slice/crio-53a3de96a38aa86e4ba3e6d3a4d1e2423521540d13f0532406ae0ed3fe428720 WatchSource:0}: Error finding container 53a3de96a38aa86e4ba3e6d3a4d1e2423521540d13f0532406ae0ed3fe428720: Status 404 returned error can't find the container with id 53a3de96a38aa86e4ba3e6d3a4d1e2423521540d13f0532406ae0ed3fe428720 Jan 27 14:31:29 crc kubenswrapper[4698]: I0127 14:31:29.225172 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-d9s2k" podStartSLOduration=121.22515064 podStartE2EDuration="2m1.22515064s" podCreationTimestamp="2026-01-27 14:29:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:31:29.224873483 +0000 UTC m=+144.901650958" watchObservedRunningTime="2026-01-27 14:31:29.22515064 +0000 UTC m=+144.901928115" Jan 27 14:31:29 crc kubenswrapper[4698]: W0127 14:31:29.242221 4698 manager.go:1169] Failed to 
process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod48753b5b_f7f1_468a_982f_c7defe92fdcd.slice/crio-8d56d21f94d9437f798417c7e1ab6cd7bf1d248a8db21222b817534337f9f1f9 WatchSource:0}: Error finding container 8d56d21f94d9437f798417c7e1ab6cd7bf1d248a8db21222b817534337f9f1f9: Status 404 returned error can't find the container with id 8d56d21f94d9437f798417c7e1ab6cd7bf1d248a8db21222b817534337f9f1f9 Jan 27 14:31:29 crc kubenswrapper[4698]: I0127 14:31:29.245776 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-mswzh"] Jan 27 14:31:29 crc kubenswrapper[4698]: I0127 14:31:29.273605 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 14:31:29 crc kubenswrapper[4698]: E0127 14:31:29.273810 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 14:31:29.773787592 +0000 UTC m=+145.450565057 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 14:31:29 crc kubenswrapper[4698]: I0127 14:31:29.273872 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vz5fp\" (UID: \"fb1d6ca9-58c3-4d0f-9b6f-e9dad08632b8\") " pod="openshift-image-registry/image-registry-697d97f7c8-vz5fp" Jan 27 14:31:29 crc kubenswrapper[4698]: E0127 14:31:29.274168 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 14:31:29.774156541 +0000 UTC m=+145.450934016 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vz5fp" (UID: "fb1d6ca9-58c3-4d0f-9b6f-e9dad08632b8") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 14:31:29 crc kubenswrapper[4698]: I0127 14:31:29.375070 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 14:31:29 crc kubenswrapper[4698]: E0127 14:31:29.375423 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 14:31:29.875407438 +0000 UTC m=+145.552184903 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 14:31:29 crc kubenswrapper[4698]: I0127 14:31:29.477612 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vz5fp\" (UID: \"fb1d6ca9-58c3-4d0f-9b6f-e9dad08632b8\") " pod="openshift-image-registry/image-registry-697d97f7c8-vz5fp" Jan 27 14:31:29 crc kubenswrapper[4698]: E0127 14:31:29.478036 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 14:31:29.978015042 +0000 UTC m=+145.654792577 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vz5fp" (UID: "fb1d6ca9-58c3-4d0f-9b6f-e9dad08632b8") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 14:31:29 crc kubenswrapper[4698]: I0127 14:31:29.578089 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 14:31:29 crc kubenswrapper[4698]: E0127 14:31:29.578451 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 14:31:30.078359205 +0000 UTC m=+145.755136670 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 14:31:29 crc kubenswrapper[4698]: I0127 14:31:29.578807 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vz5fp\" (UID: \"fb1d6ca9-58c3-4d0f-9b6f-e9dad08632b8\") " pod="openshift-image-registry/image-registry-697d97f7c8-vz5fp" Jan 27 14:31:29 crc kubenswrapper[4698]: E0127 14:31:29.579131 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 14:31:30.079123445 +0000 UTC m=+145.755900910 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vz5fp" (UID: "fb1d6ca9-58c3-4d0f-9b6f-e9dad08632b8") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 14:31:29 crc kubenswrapper[4698]: I0127 14:31:29.679759 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 14:31:29 crc kubenswrapper[4698]: E0127 14:31:29.679966 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 14:31:30.17991315 +0000 UTC m=+145.856690615 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 14:31:29 crc kubenswrapper[4698]: I0127 14:31:29.680447 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vz5fp\" (UID: \"fb1d6ca9-58c3-4d0f-9b6f-e9dad08632b8\") " pod="openshift-image-registry/image-registry-697d97f7c8-vz5fp" Jan 27 14:31:29 crc kubenswrapper[4698]: E0127 14:31:29.680819 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 14:31:30.180808224 +0000 UTC m=+145.857585769 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vz5fp" (UID: "fb1d6ca9-58c3-4d0f-9b6f-e9dad08632b8") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 14:31:29 crc kubenswrapper[4698]: I0127 14:31:29.781473 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 14:31:29 crc kubenswrapper[4698]: E0127 14:31:29.781880 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 14:31:30.281861256 +0000 UTC m=+145.958638721 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 14:31:29 crc kubenswrapper[4698]: I0127 14:31:29.883166 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vz5fp\" (UID: \"fb1d6ca9-58c3-4d0f-9b6f-e9dad08632b8\") " pod="openshift-image-registry/image-registry-697d97f7c8-vz5fp" Jan 27 14:31:29 crc kubenswrapper[4698]: E0127 14:31:29.883560 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 14:31:30.383539664 +0000 UTC m=+146.060317169 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vz5fp" (UID: "fb1d6ca9-58c3-4d0f-9b6f-e9dad08632b8") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 14:31:29 crc kubenswrapper[4698]: I0127 14:31:29.919350 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-zft44" event={"ID":"d758ca6f-86ae-44b7-bfcb-8e7f2e2205e8","Type":"ContainerStarted","Data":"c3c2a07ac6aa02c46e2c4dee013b2143c69b7219fcf57701dd91fc5ea98fbc8c"} Jan 27 14:31:29 crc kubenswrapper[4698]: I0127 14:31:29.920887 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-h8hj7" event={"ID":"f8de0774-06e2-4438-b5ef-ad70f998b22c","Type":"ContainerStarted","Data":"27fd40d638341403a81ca53359fb1258002d0ecf67ade3148dc6e9ae1dbdfe7c"} Jan 27 14:31:29 crc kubenswrapper[4698]: I0127 14:31:29.923921 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-z5cmj" event={"ID":"3f2c6212-a0a5-4d50-aa1c-63fc63296dab","Type":"ContainerStarted","Data":"410fd2db42aa05844b9b3e205896fe160c40c80c8f1013049aeb770aa50beed4"} Jan 27 14:31:29 crc kubenswrapper[4698]: I0127 14:31:29.926494 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-mswzh" event={"ID":"019c0321-025d-4bf5-a48c-fd0e707b797c","Type":"ContainerStarted","Data":"7053894fe69006b9d6f8084f87dafdbffc298826f4af339626a4d2f8316909c0"} Jan 27 14:31:29 crc kubenswrapper[4698]: I0127 14:31:29.930925 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-c5862" event={"ID":"83d47a27-37a3-420c-af6f-e02bcd53ec1a","Type":"ContainerStarted","Data":"e6cacb7c7497d4e7a55fdebd62b803e00444d785120378ef42921d1c7bde9efe"} Jan 27 14:31:29 crc kubenswrapper[4698]: I0127 14:31:29.937514 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-dhnh8" event={"ID":"0a93337a-414f-4a6b-9cdb-4cb56092851a","Type":"ContainerStarted","Data":"3fc516c8a040940e5e2097ef28a9a3db5543ceabe37c2423c19a426cdeb831bd"} Jan 27 14:31:29 crc kubenswrapper[4698]: I0127 14:31:29.940780 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-z5cmj" podStartSLOduration=121.94076198 podStartE2EDuration="2m1.94076198s" podCreationTimestamp="2026-01-27 14:29:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:31:29.938893732 +0000 UTC m=+145.615671197" watchObservedRunningTime="2026-01-27 14:31:29.94076198 +0000 UTC m=+145.617539445" Jan 27 14:31:29 crc kubenswrapper[4698]: I0127 14:31:29.942156 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-qb472" event={"ID":"96b9797e-289f-45f3-8707-bc899a687aa1","Type":"ContainerStarted","Data":"2fde2c2c0b4329248a4d28f5fa44b3abadc8f7300c1547297aa79d1ca763560b"} Jan 27 14:31:29 crc kubenswrapper[4698]: I0127 14:31:29.942195 4698 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-qb472" event={"ID":"96b9797e-289f-45f3-8707-bc899a687aa1","Type":"ContainerStarted","Data":"208d879805534a3f086cd2284a95056a371f3dca9a81d5796caa5d416e9145f8"} Jan 27 14:31:29 crc kubenswrapper[4698]: I0127 14:31:29.945297 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-vrtzk" event={"ID":"27510e33-967a-4675-b5bf-afd141421399","Type":"ContainerStarted","Data":"cbeae8082de217ea0d176b753d84316b8efc2283db481f59cb3eb3766fafcdee"} Jan 27 14:31:29 crc kubenswrapper[4698]: I0127 14:31:29.945351 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-vrtzk" event={"ID":"27510e33-967a-4675-b5bf-afd141421399","Type":"ContainerStarted","Data":"7d3e299c444fb59808e0720012d80c1b1f4acd5d33afc5d9ccbd87d1a03d677b"} Jan 27 14:31:29 crc kubenswrapper[4698]: I0127 14:31:29.967177 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-bz9jw" event={"ID":"13b718e9-6bf1-4e81-91ce-feea3116fd97","Type":"ContainerStarted","Data":"52b63283b357cc38cf1d71999282e88bcc60e495eb181110ea53d66e9a6a44e5"} Jan 27 14:31:29 crc kubenswrapper[4698]: I0127 14:31:29.968341 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-58897d9998-bz9jw" Jan 27 14:31:29 crc kubenswrapper[4698]: I0127 14:31:29.969756 4698 patch_prober.go:28] interesting pod/console-operator-58897d9998-bz9jw container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.20:8443/readyz\": dial tcp 10.217.0.20:8443: connect: connection refused" start-of-body= Jan 27 14:31:29 crc kubenswrapper[4698]: I0127 14:31:29.969793 4698 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-bz9jw" podUID="13b718e9-6bf1-4e81-91ce-feea3116fd97" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.20:8443/readyz\": dial tcp 10.217.0.20:8443: connect: connection refused" Jan 27 14:31:29 crc kubenswrapper[4698]: I0127 14:31:29.971376 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29492070-z2sp4" event={"ID":"4bc4ed9d-df5c-4eee-82d1-d8c68e7bb3ff","Type":"ContainerStarted","Data":"16fe7d071419be8091a09eea3ba007ffe94a3b20a3999fe4213f622b0287d995"} Jan 27 14:31:29 crc kubenswrapper[4698]: I0127 14:31:29.971416 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29492070-z2sp4" event={"ID":"4bc4ed9d-df5c-4eee-82d1-d8c68e7bb3ff","Type":"ContainerStarted","Data":"f7319e6841cd87ac2083787f601b57c0d1d48fe7f93d4dd7d26bc597a34aaf06"} Jan 27 14:31:29 crc kubenswrapper[4698]: I0127 14:31:29.979448 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-b8t54" event={"ID":"97edac57-d351-482c-919b-d12bce71f637","Type":"ContainerStarted","Data":"596040ac4d3579210e518bff9842acf1aa7ce0d13ed76a2e91973fdf0bfcd7ad"} Jan 27 14:31:29 crc kubenswrapper[4698]: I0127 14:31:29.982989 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-qfxjx" 
event={"ID":"a11f89bd-147f-4b21-b83f-6b86727ecc2e","Type":"ContainerStarted","Data":"91615b548f5b2e0520d25a4f71752350a2c551cbb27fa8de9c76ad3c77f34b7c"} Jan 27 14:31:29 crc kubenswrapper[4698]: I0127 14:31:29.983963 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 14:31:29 crc kubenswrapper[4698]: E0127 14:31:29.984481 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 14:31:30.484460882 +0000 UTC m=+146.161238347 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 14:31:29 crc kubenswrapper[4698]: I0127 14:31:29.990121 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-58897d9998-bz9jw" podStartSLOduration=121.990098291 podStartE2EDuration="2m1.990098291s" podCreationTimestamp="2026-01-27 14:29:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:31:29.988013846 +0000 UTC m=+145.664791321" watchObservedRunningTime="2026-01-27 14:31:29.990098291 +0000 UTC m=+145.666875756" Jan 27 14:31:29 crc kubenswrapper[4698]: I0127 14:31:29.997302 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-pwzb6" event={"ID":"682a4abd-5c9f-4b58-8090-9c78f10d3577","Type":"ContainerStarted","Data":"e03d85ef73df90c5194424c50e6a7f83bc3e5a6ad2f8baefb3e2226a4efce803"} Jan 27 14:31:29 crc kubenswrapper[4698]: I0127 14:31:29.999142 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-mjdln" event={"ID":"5f4a113f-a4c9-423f-8d34-fb05c1f776af","Type":"ContainerStarted","Data":"12da422ed43b2dd1bbb3045f47a9c58c40f420f22da7742adaac4b50992471df"} Jan 27 14:31:30 crc kubenswrapper[4698]: I0127 14:31:30.000859 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-qw9xb" event={"ID":"b744b41b-da1c-44d2-a538-e1d8bfe5c144","Type":"ContainerStarted","Data":"a9d042615277b01f752faffc07d89fd0152d006a5926ada0a32ae93b624774b5"} Jan 27 14:31:30 crc kubenswrapper[4698]: I0127 14:31:30.005076 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-msjx7" event={"ID":"8b795163-b78c-4a56-9181-3243d6684eed","Type":"ContainerStarted","Data":"53a3de96a38aa86e4ba3e6d3a4d1e2423521540d13f0532406ae0ed3fe428720"} Jan 27 14:31:30 crc kubenswrapper[4698]: I0127 14:31:30.005758 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29492070-z2sp4" 
podStartSLOduration=90.0057424 podStartE2EDuration="1m30.0057424s" podCreationTimestamp="2026-01-27 14:30:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:31:30.005471872 +0000 UTC m=+145.682249347" watchObservedRunningTime="2026-01-27 14:31:30.0057424 +0000 UTC m=+145.682519865" Jan 27 14:31:30 crc kubenswrapper[4698]: I0127 14:31:30.009296 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-hnfnv" event={"ID":"3a66b6ff-8485-46ca-8a12-ca7a75b63596","Type":"ContainerStarted","Data":"e66466f7f0fe7cfc28906f26cac5eefe231e42654bc066a201074ee752881e3e"} Jan 27 14:31:30 crc kubenswrapper[4698]: I0127 14:31:30.009357 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-hnfnv" event={"ID":"3a66b6ff-8485-46ca-8a12-ca7a75b63596","Type":"ContainerStarted","Data":"706a319e4cbb3d02c824b966f459f2b4e8e0048208e26d27cac04326c758a8ee"} Jan 27 14:31:30 crc kubenswrapper[4698]: I0127 14:31:30.013828 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-wcmrv" event={"ID":"bfd27e63-9504-4961-90d1-8c0056be6f31","Type":"ContainerStarted","Data":"1d099e530aa4ffa885e535adcf95bb0bf36767071261212f70b7d08e9b11c559"} Jan 27 14:31:30 crc kubenswrapper[4698]: I0127 14:31:30.013875 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-wcmrv" event={"ID":"bfd27e63-9504-4961-90d1-8c0056be6f31","Type":"ContainerStarted","Data":"eb130554c25e01fb45a1283104e1bc8eaf68c748160851a416e061b760197b42"} Jan 27 14:31:30 crc kubenswrapper[4698]: I0127 14:31:30.024515 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-z5f9l" event={"ID":"f57848ff-da41-4c6a-9586-c57676b73c90","Type":"ContainerStarted","Data":"d2fe6d0c32bd43f3ce4e0473bf5fe6e65fcd88ec0f027ba3fd85acf27abac0ed"} Jan 27 14:31:30 crc kubenswrapper[4698]: I0127 14:31:30.027064 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-mjdln" podStartSLOduration=122.027050117 podStartE2EDuration="2m2.027050117s" podCreationTimestamp="2026-01-27 14:29:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:31:30.025897506 +0000 UTC m=+145.702674971" watchObservedRunningTime="2026-01-27 14:31:30.027050117 +0000 UTC m=+145.703827582" Jan 27 14:31:30 crc kubenswrapper[4698]: I0127 14:31:30.028360 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-pcr8w" event={"ID":"ff0c6e82-2e72-4776-b801-2cf427b72696","Type":"ContainerStarted","Data":"1e112e71e3b36153f85d9f9e613bf48f5c459c58b3fc566a3b1e1799ba1d853d"} Jan 27 14:31:30 crc kubenswrapper[4698]: I0127 14:31:30.029121 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-pcr8w" Jan 27 14:31:30 crc kubenswrapper[4698]: I0127 14:31:30.033909 4698 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-pcr8w container/packageserver namespace/openshift-operator-lifecycle-manager: 
Readiness probe status=failure output="Get \"https://10.217.0.26:5443/healthz\": dial tcp 10.217.0.26:5443: connect: connection refused" start-of-body= Jan 27 14:31:30 crc kubenswrapper[4698]: I0127 14:31:30.033969 4698 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-pcr8w" podUID="ff0c6e82-2e72-4776-b801-2cf427b72696" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.26:5443/healthz\": dial tcp 10.217.0.26:5443: connect: connection refused" Jan 27 14:31:30 crc kubenswrapper[4698]: I0127 14:31:30.039523 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-g4htw" event={"ID":"27a960e1-9cd1-41a9-ac06-0ac66ecb12f1","Type":"ContainerStarted","Data":"d07476a24a666e1732922cc5cee377f9b83f6384631043a6192e24e94e610ab9"} Jan 27 14:31:30 crc kubenswrapper[4698]: I0127 14:31:30.050123 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-p5fgd" event={"ID":"c8964e6d-30b9-4402-b132-105cb5a1695b","Type":"ContainerStarted","Data":"14e848d33d0f4786239a63d3a84bfd67c1f315319adbe7d8241d81cc42d53c62"} Jan 27 14:31:30 crc kubenswrapper[4698]: I0127 14:31:30.051095 4698 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-p5fgd container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.13:8443/healthz\": dial tcp 10.217.0.13:8443: connect: connection refused" start-of-body= Jan 27 14:31:30 crc kubenswrapper[4698]: I0127 14:31:30.051183 4698 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-p5fgd" podUID="c8964e6d-30b9-4402-b132-105cb5a1695b" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.13:8443/healthz\": dial tcp 10.217.0.13:8443: connect: connection refused" Jan 27 14:31:30 crc kubenswrapper[4698]: I0127 14:31:30.053355 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-cfwgg" event={"ID":"7b8fe528-a188-48bd-8555-6dd2798122fe","Type":"ContainerStarted","Data":"03142422c652fb0497335353118450b2a783a518e681681b251c782b5b4aaf09"} Jan 27 14:31:30 crc kubenswrapper[4698]: I0127 14:31:30.055147 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-cvnrn" event={"ID":"12b42d9a-df65-4a89-8961-1fa7f9b8a14b","Type":"ContainerStarted","Data":"f590f042a61cb4bb1e5cbc1bf459326c05797a4ff6bf4c98b941727c8f6d7caa"} Jan 27 14:31:30 crc kubenswrapper[4698]: I0127 14:31:30.069408 4698 patch_prober.go:28] interesting pod/router-default-5444994796-8bg4r container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 27 14:31:30 crc kubenswrapper[4698]: [-]has-synced failed: reason withheld Jan 27 14:31:30 crc kubenswrapper[4698]: [+]process-running ok Jan 27 14:31:30 crc kubenswrapper[4698]: healthz check failed Jan 27 14:31:30 crc kubenswrapper[4698]: I0127 14:31:30.069442 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-kwgll" 
event={"ID":"537d845d-d98b-4168-b87b-d0231602f4e9","Type":"ContainerStarted","Data":"e6fcac00a750d4b79ea5d32624f8dbc0f8e09cdda4314d3af7ff767400b882ea"} Jan 27 14:31:30 crc kubenswrapper[4698]: I0127 14:31:30.069473 4698 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-8bg4r" podUID="79cd2a60-54ce-46a5-96cd-53bb078fa804" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 27 14:31:30 crc kubenswrapper[4698]: I0127 14:31:30.070814 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-g4htw" podStartSLOduration=122.07079205 podStartE2EDuration="2m2.07079205s" podCreationTimestamp="2026-01-27 14:29:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:31:30.069853825 +0000 UTC m=+145.746631290" watchObservedRunningTime="2026-01-27 14:31:30.07079205 +0000 UTC m=+145.747569515" Jan 27 14:31:30 crc kubenswrapper[4698]: I0127 14:31:30.071358 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-pwzb6" podStartSLOduration=6.071349785 podStartE2EDuration="6.071349785s" podCreationTimestamp="2026-01-27 14:31:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:31:30.04629226 +0000 UTC m=+145.723069725" watchObservedRunningTime="2026-01-27 14:31:30.071349785 +0000 UTC m=+145.748127250" Jan 27 14:31:30 crc kubenswrapper[4698]: I0127 14:31:30.071986 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-p4z9k" event={"ID":"37564592-b4e8-47fd-8b7f-b1d26254efa0","Type":"ContainerStarted","Data":"6651a38482f9a6a72a0abbfa5240c8c892023bfc699d530ef55214da9d6b697e"} Jan 27 14:31:30 crc kubenswrapper[4698]: I0127 14:31:30.076096 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-d7vnt" event={"ID":"78829bfe-d678-496f-8bf5-28b5008758f0","Type":"ContainerStarted","Data":"bc8efbca3233a8022e9aae7f36f4247a48421b9fbc7c28d30c127a019c1da426"} Jan 27 14:31:30 crc kubenswrapper[4698]: I0127 14:31:30.078122 4698 generic.go:334] "Generic (PLEG): container finished" podID="f4023d55-2b87-419a-be3e-bab987ba0841" containerID="c3277f80a9e3900648f8ceac20946a3f0d47dcb7ab1b35f3f22f9f375b862f74" exitCode=0 Jan 27 14:31:30 crc kubenswrapper[4698]: I0127 14:31:30.078259 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-m8slw" event={"ID":"f4023d55-2b87-419a-be3e-bab987ba0841","Type":"ContainerDied","Data":"c3277f80a9e3900648f8ceac20946a3f0d47dcb7ab1b35f3f22f9f375b862f74"} Jan 27 14:31:30 crc kubenswrapper[4698]: I0127 14:31:30.080475 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-6kdpx" event={"ID":"557a3dbe-140e-4e30-bad7-f2c7e828d446","Type":"ContainerStarted","Data":"e6103e80363487613d5f6a599ba0a92297b55779bbb4e1b041551b6851611d23"} Jan 27 14:31:30 crc kubenswrapper[4698]: I0127 14:31:30.081522 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-hwgzv" 
event={"ID":"48753b5b-f7f1-468a-982f-c7defe92fdcd","Type":"ContainerStarted","Data":"8d56d21f94d9437f798417c7e1ab6cd7bf1d248a8db21222b817534337f9f1f9"} Jan 27 14:31:30 crc kubenswrapper[4698]: I0127 14:31:30.082825 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-6xmqh" event={"ID":"88376a00-d5b2-4d08-ae81-097d8134df27","Type":"ContainerStarted","Data":"b09adb869b54f64672edf96dbe9701613187be3aeea5e502384f70e756a55a15"} Jan 27 14:31:30 crc kubenswrapper[4698]: I0127 14:31:30.085126 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vz5fp\" (UID: \"fb1d6ca9-58c3-4d0f-9b6f-e9dad08632b8\") " pod="openshift-image-registry/image-registry-697d97f7c8-vz5fp" Jan 27 14:31:30 crc kubenswrapper[4698]: E0127 14:31:30.089615 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 14:31:30.589599152 +0000 UTC m=+146.266376617 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vz5fp" (UID: "fb1d6ca9-58c3-4d0f-9b6f-e9dad08632b8") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 14:31:30 crc kubenswrapper[4698]: I0127 14:31:30.093469 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-pcr8w" podStartSLOduration=122.093445762 podStartE2EDuration="2m2.093445762s" podCreationTimestamp="2026-01-27 14:29:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:31:30.093293398 +0000 UTC m=+145.770070863" watchObservedRunningTime="2026-01-27 14:31:30.093445762 +0000 UTC m=+145.770223227" Jan 27 14:31:30 crc kubenswrapper[4698]: I0127 14:31:30.094080 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-qpzns" event={"ID":"77a18531-ffc7-42d9-bba7-78d72b032c39","Type":"ContainerStarted","Data":"8a6881c25ac225accf043c706d89cc84a43087f173a4b7369b33abc296c875e6"} Jan 27 14:31:30 crc kubenswrapper[4698]: I0127 14:31:30.099775 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-c49dc" event={"ID":"912888aa-9826-4be1-a96a-315508a84cf9","Type":"ContainerStarted","Data":"5a432ca989c687fffadc93e4840e51aaaa581bf4cf573d3117534ea572a49de8"} Jan 27 14:31:30 crc kubenswrapper[4698]: I0127 14:31:30.100200 4698 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-x7rj5 container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.15:6443/healthz\": dial tcp 10.217.0.15:6443: connect: connection refused" start-of-body= Jan 27 14:31:30 crc kubenswrapper[4698]: I0127 14:31:30.100238 4698 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openshift-authentication/oauth-openshift-558db77b4-x7rj5" podUID="7a699460-e5aa-401d-b2c4-003604099924" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.15:6443/healthz\": dial tcp 10.217.0.15:6443: connect: connection refused" Jan 27 14:31:30 crc kubenswrapper[4698]: I0127 14:31:30.100541 4698 patch_prober.go:28] interesting pod/downloads-7954f5f757-bdrpp container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.10:8080/\": dial tcp 10.217.0.10:8080: connect: connection refused" start-of-body= Jan 27 14:31:30 crc kubenswrapper[4698]: I0127 14:31:30.100565 4698 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-bdrpp" podUID="64b274f6-5293-4c0e-a51a-dca8518c5a40" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.10:8080/\": dial tcp 10.217.0.10:8080: connect: connection refused" Jan 27 14:31:30 crc kubenswrapper[4698]: I0127 14:31:30.105846 4698 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-cj2hq container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.7:8443/healthz\": dial tcp 10.217.0.7:8443: connect: connection refused" start-of-body= Jan 27 14:31:30 crc kubenswrapper[4698]: I0127 14:31:30.105902 4698 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-cj2hq" podUID="3d3d75e2-1fec-4458-9cb7-3472250b0b49" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.7:8443/healthz\": dial tcp 10.217.0.7:8443: connect: connection refused" Jan 27 14:31:30 crc kubenswrapper[4698]: I0127 14:31:30.124083 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-f9d7485db-cvnrn" podStartSLOduration=122.124052033 podStartE2EDuration="2m2.124052033s" podCreationTimestamp="2026-01-27 14:29:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:31:30.111438543 +0000 UTC m=+145.788216038" watchObservedRunningTime="2026-01-27 14:31:30.124052033 +0000 UTC m=+145.800829528" Jan 27 14:31:30 crc kubenswrapper[4698]: I0127 14:31:30.166451 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-b45778765-6xmqh" podStartSLOduration=122.1664343 podStartE2EDuration="2m2.1664343s" podCreationTimestamp="2026-01-27 14:29:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:31:30.166093542 +0000 UTC m=+145.842871027" watchObservedRunningTime="2026-01-27 14:31:30.1664343 +0000 UTC m=+145.843211765" Jan 27 14:31:30 crc kubenswrapper[4698]: I0127 14:31:30.187810 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 14:31:30 crc kubenswrapper[4698]: E0127 14:31:30.188133 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" 
failed. No retries permitted until 2026-01-27 14:31:30.688104408 +0000 UTC m=+146.364881873 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 14:31:30 crc kubenswrapper[4698]: I0127 14:31:30.188492 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vz5fp\" (UID: \"fb1d6ca9-58c3-4d0f-9b6f-e9dad08632b8\") " pod="openshift-image-registry/image-registry-697d97f7c8-vz5fp" Jan 27 14:31:30 crc kubenswrapper[4698]: E0127 14:31:30.189675 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 14:31:30.689655648 +0000 UTC m=+146.366433213 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vz5fp" (UID: "fb1d6ca9-58c3-4d0f-9b6f-e9dad08632b8") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 14:31:30 crc kubenswrapper[4698]: I0127 14:31:30.229310 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-d7vnt" podStartSLOduration=122.229293434 podStartE2EDuration="2m2.229293434s" podCreationTimestamp="2026-01-27 14:29:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:31:30.228132865 +0000 UTC m=+145.904910340" watchObservedRunningTime="2026-01-27 14:31:30.229293434 +0000 UTC m=+145.906070899" Jan 27 14:31:30 crc kubenswrapper[4698]: I0127 14:31:30.253085 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-cfwgg" podStartSLOduration=122.253063386 podStartE2EDuration="2m2.253063386s" podCreationTimestamp="2026-01-27 14:29:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:31:30.251337901 +0000 UTC m=+145.928115366" watchObservedRunningTime="2026-01-27 14:31:30.253063386 +0000 UTC m=+145.929840851" Jan 27 14:31:30 crc kubenswrapper[4698]: I0127 14:31:30.296444 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 14:31:30 crc kubenswrapper[4698]: I0127 14:31:30.297499 4698 pod_startup_latency_tracker.go:104] 
"Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-5694c8668f-qpzns" podStartSLOduration=122.297476947 podStartE2EDuration="2m2.297476947s" podCreationTimestamp="2026-01-27 14:29:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:31:30.279538798 +0000 UTC m=+145.956316253" watchObservedRunningTime="2026-01-27 14:31:30.297476947 +0000 UTC m=+145.974254412" Jan 27 14:31:30 crc kubenswrapper[4698]: E0127 14:31:30.298028 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 14:31:30.798009681 +0000 UTC m=+146.474787146 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 14:31:30 crc kubenswrapper[4698]: I0127 14:31:30.398200 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vz5fp\" (UID: \"fb1d6ca9-58c3-4d0f-9b6f-e9dad08632b8\") " pod="openshift-image-registry/image-registry-697d97f7c8-vz5fp" Jan 27 14:31:30 crc kubenswrapper[4698]: E0127 14:31:30.398625 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 14:31:30.898605671 +0000 UTC m=+146.575383216 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vz5fp" (UID: "fb1d6ca9-58c3-4d0f-9b6f-e9dad08632b8") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 14:31:30 crc kubenswrapper[4698]: I0127 14:31:30.503928 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 14:31:30 crc kubenswrapper[4698]: E0127 14:31:30.504347 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 14:31:31.004322326 +0000 UTC m=+146.681099791 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 14:31:30 crc kubenswrapper[4698]: I0127 14:31:30.504596 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vz5fp\" (UID: \"fb1d6ca9-58c3-4d0f-9b6f-e9dad08632b8\") " pod="openshift-image-registry/image-registry-697d97f7c8-vz5fp" Jan 27 14:31:30 crc kubenswrapper[4698]: E0127 14:31:30.505050 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 14:31:31.005041704 +0000 UTC m=+146.681819169 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vz5fp" (UID: "fb1d6ca9-58c3-4d0f-9b6f-e9dad08632b8") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 14:31:30 crc kubenswrapper[4698]: I0127 14:31:30.605719 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 14:31:30 crc kubenswrapper[4698]: E0127 14:31:30.606237 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 14:31:31.10621401 +0000 UTC m=+146.782991485 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 14:31:30 crc kubenswrapper[4698]: I0127 14:31:30.707234 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vz5fp\" (UID: \"fb1d6ca9-58c3-4d0f-9b6f-e9dad08632b8\") " pod="openshift-image-registry/image-registry-697d97f7c8-vz5fp" Jan 27 14:31:30 crc kubenswrapper[4698]: E0127 14:31:30.707592 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 14:31:31.20757672 +0000 UTC m=+146.884354185 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vz5fp" (UID: "fb1d6ca9-58c3-4d0f-9b6f-e9dad08632b8") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 14:31:30 crc kubenswrapper[4698]: I0127 14:31:30.808270 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 14:31:30 crc kubenswrapper[4698]: E0127 14:31:30.808446 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 14:31:31.308414476 +0000 UTC m=+146.985191941 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 14:31:30 crc kubenswrapper[4698]: I0127 14:31:30.808552 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vz5fp\" (UID: \"fb1d6ca9-58c3-4d0f-9b6f-e9dad08632b8\") " pod="openshift-image-registry/image-registry-697d97f7c8-vz5fp" Jan 27 14:31:30 crc kubenswrapper[4698]: E0127 14:31:30.808994 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 14:31:31.308983901 +0000 UTC m=+146.985761366 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vz5fp" (UID: "fb1d6ca9-58c3-4d0f-9b6f-e9dad08632b8") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 14:31:30 crc kubenswrapper[4698]: I0127 14:31:30.909697 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 14:31:30 crc kubenswrapper[4698]: E0127 14:31:30.910014 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 14:31:31.409984272 +0000 UTC m=+147.086761737 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 14:31:30 crc kubenswrapper[4698]: I0127 14:31:30.910179 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vz5fp\" (UID: \"fb1d6ca9-58c3-4d0f-9b6f-e9dad08632b8\") " pod="openshift-image-registry/image-registry-697d97f7c8-vz5fp" Jan 27 14:31:30 crc kubenswrapper[4698]: E0127 14:31:30.910568 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 14:31:31.410551107 +0000 UTC m=+147.087328572 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vz5fp" (UID: "fb1d6ca9-58c3-4d0f-9b6f-e9dad08632b8") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 14:31:31 crc kubenswrapper[4698]: I0127 14:31:31.011135 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 14:31:31 crc kubenswrapper[4698]: E0127 14:31:31.011302 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 14:31:31.51126413 +0000 UTC m=+147.188041595 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 14:31:31 crc kubenswrapper[4698]: I0127 14:31:31.011770 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vz5fp\" (UID: \"fb1d6ca9-58c3-4d0f-9b6f-e9dad08632b8\") " pod="openshift-image-registry/image-registry-697d97f7c8-vz5fp" Jan 27 14:31:31 crc kubenswrapper[4698]: E0127 14:31:31.012161 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 14:31:31.512144523 +0000 UTC m=+147.188921988 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vz5fp" (UID: "fb1d6ca9-58c3-4d0f-9b6f-e9dad08632b8") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 14:31:31 crc kubenswrapper[4698]: I0127 14:31:31.067090 4698 patch_prober.go:28] interesting pod/router-default-5444994796-8bg4r container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 27 14:31:31 crc kubenswrapper[4698]: [-]has-synced failed: reason withheld Jan 27 14:31:31 crc kubenswrapper[4698]: [+]process-running ok Jan 27 14:31:31 crc kubenswrapper[4698]: healthz check failed Jan 27 14:31:31 crc kubenswrapper[4698]: I0127 14:31:31.067183 4698 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-8bg4r" podUID="79cd2a60-54ce-46a5-96cd-53bb078fa804" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 27 14:31:31 crc kubenswrapper[4698]: I0127 14:31:31.107515 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-hwgzv" event={"ID":"48753b5b-f7f1-468a-982f-c7defe92fdcd","Type":"ContainerStarted","Data":"4f68d001617d4d6019653fba950550d5da0867e77c8e869a9d9f70d7b457a30c"} Jan 27 14:31:31 crc kubenswrapper[4698]: I0127 14:31:31.109579 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-qfxjx" event={"ID":"a11f89bd-147f-4b21-b83f-6b86727ecc2e","Type":"ContainerStarted","Data":"e66556b898734db5587d2ee64f4e0343836ff1d97bcc40939128ec7e736a9f8f"} Jan 27 14:31:31 crc kubenswrapper[4698]: I0127 14:31:31.110388 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-qfxjx" Jan 27 14:31:31 crc kubenswrapper[4698]: I0127 14:31:31.112261 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started 
for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 14:31:31 crc kubenswrapper[4698]: I0127 14:31:31.112777 4698 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-qfxjx container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.25:8443/healthz\": dial tcp 10.217.0.25:8443: connect: connection refused" start-of-body= Jan 27 14:31:31 crc kubenswrapper[4698]: I0127 14:31:31.112873 4698 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-qfxjx" podUID="a11f89bd-147f-4b21-b83f-6b86727ecc2e" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.25:8443/healthz\": dial tcp 10.217.0.25:8443: connect: connection refused" Jan 27 14:31:31 crc kubenswrapper[4698]: E0127 14:31:31.113120 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 14:31:31.613091373 +0000 UTC m=+147.289868888 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 14:31:31 crc kubenswrapper[4698]: I0127 14:31:31.114806 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-6kdpx" event={"ID":"557a3dbe-140e-4e30-bad7-f2c7e828d446","Type":"ContainerStarted","Data":"afb5509eab63ac07d031d8c698c0caaac07614521656bba0dee075ad63aa315e"} Jan 27 14:31:31 crc kubenswrapper[4698]: I0127 14:31:31.117026 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-qw9xb" event={"ID":"b744b41b-da1c-44d2-a538-e1d8bfe5c144","Type":"ContainerStarted","Data":"b29236b7ae0867eee9e4c866842ebcb46c09391976736f45274eb97dc72256f8"} Jan 27 14:31:31 crc kubenswrapper[4698]: I0127 14:31:31.119312 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-msjx7" event={"ID":"8b795163-b78c-4a56-9181-3243d6684eed","Type":"ContainerStarted","Data":"e222a4829535229875c4811c456d12fe0a7c0bd4d7172b89b0b3a69e6dc9fa91"} Jan 27 14:31:31 crc kubenswrapper[4698]: I0127 14:31:31.119352 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-msjx7" event={"ID":"8b795163-b78c-4a56-9181-3243d6684eed","Type":"ContainerStarted","Data":"6442d35be1abe407185077bc3f52873fe94854381f730967d44c5a73470fc03a"} Jan 27 14:31:31 crc kubenswrapper[4698]: I0127 14:31:31.119475 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-msjx7" Jan 27 14:31:31 crc kubenswrapper[4698]: I0127 14:31:31.121336 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-mswzh" 
event={"ID":"019c0321-025d-4bf5-a48c-fd0e707b797c","Type":"ContainerStarted","Data":"4d8ea11416db2e70c46853bd6ccd0822108a8470f1c0b18841c33efa5169ff06"} Jan 27 14:31:31 crc kubenswrapper[4698]: I0127 14:31:31.123049 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-c49dc" event={"ID":"912888aa-9826-4be1-a96a-315508a84cf9","Type":"ContainerStarted","Data":"3ba073e6756238329043e0d5ec3e2215c34ff79dd1c7b09b723dcbf8adef3399"} Jan 27 14:31:31 crc kubenswrapper[4698]: I0127 14:31:31.123295 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-c49dc" Jan 27 14:31:31 crc kubenswrapper[4698]: I0127 14:31:31.124375 4698 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-c49dc container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.37:8443/healthz\": dial tcp 10.217.0.37:8443: connect: connection refused" start-of-body= Jan 27 14:31:31 crc kubenswrapper[4698]: I0127 14:31:31.124508 4698 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-c49dc" podUID="912888aa-9826-4be1-a96a-315508a84cf9" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.37:8443/healthz\": dial tcp 10.217.0.37:8443: connect: connection refused" Jan 27 14:31:31 crc kubenswrapper[4698]: I0127 14:31:31.125967 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-m8slw" event={"ID":"f4023d55-2b87-419a-be3e-bab987ba0841","Type":"ContainerStarted","Data":"bb4730d6cb9fe944cea0e3e254194011326ee700ec716ef541e70f923f805e5f"} Jan 27 14:31:31 crc kubenswrapper[4698]: I0127 14:31:31.126196 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-m8slw" Jan 27 14:31:31 crc kubenswrapper[4698]: I0127 14:31:31.129581 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-c5862" event={"ID":"83d47a27-37a3-420c-af6f-e02bcd53ec1a","Type":"ContainerStarted","Data":"a5db25175ecbb7cbf2f199288b256768c3c54c9db4955ddaa2c5aad02d379679"} Jan 27 14:31:31 crc kubenswrapper[4698]: I0127 14:31:31.131780 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-qtjcb" event={"ID":"608093bb-ab9f-47bf-bf66-938266244574","Type":"ContainerStarted","Data":"db1de8662711ea4f89249d81bacb71aafe214425898a890cf534fa976b79ab23"} Jan 27 14:31:31 crc kubenswrapper[4698]: I0127 14:31:31.134396 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-z5f9l" event={"ID":"f57848ff-da41-4c6a-9586-c57676b73c90","Type":"ContainerStarted","Data":"ef4497e947c25fa9466f9686b0992513dff5879b1cdd50d6aff6b41f22b47842"} Jan 27 14:31:31 crc kubenswrapper[4698]: I0127 14:31:31.137004 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-wcmrv" event={"ID":"bfd27e63-9504-4961-90d1-8c0056be6f31","Type":"ContainerStarted","Data":"aedda222273e4e2992bfd56a01888080dfe8e03e21aaba0fc9789f65aa6b0304"} Jan 27 14:31:31 crc kubenswrapper[4698]: I0127 14:31:31.140693 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/machine-config-controller-84d6567774-vrtzk" event={"ID":"27510e33-967a-4675-b5bf-afd141421399","Type":"ContainerStarted","Data":"586f7ab501859377d1a25eca07b5dfede596afeea86c507661350782fb075521"} Jan 27 14:31:31 crc kubenswrapper[4698]: I0127 14:31:31.142444 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-h8hj7" event={"ID":"f8de0774-06e2-4438-b5ef-ad70f998b22c","Type":"ContainerStarted","Data":"54489e8dbed50f1169cfdc769b97dd7907e83ffcbb482649f9809d5cb05a8e61"} Jan 27 14:31:31 crc kubenswrapper[4698]: I0127 14:31:31.144258 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-kwgll" event={"ID":"537d845d-d98b-4168-b87b-d0231602f4e9","Type":"ContainerStarted","Data":"fd915629ed1ef2e612e6bedd38d370cb4c8f28262640ac77f7b59329afb0378b"} Jan 27 14:31:31 crc kubenswrapper[4698]: I0127 14:31:31.144485 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-kwgll" Jan 27 14:31:31 crc kubenswrapper[4698]: I0127 14:31:31.145580 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-hwgzv" podStartSLOduration=7.145562462 podStartE2EDuration="7.145562462s" podCreationTimestamp="2026-01-27 14:31:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:31:31.143737873 +0000 UTC m=+146.820515338" watchObservedRunningTime="2026-01-27 14:31:31.145562462 +0000 UTC m=+146.822339927" Jan 27 14:31:31 crc kubenswrapper[4698]: I0127 14:31:31.146765 4698 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-kwgll container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.30:8080/healthz\": dial tcp 10.217.0.30:8080: connect: connection refused" start-of-body= Jan 27 14:31:31 crc kubenswrapper[4698]: I0127 14:31:31.146821 4698 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-kwgll" podUID="537d845d-d98b-4168-b87b-d0231602f4e9" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.30:8080/healthz\": dial tcp 10.217.0.30:8080: connect: connection refused" Jan 27 14:31:31 crc kubenswrapper[4698]: I0127 14:31:31.148507 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-b8t54" event={"ID":"97edac57-d351-482c-919b-d12bce71f637","Type":"ContainerStarted","Data":"6a69027765eccb97bd6c50e69afb5c41407b75fd2c64c9c2a1b95ef92fb47d28"} Jan 27 14:31:31 crc kubenswrapper[4698]: I0127 14:31:31.148566 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-b8t54" event={"ID":"97edac57-d351-482c-919b-d12bce71f637","Type":"ContainerStarted","Data":"5b7ae440f7563f200b98d2dfe968066c1ae80eeb1fb33678cdd7f864ffb5dff6"} Jan 27 14:31:31 crc kubenswrapper[4698]: I0127 14:31:31.152948 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-p4z9k" event={"ID":"37564592-b4e8-47fd-8b7f-b1d26254efa0","Type":"ContainerStarted","Data":"b3a9a7ce53b0b7125d35e359e06fcda1a955ae64a2eb406c3bd57791286f753f"} Jan 27 14:31:31 crc kubenswrapper[4698]: I0127 14:31:31.153840 4698 
patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-pcr8w container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.26:5443/healthz\": dial tcp 10.217.0.26:5443: connect: connection refused" start-of-body= Jan 27 14:31:31 crc kubenswrapper[4698]: I0127 14:31:31.153911 4698 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-pcr8w" podUID="ff0c6e82-2e72-4776-b801-2cf427b72696" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.26:5443/healthz\": dial tcp 10.217.0.26:5443: connect: connection refused" Jan 27 14:31:31 crc kubenswrapper[4698]: I0127 14:31:31.154022 4698 patch_prober.go:28] interesting pod/console-operator-58897d9998-bz9jw container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.20:8443/readyz\": dial tcp 10.217.0.20:8443: connect: connection refused" start-of-body= Jan 27 14:31:31 crc kubenswrapper[4698]: I0127 14:31:31.154072 4698 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-bz9jw" podUID="13b718e9-6bf1-4e81-91ce-feea3116fd97" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.20:8443/readyz\": dial tcp 10.217.0.20:8443: connect: connection refused" Jan 27 14:31:31 crc kubenswrapper[4698]: I0127 14:31:31.154325 4698 patch_prober.go:28] interesting pod/downloads-7954f5f757-bdrpp container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.10:8080/\": dial tcp 10.217.0.10:8080: connect: connection refused" start-of-body= Jan 27 14:31:31 crc kubenswrapper[4698]: I0127 14:31:31.154353 4698 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-bdrpp" podUID="64b274f6-5293-4c0e-a51a-dca8518c5a40" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.10:8080/\": dial tcp 10.217.0.10:8080: connect: connection refused" Jan 27 14:31:31 crc kubenswrapper[4698]: I0127 14:31:31.162401 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-76f77b778f-z5f9l" Jan 27 14:31:31 crc kubenswrapper[4698]: I0127 14:31:31.163351 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-76f77b778f-z5f9l" Jan 27 14:31:31 crc kubenswrapper[4698]: I0127 14:31:31.163431 4698 patch_prober.go:28] interesting pod/apiserver-76f77b778f-z5f9l container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="Get \"https://10.217.0.5:8443/livez\": dial tcp 10.217.0.5:8443: connect: connection refused" start-of-body= Jan 27 14:31:31 crc kubenswrapper[4698]: I0127 14:31:31.163466 4698 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-76f77b778f-z5f9l" podUID="f57848ff-da41-4c6a-9586-c57676b73c90" containerName="openshift-apiserver" probeResult="failure" output="Get \"https://10.217.0.5:8443/livez\": dial tcp 10.217.0.5:8443: connect: connection refused" Jan 27 14:31:31 crc kubenswrapper[4698]: I0127 14:31:31.171257 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-c49dc" podStartSLOduration=123.171237573 podStartE2EDuration="2m3.171237573s" podCreationTimestamp="2026-01-27 14:29:28 
+0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:31:31.169774254 +0000 UTC m=+146.846551749" watchObservedRunningTime="2026-01-27 14:31:31.171237573 +0000 UTC m=+146.848015038" Jan 27 14:31:31 crc kubenswrapper[4698]: I0127 14:31:31.210325 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-qfxjx" podStartSLOduration=123.210298884 podStartE2EDuration="2m3.210298884s" podCreationTimestamp="2026-01-27 14:29:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:31:31.205363335 +0000 UTC m=+146.882140800" watchObservedRunningTime="2026-01-27 14:31:31.210298884 +0000 UTC m=+146.887076359" Jan 27 14:31:31 crc kubenswrapper[4698]: I0127 14:31:31.219369 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vz5fp\" (UID: \"fb1d6ca9-58c3-4d0f-9b6f-e9dad08632b8\") " pod="openshift-image-registry/image-registry-697d97f7c8-vz5fp" Jan 27 14:31:31 crc kubenswrapper[4698]: E0127 14:31:31.222322 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 14:31:31.722301308 +0000 UTC m=+147.399078773 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vz5fp" (UID: "fb1d6ca9-58c3-4d0f-9b6f-e9dad08632b8") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 14:31:31 crc kubenswrapper[4698]: I0127 14:31:31.250808 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-qtjcb" podStartSLOduration=123.250790653 podStartE2EDuration="2m3.250790653s" podCreationTimestamp="2026-01-27 14:29:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:31:31.249429747 +0000 UTC m=+146.926207212" watchObservedRunningTime="2026-01-27 14:31:31.250790653 +0000 UTC m=+146.927568118" Jan 27 14:31:31 crc kubenswrapper[4698]: I0127 14:31:31.280939 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-p5fgd" Jan 27 14:31:31 crc kubenswrapper[4698]: I0127 14:31:31.317552 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-777779d784-h8hj7" podStartSLOduration=123.317527788 podStartE2EDuration="2m3.317527788s" podCreationTimestamp="2026-01-27 14:29:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:31:31.27517003 +0000 UTC m=+146.951947495" watchObservedRunningTime="2026-01-27 14:31:31.317527788 +0000 UTC m=+146.994305263" 
Jan 27 14:31:31 crc kubenswrapper[4698]: I0127 14:31:31.318018 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-msjx7" podStartSLOduration=7.31801152 podStartE2EDuration="7.31801152s" podCreationTimestamp="2026-01-27 14:31:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:31:31.31492484 +0000 UTC m=+146.991702315" watchObservedRunningTime="2026-01-27 14:31:31.31801152 +0000 UTC m=+146.994788985" Jan 27 14:31:31 crc kubenswrapper[4698]: I0127 14:31:31.321473 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 14:31:31 crc kubenswrapper[4698]: E0127 14:31:31.321623 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 14:31:31.821602914 +0000 UTC m=+147.498380379 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 14:31:31 crc kubenswrapper[4698]: I0127 14:31:31.321766 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vz5fp\" (UID: \"fb1d6ca9-58c3-4d0f-9b6f-e9dad08632b8\") " pod="openshift-image-registry/image-registry-697d97f7c8-vz5fp" Jan 27 14:31:31 crc kubenswrapper[4698]: E0127 14:31:31.322127 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 14:31:31.822108438 +0000 UTC m=+147.498885903 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vz5fp" (UID: "fb1d6ca9-58c3-4d0f-9b6f-e9dad08632b8") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 14:31:31 crc kubenswrapper[4698]: I0127 14:31:31.341671 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-wcmrv" podStartSLOduration=123.341625108 podStartE2EDuration="2m3.341625108s" podCreationTimestamp="2026-01-27 14:29:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:31:31.336712379 +0000 UTC m=+147.013489864" watchObservedRunningTime="2026-01-27 14:31:31.341625108 +0000 UTC m=+147.018402573" Jan 27 14:31:31 crc kubenswrapper[4698]: I0127 14:31:31.352576 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-vrtzk" podStartSLOduration=123.352556994 podStartE2EDuration="2m3.352556994s" podCreationTimestamp="2026-01-27 14:29:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:31:31.352182013 +0000 UTC m=+147.028959498" watchObservedRunningTime="2026-01-27 14:31:31.352556994 +0000 UTC m=+147.029334459" Jan 27 14:31:31 crc kubenswrapper[4698]: I0127 14:31:31.409006 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-6kdpx" podStartSLOduration=123.408981579 podStartE2EDuration="2m3.408981579s" podCreationTimestamp="2026-01-27 14:29:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:31:31.408507717 +0000 UTC m=+147.085285192" watchObservedRunningTime="2026-01-27 14:31:31.408981579 +0000 UTC m=+147.085759044" Jan 27 14:31:31 crc kubenswrapper[4698]: I0127 14:31:31.411279 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-mswzh" podStartSLOduration=123.411259899 podStartE2EDuration="2m3.411259899s" podCreationTimestamp="2026-01-27 14:29:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:31:31.36999921 +0000 UTC m=+147.046776675" watchObservedRunningTime="2026-01-27 14:31:31.411259899 +0000 UTC m=+147.088037364" Jan 27 14:31:31 crc kubenswrapper[4698]: I0127 14:31:31.423080 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 14:31:31 crc kubenswrapper[4698]: E0127 14:31:31.423415 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b 
nodeName:}" failed. No retries permitted until 2026-01-27 14:31:31.923398876 +0000 UTC m=+147.600176341 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 14:31:31 crc kubenswrapper[4698]: I0127 14:31:31.446780 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-7777fb866f-m8slw" podStartSLOduration=123.446759677 podStartE2EDuration="2m3.446759677s" podCreationTimestamp="2026-01-27 14:29:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:31:31.443849191 +0000 UTC m=+147.120626666" watchObservedRunningTime="2026-01-27 14:31:31.446759677 +0000 UTC m=+147.123537152" Jan 27 14:31:31 crc kubenswrapper[4698]: I0127 14:31:31.493546 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-76f77b778f-z5f9l" podStartSLOduration=123.493518659 podStartE2EDuration="2m3.493518659s" podCreationTimestamp="2026-01-27 14:29:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:31:31.478173558 +0000 UTC m=+147.154951023" watchObservedRunningTime="2026-01-27 14:31:31.493518659 +0000 UTC m=+147.170296124" Jan 27 14:31:31 crc kubenswrapper[4698]: I0127 14:31:31.521491 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-9c57cc56f-c5862" podStartSLOduration=123.521453199 podStartE2EDuration="2m3.521453199s" podCreationTimestamp="2026-01-27 14:29:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:31:31.51994871 +0000 UTC m=+147.196726165" watchObservedRunningTime="2026-01-27 14:31:31.521453199 +0000 UTC m=+147.198230684" Jan 27 14:31:31 crc kubenswrapper[4698]: I0127 14:31:31.521609 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-857f4d67dd-qw9xb" podStartSLOduration=123.521602243 podStartE2EDuration="2m3.521602243s" podCreationTimestamp="2026-01-27 14:29:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:31:31.496174408 +0000 UTC m=+147.172951973" watchObservedRunningTime="2026-01-27 14:31:31.521602243 +0000 UTC m=+147.198379718" Jan 27 14:31:31 crc kubenswrapper[4698]: I0127 14:31:31.524127 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vz5fp\" (UID: \"fb1d6ca9-58c3-4d0f-9b6f-e9dad08632b8\") " pod="openshift-image-registry/image-registry-697d97f7c8-vz5fp" Jan 27 14:31:31 crc kubenswrapper[4698]: E0127 14:31:31.524560 4698 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 14:31:32.02454106 +0000 UTC m=+147.701318605 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vz5fp" (UID: "fb1d6ca9-58c3-4d0f-9b6f-e9dad08632b8") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 14:31:31 crc kubenswrapper[4698]: I0127 14:31:31.607756 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-p4z9k" podStartSLOduration=123.607738266 podStartE2EDuration="2m3.607738266s" podCreationTimestamp="2026-01-27 14:29:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:31:31.562217145 +0000 UTC m=+147.238994610" watchObservedRunningTime="2026-01-27 14:31:31.607738266 +0000 UTC m=+147.284515731" Jan 27 14:31:31 crc kubenswrapper[4698]: I0127 14:31:31.624844 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 14:31:31 crc kubenswrapper[4698]: E0127 14:31:31.625034 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 14:31:32.125004187 +0000 UTC m=+147.801781662 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 14:31:31 crc kubenswrapper[4698]: I0127 14:31:31.625267 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vz5fp\" (UID: \"fb1d6ca9-58c3-4d0f-9b6f-e9dad08632b8\") " pod="openshift-image-registry/image-registry-697d97f7c8-vz5fp" Jan 27 14:31:31 crc kubenswrapper[4698]: E0127 14:31:31.625653 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 14:31:32.125619423 +0000 UTC m=+147.802396888 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vz5fp" (UID: "fb1d6ca9-58c3-4d0f-9b6f-e9dad08632b8") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 14:31:31 crc kubenswrapper[4698]: I0127 14:31:31.648739 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns-operator/dns-operator-744455d44c-dhnh8" podStartSLOduration=123.648701727 podStartE2EDuration="2m3.648701727s" podCreationTimestamp="2026-01-27 14:29:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:31:31.609552943 +0000 UTC m=+147.286330408" watchObservedRunningTime="2026-01-27 14:31:31.648701727 +0000 UTC m=+147.325479192" Jan 27 14:31:31 crc kubenswrapper[4698]: I0127 14:31:31.649230 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-b8t54" podStartSLOduration=123.64922163 podStartE2EDuration="2m3.64922163s" podCreationTimestamp="2026-01-27 14:29:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:31:31.64842717 +0000 UTC m=+147.325204635" watchObservedRunningTime="2026-01-27 14:31:31.64922163 +0000 UTC m=+147.325999095" Jan 27 14:31:31 crc kubenswrapper[4698]: I0127 14:31:31.680601 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-qb472" podStartSLOduration=123.68058026 podStartE2EDuration="2m3.68058026s" podCreationTimestamp="2026-01-27 14:29:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:31:31.679511982 +0000 UTC m=+147.356289437" watchObservedRunningTime="2026-01-27 14:31:31.68058026 +0000 UTC m=+147.357357725" Jan 27 14:31:31 crc kubenswrapper[4698]: I0127 14:31:31.708512 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-hnfnv" podStartSLOduration=123.70849305 podStartE2EDuration="2m3.70849305s" podCreationTimestamp="2026-01-27 14:29:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:31:31.707277398 +0000 UTC m=+147.384054893" watchObservedRunningTime="2026-01-27 14:31:31.70849305 +0000 UTC m=+147.385270505" Jan 27 14:31:31 crc kubenswrapper[4698]: I0127 14:31:31.726459 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 14:31:31 crc kubenswrapper[4698]: E0127 14:31:31.727664 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-01-27 14:31:32.22762271 +0000 UTC m=+147.904400185 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 14:31:31 crc kubenswrapper[4698]: I0127 14:31:31.742981 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-kwgll" podStartSLOduration=123.742964211 podStartE2EDuration="2m3.742964211s" podCreationTimestamp="2026-01-27 14:29:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:31:31.738108884 +0000 UTC m=+147.414886349" watchObservedRunningTime="2026-01-27 14:31:31.742964211 +0000 UTC m=+147.419741676" Jan 27 14:31:31 crc kubenswrapper[4698]: I0127 14:31:31.828456 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vz5fp\" (UID: \"fb1d6ca9-58c3-4d0f-9b6f-e9dad08632b8\") " pod="openshift-image-registry/image-registry-697d97f7c8-vz5fp" Jan 27 14:31:31 crc kubenswrapper[4698]: E0127 14:31:31.828906 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 14:31:32.328888898 +0000 UTC m=+148.005666363 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vz5fp" (UID: "fb1d6ca9-58c3-4d0f-9b6f-e9dad08632b8") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 14:31:31 crc kubenswrapper[4698]: I0127 14:31:31.929986 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 14:31:31 crc kubenswrapper[4698]: E0127 14:31:31.930215 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 14:31:32.430178466 +0000 UTC m=+148.106955951 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 14:31:31 crc kubenswrapper[4698]: I0127 14:31:31.930329 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vz5fp\" (UID: \"fb1d6ca9-58c3-4d0f-9b6f-e9dad08632b8\") " pod="openshift-image-registry/image-registry-697d97f7c8-vz5fp" Jan 27 14:31:31 crc kubenswrapper[4698]: E0127 14:31:31.930986 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 14:31:32.430975157 +0000 UTC m=+148.107752622 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vz5fp" (UID: "fb1d6ca9-58c3-4d0f-9b6f-e9dad08632b8") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 14:31:32 crc kubenswrapper[4698]: I0127 14:31:32.031676 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 14:31:32 crc kubenswrapper[4698]: E0127 14:31:32.031930 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 14:31:32.531892646 +0000 UTC m=+148.208670121 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 14:31:32 crc kubenswrapper[4698]: I0127 14:31:32.032248 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vz5fp\" (UID: \"fb1d6ca9-58c3-4d0f-9b6f-e9dad08632b8\") " pod="openshift-image-registry/image-registry-697d97f7c8-vz5fp" Jan 27 14:31:32 crc kubenswrapper[4698]: E0127 14:31:32.032701 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 14:31:32.532687116 +0000 UTC m=+148.209464661 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vz5fp" (UID: "fb1d6ca9-58c3-4d0f-9b6f-e9dad08632b8") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 14:31:32 crc kubenswrapper[4698]: I0127 14:31:32.068409 4698 patch_prober.go:28] interesting pod/router-default-5444994796-8bg4r container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 27 14:31:32 crc kubenswrapper[4698]: [-]has-synced failed: reason withheld Jan 27 14:31:32 crc kubenswrapper[4698]: [+]process-running ok Jan 27 14:31:32 crc kubenswrapper[4698]: healthz check failed Jan 27 14:31:32 crc kubenswrapper[4698]: I0127 14:31:32.068492 4698 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-8bg4r" podUID="79cd2a60-54ce-46a5-96cd-53bb078fa804" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 27 14:31:32 crc kubenswrapper[4698]: I0127 14:31:32.133541 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 14:31:32 crc kubenswrapper[4698]: E0127 14:31:32.133742 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 14:31:32.633715318 +0000 UTC m=+148.310492783 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 14:31:32 crc kubenswrapper[4698]: I0127 14:31:32.133940 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vz5fp\" (UID: \"fb1d6ca9-58c3-4d0f-9b6f-e9dad08632b8\") " pod="openshift-image-registry/image-registry-697d97f7c8-vz5fp" Jan 27 14:31:32 crc kubenswrapper[4698]: E0127 14:31:32.134245 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 14:31:32.634237982 +0000 UTC m=+148.311015447 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vz5fp" (UID: "fb1d6ca9-58c3-4d0f-9b6f-e9dad08632b8") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 14:31:32 crc kubenswrapper[4698]: I0127 14:31:32.159769 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-zft44" event={"ID":"d758ca6f-86ae-44b7-bfcb-8e7f2e2205e8","Type":"ContainerStarted","Data":"6fb6a7d1e461415f98fd7b93c4ac87003293149d67f6a2aaef074dd5cb662756"} Jan 27 14:31:32 crc kubenswrapper[4698]: I0127 14:31:32.160298 4698 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-qfxjx container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.25:8443/healthz\": dial tcp 10.217.0.25:8443: connect: connection refused" start-of-body= Jan 27 14:31:32 crc kubenswrapper[4698]: I0127 14:31:32.160332 4698 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-qfxjx" podUID="a11f89bd-147f-4b21-b83f-6b86727ecc2e" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.25:8443/healthz\": dial tcp 10.217.0.25:8443: connect: connection refused" Jan 27 14:31:32 crc kubenswrapper[4698]: I0127 14:31:32.160546 4698 patch_prober.go:28] interesting pod/console-operator-58897d9998-bz9jw container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.20:8443/readyz\": dial tcp 10.217.0.20:8443: connect: connection refused" start-of-body= Jan 27 14:31:32 crc kubenswrapper[4698]: I0127 14:31:32.160603 4698 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-bz9jw" podUID="13b718e9-6bf1-4e81-91ce-feea3116fd97" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.20:8443/readyz\": dial tcp 10.217.0.20:8443: connect: connection refused" Jan 27 
14:31:32 crc kubenswrapper[4698]: I0127 14:31:32.161217 4698 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-c49dc container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.37:8443/healthz\": dial tcp 10.217.0.37:8443: connect: connection refused" start-of-body= Jan 27 14:31:32 crc kubenswrapper[4698]: I0127 14:31:32.161240 4698 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-c49dc" podUID="912888aa-9826-4be1-a96a-315508a84cf9" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.37:8443/healthz\": dial tcp 10.217.0.37:8443: connect: connection refused" Jan 27 14:31:32 crc kubenswrapper[4698]: I0127 14:31:32.161451 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-p4z9k" Jan 27 14:31:32 crc kubenswrapper[4698]: I0127 14:31:32.161469 4698 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-kwgll container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.30:8080/healthz\": dial tcp 10.217.0.30:8080: connect: connection refused" start-of-body= Jan 27 14:31:32 crc kubenswrapper[4698]: I0127 14:31:32.161487 4698 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-kwgll" podUID="537d845d-d98b-4168-b87b-d0231602f4e9" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.30:8080/healthz\": dial tcp 10.217.0.30:8080: connect: connection refused" Jan 27 14:31:32 crc kubenswrapper[4698]: I0127 14:31:32.234708 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 14:31:32 crc kubenswrapper[4698]: E0127 14:31:32.234919 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 14:31:32.734892454 +0000 UTC m=+148.411669929 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 14:31:32 crc kubenswrapper[4698]: I0127 14:31:32.235046 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vz5fp\" (UID: \"fb1d6ca9-58c3-4d0f-9b6f-e9dad08632b8\") " pod="openshift-image-registry/image-registry-697d97f7c8-vz5fp" Jan 27 14:31:32 crc kubenswrapper[4698]: E0127 14:31:32.235440 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 14:31:32.735423517 +0000 UTC m=+148.412201072 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vz5fp" (UID: "fb1d6ca9-58c3-4d0f-9b6f-e9dad08632b8") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 14:31:32 crc kubenswrapper[4698]: I0127 14:31:32.314990 4698 csr.go:261] certificate signing request csr-cjq5l is approved, waiting to be issued Jan 27 14:31:32 crc kubenswrapper[4698]: I0127 14:31:32.335130 4698 csr.go:257] certificate signing request csr-cjq5l is issued Jan 27 14:31:32 crc kubenswrapper[4698]: I0127 14:31:32.335671 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 14:31:32 crc kubenswrapper[4698]: E0127 14:31:32.335808 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 14:31:32.835782371 +0000 UTC m=+148.512559836 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 14:31:32 crc kubenswrapper[4698]: I0127 14:31:32.352481 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vz5fp\" (UID: \"fb1d6ca9-58c3-4d0f-9b6f-e9dad08632b8\") " pod="openshift-image-registry/image-registry-697d97f7c8-vz5fp" Jan 27 14:31:32 crc kubenswrapper[4698]: E0127 14:31:32.355550 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 14:31:32.855525777 +0000 UTC m=+148.532303242 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vz5fp" (UID: "fb1d6ca9-58c3-4d0f-9b6f-e9dad08632b8") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 14:31:32 crc kubenswrapper[4698]: I0127 14:31:32.455436 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 14:31:32 crc kubenswrapper[4698]: E0127 14:31:32.456127 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 14:31:32.956104767 +0000 UTC m=+148.632882232 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 14:31:32 crc kubenswrapper[4698]: I0127 14:31:32.556963 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vz5fp\" (UID: \"fb1d6ca9-58c3-4d0f-9b6f-e9dad08632b8\") " pod="openshift-image-registry/image-registry-697d97f7c8-vz5fp" Jan 27 14:31:32 crc kubenswrapper[4698]: E0127 14:31:32.557368 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 14:31:33.057347624 +0000 UTC m=+148.734125099 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vz5fp" (UID: "fb1d6ca9-58c3-4d0f-9b6f-e9dad08632b8") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 14:31:32 crc kubenswrapper[4698]: I0127 14:31:32.657713 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 14:31:32 crc kubenswrapper[4698]: E0127 14:31:32.657899 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 14:31:33.157874313 +0000 UTC m=+148.834651778 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 14:31:32 crc kubenswrapper[4698]: I0127 14:31:32.658145 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vz5fp\" (UID: \"fb1d6ca9-58c3-4d0f-9b6f-e9dad08632b8\") " pod="openshift-image-registry/image-registry-697d97f7c8-vz5fp" Jan 27 14:31:32 crc kubenswrapper[4698]: E0127 14:31:32.658576 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 14:31:33.15856412 +0000 UTC m=+148.835341585 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vz5fp" (UID: "fb1d6ca9-58c3-4d0f-9b6f-e9dad08632b8") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 14:31:32 crc kubenswrapper[4698]: I0127 14:31:32.758958 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 14:31:32 crc kubenswrapper[4698]: E0127 14:31:32.759539 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 14:31:33.259220282 +0000 UTC m=+148.935997757 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 14:31:32 crc kubenswrapper[4698]: I0127 14:31:32.759791 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vz5fp\" (UID: \"fb1d6ca9-58c3-4d0f-9b6f-e9dad08632b8\") " pod="openshift-image-registry/image-registry-697d97f7c8-vz5fp" Jan 27 14:31:32 crc kubenswrapper[4698]: E0127 14:31:32.760301 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 14:31:33.26028879 +0000 UTC m=+148.937066265 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vz5fp" (UID: "fb1d6ca9-58c3-4d0f-9b6f-e9dad08632b8") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 14:31:32 crc kubenswrapper[4698]: I0127 14:31:32.860454 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 14:31:32 crc kubenswrapper[4698]: E0127 14:31:32.860893 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 14:31:33.36086718 +0000 UTC m=+149.037644635 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 14:31:32 crc kubenswrapper[4698]: I0127 14:31:32.861099 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vz5fp\" (UID: \"fb1d6ca9-58c3-4d0f-9b6f-e9dad08632b8\") " pod="openshift-image-registry/image-registry-697d97f7c8-vz5fp" Jan 27 14:31:32 crc kubenswrapper[4698]: E0127 14:31:32.861476 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 14:31:33.361468576 +0000 UTC m=+149.038246041 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vz5fp" (UID: "fb1d6ca9-58c3-4d0f-9b6f-e9dad08632b8") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 14:31:32 crc kubenswrapper[4698]: I0127 14:31:32.962097 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 14:31:32 crc kubenswrapper[4698]: E0127 14:31:32.962303 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 14:31:33.462272811 +0000 UTC m=+149.139050276 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 14:31:32 crc kubenswrapper[4698]: I0127 14:31:32.962385 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 14:31:32 crc kubenswrapper[4698]: I0127 14:31:32.962431 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 14:31:32 crc kubenswrapper[4698]: I0127 14:31:32.962466 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vz5fp\" (UID: \"fb1d6ca9-58c3-4d0f-9b6f-e9dad08632b8\") " pod="openshift-image-registry/image-registry-697d97f7c8-vz5fp" Jan 27 14:31:32 crc kubenswrapper[4698]: I0127 14:31:32.962506 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 14:31:32 crc kubenswrapper[4698]: I0127 14:31:32.962551 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 14:31:32 crc kubenswrapper[4698]: E0127 14:31:32.963052 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 14:31:33.463043672 +0000 UTC m=+149.139821137 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vz5fp" (UID: "fb1d6ca9-58c3-4d0f-9b6f-e9dad08632b8") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 14:31:32 crc kubenswrapper[4698]: I0127 14:31:32.971577 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 14:31:32 crc kubenswrapper[4698]: I0127 14:31:32.971853 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 14:31:32 crc kubenswrapper[4698]: I0127 14:31:32.972446 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 14:31:32 crc kubenswrapper[4698]: I0127 14:31:32.972797 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 14:31:33 crc kubenswrapper[4698]: I0127 14:31:33.010290 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 14:31:33 crc kubenswrapper[4698]: I0127 14:31:33.018673 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 14:31:33 crc kubenswrapper[4698]: I0127 14:31:33.063810 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 14:31:33 crc kubenswrapper[4698]: E0127 14:31:33.063949 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 14:31:33.56393215 +0000 UTC m=+149.240709615 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 14:31:33 crc kubenswrapper[4698]: I0127 14:31:33.064291 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vz5fp\" (UID: \"fb1d6ca9-58c3-4d0f-9b6f-e9dad08632b8\") " pod="openshift-image-registry/image-registry-697d97f7c8-vz5fp" Jan 27 14:31:33 crc kubenswrapper[4698]: E0127 14:31:33.064586 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 14:31:33.564574986 +0000 UTC m=+149.241352451 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vz5fp" (UID: "fb1d6ca9-58c3-4d0f-9b6f-e9dad08632b8") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 14:31:33 crc kubenswrapper[4698]: I0127 14:31:33.072070 4698 patch_prober.go:28] interesting pod/router-default-5444994796-8bg4r container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 27 14:31:33 crc kubenswrapper[4698]: [-]has-synced failed: reason withheld Jan 27 14:31:33 crc kubenswrapper[4698]: [+]process-running ok Jan 27 14:31:33 crc kubenswrapper[4698]: healthz check failed Jan 27 14:31:33 crc kubenswrapper[4698]: I0127 14:31:33.072120 4698 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-8bg4r" podUID="79cd2a60-54ce-46a5-96cd-53bb078fa804" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 27 14:31:33 crc kubenswrapper[4698]: I0127 14:31:33.114336 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 27 14:31:33 crc kubenswrapper[4698]: I0127 14:31:33.160914 4698 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-pcr8w container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.26:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 27 14:31:33 crc kubenswrapper[4698]: I0127 14:31:33.160979 4698 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-pcr8w" podUID="ff0c6e82-2e72-4776-b801-2cf427b72696" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.26:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Jan 27 14:31:33 crc kubenswrapper[4698]: I0127 14:31:33.164916 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 27 14:31:33 crc kubenswrapper[4698]: E0127 14:31:33.165161 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 14:31:33.665126325 +0000 UTC m=+149.341903800 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 14:31:33 crc kubenswrapper[4698]: I0127 14:31:33.165350 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vz5fp\" (UID: \"fb1d6ca9-58c3-4d0f-9b6f-e9dad08632b8\") " pod="openshift-image-registry/image-registry-697d97f7c8-vz5fp"
Jan 27 14:31:33 crc kubenswrapper[4698]: E0127 14:31:33.165735 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 14:31:33.665722401 +0000 UTC m=+149.342499916 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vz5fp" (UID: "fb1d6ca9-58c3-4d0f-9b6f-e9dad08632b8") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 14:31:33 crc kubenswrapper[4698]: I0127 14:31:33.253063 4698 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-m8slw container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.12:8443/healthz\": dial tcp 10.217.0.12:8443: connect: connection refused" start-of-body=
Jan 27 14:31:33 crc kubenswrapper[4698]: I0127 14:31:33.253128 4698 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-m8slw" podUID="f4023d55-2b87-419a-be3e-bab987ba0841" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.12:8443/healthz\": dial tcp 10.217.0.12:8443: connect: connection refused"
Jan 27 14:31:33 crc kubenswrapper[4698]: I0127 14:31:33.253491 4698 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-m8slw container/openshift-config-operator namespace/openshift-config-operator: Liveness probe status=failure output="Get \"https://10.217.0.12:8443/healthz\": dial tcp 10.217.0.12:8443: connect: connection refused" start-of-body=
Jan 27 14:31:33 crc kubenswrapper[4698]: I0127 14:31:33.253512 4698 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-m8slw" podUID="f4023d55-2b87-419a-be3e-bab987ba0841" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.12:8443/healthz\": dial tcp 10.217.0.12:8443: connect: connection refused"
Jan 27 14:31:33 crc kubenswrapper[4698]: I0127 14:31:33.266180 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 27 14:31:33 crc kubenswrapper[4698]: E0127 14:31:33.266560 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 14:31:33.766540567 +0000 UTC m=+149.443318042 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 14:31:33 crc kubenswrapper[4698]: I0127 14:31:33.336174 4698 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2027-01-27 14:26:32 +0000 UTC, rotation deadline is 2026-11-25 04:21:02.3914547 +0000 UTC
Jan 27 14:31:33 crc kubenswrapper[4698]: I0127 14:31:33.336215 4698 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 7237h49m29.055243112s for next certificate rotation
Jan 27 14:31:33 crc kubenswrapper[4698]: I0127 14:31:33.368448 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vz5fp\" (UID: \"fb1d6ca9-58c3-4d0f-9b6f-e9dad08632b8\") " pod="openshift-image-registry/image-registry-697d97f7c8-vz5fp"
Jan 27 14:31:33 crc kubenswrapper[4698]: E0127 14:31:33.370156 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 14:31:33.870137846 +0000 UTC m=+149.546915361 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vz5fp" (UID: "fb1d6ca9-58c3-4d0f-9b6f-e9dad08632b8") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 14:31:33 crc kubenswrapper[4698]: I0127 14:31:33.474675 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 27 14:31:33 crc kubenswrapper[4698]: E0127 14:31:33.475046 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 14:31:33.975022867 +0000 UTC m=+149.651800332 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 14:31:33 crc kubenswrapper[4698]: I0127 14:31:33.579836 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vz5fp\" (UID: \"fb1d6ca9-58c3-4d0f-9b6f-e9dad08632b8\") " pod="openshift-image-registry/image-registry-697d97f7c8-vz5fp"
Jan 27 14:31:33 crc kubenswrapper[4698]: E0127 14:31:33.580203 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 14:31:34.080187058 +0000 UTC m=+149.756964523 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vz5fp" (UID: "fb1d6ca9-58c3-4d0f-9b6f-e9dad08632b8") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 14:31:33 crc kubenswrapper[4698]: I0127 14:31:33.681095 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 27 14:31:33 crc kubenswrapper[4698]: E0127 14:31:33.681208 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 14:31:34.181189069 +0000 UTC m=+149.857966534 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 14:31:33 crc kubenswrapper[4698]: I0127 14:31:33.681350 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vz5fp\" (UID: \"fb1d6ca9-58c3-4d0f-9b6f-e9dad08632b8\") " pod="openshift-image-registry/image-registry-697d97f7c8-vz5fp"
Jan 27 14:31:33 crc kubenswrapper[4698]: E0127 14:31:33.681652 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 14:31:34.18162996 +0000 UTC m=+149.858407415 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vz5fp" (UID: "fb1d6ca9-58c3-4d0f-9b6f-e9dad08632b8") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 14:31:33 crc kubenswrapper[4698]: I0127 14:31:33.781988 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 27 14:31:33 crc kubenswrapper[4698]: E0127 14:31:33.782189 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 14:31:34.282162059 +0000 UTC m=+149.958939534 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 14:31:33 crc kubenswrapper[4698]: I0127 14:31:33.782227 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vz5fp\" (UID: \"fb1d6ca9-58c3-4d0f-9b6f-e9dad08632b8\") " pod="openshift-image-registry/image-registry-697d97f7c8-vz5fp"
Jan 27 14:31:33 crc kubenswrapper[4698]: E0127 14:31:33.782541 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 14:31:34.282528198 +0000 UTC m=+149.959305663 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vz5fp" (UID: "fb1d6ca9-58c3-4d0f-9b6f-e9dad08632b8") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 14:31:33 crc kubenswrapper[4698]: W0127 14:31:33.846589 4698 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3b6479f0_333b_4a96_9adf_2099afdc2447.slice/crio-122201ce77261f2d28114f83616ae88bdc2354c9da841a151f6dd36870f67261 WatchSource:0}: Error finding container 122201ce77261f2d28114f83616ae88bdc2354c9da841a151f6dd36870f67261: Status 404 returned error can't find the container with id 122201ce77261f2d28114f83616ae88bdc2354c9da841a151f6dd36870f67261
Jan 27 14:31:33 crc kubenswrapper[4698]: W0127 14:31:33.876508 4698 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9d751cbb_f2e2_430d_9754_c882a5e924a5.slice/crio-0ffd946075b35ab5317ccf832eb47f365e75057cea05b6838e5b268c1ddaf055 WatchSource:0}: Error finding container 0ffd946075b35ab5317ccf832eb47f365e75057cea05b6838e5b268c1ddaf055: Status 404 returned error can't find the container with id 0ffd946075b35ab5317ccf832eb47f365e75057cea05b6838e5b268c1ddaf055
Jan 27 14:31:33 crc kubenswrapper[4698]: I0127 14:31:33.883103 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 27 14:31:33 crc kubenswrapper[4698]: E0127 14:31:33.883227 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 14:31:34.3831932 +0000 UTC m=+150.059970675 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 14:31:33 crc kubenswrapper[4698]: I0127 14:31:33.883509 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vz5fp\" (UID: \"fb1d6ca9-58c3-4d0f-9b6f-e9dad08632b8\") " pod="openshift-image-registry/image-registry-697d97f7c8-vz5fp"
Jan 27 14:31:33 crc kubenswrapper[4698]: E0127 14:31:33.884011 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 14:31:34.383992081 +0000 UTC m=+150.060769536 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vz5fp" (UID: "fb1d6ca9-58c3-4d0f-9b6f-e9dad08632b8") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 14:31:33 crc kubenswrapper[4698]: W0127 14:31:33.927330 4698 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5fe485a1_e14f_4c09_b5b9_f252bc42b7e8.slice/crio-045b133b734a7a1ae3c5e0efda34615f87bbb0eedcf878e98b30ce31c29ce659 WatchSource:0}: Error finding container 045b133b734a7a1ae3c5e0efda34615f87bbb0eedcf878e98b30ce31c29ce659: Status 404 returned error can't find the container with id 045b133b734a7a1ae3c5e0efda34615f87bbb0eedcf878e98b30ce31c29ce659
Jan 27 14:31:33 crc kubenswrapper[4698]: I0127 14:31:33.984107 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 27 14:31:33 crc kubenswrapper[4698]: E0127 14:31:33.984454 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 14:31:34.484422876 +0000 UTC m=+150.161200341 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 14:31:33 crc kubenswrapper[4698]: I0127 14:31:33.984857 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vz5fp\" (UID: \"fb1d6ca9-58c3-4d0f-9b6f-e9dad08632b8\") " pod="openshift-image-registry/image-registry-697d97f7c8-vz5fp"
Jan 27 14:31:33 crc kubenswrapper[4698]: E0127 14:31:33.985242 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 14:31:34.485231848 +0000 UTC m=+150.162009313 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vz5fp" (UID: "fb1d6ca9-58c3-4d0f-9b6f-e9dad08632b8") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 14:31:34 crc kubenswrapper[4698]: I0127 14:31:34.069120 4698 patch_prober.go:28] interesting pod/router-default-5444994796-8bg4r container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 27 14:31:34 crc kubenswrapper[4698]: [-]has-synced failed: reason withheld
Jan 27 14:31:34 crc kubenswrapper[4698]: [+]process-running ok
Jan 27 14:31:34 crc kubenswrapper[4698]: healthz check failed
Jan 27 14:31:34 crc kubenswrapper[4698]: I0127 14:31:34.069659 4698 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-8bg4r" podUID="79cd2a60-54ce-46a5-96cd-53bb078fa804" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 27 14:31:34 crc kubenswrapper[4698]: I0127 14:31:34.086277 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 27 14:31:34 crc kubenswrapper[4698]: E0127 14:31:34.086715 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 14:31:34.58668778 +0000 UTC m=+150.263465245 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 14:31:34 crc kubenswrapper[4698]: I0127 14:31:34.118435 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"]
Jan 27 14:31:34 crc kubenswrapper[4698]: I0127 14:31:34.119654 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Jan 27 14:31:34 crc kubenswrapper[4698]: I0127 14:31:34.121682 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-kjl2n"
Jan 27 14:31:34 crc kubenswrapper[4698]: I0127 14:31:34.122563 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt"
Jan 27 14:31:34 crc kubenswrapper[4698]: I0127 14:31:34.134064 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"]
Jan 27 14:31:34 crc kubenswrapper[4698]: I0127 14:31:34.181910 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"122201ce77261f2d28114f83616ae88bdc2354c9da841a151f6dd36870f67261"}
Jan 27 14:31:34 crc kubenswrapper[4698]: I0127 14:31:34.183292 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"0ffd946075b35ab5317ccf832eb47f365e75057cea05b6838e5b268c1ddaf055"}
Jan 27 14:31:34 crc kubenswrapper[4698]: I0127 14:31:34.184530 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"045b133b734a7a1ae3c5e0efda34615f87bbb0eedcf878e98b30ce31c29ce659"}
Jan 27 14:31:34 crc kubenswrapper[4698]: I0127 14:31:34.188373 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/9a086c8c-4fad-47a3-a463-47ccd03a65fe-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"9a086c8c-4fad-47a3-a463-47ccd03a65fe\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Jan 27 14:31:34 crc kubenswrapper[4698]: I0127 14:31:34.188458 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vz5fp\" (UID: \"fb1d6ca9-58c3-4d0f-9b6f-e9dad08632b8\") " pod="openshift-image-registry/image-registry-697d97f7c8-vz5fp"
Jan 27 14:31:34 crc kubenswrapper[4698]: I0127 14:31:34.188490 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9a086c8c-4fad-47a3-a463-47ccd03a65fe-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"9a086c8c-4fad-47a3-a463-47ccd03a65fe\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Jan 27 14:31:34 crc kubenswrapper[4698]: E0127 14:31:34.188832 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 14:31:34.688819001 +0000 UTC m=+150.365596466 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vz5fp" (UID: "fb1d6ca9-58c3-4d0f-9b6f-e9dad08632b8") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 14:31:34 crc kubenswrapper[4698]: I0127 14:31:34.289630 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 27 14:31:34 crc kubenswrapper[4698]: E0127 14:31:34.290273 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 14:31:34.790252173 +0000 UTC m=+150.467029638 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 14:31:34 crc kubenswrapper[4698]: I0127 14:31:34.290423 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vz5fp\" (UID: \"fb1d6ca9-58c3-4d0f-9b6f-e9dad08632b8\") " pod="openshift-image-registry/image-registry-697d97f7c8-vz5fp"
Jan 27 14:31:34 crc kubenswrapper[4698]: I0127 14:31:34.290615 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9a086c8c-4fad-47a3-a463-47ccd03a65fe-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"9a086c8c-4fad-47a3-a463-47ccd03a65fe\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Jan 27 14:31:34 crc kubenswrapper[4698]: I0127 14:31:34.290821 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/9a086c8c-4fad-47a3-a463-47ccd03a65fe-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"9a086c8c-4fad-47a3-a463-47ccd03a65fe\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Jan 27 14:31:34 crc kubenswrapper[4698]: I0127 14:31:34.290988 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/9a086c8c-4fad-47a3-a463-47ccd03a65fe-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"9a086c8c-4fad-47a3-a463-47ccd03a65fe\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Jan 27 14:31:34 crc kubenswrapper[4698]: E0127 14:31:34.291339 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 14:31:34.791327802 +0000 UTC m=+150.468105267 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vz5fp" (UID: "fb1d6ca9-58c3-4d0f-9b6f-e9dad08632b8") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 14:31:34 crc kubenswrapper[4698]: I0127 14:31:34.330692 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9a086c8c-4fad-47a3-a463-47ccd03a65fe-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"9a086c8c-4fad-47a3-a463-47ccd03a65fe\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Jan 27 14:31:34 crc kubenswrapper[4698]: I0127 14:31:34.392218 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 27 14:31:34 crc kubenswrapper[4698]: E0127 14:31:34.392379 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 14:31:34.892358693 +0000 UTC m=+150.569136158 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 14:31:34 crc kubenswrapper[4698]: I0127 14:31:34.392484 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vz5fp\" (UID: \"fb1d6ca9-58c3-4d0f-9b6f-e9dad08632b8\") " pod="openshift-image-registry/image-registry-697d97f7c8-vz5fp"
Jan 27 14:31:34 crc kubenswrapper[4698]: E0127 14:31:34.392912 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 14:31:34.892902017 +0000 UTC m=+150.569679482 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vz5fp" (UID: "fb1d6ca9-58c3-4d0f-9b6f-e9dad08632b8") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 14:31:34 crc kubenswrapper[4698]: I0127 14:31:34.416709 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-cj2hq"]
Jan 27 14:31:34 crc kubenswrapper[4698]: I0127 14:31:34.417176 4698 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-879f6c89f-cj2hq" podUID="3d3d75e2-1fec-4458-9cb7-3472250b0b49" containerName="controller-manager" containerID="cri-o://333c74fc68d1eb86a37b571c39e30f2380edb16ccb8cb54acf8336f12fc0f43e" gracePeriod=30
Jan 27 14:31:34 crc kubenswrapper[4698]: I0127 14:31:34.439024 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Jan 27 14:31:34 crc kubenswrapper[4698]: I0127 14:31:34.483578 4698 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-cj2hq container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.7:8443/healthz\": read tcp 10.217.0.2:39706->10.217.0.7:8443: read: connection reset by peer" start-of-body=
Jan 27 14:31:34 crc kubenswrapper[4698]: I0127 14:31:34.483884 4698 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-cj2hq" podUID="3d3d75e2-1fec-4458-9cb7-3472250b0b49" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.7:8443/healthz\": read tcp 10.217.0.2:39706->10.217.0.7:8443: read: connection reset by peer"
Jan 27 14:31:34 crc kubenswrapper[4698]: I0127 14:31:34.493698 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 27 14:31:34 crc kubenswrapper[4698]: E0127 14:31:34.494020 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 14:31:34.99400528 +0000 UTC m=+150.670782745 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 14:31:34 crc kubenswrapper[4698]: I0127 14:31:34.595879 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vz5fp\" (UID: \"fb1d6ca9-58c3-4d0f-9b6f-e9dad08632b8\") " pod="openshift-image-registry/image-registry-697d97f7c8-vz5fp"
Jan 27 14:31:34 crc kubenswrapper[4698]: E0127 14:31:34.596425 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 14:31:35.096382097 +0000 UTC m=+150.773159562 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vz5fp" (UID: "fb1d6ca9-58c3-4d0f-9b6f-e9dad08632b8") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 14:31:34 crc kubenswrapper[4698]: I0127 14:31:34.697140 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 27 14:31:34 crc kubenswrapper[4698]: E0127 14:31:34.697403 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 14:31:35.197386829 +0000 UTC m=+150.874164294 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 14:31:34 crc kubenswrapper[4698]: I0127 14:31:34.798048 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vz5fp\" (UID: \"fb1d6ca9-58c3-4d0f-9b6f-e9dad08632b8\") " pod="openshift-image-registry/image-registry-697d97f7c8-vz5fp"
Jan 27 14:31:34 crc kubenswrapper[4698]: E0127 14:31:34.798395 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 14:31:35.298380699 +0000 UTC m=+150.975158164 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vz5fp" (UID: "fb1d6ca9-58c3-4d0f-9b6f-e9dad08632b8") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 14:31:34 crc kubenswrapper[4698]: I0127 14:31:34.804758 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-dhlmg"]
Jan 27 14:31:34 crc kubenswrapper[4698]: I0127 14:31:34.806080 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-dhlmg"
Jan 27 14:31:34 crc kubenswrapper[4698]: I0127 14:31:34.807791 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g"
Jan 27 14:31:34 crc kubenswrapper[4698]: I0127 14:31:34.815227 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-dhlmg"]
Jan 27 14:31:34 crc kubenswrapper[4698]: I0127 14:31:34.892851 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"]
Jan 27 14:31:34 crc kubenswrapper[4698]: I0127 14:31:34.900109 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 27 14:31:34 crc kubenswrapper[4698]: E0127 14:31:34.900265 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 14:31:35.400243132 +0000 UTC m=+151.077020607 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 14:31:34 crc kubenswrapper[4698]: I0127 14:31:34.900418 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vz5fp\" (UID: \"fb1d6ca9-58c3-4d0f-9b6f-e9dad08632b8\") " pod="openshift-image-registry/image-registry-697d97f7c8-vz5fp"
Jan 27 14:31:34 crc kubenswrapper[4698]: I0127 14:31:34.900482 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b5b88242-64d6-469e-a5e4-bc8bab680ded-catalog-content\") pod \"certified-operators-dhlmg\" (UID: \"b5b88242-64d6-469e-a5e4-bc8bab680ded\") " pod="openshift-marketplace/certified-operators-dhlmg"
Jan 27 14:31:34 crc kubenswrapper[4698]: I0127 14:31:34.900512 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mz2cm\" (UniqueName: \"kubernetes.io/projected/b5b88242-64d6-469e-a5e4-bc8bab680ded-kube-api-access-mz2cm\") pod \"certified-operators-dhlmg\" (UID: \"b5b88242-64d6-469e-a5e4-bc8bab680ded\") " pod="openshift-marketplace/certified-operators-dhlmg"
Jan 27 14:31:34 crc kubenswrapper[4698]: E0127 14:31:34.900942 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 14:31:35.40093034 +0000 UTC m=+151.077707805 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vz5fp" (UID: "fb1d6ca9-58c3-4d0f-9b6f-e9dad08632b8") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 14:31:34 crc kubenswrapper[4698]: I0127 14:31:34.900597 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b5b88242-64d6-469e-a5e4-bc8bab680ded-utilities\") pod \"certified-operators-dhlmg\" (UID: \"b5b88242-64d6-469e-a5e4-bc8bab680ded\") " pod="openshift-marketplace/certified-operators-dhlmg"
Jan 27 14:31:35 crc kubenswrapper[4698]: I0127 14:31:35.001579 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 27 14:31:35 crc kubenswrapper[4698]: E0127 14:31:35.001767 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 14:31:35.501736226 +0000 UTC m=+151.178513691 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 14:31:35 crc kubenswrapper[4698]: I0127 14:31:35.001862 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b5b88242-64d6-469e-a5e4-bc8bab680ded-utilities\") pod \"certified-operators-dhlmg\" (UID: \"b5b88242-64d6-469e-a5e4-bc8bab680ded\") " pod="openshift-marketplace/certified-operators-dhlmg"
Jan 27 14:31:35 crc kubenswrapper[4698]: I0127 14:31:35.001918 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vz5fp\" (UID: \"fb1d6ca9-58c3-4d0f-9b6f-e9dad08632b8\") " pod="openshift-image-registry/image-registry-697d97f7c8-vz5fp"
Jan 27 14:31:35 crc kubenswrapper[4698]: I0127 14:31:35.001947 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b5b88242-64d6-469e-a5e4-bc8bab680ded-catalog-content\") pod \"certified-operators-dhlmg\" (UID: \"b5b88242-64d6-469e-a5e4-bc8bab680ded\") " pod="openshift-marketplace/certified-operators-dhlmg"
Jan 27 14:31:35 crc kubenswrapper[4698]: I0127 14:31:35.001967 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mz2cm\" (UniqueName: \"kubernetes.io/projected/b5b88242-64d6-469e-a5e4-bc8bab680ded-kube-api-access-mz2cm\") pod \"certified-operators-dhlmg\" (UID: \"b5b88242-64d6-469e-a5e4-bc8bab680ded\") " pod="openshift-marketplace/certified-operators-dhlmg"
Jan 27 14:31:35 crc kubenswrapper[4698]: E0127 14:31:35.002393 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 14:31:35.502381382 +0000 UTC m=+151.179158847 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vz5fp" (UID: "fb1d6ca9-58c3-4d0f-9b6f-e9dad08632b8") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 14:31:35 crc kubenswrapper[4698]: I0127 14:31:35.002442 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b5b88242-64d6-469e-a5e4-bc8bab680ded-utilities\") pod \"certified-operators-dhlmg\" (UID: \"b5b88242-64d6-469e-a5e4-bc8bab680ded\") " pod="openshift-marketplace/certified-operators-dhlmg"
Jan 27 14:31:35 crc kubenswrapper[4698]: I0127 14:31:35.002556 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b5b88242-64d6-469e-a5e4-bc8bab680ded-catalog-content\") pod \"certified-operators-dhlmg\" (UID: \"b5b88242-64d6-469e-a5e4-bc8bab680ded\") " pod="openshift-marketplace/certified-operators-dhlmg"
Jan 27 14:31:35 crc kubenswrapper[4698]: I0127 14:31:35.016993 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-9t9sp"]
Jan 27 14:31:35 crc kubenswrapper[4698]: I0127 14:31:35.018199 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-9t9sp"
Jan 27 14:31:35 crc kubenswrapper[4698]: I0127 14:31:35.020518 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl"
Jan 27 14:31:35 crc kubenswrapper[4698]: I0127 14:31:35.032523 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mz2cm\" (UniqueName: \"kubernetes.io/projected/b5b88242-64d6-469e-a5e4-bc8bab680ded-kube-api-access-mz2cm\") pod \"certified-operators-dhlmg\" (UID: \"b5b88242-64d6-469e-a5e4-bc8bab680ded\") " pod="openshift-marketplace/certified-operators-dhlmg"
Jan 27 14:31:35 crc kubenswrapper[4698]: I0127 14:31:35.042566 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-9t9sp"]
Jan 27 14:31:35 crc kubenswrapper[4698]: I0127 14:31:35.064956 4698 patch_prober.go:28] interesting pod/router-default-5444994796-8bg4r container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 27 14:31:35 crc kubenswrapper[4698]: [-]has-synced failed: reason withheld
Jan 27 14:31:35 crc kubenswrapper[4698]: [+]process-running ok
Jan 27 14:31:35 crc kubenswrapper[4698]: healthz check failed
Jan 27 14:31:35 crc kubenswrapper[4698]: I0127 14:31:35.065541 4698 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-8bg4r" podUID="79cd2a60-54ce-46a5-96cd-53bb078fa804" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 27 14:31:35 crc kubenswrapper[4698]: I0127 14:31:35.103820 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 27 14:31:35 crc kubenswrapper[4698]: I0127 14:31:35.104181 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7f32c526-aea0-4758-a1ea-d0a694af3573-catalog-content\") pod \"community-operators-9t9sp\" (UID: \"7f32c526-aea0-4758-a1ea-d0a694af3573\") " pod="openshift-marketplace/community-operators-9t9sp"
Jan 27 14:31:35 crc kubenswrapper[4698]: I0127 14:31:35.104259 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7f32c526-aea0-4758-a1ea-d0a694af3573-utilities\") pod \"community-operators-9t9sp\" (UID: \"7f32c526-aea0-4758-a1ea-d0a694af3573\") " pod="openshift-marketplace/community-operators-9t9sp"
Jan 27 14:31:35 crc kubenswrapper[4698]: I0127 14:31:35.104284 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7jq9l\" (UniqueName: \"kubernetes.io/projected/7f32c526-aea0-4758-a1ea-d0a694af3573-kube-api-access-7jq9l\") pod \"community-operators-9t9sp\" (UID: \"7f32c526-aea0-4758-a1ea-d0a694af3573\") " pod="openshift-marketplace/community-operators-9t9sp"
Jan 27 14:31:35 crc kubenswrapper[4698]: E0127 14:31:35.104433 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 14:31:35.60440942 +0000 UTC m=+151.281186885 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 14:31:35 crc kubenswrapper[4698]: I0127 14:31:35.127570 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-dhlmg"
Jan 27 14:31:35 crc kubenswrapper[4698]: I0127 14:31:35.202095 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-cdp6k"]
Jan 27 14:31:35 crc kubenswrapper[4698]: I0127 14:31:35.203508 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-cdp6k"
Jan 27 14:31:35 crc kubenswrapper[4698]: I0127 14:31:35.205593 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7f32c526-aea0-4758-a1ea-d0a694af3573-catalog-content\") pod \"community-operators-9t9sp\" (UID: \"7f32c526-aea0-4758-a1ea-d0a694af3573\") " pod="openshift-marketplace/community-operators-9t9sp"
Jan 27 14:31:35 crc kubenswrapper[4698]: I0127 14:31:35.205724 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vz5fp\" (UID: \"fb1d6ca9-58c3-4d0f-9b6f-e9dad08632b8\") " pod="openshift-image-registry/image-registry-697d97f7c8-vz5fp"
Jan 27 14:31:35 crc kubenswrapper[4698]: I0127 14:31:35.205831 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7f32c526-aea0-4758-a1ea-d0a694af3573-utilities\") pod \"community-operators-9t9sp\" (UID: \"7f32c526-aea0-4758-a1ea-d0a694af3573\") " pod="openshift-marketplace/community-operators-9t9sp"
Jan 27 14:31:35 crc kubenswrapper[4698]: I0127 14:31:35.205905 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7jq9l\" (UniqueName: \"kubernetes.io/projected/7f32c526-aea0-4758-a1ea-d0a694af3573-kube-api-access-7jq9l\") pod \"community-operators-9t9sp\" (UID: \"7f32c526-aea0-4758-a1ea-d0a694af3573\") " pod="openshift-marketplace/community-operators-9t9sp"
Jan 27 14:31:35 crc kubenswrapper[4698]: I0127 14:31:35.206130 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7f32c526-aea0-4758-a1ea-d0a694af3573-catalog-content\") pod \"community-operators-9t9sp\" (UID: \"7f32c526-aea0-4758-a1ea-d0a694af3573\") " pod="openshift-marketplace/community-operators-9t9sp"
Jan 27 14:31:35 crc kubenswrapper[4698]: E0127 14:31:35.206390 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 14:31:35.706375496 +0000 UTC m=+151.383152961 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vz5fp" (UID: "fb1d6ca9-58c3-4d0f-9b6f-e9dad08632b8") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 14:31:35 crc kubenswrapper[4698]: I0127 14:31:35.206444 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7f32c526-aea0-4758-a1ea-d0a694af3573-utilities\") pod \"community-operators-9t9sp\" (UID: \"7f32c526-aea0-4758-a1ea-d0a694af3573\") " pod="openshift-marketplace/community-operators-9t9sp"
Jan 27 14:31:35 crc kubenswrapper[4698]: I0127 14:31:35.221147 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-cdp6k"]
Jan 27 14:31:35 crc kubenswrapper[4698]: I0127 14:31:35.246686 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7jq9l\" (UniqueName: \"kubernetes.io/projected/7f32c526-aea0-4758-a1ea-d0a694af3573-kube-api-access-7jq9l\") pod \"community-operators-9t9sp\" (UID: \"7f32c526-aea0-4758-a1ea-d0a694af3573\") " pod="openshift-marketplace/community-operators-9t9sp"
Jan 27 14:31:35 crc kubenswrapper[4698]: I0127 14:31:35.307435 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 27 14:31:35 crc kubenswrapper[4698]: E0127 14:31:35.307666 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 14:31:35.807617333 +0000 UTC m=+151.484394808 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 14:31:35 crc kubenswrapper[4698]: I0127 14:31:35.307824 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/882a9575-2eeb-4f8e-812c-2419b499a07e-utilities\") pod \"certified-operators-cdp6k\" (UID: \"882a9575-2eeb-4f8e-812c-2419b499a07e\") " pod="openshift-marketplace/certified-operators-cdp6k"
Jan 27 14:31:35 crc kubenswrapper[4698]: I0127 14:31:35.307859 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/882a9575-2eeb-4f8e-812c-2419b499a07e-catalog-content\") pod \"certified-operators-cdp6k\" (UID: \"882a9575-2eeb-4f8e-812c-2419b499a07e\") " pod="openshift-marketplace/certified-operators-cdp6k"
Jan 27 14:31:35 crc kubenswrapper[4698]: I0127 14:31:35.307891 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qtfm9\" (UniqueName: \"kubernetes.io/projected/882a9575-2eeb-4f8e-812c-2419b499a07e-kube-api-access-qtfm9\") pod \"certified-operators-cdp6k\" (UID: \"882a9575-2eeb-4f8e-812c-2419b499a07e\") " pod="openshift-marketplace/certified-operators-cdp6k"
Jan 27 14:31:35 crc kubenswrapper[4698]: I0127 14:31:35.307937 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vz5fp\" (UID: \"fb1d6ca9-58c3-4d0f-9b6f-e9dad08632b8\") " pod="openshift-image-registry/image-registry-697d97f7c8-vz5fp"
Jan 27 14:31:35 crc kubenswrapper[4698]: E0127 14:31:35.308294 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 14:31:35.808278571 +0000 UTC m=+151.485056046 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vz5fp" (UID: "fb1d6ca9-58c3-4d0f-9b6f-e9dad08632b8") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 14:31:35 crc kubenswrapper[4698]: I0127 14:31:35.332613 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-9t9sp"
Jan 27 14:31:35 crc kubenswrapper[4698]: I0127 14:31:35.403283 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-qdgff"]
Jan 27 14:31:35 crc kubenswrapper[4698]: I0127 14:31:35.405402 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-qdgff"
Jan 27 14:31:35 crc kubenswrapper[4698]: I0127 14:31:35.409456 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 27 14:31:35 crc kubenswrapper[4698]: E0127 14:31:35.409862 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 14:31:35.909840296 +0000 UTC m=+151.586617771 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 14:31:35 crc kubenswrapper[4698]: I0127 14:31:35.410036 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f2031067-c690-4330-98dc-ff9259ccbb2f-catalog-content\") pod \"community-operators-qdgff\" (UID: \"f2031067-c690-4330-98dc-ff9259ccbb2f\") " pod="openshift-marketplace/community-operators-qdgff"
Jan 27 14:31:35 crc kubenswrapper[4698]: I0127 14:31:35.410174 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2jqjs\" (UniqueName: \"kubernetes.io/projected/f2031067-c690-4330-98dc-ff9259ccbb2f-kube-api-access-2jqjs\") pod \"community-operators-qdgff\" (UID: \"f2031067-c690-4330-98dc-ff9259ccbb2f\") " pod="openshift-marketplace/community-operators-qdgff"
Jan 27 14:31:35 crc kubenswrapper[4698]: I0127 14:31:35.410276 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f2031067-c690-4330-98dc-ff9259ccbb2f-utilities\") pod \"community-operators-qdgff\" (UID: \"f2031067-c690-4330-98dc-ff9259ccbb2f\") " pod="openshift-marketplace/community-operators-qdgff"
Jan 27 14:31:35 crc kubenswrapper[4698]: I0127 14:31:35.410390 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/882a9575-2eeb-4f8e-812c-2419b499a07e-utilities\") pod \"certified-operators-cdp6k\" (UID: \"882a9575-2eeb-4f8e-812c-2419b499a07e\") " pod="openshift-marketplace/certified-operators-cdp6k"
Jan 27 14:31:35 crc kubenswrapper[4698]: I0127 14:31:35.410496 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/882a9575-2eeb-4f8e-812c-2419b499a07e-catalog-content\") pod \"certified-operators-cdp6k\" (UID: \"882a9575-2eeb-4f8e-812c-2419b499a07e\") " pod="openshift-marketplace/certified-operators-cdp6k"
Jan 27 14:31:35 crc kubenswrapper[4698]: I0127 14:31:35.410621 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qtfm9\" (UniqueName: \"kubernetes.io/projected/882a9575-2eeb-4f8e-812c-2419b499a07e-kube-api-access-qtfm9\") pod \"certified-operators-cdp6k\" (UID: \"882a9575-2eeb-4f8e-812c-2419b499a07e\") " pod="openshift-marketplace/certified-operators-cdp6k"
Jan 27 14:31:35 crc kubenswrapper[4698]: I0127 14:31:35.410758 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vz5fp\" (UID: \"fb1d6ca9-58c3-4d0f-9b6f-e9dad08632b8\") " pod="openshift-image-registry/image-registry-697d97f7c8-vz5fp"
Jan 27 14:31:35 crc kubenswrapper[4698]: I0127 14:31:35.411429 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/882a9575-2eeb-4f8e-812c-2419b499a07e-catalog-content\") pod \"certified-operators-cdp6k\" (UID: \"882a9575-2eeb-4f8e-812c-2419b499a07e\") " pod="openshift-marketplace/certified-operators-cdp6k"
Jan 27 14:31:35 crc kubenswrapper[4698]: E0127 14:31:35.411578 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 14:31:35.911565772 +0000 UTC m=+151.588343247 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vz5fp" (UID: "fb1d6ca9-58c3-4d0f-9b6f-e9dad08632b8") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 14:31:35 crc kubenswrapper[4698]: I0127 14:31:35.411651 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/882a9575-2eeb-4f8e-812c-2419b499a07e-utilities\") pod \"certified-operators-cdp6k\" (UID: \"882a9575-2eeb-4f8e-812c-2419b499a07e\") " pod="openshift-marketplace/certified-operators-cdp6k"
Jan 27 14:31:35 crc kubenswrapper[4698]: I0127 14:31:35.415538 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-qdgff"]
Jan 27 14:31:35 crc kubenswrapper[4698]: I0127 14:31:35.431312 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qtfm9\" (UniqueName: \"kubernetes.io/projected/882a9575-2eeb-4f8e-812c-2419b499a07e-kube-api-access-qtfm9\") pod \"certified-operators-cdp6k\" (UID: \"882a9575-2eeb-4f8e-812c-2419b499a07e\") " pod="openshift-marketplace/certified-operators-cdp6k"
Jan 27 14:31:35 crc kubenswrapper[4698]: I0127 14:31:35.512129 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 27 14:31:35 crc kubenswrapper[4698]: E0127 14:31:35.512361 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed.
No retries permitted until 2026-01-27 14:31:36.012340996 +0000 UTC m=+151.689118471 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 14:31:35 crc kubenswrapper[4698]: I0127 14:31:35.512816 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vz5fp\" (UID: \"fb1d6ca9-58c3-4d0f-9b6f-e9dad08632b8\") " pod="openshift-image-registry/image-registry-697d97f7c8-vz5fp" Jan 27 14:31:35 crc kubenswrapper[4698]: I0127 14:31:35.512933 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f2031067-c690-4330-98dc-ff9259ccbb2f-catalog-content\") pod \"community-operators-qdgff\" (UID: \"f2031067-c690-4330-98dc-ff9259ccbb2f\") " pod="openshift-marketplace/community-operators-qdgff" Jan 27 14:31:35 crc kubenswrapper[4698]: I0127 14:31:35.513049 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2jqjs\" (UniqueName: \"kubernetes.io/projected/f2031067-c690-4330-98dc-ff9259ccbb2f-kube-api-access-2jqjs\") pod \"community-operators-qdgff\" (UID: \"f2031067-c690-4330-98dc-ff9259ccbb2f\") " pod="openshift-marketplace/community-operators-qdgff" Jan 27 14:31:35 crc kubenswrapper[4698]: I0127 14:31:35.513122 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f2031067-c690-4330-98dc-ff9259ccbb2f-utilities\") pod \"community-operators-qdgff\" (UID: \"f2031067-c690-4330-98dc-ff9259ccbb2f\") " pod="openshift-marketplace/community-operators-qdgff" Jan 27 14:31:35 crc kubenswrapper[4698]: I0127 14:31:35.513571 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f2031067-c690-4330-98dc-ff9259ccbb2f-utilities\") pod \"community-operators-qdgff\" (UID: \"f2031067-c690-4330-98dc-ff9259ccbb2f\") " pod="openshift-marketplace/community-operators-qdgff" Jan 27 14:31:35 crc kubenswrapper[4698]: E0127 14:31:35.513891 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 14:31:36.013883077 +0000 UTC m=+151.690660542 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vz5fp" (UID: "fb1d6ca9-58c3-4d0f-9b6f-e9dad08632b8") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 14:31:35 crc kubenswrapper[4698]: I0127 14:31:35.514287 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f2031067-c690-4330-98dc-ff9259ccbb2f-catalog-content\") pod \"community-operators-qdgff\" (UID: \"f2031067-c690-4330-98dc-ff9259ccbb2f\") " pod="openshift-marketplace/community-operators-qdgff" Jan 27 14:31:35 crc kubenswrapper[4698]: I0127 14:31:35.524842 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-cdp6k" Jan 27 14:31:35 crc kubenswrapper[4698]: I0127 14:31:35.530349 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2jqjs\" (UniqueName: \"kubernetes.io/projected/f2031067-c690-4330-98dc-ff9259ccbb2f-kube-api-access-2jqjs\") pod \"community-operators-qdgff\" (UID: \"f2031067-c690-4330-98dc-ff9259ccbb2f\") " pod="openshift-marketplace/community-operators-qdgff" Jan 27 14:31:35 crc kubenswrapper[4698]: I0127 14:31:35.614086 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 14:31:35 crc kubenswrapper[4698]: E0127 14:31:35.614243 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 14:31:36.114215079 +0000 UTC m=+151.790992544 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 14:31:35 crc kubenswrapper[4698]: I0127 14:31:35.614824 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vz5fp\" (UID: \"fb1d6ca9-58c3-4d0f-9b6f-e9dad08632b8\") " pod="openshift-image-registry/image-registry-697d97f7c8-vz5fp" Jan 27 14:31:35 crc kubenswrapper[4698]: E0127 14:31:35.615257 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 14:31:36.115240696 +0000 UTC m=+151.792018161 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vz5fp" (UID: "fb1d6ca9-58c3-4d0f-9b6f-e9dad08632b8") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 14:31:35 crc kubenswrapper[4698]: I0127 14:31:35.717186 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 14:31:35 crc kubenswrapper[4698]: E0127 14:31:35.717855 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 14:31:36.217835899 +0000 UTC m=+151.894613364 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 14:31:35 crc kubenswrapper[4698]: I0127 14:31:35.722287 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-qdgff" Jan 27 14:31:35 crc kubenswrapper[4698]: I0127 14:31:35.819106 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vz5fp\" (UID: \"fb1d6ca9-58c3-4d0f-9b6f-e9dad08632b8\") " pod="openshift-image-registry/image-registry-697d97f7c8-vz5fp" Jan 27 14:31:35 crc kubenswrapper[4698]: E0127 14:31:35.819627 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 14:31:36.31961583 +0000 UTC m=+151.996393295 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vz5fp" (UID: "fb1d6ca9-58c3-4d0f-9b6f-e9dad08632b8") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 14:31:35 crc kubenswrapper[4698]: I0127 14:31:35.872974 4698 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-cj2hq container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.7:8443/healthz\": dial tcp 10.217.0.7:8443: connect: connection refused" start-of-body= Jan 27 14:31:35 crc kubenswrapper[4698]: I0127 14:31:35.873050 4698 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-cj2hq" podUID="3d3d75e2-1fec-4458-9cb7-3472250b0b49" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.7:8443/healthz\": dial tcp 10.217.0.7:8443: connect: connection refused" Jan 27 14:31:35 crc kubenswrapper[4698]: I0127 14:31:35.920667 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 14:31:35 crc kubenswrapper[4698]: E0127 14:31:35.920956 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 14:31:36.420940199 +0000 UTC m=+152.097717664 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 14:31:35 crc kubenswrapper[4698]: I0127 14:31:35.921383 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vz5fp\" (UID: \"fb1d6ca9-58c3-4d0f-9b6f-e9dad08632b8\") " pod="openshift-image-registry/image-registry-697d97f7c8-vz5fp" Jan 27 14:31:35 crc kubenswrapper[4698]: E0127 14:31:35.921707 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 14:31:36.421699699 +0000 UTC m=+152.098477164 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vz5fp" (UID: "fb1d6ca9-58c3-4d0f-9b6f-e9dad08632b8") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 14:31:35 crc kubenswrapper[4698]: I0127 14:31:35.963103 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-qtjcb" Jan 27 14:31:35 crc kubenswrapper[4698]: I0127 14:31:35.963224 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-qtjcb" Jan 27 14:31:35 crc kubenswrapper[4698]: I0127 14:31:35.968936 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-qtjcb" Jan 27 14:31:36 crc kubenswrapper[4698]: I0127 14:31:36.023022 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 14:31:36 crc kubenswrapper[4698]: E0127 14:31:36.023558 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 14:31:36.523524952 +0000 UTC m=+152.200302437 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 14:31:36 crc kubenswrapper[4698]: I0127 14:31:36.064176 4698 patch_prober.go:28] interesting pod/router-default-5444994796-8bg4r container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 27 14:31:36 crc kubenswrapper[4698]: [-]has-synced failed: reason withheld Jan 27 14:31:36 crc kubenswrapper[4698]: [+]process-running ok Jan 27 14:31:36 crc kubenswrapper[4698]: healthz check failed Jan 27 14:31:36 crc kubenswrapper[4698]: I0127 14:31:36.064543 4698 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-8bg4r" podUID="79cd2a60-54ce-46a5-96cd-53bb078fa804" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 27 14:31:36 crc kubenswrapper[4698]: I0127 14:31:36.124323 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vz5fp\" (UID: \"fb1d6ca9-58c3-4d0f-9b6f-e9dad08632b8\") " pod="openshift-image-registry/image-registry-697d97f7c8-vz5fp" Jan 27 14:31:36 crc kubenswrapper[4698]: E0127 14:31:36.125011 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 14:31:36.624996034 +0000 UTC m=+152.301773499 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vz5fp" (UID: "fb1d6ca9-58c3-4d0f-9b6f-e9dad08632b8") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 14:31:36 crc kubenswrapper[4698]: I0127 14:31:36.156120 4698 patch_prober.go:28] interesting pod/apiserver-76f77b778f-z5f9l container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="Get \"https://10.217.0.5:8443/livez\": dial tcp 10.217.0.5:8443: connect: connection refused" start-of-body= Jan 27 14:31:36 crc kubenswrapper[4698]: I0127 14:31:36.156382 4698 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-76f77b778f-z5f9l" podUID="f57848ff-da41-4c6a-9586-c57676b73c90" containerName="openshift-apiserver" probeResult="failure" output="Get \"https://10.217.0.5:8443/livez\": dial tcp 10.217.0.5:8443: connect: connection refused" Jan 27 14:31:36 crc kubenswrapper[4698]: I0127 14:31:36.208460 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-qtjcb" Jan 27 14:31:36 crc kubenswrapper[4698]: I0127 14:31:36.225484 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 14:31:36 crc kubenswrapper[4698]: E0127 14:31:36.225859 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 14:31:36.725838041 +0000 UTC m=+152.402615506 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 14:31:36 crc kubenswrapper[4698]: I0127 14:31:36.328037 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vz5fp\" (UID: \"fb1d6ca9-58c3-4d0f-9b6f-e9dad08632b8\") " pod="openshift-image-registry/image-registry-697d97f7c8-vz5fp" Jan 27 14:31:36 crc kubenswrapper[4698]: E0127 14:31:36.328390 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 14:31:36.828375832 +0000 UTC m=+152.505153297 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vz5fp" (UID: "fb1d6ca9-58c3-4d0f-9b6f-e9dad08632b8") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 14:31:36 crc kubenswrapper[4698]: I0127 14:31:36.429109 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 14:31:36 crc kubenswrapper[4698]: E0127 14:31:36.429846 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 14:31:36.929830025 +0000 UTC m=+152.606607490 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 14:31:36 crc kubenswrapper[4698]: I0127 14:31:36.430298 4698 patch_prober.go:28] interesting pod/downloads-7954f5f757-bdrpp container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.10:8080/\": dial tcp 10.217.0.10:8080: connect: connection refused" start-of-body= Jan 27 14:31:36 crc kubenswrapper[4698]: I0127 14:31:36.430342 4698 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-bdrpp" podUID="64b274f6-5293-4c0e-a51a-dca8518c5a40" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.10:8080/\": dial tcp 10.217.0.10:8080: connect: connection refused" Jan 27 14:31:36 crc kubenswrapper[4698]: I0127 14:31:36.430298 4698 patch_prober.go:28] interesting pod/downloads-7954f5f757-bdrpp container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.10:8080/\": dial tcp 10.217.0.10:8080: connect: connection refused" start-of-body= Jan 27 14:31:36 crc kubenswrapper[4698]: I0127 14:31:36.430454 4698 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-bdrpp" podUID="64b274f6-5293-4c0e-a51a-dca8518c5a40" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.10:8080/\": dial tcp 10.217.0.10:8080: connect: connection refused" Jan 27 14:31:36 crc kubenswrapper[4698]: I0127 14:31:36.536811 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vz5fp\" (UID: \"fb1d6ca9-58c3-4d0f-9b6f-e9dad08632b8\") " pod="openshift-image-registry/image-registry-697d97f7c8-vz5fp" Jan 27 
14:31:36 crc kubenswrapper[4698]: E0127 14:31:36.537266 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 14:31:37.037248474 +0000 UTC m=+152.714025939 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vz5fp" (UID: "fb1d6ca9-58c3-4d0f-9b6f-e9dad08632b8") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 14:31:36 crc kubenswrapper[4698]: I0127 14:31:36.638584 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 14:31:36 crc kubenswrapper[4698]: E0127 14:31:36.638924 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 14:31:37.138892161 +0000 UTC m=+152.815669636 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 14:31:36 crc kubenswrapper[4698]: I0127 14:31:36.639404 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vz5fp\" (UID: \"fb1d6ca9-58c3-4d0f-9b6f-e9dad08632b8\") " pod="openshift-image-registry/image-registry-697d97f7c8-vz5fp" Jan 27 14:31:36 crc kubenswrapper[4698]: E0127 14:31:36.639812 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 14:31:37.139800295 +0000 UTC m=+152.816577760 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vz5fp" (UID: "fb1d6ca9-58c3-4d0f-9b6f-e9dad08632b8") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 14:31:36 crc kubenswrapper[4698]: E0127 14:31:36.741415 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 14:31:37.241399711 +0000 UTC m=+152.918177176 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 14:31:36 crc kubenswrapper[4698]: I0127 14:31:36.741334 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 14:31:36 crc kubenswrapper[4698]: I0127 14:31:36.742136 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vz5fp\" (UID: \"fb1d6ca9-58c3-4d0f-9b6f-e9dad08632b8\") " pod="openshift-image-registry/image-registry-697d97f7c8-vz5fp" Jan 27 14:31:36 crc kubenswrapper[4698]: E0127 14:31:36.742629 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 14:31:37.242619323 +0000 UTC m=+152.919396788 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vz5fp" (UID: "fb1d6ca9-58c3-4d0f-9b6f-e9dad08632b8") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 14:31:36 crc kubenswrapper[4698]: I0127 14:31:36.843228 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 14:31:36 crc kubenswrapper[4698]: E0127 14:31:36.843657 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 14:31:37.343617573 +0000 UTC m=+153.020395038 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 14:31:36 crc kubenswrapper[4698]: I0127 14:31:36.945365 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vz5fp\" (UID: \"fb1d6ca9-58c3-4d0f-9b6f-e9dad08632b8\") " pod="openshift-image-registry/image-registry-697d97f7c8-vz5fp" Jan 27 14:31:36 crc kubenswrapper[4698]: E0127 14:31:36.946021 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 14:31:37.446006991 +0000 UTC m=+153.122784456 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vz5fp" (UID: "fb1d6ca9-58c3-4d0f-9b6f-e9dad08632b8") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 14:31:37 crc kubenswrapper[4698]: I0127 14:31:37.004693 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-f9d7485db-cvnrn" Jan 27 14:31:37 crc kubenswrapper[4698]: I0127 14:31:37.004978 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-f9d7485db-cvnrn" Jan 27 14:31:37 crc kubenswrapper[4698]: I0127 14:31:37.006619 4698 patch_prober.go:28] interesting pod/console-f9d7485db-cvnrn container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.19:8443/health\": dial tcp 10.217.0.19:8443: connect: connection refused" start-of-body= Jan 27 14:31:37 crc kubenswrapper[4698]: I0127 14:31:37.006713 4698 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-cvnrn" podUID="12b42d9a-df65-4a89-8961-1fa7f9b8a14b" containerName="console" probeResult="failure" output="Get \"https://10.217.0.19:8443/health\": dial tcp 10.217.0.19:8443: connect: connection refused" Jan 27 14:31:37 crc kubenswrapper[4698]: I0127 14:31:37.010207 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-gxkvv"] Jan 27 14:31:37 crc kubenswrapper[4698]: I0127 14:31:37.011530 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-gxkvv" Jan 27 14:31:37 crc kubenswrapper[4698]: I0127 14:31:37.017289 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Jan 27 14:31:37 crc kubenswrapper[4698]: I0127 14:31:37.029182 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-gxkvv"] Jan 27 14:31:37 crc kubenswrapper[4698]: I0127 14:31:37.046431 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 14:31:37 crc kubenswrapper[4698]: I0127 14:31:37.046829 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e47fa643-2257-49e0-8b1e-77f9d3165c0e-utilities\") pod \"redhat-marketplace-gxkvv\" (UID: \"e47fa643-2257-49e0-8b1e-77f9d3165c0e\") " pod="openshift-marketplace/redhat-marketplace-gxkvv" Jan 27 14:31:37 crc kubenswrapper[4698]: I0127 14:31:37.046939 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e47fa643-2257-49e0-8b1e-77f9d3165c0e-catalog-content\") pod \"redhat-marketplace-gxkvv\" (UID: \"e47fa643-2257-49e0-8b1e-77f9d3165c0e\") " pod="openshift-marketplace/redhat-marketplace-gxkvv" Jan 27 14:31:37 crc kubenswrapper[4698]: I0127 14:31:37.046977 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6dd6q\" (UniqueName: \"kubernetes.io/projected/e47fa643-2257-49e0-8b1e-77f9d3165c0e-kube-api-access-6dd6q\") pod \"redhat-marketplace-gxkvv\" (UID: \"e47fa643-2257-49e0-8b1e-77f9d3165c0e\") " pod="openshift-marketplace/redhat-marketplace-gxkvv" Jan 27 14:31:37 crc kubenswrapper[4698]: E0127 14:31:37.047358 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 14:31:37.54733255 +0000 UTC m=+153.224110015 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 14:31:37 crc kubenswrapper[4698]: I0127 14:31:37.061090 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5444994796-8bg4r" Jan 27 14:31:37 crc kubenswrapper[4698]: I0127 14:31:37.064572 4698 patch_prober.go:28] interesting pod/router-default-5444994796-8bg4r container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 27 14:31:37 crc kubenswrapper[4698]: [-]has-synced failed: reason withheld Jan 27 14:31:37 crc kubenswrapper[4698]: [+]process-running ok Jan 27 14:31:37 crc kubenswrapper[4698]: healthz check failed Jan 27 14:31:37 crc kubenswrapper[4698]: I0127 14:31:37.064624 4698 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-8bg4r" podUID="79cd2a60-54ce-46a5-96cd-53bb078fa804" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 27 14:31:37 crc kubenswrapper[4698]: I0127 14:31:37.147988 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e47fa643-2257-49e0-8b1e-77f9d3165c0e-catalog-content\") pod \"redhat-marketplace-gxkvv\" (UID: \"e47fa643-2257-49e0-8b1e-77f9d3165c0e\") " pod="openshift-marketplace/redhat-marketplace-gxkvv" Jan 27 14:31:37 crc kubenswrapper[4698]: I0127 14:31:37.148229 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6dd6q\" (UniqueName: \"kubernetes.io/projected/e47fa643-2257-49e0-8b1e-77f9d3165c0e-kube-api-access-6dd6q\") pod \"redhat-marketplace-gxkvv\" (UID: \"e47fa643-2257-49e0-8b1e-77f9d3165c0e\") " pod="openshift-marketplace/redhat-marketplace-gxkvv" Jan 27 14:31:37 crc kubenswrapper[4698]: I0127 14:31:37.148399 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e47fa643-2257-49e0-8b1e-77f9d3165c0e-utilities\") pod \"redhat-marketplace-gxkvv\" (UID: \"e47fa643-2257-49e0-8b1e-77f9d3165c0e\") " pod="openshift-marketplace/redhat-marketplace-gxkvv" Jan 27 14:31:37 crc kubenswrapper[4698]: I0127 14:31:37.148600 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e47fa643-2257-49e0-8b1e-77f9d3165c0e-catalog-content\") pod \"redhat-marketplace-gxkvv\" (UID: \"e47fa643-2257-49e0-8b1e-77f9d3165c0e\") " pod="openshift-marketplace/redhat-marketplace-gxkvv" Jan 27 14:31:37 crc kubenswrapper[4698]: I0127 14:31:37.148814 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e47fa643-2257-49e0-8b1e-77f9d3165c0e-utilities\") pod \"redhat-marketplace-gxkvv\" (UID: \"e47fa643-2257-49e0-8b1e-77f9d3165c0e\") " pod="openshift-marketplace/redhat-marketplace-gxkvv" Jan 27 14:31:37 crc kubenswrapper[4698]: I0127 14:31:37.149350 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vz5fp\" (UID: \"fb1d6ca9-58c3-4d0f-9b6f-e9dad08632b8\") " pod="openshift-image-registry/image-registry-697d97f7c8-vz5fp" Jan 27 14:31:37 crc kubenswrapper[4698]: E0127 14:31:37.149685 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 14:31:37.649672726 +0000 UTC m=+153.326450191 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vz5fp" (UID: "fb1d6ca9-58c3-4d0f-9b6f-e9dad08632b8") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 14:31:37 crc kubenswrapper[4698]: I0127 14:31:37.168710 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6dd6q\" (UniqueName: \"kubernetes.io/projected/e47fa643-2257-49e0-8b1e-77f9d3165c0e-kube-api-access-6dd6q\") pod \"redhat-marketplace-gxkvv\" (UID: \"e47fa643-2257-49e0-8b1e-77f9d3165c0e\") " pod="openshift-marketplace/redhat-marketplace-gxkvv" Jan 27 14:31:37 crc kubenswrapper[4698]: I0127 14:31:37.208417 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Jan 27 14:31:37 crc kubenswrapper[4698]: I0127 14:31:37.209071 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 27 14:31:37 crc kubenswrapper[4698]: I0127 14:31:37.212170 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Jan 27 14:31:37 crc kubenswrapper[4698]: I0127 14:31:37.212841 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Jan 27 14:31:37 crc kubenswrapper[4698]: I0127 14:31:37.219666 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Jan 27 14:31:37 crc kubenswrapper[4698]: I0127 14:31:37.242123 4698 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-m8slw container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.12:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 14:31:37 crc kubenswrapper[4698]: I0127 14:31:37.242180 4698 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-m8slw" podUID="f4023d55-2b87-419a-be3e-bab987ba0841" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.12:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 27 14:31:37 crc kubenswrapper[4698]: I0127 14:31:37.242398 4698 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-m8slw container/openshift-config-operator namespace/openshift-config-operator: Liveness probe status=failure output="Get \"https://10.217.0.12:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 14:31:37 crc kubenswrapper[4698]: I0127 14:31:37.242414 4698 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-m8slw" podUID="f4023d55-2b87-419a-be3e-bab987ba0841" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.12:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 27 14:31:37 crc kubenswrapper[4698]: I0127 14:31:37.251187 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 14:31:37 crc kubenswrapper[4698]: I0127 14:31:37.251362 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/31a410d2-796f-41c9-a0ed-2a214fd5d560-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"31a410d2-796f-41c9-a0ed-2a214fd5d560\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 27 14:31:37 crc kubenswrapper[4698]: I0127 14:31:37.251432 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/31a410d2-796f-41c9-a0ed-2a214fd5d560-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"31a410d2-796f-41c9-a0ed-2a214fd5d560\") " 
pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 27 14:31:37 crc kubenswrapper[4698]: E0127 14:31:37.251756 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 14:31:37.751737685 +0000 UTC m=+153.428515140 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 14:31:37 crc kubenswrapper[4698]: I0127 14:31:37.328334 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-gxkvv" Jan 27 14:31:37 crc kubenswrapper[4698]: I0127 14:31:37.353083 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vz5fp\" (UID: \"fb1d6ca9-58c3-4d0f-9b6f-e9dad08632b8\") " pod="openshift-image-registry/image-registry-697d97f7c8-vz5fp" Jan 27 14:31:37 crc kubenswrapper[4698]: I0127 14:31:37.353150 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/31a410d2-796f-41c9-a0ed-2a214fd5d560-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"31a410d2-796f-41c9-a0ed-2a214fd5d560\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 27 14:31:37 crc kubenswrapper[4698]: I0127 14:31:37.353175 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/31a410d2-796f-41c9-a0ed-2a214fd5d560-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"31a410d2-796f-41c9-a0ed-2a214fd5d560\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 27 14:31:37 crc kubenswrapper[4698]: I0127 14:31:37.353549 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/31a410d2-796f-41c9-a0ed-2a214fd5d560-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"31a410d2-796f-41c9-a0ed-2a214fd5d560\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 27 14:31:37 crc kubenswrapper[4698]: E0127 14:31:37.354201 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 14:31:37.854178303 +0000 UTC m=+153.530955858 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vz5fp" (UID: "fb1d6ca9-58c3-4d0f-9b6f-e9dad08632b8") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 14:31:37 crc kubenswrapper[4698]: I0127 14:31:37.373216 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/31a410d2-796f-41c9-a0ed-2a214fd5d560-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"31a410d2-796f-41c9-a0ed-2a214fd5d560\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 27 14:31:37 crc kubenswrapper[4698]: I0127 14:31:37.390436 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-c49dc" Jan 27 14:31:37 crc kubenswrapper[4698]: I0127 14:31:37.404848 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-462jr"] Jan 27 14:31:37 crc kubenswrapper[4698]: I0127 14:31:37.406381 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-462jr" Jan 27 14:31:37 crc kubenswrapper[4698]: I0127 14:31:37.419488 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-462jr"] Jan 27 14:31:37 crc kubenswrapper[4698]: I0127 14:31:37.425317 4698 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-kwgll container/marketplace-operator namespace/openshift-marketplace: Liveness probe status=failure output="Get \"http://10.217.0.30:8080/healthz\": dial tcp 10.217.0.30:8080: connect: connection refused" start-of-body= Jan 27 14:31:37 crc kubenswrapper[4698]: I0127 14:31:37.425389 4698 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-marketplace/marketplace-operator-79b997595-kwgll" podUID="537d845d-d98b-4168-b87b-d0231602f4e9" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.30:8080/healthz\": dial tcp 10.217.0.30:8080: connect: connection refused" Jan 27 14:31:37 crc kubenswrapper[4698]: I0127 14:31:37.425607 4698 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-kwgll container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.30:8080/healthz\": dial tcp 10.217.0.30:8080: connect: connection refused" start-of-body= Jan 27 14:31:37 crc kubenswrapper[4698]: I0127 14:31:37.425824 4698 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-kwgll" podUID="537d845d-d98b-4168-b87b-d0231602f4e9" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.30:8080/healthz\": dial tcp 10.217.0.30:8080: connect: connection refused" Jan 27 14:31:37 crc kubenswrapper[4698]: I0127 14:31:37.454576 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 14:31:37 crc kubenswrapper[4698]: I0127 14:31:37.454842 4698 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/530c77f2-b81c-4835-989c-57b155f04d2c-catalog-content\") pod \"redhat-marketplace-462jr\" (UID: \"530c77f2-b81c-4835-989c-57b155f04d2c\") " pod="openshift-marketplace/redhat-marketplace-462jr" Jan 27 14:31:37 crc kubenswrapper[4698]: I0127 14:31:37.454912 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/530c77f2-b81c-4835-989c-57b155f04d2c-utilities\") pod \"redhat-marketplace-462jr\" (UID: \"530c77f2-b81c-4835-989c-57b155f04d2c\") " pod="openshift-marketplace/redhat-marketplace-462jr" Jan 27 14:31:37 crc kubenswrapper[4698]: I0127 14:31:37.454954 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mrklc\" (UniqueName: \"kubernetes.io/projected/530c77f2-b81c-4835-989c-57b155f04d2c-kube-api-access-mrklc\") pod \"redhat-marketplace-462jr\" (UID: \"530c77f2-b81c-4835-989c-57b155f04d2c\") " pod="openshift-marketplace/redhat-marketplace-462jr" Jan 27 14:31:37 crc kubenswrapper[4698]: E0127 14:31:37.455682 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 14:31:37.955615985 +0000 UTC m=+153.632393440 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 14:31:37 crc kubenswrapper[4698]: I0127 14:31:37.483314 4698 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-x7rj5 container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.15:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 14:31:37 crc kubenswrapper[4698]: I0127 14:31:37.483391 4698 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-x7rj5" podUID="7a699460-e5aa-401d-b2c4-003604099924" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.15:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 27 14:31:37 crc kubenswrapper[4698]: I0127 14:31:37.528473 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 27 14:31:37 crc kubenswrapper[4698]: I0127 14:31:37.556612 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/530c77f2-b81c-4835-989c-57b155f04d2c-catalog-content\") pod \"redhat-marketplace-462jr\" (UID: \"530c77f2-b81c-4835-989c-57b155f04d2c\") " pod="openshift-marketplace/redhat-marketplace-462jr" Jan 27 14:31:37 crc kubenswrapper[4698]: I0127 14:31:37.556944 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/530c77f2-b81c-4835-989c-57b155f04d2c-utilities\") pod \"redhat-marketplace-462jr\" (UID: \"530c77f2-b81c-4835-989c-57b155f04d2c\") " pod="openshift-marketplace/redhat-marketplace-462jr" Jan 27 14:31:37 crc kubenswrapper[4698]: I0127 14:31:37.557046 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mrklc\" (UniqueName: \"kubernetes.io/projected/530c77f2-b81c-4835-989c-57b155f04d2c-kube-api-access-mrklc\") pod \"redhat-marketplace-462jr\" (UID: \"530c77f2-b81c-4835-989c-57b155f04d2c\") " pod="openshift-marketplace/redhat-marketplace-462jr" Jan 27 14:31:37 crc kubenswrapper[4698]: I0127 14:31:37.557158 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vz5fp\" (UID: \"fb1d6ca9-58c3-4d0f-9b6f-e9dad08632b8\") " pod="openshift-image-registry/image-registry-697d97f7c8-vz5fp" Jan 27 14:31:37 crc kubenswrapper[4698]: I0127 14:31:37.557191 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/530c77f2-b81c-4835-989c-57b155f04d2c-catalog-content\") pod \"redhat-marketplace-462jr\" (UID: \"530c77f2-b81c-4835-989c-57b155f04d2c\") " pod="openshift-marketplace/redhat-marketplace-462jr" Jan 27 14:31:37 crc kubenswrapper[4698]: I0127 14:31:37.557396 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/530c77f2-b81c-4835-989c-57b155f04d2c-utilities\") pod \"redhat-marketplace-462jr\" (UID: \"530c77f2-b81c-4835-989c-57b155f04d2c\") " pod="openshift-marketplace/redhat-marketplace-462jr" Jan 27 14:31:37 crc kubenswrapper[4698]: E0127 14:31:37.557571 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 14:31:38.05755149 +0000 UTC m=+153.734329005 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vz5fp" (UID: "fb1d6ca9-58c3-4d0f-9b6f-e9dad08632b8") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 14:31:37 crc kubenswrapper[4698]: I0127 14:31:37.584476 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mrklc\" (UniqueName: \"kubernetes.io/projected/530c77f2-b81c-4835-989c-57b155f04d2c-kube-api-access-mrklc\") pod \"redhat-marketplace-462jr\" (UID: \"530c77f2-b81c-4835-989c-57b155f04d2c\") " pod="openshift-marketplace/redhat-marketplace-462jr" Jan 27 14:31:37 crc kubenswrapper[4698]: I0127 14:31:37.658507 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 14:31:37 crc kubenswrapper[4698]: E0127 14:31:37.658704 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 14:31:38.158672295 +0000 UTC m=+153.835449760 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 14:31:37 crc kubenswrapper[4698]: I0127 14:31:37.658900 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vz5fp\" (UID: \"fb1d6ca9-58c3-4d0f-9b6f-e9dad08632b8\") " pod="openshift-image-registry/image-registry-697d97f7c8-vz5fp" Jan 27 14:31:37 crc kubenswrapper[4698]: E0127 14:31:37.659241 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 14:31:38.159233679 +0000 UTC m=+153.836011144 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vz5fp" (UID: "fb1d6ca9-58c3-4d0f-9b6f-e9dad08632b8") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 14:31:37 crc kubenswrapper[4698]: I0127 14:31:37.726902 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-462jr" Jan 27 14:31:37 crc kubenswrapper[4698]: I0127 14:31:37.765607 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 14:31:37 crc kubenswrapper[4698]: E0127 14:31:37.765816 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 14:31:38.265778635 +0000 UTC m=+153.942556110 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 14:31:37 crc kubenswrapper[4698]: I0127 14:31:37.765991 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vz5fp\" (UID: \"fb1d6ca9-58c3-4d0f-9b6f-e9dad08632b8\") " pod="openshift-image-registry/image-registry-697d97f7c8-vz5fp" Jan 27 14:31:37 crc kubenswrapper[4698]: E0127 14:31:37.766343 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 14:31:38.266327299 +0000 UTC m=+153.943104754 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vz5fp" (UID: "fb1d6ca9-58c3-4d0f-9b6f-e9dad08632b8") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 14:31:37 crc kubenswrapper[4698]: I0127 14:31:37.866975 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 14:31:37 crc kubenswrapper[4698]: E0127 14:31:37.867162 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 14:31:38.367144575 +0000 UTC m=+154.043922040 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 14:31:37 crc kubenswrapper[4698]: I0127 14:31:37.867398 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vz5fp\" (UID: \"fb1d6ca9-58c3-4d0f-9b6f-e9dad08632b8\") " pod="openshift-image-registry/image-registry-697d97f7c8-vz5fp" Jan 27 14:31:37 crc kubenswrapper[4698]: E0127 14:31:37.867779 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 14:31:38.367771262 +0000 UTC m=+154.044548727 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vz5fp" (UID: "fb1d6ca9-58c3-4d0f-9b6f-e9dad08632b8") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 14:31:37 crc kubenswrapper[4698]: I0127 14:31:37.968291 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 14:31:37 crc kubenswrapper[4698]: E0127 14:31:37.968431 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 14:31:38.468404773 +0000 UTC m=+154.145182238 (durationBeforeRetry 500ms). 
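Each of these failures ends with "No retries permitted until <t> (durationBeforeRetry 500ms)": the operation is parked and retried after a delay rather than in a tight loop. Only the 500ms base appears in this log; the doubling factor and the cap in the sketch below are assumptions used to illustrate the exponential shape of that per-operation backoff.

    // backoff.go: illustrative sketch of kubelet-style per-operation retry delays.
    // Only the 500ms base is taken from this log; the doubling factor and the
    // 2-minute cap are assumptions for illustration.
    package main

    import (
        "fmt"
        "time"
    )

    type expBackoff struct {
        last time.Duration // delay used for the most recent failure
        base time.Duration // first delay after the initial failure
        max  time.Duration // ceiling the delay never exceeds
    }

    // next returns how long to wait before the operation may be retried,
    // doubling after every consecutive failure up to the cap.
    func (b *expBackoff) next() time.Duration {
        if b.last == 0 {
            b.last = b.base
        } else {
            b.last *= 2
            if b.last > b.max {
                b.last = b.max
            }
        }
        return b.last
    }

    func main() {
        b := &expBackoff{base: 500 * time.Millisecond, max: 2 * time.Minute}
        now := time.Now()
        for i := 1; i <= 6; i++ {
            d := b.next()
            fmt.Printf("failure %d: no retries permitted until %s (durationBeforeRetry %s)\n",
                i, now.Add(d).Format("2006-01-02 15:04:05.000 -0700 MST"), d)
            now = now.Add(d)
        }
    }

In this capture the delay never grows past 500ms because each retry is tracked as a fresh attempt at roughly 100ms reconciler intervals, so the backoff keeps restarting at its base.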
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 14:31:37 crc kubenswrapper[4698]: I0127 14:31:37.968595 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vz5fp\" (UID: \"fb1d6ca9-58c3-4d0f-9b6f-e9dad08632b8\") " pod="openshift-image-registry/image-registry-697d97f7c8-vz5fp" Jan 27 14:31:37 crc kubenswrapper[4698]: E0127 14:31:37.968945 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 14:31:38.468937417 +0000 UTC m=+154.145714882 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vz5fp" (UID: "fb1d6ca9-58c3-4d0f-9b6f-e9dad08632b8") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 14:31:38 crc kubenswrapper[4698]: I0127 14:31:38.010167 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-9m8xd"] Jan 27 14:31:38 crc kubenswrapper[4698]: I0127 14:31:38.011216 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-9m8xd" Jan 27 14:31:38 crc kubenswrapper[4698]: I0127 14:31:38.012905 4698 patch_prober.go:28] interesting pod/console-operator-58897d9998-bz9jw container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.20:8443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 14:31:38 crc kubenswrapper[4698]: I0127 14:31:38.012976 4698 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-bz9jw" podUID="13b718e9-6bf1-4e81-91ce-feea3116fd97" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.20:8443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 27 14:31:38 crc kubenswrapper[4698]: I0127 14:31:38.012922 4698 patch_prober.go:28] interesting pod/console-operator-58897d9998-bz9jw container/console-operator namespace/openshift-console-operator: Liveness probe status=failure output="Get \"https://10.217.0.20:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 14:31:38 crc kubenswrapper[4698]: I0127 14:31:38.013166 4698 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console-operator/console-operator-58897d9998-bz9jw" podUID="13b718e9-6bf1-4e81-91ce-feea3116fd97" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.20:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 27 14:31:38 crc kubenswrapper[4698]: I0127 14:31:38.015254 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Jan 27 14:31:38 crc kubenswrapper[4698]: I0127 14:31:38.016554 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-9m8xd"] Jan 27 14:31:38 crc kubenswrapper[4698]: I0127 14:31:38.063594 4698 patch_prober.go:28] interesting pod/router-default-5444994796-8bg4r container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 27 14:31:38 crc kubenswrapper[4698]: [-]has-synced failed: reason withheld Jan 27 14:31:38 crc kubenswrapper[4698]: [+]process-running ok Jan 27 14:31:38 crc kubenswrapper[4698]: healthz check failed Jan 27 14:31:38 crc kubenswrapper[4698]: I0127 14:31:38.063727 4698 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-8bg4r" podUID="79cd2a60-54ce-46a5-96cd-53bb078fa804" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 27 14:31:38 crc kubenswrapper[4698]: I0127 14:31:38.070172 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 14:31:38 crc kubenswrapper[4698]: E0127 14:31:38.070354 4698 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 14:31:38.570325408 +0000 UTC m=+154.247102873 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 14:31:38 crc kubenswrapper[4698]: I0127 14:31:38.070425 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zmm9n\" (UniqueName: \"kubernetes.io/projected/d62f9471-7fdf-459f-8e3b-cadad2b6a542-kube-api-access-zmm9n\") pod \"redhat-operators-9m8xd\" (UID: \"d62f9471-7fdf-459f-8e3b-cadad2b6a542\") " pod="openshift-marketplace/redhat-operators-9m8xd" Jan 27 14:31:38 crc kubenswrapper[4698]: I0127 14:31:38.070565 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d62f9471-7fdf-459f-8e3b-cadad2b6a542-utilities\") pod \"redhat-operators-9m8xd\" (UID: \"d62f9471-7fdf-459f-8e3b-cadad2b6a542\") " pod="openshift-marketplace/redhat-operators-9m8xd" Jan 27 14:31:38 crc kubenswrapper[4698]: I0127 14:31:38.070691 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d62f9471-7fdf-459f-8e3b-cadad2b6a542-catalog-content\") pod \"redhat-operators-9m8xd\" (UID: \"d62f9471-7fdf-459f-8e3b-cadad2b6a542\") " pod="openshift-marketplace/redhat-operators-9m8xd" Jan 27 14:31:38 crc kubenswrapper[4698]: I0127 14:31:38.070733 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vz5fp\" (UID: \"fb1d6ca9-58c3-4d0f-9b6f-e9dad08632b8\") " pod="openshift-image-registry/image-registry-697d97f7c8-vz5fp" Jan 27 14:31:38 crc kubenswrapper[4698]: E0127 14:31:38.071131 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 14:31:38.571116428 +0000 UTC m=+154.247893913 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vz5fp" (UID: "fb1d6ca9-58c3-4d0f-9b6f-e9dad08632b8") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 14:31:38 crc kubenswrapper[4698]: I0127 14:31:38.172027 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 14:31:38 crc kubenswrapper[4698]: E0127 14:31:38.172245 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 14:31:38.672211192 +0000 UTC m=+154.348988657 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 14:31:38 crc kubenswrapper[4698]: I0127 14:31:38.172376 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d62f9471-7fdf-459f-8e3b-cadad2b6a542-utilities\") pod \"redhat-operators-9m8xd\" (UID: \"d62f9471-7fdf-459f-8e3b-cadad2b6a542\") " pod="openshift-marketplace/redhat-operators-9m8xd" Jan 27 14:31:38 crc kubenswrapper[4698]: I0127 14:31:38.172436 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d62f9471-7fdf-459f-8e3b-cadad2b6a542-catalog-content\") pod \"redhat-operators-9m8xd\" (UID: \"d62f9471-7fdf-459f-8e3b-cadad2b6a542\") " pod="openshift-marketplace/redhat-operators-9m8xd" Jan 27 14:31:38 crc kubenswrapper[4698]: I0127 14:31:38.172478 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vz5fp\" (UID: \"fb1d6ca9-58c3-4d0f-9b6f-e9dad08632b8\") " pod="openshift-image-registry/image-registry-697d97f7c8-vz5fp" Jan 27 14:31:38 crc kubenswrapper[4698]: I0127 14:31:38.172526 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zmm9n\" (UniqueName: \"kubernetes.io/projected/d62f9471-7fdf-459f-8e3b-cadad2b6a542-kube-api-access-zmm9n\") pod \"redhat-operators-9m8xd\" (UID: \"d62f9471-7fdf-459f-8e3b-cadad2b6a542\") " pod="openshift-marketplace/redhat-operators-9m8xd" Jan 27 14:31:38 crc kubenswrapper[4698]: I0127 14:31:38.172899 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/d62f9471-7fdf-459f-8e3b-cadad2b6a542-utilities\") pod \"redhat-operators-9m8xd\" (UID: \"d62f9471-7fdf-459f-8e3b-cadad2b6a542\") " pod="openshift-marketplace/redhat-operators-9m8xd" Jan 27 14:31:38 crc kubenswrapper[4698]: E0127 14:31:38.173049 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 14:31:38.673035724 +0000 UTC m=+154.349813179 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vz5fp" (UID: "fb1d6ca9-58c3-4d0f-9b6f-e9dad08632b8") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 14:31:38 crc kubenswrapper[4698]: I0127 14:31:38.173070 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d62f9471-7fdf-459f-8e3b-cadad2b6a542-catalog-content\") pod \"redhat-operators-9m8xd\" (UID: \"d62f9471-7fdf-459f-8e3b-cadad2b6a542\") " pod="openshift-marketplace/redhat-operators-9m8xd" Jan 27 14:31:38 crc kubenswrapper[4698]: I0127 14:31:38.189142 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zmm9n\" (UniqueName: \"kubernetes.io/projected/d62f9471-7fdf-459f-8e3b-cadad2b6a542-kube-api-access-zmm9n\") pod \"redhat-operators-9m8xd\" (UID: \"d62f9471-7fdf-459f-8e3b-cadad2b6a542\") " pod="openshift-marketplace/redhat-operators-9m8xd" Jan 27 14:31:38 crc kubenswrapper[4698]: I0127 14:31:38.273677 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 14:31:38 crc kubenswrapper[4698]: E0127 14:31:38.273883 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 14:31:38.77385045 +0000 UTC m=+154.450627925 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 14:31:38 crc kubenswrapper[4698]: I0127 14:31:38.274762 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vz5fp\" (UID: \"fb1d6ca9-58c3-4d0f-9b6f-e9dad08632b8\") " pod="openshift-image-registry/image-registry-697d97f7c8-vz5fp" Jan 27 14:31:38 crc kubenswrapper[4698]: E0127 14:31:38.275473 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 14:31:38.775466062 +0000 UTC m=+154.452243527 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vz5fp" (UID: "fb1d6ca9-58c3-4d0f-9b6f-e9dad08632b8") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 14:31:38 crc kubenswrapper[4698]: I0127 14:31:38.329955 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-9m8xd" Jan 27 14:31:38 crc kubenswrapper[4698]: I0127 14:31:38.369003 4698 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-pcr8w container/packageserver namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"https://10.217.0.26:5443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 14:31:38 crc kubenswrapper[4698]: I0127 14:31:38.369039 4698 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-pcr8w container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.26:5443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 14:31:38 crc kubenswrapper[4698]: I0127 14:31:38.369072 4698 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-pcr8w" podUID="ff0c6e82-2e72-4776-b801-2cf427b72696" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.26:5443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 27 14:31:38 crc kubenswrapper[4698]: I0127 14:31:38.369072 4698 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-pcr8w" podUID="ff0c6e82-2e72-4776-b801-2cf427b72696" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.26:5443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 27 14:31:38 crc kubenswrapper[4698]: I0127 14:31:38.375923 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 14:31:38 crc kubenswrapper[4698]: E0127 14:31:38.376080 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 14:31:38.876055432 +0000 UTC m=+154.552832897 (durationBeforeRetry 500ms). 
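The probe failures interleaved here are plain HTTP GETs that either time out awaiting response headers or are refused outright while the target containers come up. They can be approximated outside kubelet with a short client timeout; the sketch below is a hedged approximation, not kubelet's prober implementation. The URL is copied from the packageserver entry above, the 1s timeout mirrors the default probe timeoutSeconds, and skipping certificate verification matches how kubelet probes HTTPS endpoints.

    // probecheck.go: approximate a kubelet HTTP probe by hand.
    // Hedged approximation, not kubelet's prober code.
    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "time"
    )

    func probe(url string) error {
        client := &http.Client{
            Timeout: 1 * time.Second, // the source of "Client.Timeout exceeded while awaiting headers"
            Transport: &http.Transport{
                // kubelet does not verify the serving certificate for HTTPS probes.
                TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
            },
        }
        resp, err := client.Get(url)
        if err != nil {
            return err // timeouts and "connect: connection refused" surface here
        }
        defer resp.Body.Close()
        // kubelet counts 2xx and 3xx responses as success, anything else as failure.
        if resp.StatusCode < 200 || resp.StatusCode >= 400 {
            return fmt.Errorf("HTTP probe failed with statuscode: %d", resp.StatusCode)
        }
        return nil
    }

    func main() {
        if err := probe("https://10.217.0.26:5443/healthz"); err != nil {
            fmt.Println("Probe failed:", err)
            return
        }
        fmt.Println("Probe succeeded")
    }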
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 14:31:38 crc kubenswrapper[4698]: I0127 14:31:38.376203 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vz5fp\" (UID: \"fb1d6ca9-58c3-4d0f-9b6f-e9dad08632b8\") " pod="openshift-image-registry/image-registry-697d97f7c8-vz5fp" Jan 27 14:31:38 crc kubenswrapper[4698]: E0127 14:31:38.376544 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 14:31:38.876535394 +0000 UTC m=+154.553312939 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vz5fp" (UID: "fb1d6ca9-58c3-4d0f-9b6f-e9dad08632b8") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 14:31:38 crc kubenswrapper[4698]: I0127 14:31:38.379182 4698 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-qfxjx container/olm-operator namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"https://10.217.0.25:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 14:31:38 crc kubenswrapper[4698]: I0127 14:31:38.379319 4698 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-qfxjx" podUID="a11f89bd-147f-4b21-b83f-6b86727ecc2e" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.25:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 27 14:31:38 crc kubenswrapper[4698]: I0127 14:31:38.379246 4698 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-qfxjx container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.25:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 14:31:38 crc kubenswrapper[4698]: I0127 14:31:38.379480 4698 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-qfxjx" podUID="a11f89bd-147f-4b21-b83f-6b86727ecc2e" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.25:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 27 14:31:38 crc kubenswrapper[4698]: I0127 14:31:38.399671 4698 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-marketplace/redhat-operators-mkhfh"] Jan 27 14:31:38 crc kubenswrapper[4698]: I0127 14:31:38.400624 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-mkhfh" Jan 27 14:31:38 crc kubenswrapper[4698]: I0127 14:31:38.412204 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-mkhfh"] Jan 27 14:31:38 crc kubenswrapper[4698]: I0127 14:31:38.477043 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 14:31:38 crc kubenswrapper[4698]: E0127 14:31:38.477338 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 14:31:38.977303049 +0000 UTC m=+154.654080514 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 14:31:38 crc kubenswrapper[4698]: I0127 14:31:38.477411 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6947cad8-3436-4bc3-8bda-c2c1a4972402-catalog-content\") pod \"redhat-operators-mkhfh\" (UID: \"6947cad8-3436-4bc3-8bda-c2c1a4972402\") " pod="openshift-marketplace/redhat-operators-mkhfh" Jan 27 14:31:38 crc kubenswrapper[4698]: I0127 14:31:38.477467 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vz5fp\" (UID: \"fb1d6ca9-58c3-4d0f-9b6f-e9dad08632b8\") " pod="openshift-image-registry/image-registry-697d97f7c8-vz5fp" Jan 27 14:31:38 crc kubenswrapper[4698]: I0127 14:31:38.477506 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6947cad8-3436-4bc3-8bda-c2c1a4972402-utilities\") pod \"redhat-operators-mkhfh\" (UID: \"6947cad8-3436-4bc3-8bda-c2c1a4972402\") " pod="openshift-marketplace/redhat-operators-mkhfh" Jan 27 14:31:38 crc kubenswrapper[4698]: I0127 14:31:38.477671 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-26hbd\" (UniqueName: \"kubernetes.io/projected/6947cad8-3436-4bc3-8bda-c2c1a4972402-kube-api-access-26hbd\") pod \"redhat-operators-mkhfh\" (UID: \"6947cad8-3436-4bc3-8bda-c2c1a4972402\") " pod="openshift-marketplace/redhat-operators-mkhfh" Jan 27 14:31:38 crc kubenswrapper[4698]: E0127 14:31:38.477809 4698 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 14:31:38.977792342 +0000 UTC m=+154.654569887 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vz5fp" (UID: "fb1d6ca9-58c3-4d0f-9b6f-e9dad08632b8") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 14:31:38 crc kubenswrapper[4698]: I0127 14:31:38.579444 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 14:31:38 crc kubenswrapper[4698]: E0127 14:31:38.579697 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 14:31:39.079675506 +0000 UTC m=+154.756452971 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 14:31:38 crc kubenswrapper[4698]: I0127 14:31:38.579748 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6947cad8-3436-4bc3-8bda-c2c1a4972402-utilities\") pod \"redhat-operators-mkhfh\" (UID: \"6947cad8-3436-4bc3-8bda-c2c1a4972402\") " pod="openshift-marketplace/redhat-operators-mkhfh" Jan 27 14:31:38 crc kubenswrapper[4698]: I0127 14:31:38.579817 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-26hbd\" (UniqueName: \"kubernetes.io/projected/6947cad8-3436-4bc3-8bda-c2c1a4972402-kube-api-access-26hbd\") pod \"redhat-operators-mkhfh\" (UID: \"6947cad8-3436-4bc3-8bda-c2c1a4972402\") " pod="openshift-marketplace/redhat-operators-mkhfh" Jan 27 14:31:38 crc kubenswrapper[4698]: I0127 14:31:38.579882 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6947cad8-3436-4bc3-8bda-c2c1a4972402-catalog-content\") pod \"redhat-operators-mkhfh\" (UID: \"6947cad8-3436-4bc3-8bda-c2c1a4972402\") " pod="openshift-marketplace/redhat-operators-mkhfh" Jan 27 14:31:38 crc kubenswrapper[4698]: I0127 14:31:38.579928 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vz5fp\" (UID: \"fb1d6ca9-58c3-4d0f-9b6f-e9dad08632b8\") " 
pod="openshift-image-registry/image-registry-697d97f7c8-vz5fp" Jan 27 14:31:38 crc kubenswrapper[4698]: E0127 14:31:38.580260 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 14:31:39.080248581 +0000 UTC m=+154.757026056 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vz5fp" (UID: "fb1d6ca9-58c3-4d0f-9b6f-e9dad08632b8") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 14:31:38 crc kubenswrapper[4698]: I0127 14:31:38.580316 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6947cad8-3436-4bc3-8bda-c2c1a4972402-utilities\") pod \"redhat-operators-mkhfh\" (UID: \"6947cad8-3436-4bc3-8bda-c2c1a4972402\") " pod="openshift-marketplace/redhat-operators-mkhfh" Jan 27 14:31:38 crc kubenswrapper[4698]: I0127 14:31:38.580457 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6947cad8-3436-4bc3-8bda-c2c1a4972402-catalog-content\") pod \"redhat-operators-mkhfh\" (UID: \"6947cad8-3436-4bc3-8bda-c2c1a4972402\") " pod="openshift-marketplace/redhat-operators-mkhfh" Jan 27 14:31:38 crc kubenswrapper[4698]: I0127 14:31:38.681139 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 14:31:38 crc kubenswrapper[4698]: E0127 14:31:38.681327 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 14:31:39.181299733 +0000 UTC m=+154.858077198 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 14:31:38 crc kubenswrapper[4698]: I0127 14:31:38.681631 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vz5fp\" (UID: \"fb1d6ca9-58c3-4d0f-9b6f-e9dad08632b8\") " pod="openshift-image-registry/image-registry-697d97f7c8-vz5fp" Jan 27 14:31:38 crc kubenswrapper[4698]: E0127 14:31:38.681927 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 14:31:39.181918779 +0000 UTC m=+154.858696244 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vz5fp" (UID: "fb1d6ca9-58c3-4d0f-9b6f-e9dad08632b8") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 14:31:38 crc kubenswrapper[4698]: I0127 14:31:38.783047 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 14:31:38 crc kubenswrapper[4698]: E0127 14:31:38.783217 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 14:31:39.283191857 +0000 UTC m=+154.959969322 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 14:31:38 crc kubenswrapper[4698]: I0127 14:31:38.783704 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vz5fp\" (UID: \"fb1d6ca9-58c3-4d0f-9b6f-e9dad08632b8\") " pod="openshift-image-registry/image-registry-697d97f7c8-vz5fp" Jan 27 14:31:38 crc kubenswrapper[4698]: E0127 14:31:38.784039 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 14:31:39.284030789 +0000 UTC m=+154.960808254 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vz5fp" (UID: "fb1d6ca9-58c3-4d0f-9b6f-e9dad08632b8") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 14:31:38 crc kubenswrapper[4698]: I0127 14:31:38.884854 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 14:31:38 crc kubenswrapper[4698]: E0127 14:31:38.885060 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 14:31:39.38502943 +0000 UTC m=+155.061806905 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 14:31:38 crc kubenswrapper[4698]: I0127 14:31:38.885312 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vz5fp\" (UID: \"fb1d6ca9-58c3-4d0f-9b6f-e9dad08632b8\") " pod="openshift-image-registry/image-registry-697d97f7c8-vz5fp" Jan 27 14:31:38 crc kubenswrapper[4698]: E0127 14:31:38.885677 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 14:31:39.385666907 +0000 UTC m=+155.062444412 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vz5fp" (UID: "fb1d6ca9-58c3-4d0f-9b6f-e9dad08632b8") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 14:31:38 crc kubenswrapper[4698]: I0127 14:31:38.986788 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 14:31:38 crc kubenswrapper[4698]: E0127 14:31:38.987089 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 14:31:39.487069858 +0000 UTC m=+155.163847323 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 14:31:38 crc kubenswrapper[4698]: I0127 14:31:38.987194 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vz5fp\" (UID: \"fb1d6ca9-58c3-4d0f-9b6f-e9dad08632b8\") " pod="openshift-image-registry/image-registry-697d97f7c8-vz5fp" Jan 27 14:31:38 crc kubenswrapper[4698]: E0127 14:31:38.987519 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 14:31:39.48751025 +0000 UTC m=+155.164287715 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vz5fp" (UID: "fb1d6ca9-58c3-4d0f-9b6f-e9dad08632b8") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 14:31:39 crc kubenswrapper[4698]: I0127 14:31:39.064414 4698 patch_prober.go:28] interesting pod/router-default-5444994796-8bg4r container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 27 14:31:39 crc kubenswrapper[4698]: [-]has-synced failed: reason withheld Jan 27 14:31:39 crc kubenswrapper[4698]: [+]process-running ok Jan 27 14:31:39 crc kubenswrapper[4698]: healthz check failed Jan 27 14:31:39 crc kubenswrapper[4698]: I0127 14:31:39.064487 4698 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-8bg4r" podUID="79cd2a60-54ce-46a5-96cd-53bb078fa804" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 27 14:31:39 crc kubenswrapper[4698]: I0127 14:31:39.088120 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 14:31:39 crc kubenswrapper[4698]: E0127 14:31:39.088342 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 14:31:39.588313756 +0000 UTC m=+155.265091221 (durationBeforeRetry 500ms). 
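The router startup probe above is the one failure that includes a structured healthz body: per-check lines prefixed [+] or [-], followed by a final verdict. A short sketch of splitting that format apart, with the sample body copied from the captured probe output:

    // healthzbody.go: parse the [+]/[-] check lines of a k8s-style healthz body.
    // The sample input is the body captured in the router probe failure above.
    package main

    import (
        "bufio"
        "fmt"
        "strings"
    )

    func main() {
        body := `[-]backend-http failed: reason withheld
    [-]has-synced failed: reason withheld
    [+]process-running ok
    healthz check failed`

        sc := bufio.NewScanner(strings.NewReader(body))
        for sc.Scan() {
            line := strings.TrimSpace(sc.Text())
            switch {
            case strings.HasPrefix(line, "[+]"):
                fmt.Println("passing check:", strings.TrimPrefix(line, "[+]"))
            case strings.HasPrefix(line, "[-]"):
                fmt.Println("failing check:", strings.TrimPrefix(line, "[-]"))
            default:
                fmt.Println("verdict:", line) // e.g. "healthz check failed"
            }
        }
    }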
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 14:31:39 crc kubenswrapper[4698]: I0127 14:31:39.088443 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vz5fp\" (UID: \"fb1d6ca9-58c3-4d0f-9b6f-e9dad08632b8\") " pod="openshift-image-registry/image-registry-697d97f7c8-vz5fp" Jan 27 14:31:39 crc kubenswrapper[4698]: E0127 14:31:39.088846 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 14:31:39.588837879 +0000 UTC m=+155.265615414 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vz5fp" (UID: "fb1d6ca9-58c3-4d0f-9b6f-e9dad08632b8") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 14:31:39 crc kubenswrapper[4698]: I0127 14:31:39.189878 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 14:31:39 crc kubenswrapper[4698]: E0127 14:31:39.190177 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 14:31:39.689945682 +0000 UTC m=+155.366723157 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 14:31:39 crc kubenswrapper[4698]: I0127 14:31:39.190369 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vz5fp\" (UID: \"fb1d6ca9-58c3-4d0f-9b6f-e9dad08632b8\") " pod="openshift-image-registry/image-registry-697d97f7c8-vz5fp" Jan 27 14:31:39 crc kubenswrapper[4698]: E0127 14:31:39.190837 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 14:31:39.690819515 +0000 UTC m=+155.367596980 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vz5fp" (UID: "fb1d6ca9-58c3-4d0f-9b6f-e9dad08632b8") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 14:31:39 crc kubenswrapper[4698]: I0127 14:31:39.291397 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 14:31:39 crc kubenswrapper[4698]: E0127 14:31:39.291593 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 14:31:39.79155878 +0000 UTC m=+155.468336245 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 14:31:39 crc kubenswrapper[4698]: I0127 14:31:39.291912 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vz5fp\" (UID: \"fb1d6ca9-58c3-4d0f-9b6f-e9dad08632b8\") " pod="openshift-image-registry/image-registry-697d97f7c8-vz5fp" Jan 27 14:31:39 crc kubenswrapper[4698]: E0127 14:31:39.292261 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 14:31:39.792248607 +0000 UTC m=+155.469026072 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vz5fp" (UID: "fb1d6ca9-58c3-4d0f-9b6f-e9dad08632b8") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 14:31:39 crc kubenswrapper[4698]: I0127 14:31:39.392752 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 14:31:39 crc kubenswrapper[4698]: E0127 14:31:39.392934 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 14:31:39.89290846 +0000 UTC m=+155.569685925 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 14:31:39 crc kubenswrapper[4698]: I0127 14:31:39.393082 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vz5fp\" (UID: \"fb1d6ca9-58c3-4d0f-9b6f-e9dad08632b8\") " pod="openshift-image-registry/image-registry-697d97f7c8-vz5fp" Jan 27 14:31:39 crc kubenswrapper[4698]: E0127 14:31:39.393385 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 14:31:39.893374501 +0000 UTC m=+155.570151966 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vz5fp" (UID: "fb1d6ca9-58c3-4d0f-9b6f-e9dad08632b8") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 14:31:39 crc kubenswrapper[4698]: I0127 14:31:39.494322 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 14:31:39 crc kubenswrapper[4698]: E0127 14:31:39.495157 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 14:31:39.995125792 +0000 UTC m=+155.671903257 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 14:31:39 crc kubenswrapper[4698]: I0127 14:31:39.596019 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vz5fp\" (UID: \"fb1d6ca9-58c3-4d0f-9b6f-e9dad08632b8\") " pod="openshift-image-registry/image-registry-697d97f7c8-vz5fp" Jan 27 14:31:39 crc kubenswrapper[4698]: E0127 14:31:39.597266 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 14:31:40.097236872 +0000 UTC m=+155.774014407 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vz5fp" (UID: "fb1d6ca9-58c3-4d0f-9b6f-e9dad08632b8") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 14:31:39 crc kubenswrapper[4698]: I0127 14:31:39.697504 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 14:31:39 crc kubenswrapper[4698]: E0127 14:31:39.697685 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 14:31:40.197659629 +0000 UTC m=+155.874437094 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 14:31:39 crc kubenswrapper[4698]: I0127 14:31:39.697906 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vz5fp\" (UID: \"fb1d6ca9-58c3-4d0f-9b6f-e9dad08632b8\") " pod="openshift-image-registry/image-registry-697d97f7c8-vz5fp" Jan 27 14:31:39 crc kubenswrapper[4698]: E0127 14:31:39.698305 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 14:31:40.198294005 +0000 UTC m=+155.875071480 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vz5fp" (UID: "fb1d6ca9-58c3-4d0f-9b6f-e9dad08632b8") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 14:31:39 crc kubenswrapper[4698]: I0127 14:31:39.799108 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 14:31:39 crc kubenswrapper[4698]: E0127 14:31:39.799431 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 14:31:40.299411489 +0000 UTC m=+155.976188964 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 14:31:39 crc kubenswrapper[4698]: I0127 14:31:39.799590 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vz5fp\" (UID: \"fb1d6ca9-58c3-4d0f-9b6f-e9dad08632b8\") " pod="openshift-image-registry/image-registry-697d97f7c8-vz5fp" Jan 27 14:31:39 crc kubenswrapper[4698]: E0127 14:31:39.800588 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 14:31:40.300576929 +0000 UTC m=+155.977354394 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vz5fp" (UID: "fb1d6ca9-58c3-4d0f-9b6f-e9dad08632b8") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 14:31:39 crc kubenswrapper[4698]: I0127 14:31:39.901831 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 14:31:39 crc kubenswrapper[4698]: E0127 14:31:39.902210 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 14:31:40.402189976 +0000 UTC m=+156.078967441 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 14:31:40 crc kubenswrapper[4698]: I0127 14:31:40.003709 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vz5fp\" (UID: \"fb1d6ca9-58c3-4d0f-9b6f-e9dad08632b8\") " pod="openshift-image-registry/image-registry-697d97f7c8-vz5fp" Jan 27 14:31:40 crc kubenswrapper[4698]: E0127 14:31:40.004057 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 14:31:40.50404152 +0000 UTC m=+156.180818985 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vz5fp" (UID: "fb1d6ca9-58c3-4d0f-9b6f-e9dad08632b8") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 14:31:40 crc kubenswrapper[4698]: I0127 14:31:40.064345 4698 patch_prober.go:28] interesting pod/router-default-5444994796-8bg4r container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 27 14:31:40 crc kubenswrapper[4698]: [-]has-synced failed: reason withheld Jan 27 14:31:40 crc kubenswrapper[4698]: [+]process-running ok Jan 27 14:31:40 crc kubenswrapper[4698]: healthz check failed Jan 27 14:31:40 crc kubenswrapper[4698]: I0127 14:31:40.064411 4698 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-8bg4r" podUID="79cd2a60-54ce-46a5-96cd-53bb078fa804" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 27 14:31:40 crc kubenswrapper[4698]: I0127 14:31:40.104384 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 14:31:40 crc kubenswrapper[4698]: E0127 14:31:40.105027 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 14:31:40.60500602 +0000 UTC m=+156.281783505 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 14:31:40 crc kubenswrapper[4698]: I0127 14:31:40.206606 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vz5fp\" (UID: \"fb1d6ca9-58c3-4d0f-9b6f-e9dad08632b8\") " pod="openshift-image-registry/image-registry-697d97f7c8-vz5fp" Jan 27 14:31:40 crc kubenswrapper[4698]: E0127 14:31:40.207069 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 14:31:40.707049268 +0000 UTC m=+156.383826763 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vz5fp" (UID: "fb1d6ca9-58c3-4d0f-9b6f-e9dad08632b8") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 14:31:40 crc kubenswrapper[4698]: I0127 14:31:40.241870 4698 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-m8slw container/openshift-config-operator namespace/openshift-config-operator: Liveness probe status=failure output="Get \"https://10.217.0.12:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 14:31:40 crc kubenswrapper[4698]: I0127 14:31:40.241908 4698 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-m8slw container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.12:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 14:31:40 crc kubenswrapper[4698]: I0127 14:31:40.242412 4698 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-m8slw" podUID="f4023d55-2b87-419a-be3e-bab987ba0841" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.12:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 27 14:31:40 crc kubenswrapper[4698]: I0127 14:31:40.242287 4698 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-m8slw" podUID="f4023d55-2b87-419a-be3e-bab987ba0841" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.12:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 27 14:31:40 crc kubenswrapper[4698]: I0127 14:31:40.242535 4698 
kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-config-operator/openshift-config-operator-7777fb866f-m8slw" Jan 27 14:31:40 crc kubenswrapper[4698]: I0127 14:31:40.243320 4698 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="openshift-config-operator" containerStatusID={"Type":"cri-o","ID":"bb4730d6cb9fe944cea0e3e254194011326ee700ec716ef541e70f923f805e5f"} pod="openshift-config-operator/openshift-config-operator-7777fb866f-m8slw" containerMessage="Container openshift-config-operator failed liveness probe, will be restarted" Jan 27 14:31:40 crc kubenswrapper[4698]: I0127 14:31:40.243543 4698 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-config-operator/openshift-config-operator-7777fb866f-m8slw" podUID="f4023d55-2b87-419a-be3e-bab987ba0841" containerName="openshift-config-operator" containerID="cri-o://bb4730d6cb9fe944cea0e3e254194011326ee700ec716ef541e70f923f805e5f" gracePeriod=30 Jan 27 14:31:40 crc kubenswrapper[4698]: I0127 14:31:40.308102 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 14:31:40 crc kubenswrapper[4698]: E0127 14:31:40.308322 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 14:31:40.808290605 +0000 UTC m=+156.485068070 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 14:31:40 crc kubenswrapper[4698]: I0127 14:31:40.308423 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vz5fp\" (UID: \"fb1d6ca9-58c3-4d0f-9b6f-e9dad08632b8\") " pod="openshift-image-registry/image-registry-697d97f7c8-vz5fp" Jan 27 14:31:40 crc kubenswrapper[4698]: E0127 14:31:40.308813 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 14:31:40.808804168 +0000 UTC m=+156.485581633 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vz5fp" (UID: "fb1d6ca9-58c3-4d0f-9b6f-e9dad08632b8") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 14:31:40 crc kubenswrapper[4698]: I0127 14:31:40.409553 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 14:31:40 crc kubenswrapper[4698]: E0127 14:31:40.409777 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 14:31:40.909725346 +0000 UTC m=+156.586502821 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 14:31:40 crc kubenswrapper[4698]: I0127 14:31:40.410089 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vz5fp\" (UID: \"fb1d6ca9-58c3-4d0f-9b6f-e9dad08632b8\") " pod="openshift-image-registry/image-registry-697d97f7c8-vz5fp" Jan 27 14:31:40 crc kubenswrapper[4698]: E0127 14:31:40.410483 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 14:31:40.910469566 +0000 UTC m=+156.587247061 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vz5fp" (UID: "fb1d6ca9-58c3-4d0f-9b6f-e9dad08632b8") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 14:31:40 crc kubenswrapper[4698]: I0127 14:31:40.510890 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 14:31:40 crc kubenswrapper[4698]: E0127 14:31:40.511088 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 14:31:41.011059046 +0000 UTC m=+156.687836511 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 14:31:40 crc kubenswrapper[4698]: I0127 14:31:40.511273 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vz5fp\" (UID: \"fb1d6ca9-58c3-4d0f-9b6f-e9dad08632b8\") " pod="openshift-image-registry/image-registry-697d97f7c8-vz5fp" Jan 27 14:31:40 crc kubenswrapper[4698]: E0127 14:31:40.512115 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 14:31:41.012097393 +0000 UTC m=+156.688874858 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vz5fp" (UID: "fb1d6ca9-58c3-4d0f-9b6f-e9dad08632b8") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 14:31:40 crc kubenswrapper[4698]: I0127 14:31:40.612499 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 14:31:40 crc kubenswrapper[4698]: E0127 14:31:40.612675 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 14:31:41.112652512 +0000 UTC m=+156.789429987 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 14:31:40 crc kubenswrapper[4698]: I0127 14:31:40.612804 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vz5fp\" (UID: \"fb1d6ca9-58c3-4d0f-9b6f-e9dad08632b8\") " pod="openshift-image-registry/image-registry-697d97f7c8-vz5fp" Jan 27 14:31:40 crc kubenswrapper[4698]: E0127 14:31:40.613121 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 14:31:41.113114565 +0000 UTC m=+156.789892040 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vz5fp" (UID: "fb1d6ca9-58c3-4d0f-9b6f-e9dad08632b8") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 14:31:40 crc kubenswrapper[4698]: I0127 14:31:40.714186 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 14:31:40 crc kubenswrapper[4698]: E0127 14:31:40.714441 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 14:31:41.214405213 +0000 UTC m=+156.891182688 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 14:31:40 crc kubenswrapper[4698]: I0127 14:31:40.714571 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vz5fp\" (UID: \"fb1d6ca9-58c3-4d0f-9b6f-e9dad08632b8\") " pod="openshift-image-registry/image-registry-697d97f7c8-vz5fp" Jan 27 14:31:40 crc kubenswrapper[4698]: E0127 14:31:40.714937 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 14:31:41.214921816 +0000 UTC m=+156.891699291 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vz5fp" (UID: "fb1d6ca9-58c3-4d0f-9b6f-e9dad08632b8") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 14:31:40 crc kubenswrapper[4698]: I0127 14:31:40.815964 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 14:31:40 crc kubenswrapper[4698]: E0127 14:31:40.816164 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 14:31:41.316132213 +0000 UTC m=+156.992909708 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 14:31:40 crc kubenswrapper[4698]: I0127 14:31:40.816222 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vz5fp\" (UID: \"fb1d6ca9-58c3-4d0f-9b6f-e9dad08632b8\") " pod="openshift-image-registry/image-registry-697d97f7c8-vz5fp" Jan 27 14:31:40 crc kubenswrapper[4698]: E0127 14:31:40.816523 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 14:31:41.316510512 +0000 UTC m=+156.993287977 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vz5fp" (UID: "fb1d6ca9-58c3-4d0f-9b6f-e9dad08632b8") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 14:31:40 crc kubenswrapper[4698]: I0127 14:31:40.917614 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 14:31:40 crc kubenswrapper[4698]: E0127 14:31:40.917805 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 14:31:41.41777808 +0000 UTC m=+157.094555545 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 14:31:40 crc kubenswrapper[4698]: I0127 14:31:40.917949 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vz5fp\" (UID: \"fb1d6ca9-58c3-4d0f-9b6f-e9dad08632b8\") " pod="openshift-image-registry/image-registry-697d97f7c8-vz5fp" Jan 27 14:31:40 crc kubenswrapper[4698]: E0127 14:31:40.918329 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 14:31:41.418319765 +0000 UTC m=+157.095097230 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vz5fp" (UID: "fb1d6ca9-58c3-4d0f-9b6f-e9dad08632b8") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 14:31:41 crc kubenswrapper[4698]: I0127 14:31:41.019484 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 14:31:41 crc kubenswrapper[4698]: E0127 14:31:41.019711 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 14:31:41.519680294 +0000 UTC m=+157.196457769 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 14:31:41 crc kubenswrapper[4698]: I0127 14:31:41.019891 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vz5fp\" (UID: \"fb1d6ca9-58c3-4d0f-9b6f-e9dad08632b8\") " pod="openshift-image-registry/image-registry-697d97f7c8-vz5fp" Jan 27 14:31:41 crc kubenswrapper[4698]: E0127 14:31:41.020264 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 14:31:41.520252579 +0000 UTC m=+157.197030124 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vz5fp" (UID: "fb1d6ca9-58c3-4d0f-9b6f-e9dad08632b8") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 14:31:41 crc kubenswrapper[4698]: I0127 14:31:41.022408 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-xmpm6" Jan 27 14:31:41 crc kubenswrapper[4698]: I0127 14:31:41.062975 4698 patch_prober.go:28] interesting pod/router-default-5444994796-8bg4r container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 27 14:31:41 crc kubenswrapper[4698]: [-]has-synced failed: reason withheld Jan 27 14:31:41 crc kubenswrapper[4698]: [+]process-running ok Jan 27 14:31:41 crc kubenswrapper[4698]: healthz check failed Jan 27 14:31:41 crc kubenswrapper[4698]: I0127 14:31:41.063050 4698 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-8bg4r" podUID="79cd2a60-54ce-46a5-96cd-53bb078fa804" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 27 14:31:41 crc kubenswrapper[4698]: I0127 14:31:41.121287 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 14:31:41 crc kubenswrapper[4698]: E0127 14:31:41.121454 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 14:31:41.621423495 +0000 UTC m=+157.298200970 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 14:31:41 crc kubenswrapper[4698]: I0127 14:31:41.121743 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vz5fp\" (UID: \"fb1d6ca9-58c3-4d0f-9b6f-e9dad08632b8\") " pod="openshift-image-registry/image-registry-697d97f7c8-vz5fp" Jan 27 14:31:41 crc kubenswrapper[4698]: E0127 14:31:41.122200 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 14:31:41.622170554 +0000 UTC m=+157.298948009 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vz5fp" (UID: "fb1d6ca9-58c3-4d0f-9b6f-e9dad08632b8") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 14:31:41 crc kubenswrapper[4698]: I0127 14:31:41.155990 4698 patch_prober.go:28] interesting pod/apiserver-76f77b778f-z5f9l container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="Get \"https://10.217.0.5:8443/livez\": dial tcp 10.217.0.5:8443: connect: connection refused" start-of-body= Jan 27 14:31:41 crc kubenswrapper[4698]: I0127 14:31:41.156074 4698 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-76f77b778f-z5f9l" podUID="f57848ff-da41-4c6a-9586-c57676b73c90" containerName="openshift-apiserver" probeResult="failure" output="Get \"https://10.217.0.5:8443/livez\": dial tcp 10.217.0.5:8443: connect: connection refused" Jan 27 14:31:41 crc kubenswrapper[4698]: I0127 14:31:41.223523 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 14:31:41 crc kubenswrapper[4698]: E0127 14:31:41.223736 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 14:31:41.723704219 +0000 UTC m=+157.400481684 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 14:31:41 crc kubenswrapper[4698]: I0127 14:31:41.223845 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vz5fp\" (UID: \"fb1d6ca9-58c3-4d0f-9b6f-e9dad08632b8\") " pod="openshift-image-registry/image-registry-697d97f7c8-vz5fp" Jan 27 14:31:41 crc kubenswrapper[4698]: E0127 14:31:41.224259 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 14:31:41.724248453 +0000 UTC m=+157.401026028 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vz5fp" (UID: "fb1d6ca9-58c3-4d0f-9b6f-e9dad08632b8") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 14:31:41 crc kubenswrapper[4698]: I0127 14:31:41.243486 4698 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-m8slw container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.12:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 14:31:41 crc kubenswrapper[4698]: I0127 14:31:41.243624 4698 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-m8slw" podUID="f4023d55-2b87-419a-be3e-bab987ba0841" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.12:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 27 14:31:41 crc kubenswrapper[4698]: I0127 14:31:41.324911 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 14:31:41 crc kubenswrapper[4698]: E0127 14:31:41.325070 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 14:31:41.825045369 +0000 UTC m=+157.501822834 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 14:31:41 crc kubenswrapper[4698]: I0127 14:31:41.325212 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vz5fp\" (UID: \"fb1d6ca9-58c3-4d0f-9b6f-e9dad08632b8\") " pod="openshift-image-registry/image-registry-697d97f7c8-vz5fp" Jan 27 14:31:41 crc kubenswrapper[4698]: E0127 14:31:41.325693 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 14:31:41.825681336 +0000 UTC m=+157.502458841 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vz5fp" (UID: "fb1d6ca9-58c3-4d0f-9b6f-e9dad08632b8") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 14:31:41 crc kubenswrapper[4698]: I0127 14:31:41.426833 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 14:31:41 crc kubenswrapper[4698]: E0127 14:31:41.427104 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 14:31:41.927073456 +0000 UTC m=+157.603850921 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 14:31:41 crc kubenswrapper[4698]: I0127 14:31:41.427385 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vz5fp\" (UID: \"fb1d6ca9-58c3-4d0f-9b6f-e9dad08632b8\") " pod="openshift-image-registry/image-registry-697d97f7c8-vz5fp" Jan 27 14:31:41 crc kubenswrapper[4698]: E0127 14:31:41.427811 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 14:31:41.927792875 +0000 UTC m=+157.604570360 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vz5fp" (UID: "fb1d6ca9-58c3-4d0f-9b6f-e9dad08632b8") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 14:31:41 crc kubenswrapper[4698]: I0127 14:31:41.528839 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 14:31:41 crc kubenswrapper[4698]: E0127 14:31:41.529057 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 14:31:42.029032062 +0000 UTC m=+157.705809527 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 14:31:41 crc kubenswrapper[4698]: I0127 14:31:41.529290 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vz5fp\" (UID: \"fb1d6ca9-58c3-4d0f-9b6f-e9dad08632b8\") " pod="openshift-image-registry/image-registry-697d97f7c8-vz5fp" Jan 27 14:31:41 crc kubenswrapper[4698]: E0127 14:31:41.529716 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 14:31:42.029699569 +0000 UTC m=+157.706477044 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vz5fp" (UID: "fb1d6ca9-58c3-4d0f-9b6f-e9dad08632b8") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 14:31:41 crc kubenswrapper[4698]: I0127 14:31:41.631105 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 14:31:41 crc kubenswrapper[4698]: E0127 14:31:41.631336 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 14:31:42.131309616 +0000 UTC m=+157.808087081 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 14:31:41 crc kubenswrapper[4698]: I0127 14:31:41.631445 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vz5fp\" (UID: \"fb1d6ca9-58c3-4d0f-9b6f-e9dad08632b8\") " pod="openshift-image-registry/image-registry-697d97f7c8-vz5fp" Jan 27 14:31:41 crc kubenswrapper[4698]: E0127 14:31:41.631954 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 14:31:42.131890491 +0000 UTC m=+157.808668026 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vz5fp" (UID: "fb1d6ca9-58c3-4d0f-9b6f-e9dad08632b8") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 14:31:41 crc kubenswrapper[4698]: I0127 14:31:41.732472 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 14:31:41 crc kubenswrapper[4698]: E0127 14:31:41.732663 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 14:31:42.232624485 +0000 UTC m=+157.909401950 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 14:31:41 crc kubenswrapper[4698]: I0127 14:31:41.732768 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vz5fp\" (UID: \"fb1d6ca9-58c3-4d0f-9b6f-e9dad08632b8\") " pod="openshift-image-registry/image-registry-697d97f7c8-vz5fp" Jan 27 14:31:41 crc kubenswrapper[4698]: E0127 14:31:41.733099 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 14:31:42.233092338 +0000 UTC m=+157.909869803 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vz5fp" (UID: "fb1d6ca9-58c3-4d0f-9b6f-e9dad08632b8") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 14:31:41 crc kubenswrapper[4698]: I0127 14:31:41.834294 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 14:31:41 crc kubenswrapper[4698]: E0127 14:31:41.834447 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 14:31:42.334413917 +0000 UTC m=+158.011191382 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 14:31:41 crc kubenswrapper[4698]: I0127 14:31:41.834761 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vz5fp\" (UID: \"fb1d6ca9-58c3-4d0f-9b6f-e9dad08632b8\") " pod="openshift-image-registry/image-registry-697d97f7c8-vz5fp" Jan 27 14:31:41 crc kubenswrapper[4698]: E0127 14:31:41.835328 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 14:31:42.33530044 +0000 UTC m=+158.012077935 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vz5fp" (UID: "fb1d6ca9-58c3-4d0f-9b6f-e9dad08632b8") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 14:31:41 crc kubenswrapper[4698]: I0127 14:31:41.935880 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 14:31:41 crc kubenswrapper[4698]: E0127 14:31:41.936099 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 14:31:42.436066644 +0000 UTC m=+158.112844119 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 14:31:41 crc kubenswrapper[4698]: I0127 14:31:41.936196 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vz5fp\" (UID: \"fb1d6ca9-58c3-4d0f-9b6f-e9dad08632b8\") " pod="openshift-image-registry/image-registry-697d97f7c8-vz5fp" Jan 27 14:31:41 crc kubenswrapper[4698]: E0127 14:31:41.936502 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 14:31:42.436490456 +0000 UTC m=+158.113267921 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vz5fp" (UID: "fb1d6ca9-58c3-4d0f-9b6f-e9dad08632b8") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 14:31:42 crc kubenswrapper[4698]: I0127 14:31:42.037238 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 14:31:42 crc kubenswrapper[4698]: E0127 14:31:42.037448 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 14:31:42.537416804 +0000 UTC m=+158.214194270 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 14:31:42 crc kubenswrapper[4698]: I0127 14:31:42.037496 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vz5fp\" (UID: \"fb1d6ca9-58c3-4d0f-9b6f-e9dad08632b8\") " pod="openshift-image-registry/image-registry-697d97f7c8-vz5fp" Jan 27 14:31:42 crc kubenswrapper[4698]: E0127 14:31:42.037848 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 14:31:42.537838825 +0000 UTC m=+158.214616290 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vz5fp" (UID: "fb1d6ca9-58c3-4d0f-9b6f-e9dad08632b8") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 14:31:42 crc kubenswrapper[4698]: I0127 14:31:42.063910 4698 patch_prober.go:28] interesting pod/router-default-5444994796-8bg4r container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 27 14:31:42 crc kubenswrapper[4698]: [-]has-synced failed: reason withheld Jan 27 14:31:42 crc kubenswrapper[4698]: [+]process-running ok Jan 27 14:31:42 crc kubenswrapper[4698]: healthz check failed Jan 27 14:31:42 crc kubenswrapper[4698]: I0127 14:31:42.064048 4698 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-8bg4r" podUID="79cd2a60-54ce-46a5-96cd-53bb078fa804" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 27 14:31:42 crc kubenswrapper[4698]: I0127 14:31:42.138537 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 14:31:42 crc kubenswrapper[4698]: E0127 14:31:42.138706 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 14:31:42.638676612 +0000 UTC m=+158.315454077 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 14:31:42 crc kubenswrapper[4698]: I0127 14:31:42.138800 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vz5fp\" (UID: \"fb1d6ca9-58c3-4d0f-9b6f-e9dad08632b8\") " pod="openshift-image-registry/image-registry-697d97f7c8-vz5fp" Jan 27 14:31:42 crc kubenswrapper[4698]: E0127 14:31:42.139190 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 14:31:42.639179276 +0000 UTC m=+158.315956791 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vz5fp" (UID: "fb1d6ca9-58c3-4d0f-9b6f-e9dad08632b8") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 14:31:42 crc kubenswrapper[4698]: I0127 14:31:42.239630 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 14:31:42 crc kubenswrapper[4698]: E0127 14:31:42.239967 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 14:31:42.73995128 +0000 UTC m=+158.416728745 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 14:31:42 crc kubenswrapper[4698]: I0127 14:31:42.341588 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vz5fp\" (UID: \"fb1d6ca9-58c3-4d0f-9b6f-e9dad08632b8\") " pod="openshift-image-registry/image-registry-697d97f7c8-vz5fp" Jan 27 14:31:42 crc kubenswrapper[4698]: E0127 14:31:42.341913 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 14:31:42.841901155 +0000 UTC m=+158.518678620 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vz5fp" (UID: "fb1d6ca9-58c3-4d0f-9b6f-e9dad08632b8") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 14:31:42 crc kubenswrapper[4698]: I0127 14:31:42.442807 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 14:31:42 crc kubenswrapper[4698]: E0127 14:31:42.443021 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 14:31:42.942979069 +0000 UTC m=+158.619756564 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 14:31:42 crc kubenswrapper[4698]: I0127 14:31:42.443203 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vz5fp\" (UID: \"fb1d6ca9-58c3-4d0f-9b6f-e9dad08632b8\") " pod="openshift-image-registry/image-registry-697d97f7c8-vz5fp" Jan 27 14:31:42 crc kubenswrapper[4698]: E0127 14:31:42.443618 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 14:31:42.943601025 +0000 UTC m=+158.620378530 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vz5fp" (UID: "fb1d6ca9-58c3-4d0f-9b6f-e9dad08632b8") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 14:31:42 crc kubenswrapper[4698]: I0127 14:31:42.497216 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-msjx7" Jan 27 14:31:42 crc kubenswrapper[4698]: I0127 14:31:42.544261 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 14:31:42 crc kubenswrapper[4698]: E0127 14:31:42.544716 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 14:31:43.044695948 +0000 UTC m=+158.721473413 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 14:31:42 crc kubenswrapper[4698]: I0127 14:31:42.646260 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vz5fp\" (UID: \"fb1d6ca9-58c3-4d0f-9b6f-e9dad08632b8\") " pod="openshift-image-registry/image-registry-697d97f7c8-vz5fp" Jan 27 14:31:42 crc kubenswrapper[4698]: E0127 14:31:42.647258 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 14:31:43.147242939 +0000 UTC m=+158.824020404 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vz5fp" (UID: "fb1d6ca9-58c3-4d0f-9b6f-e9dad08632b8") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 14:31:42 crc kubenswrapper[4698]: I0127 14:31:42.747811 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 14:31:42 crc kubenswrapper[4698]: E0127 14:31:42.748214 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 14:31:43.248199029 +0000 UTC m=+158.924976494 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 14:31:42 crc kubenswrapper[4698]: I0127 14:31:42.849306 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vz5fp\" (UID: \"fb1d6ca9-58c3-4d0f-9b6f-e9dad08632b8\") " pod="openshift-image-registry/image-registry-697d97f7c8-vz5fp" Jan 27 14:31:42 crc kubenswrapper[4698]: E0127 14:31:42.849780 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 14:31:43.349767504 +0000 UTC m=+159.026544969 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vz5fp" (UID: "fb1d6ca9-58c3-4d0f-9b6f-e9dad08632b8") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 14:31:42 crc kubenswrapper[4698]: I0127 14:31:42.942907 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-controller-manager_controller-manager-879f6c89f-cj2hq_3d3d75e2-1fec-4458-9cb7-3472250b0b49/controller-manager/0.log" Jan 27 14:31:42 crc kubenswrapper[4698]: I0127 14:31:42.943000 4698 generic.go:334] "Generic (PLEG): container finished" podID="3d3d75e2-1fec-4458-9cb7-3472250b0b49" containerID="333c74fc68d1eb86a37b571c39e30f2380edb16ccb8cb54acf8336f12fc0f43e" exitCode=-1 Jan 27 14:31:42 crc kubenswrapper[4698]: I0127 14:31:42.943033 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-cj2hq" event={"ID":"3d3d75e2-1fec-4458-9cb7-3472250b0b49","Type":"ContainerDied","Data":"333c74fc68d1eb86a37b571c39e30f2380edb16ccb8cb54acf8336f12fc0f43e"} Jan 27 14:31:42 crc kubenswrapper[4698]: I0127 14:31:42.950711 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 14:31:42 crc kubenswrapper[4698]: E0127 14:31:42.951025 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 14:31:43.451011531 +0000 UTC m=+159.127788996 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 14:31:43 crc kubenswrapper[4698]: I0127 14:31:43.052467 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vz5fp\" (UID: \"fb1d6ca9-58c3-4d0f-9b6f-e9dad08632b8\") " pod="openshift-image-registry/image-registry-697d97f7c8-vz5fp" Jan 27 14:31:43 crc kubenswrapper[4698]: E0127 14:31:43.052885 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 14:31:43.552874015 +0000 UTC m=+159.229651480 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vz5fp" (UID: "fb1d6ca9-58c3-4d0f-9b6f-e9dad08632b8") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 14:31:43 crc kubenswrapper[4698]: I0127 14:31:43.063276 4698 patch_prober.go:28] interesting pod/router-default-5444994796-8bg4r container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 27 14:31:43 crc kubenswrapper[4698]: [-]has-synced failed: reason withheld Jan 27 14:31:43 crc kubenswrapper[4698]: [+]process-running ok Jan 27 14:31:43 crc kubenswrapper[4698]: healthz check failed Jan 27 14:31:43 crc kubenswrapper[4698]: I0127 14:31:43.063316 4698 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-8bg4r" podUID="79cd2a60-54ce-46a5-96cd-53bb078fa804" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 27 14:31:43 crc kubenswrapper[4698]: I0127 14:31:43.153487 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 14:31:43 crc kubenswrapper[4698]: E0127 14:31:43.153977 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 14:31:43.653921766 +0000 UTC m=+159.330699231 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 14:31:43 crc kubenswrapper[4698]: I0127 14:31:43.241903 4698 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-m8slw container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.12:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 14:31:43 crc kubenswrapper[4698]: I0127 14:31:43.241982 4698 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-m8slw" podUID="f4023d55-2b87-419a-be3e-bab987ba0841" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.12:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 27 14:31:43 crc kubenswrapper[4698]: I0127 14:31:43.254687 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vz5fp\" (UID: \"fb1d6ca9-58c3-4d0f-9b6f-e9dad08632b8\") " pod="openshift-image-registry/image-registry-697d97f7c8-vz5fp" Jan 27 14:31:43 crc kubenswrapper[4698]: E0127 14:31:43.255128 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 14:31:43.755109103 +0000 UTC m=+159.431886568 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vz5fp" (UID: "fb1d6ca9-58c3-4d0f-9b6f-e9dad08632b8") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 14:31:43 crc kubenswrapper[4698]: I0127 14:31:43.324907 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-26hbd\" (UniqueName: \"kubernetes.io/projected/6947cad8-3436-4bc3-8bda-c2c1a4972402-kube-api-access-26hbd\") pod \"redhat-operators-mkhfh\" (UID: \"6947cad8-3436-4bc3-8bda-c2c1a4972402\") " pod="openshift-marketplace/redhat-operators-mkhfh" Jan 27 14:31:43 crc kubenswrapper[4698]: W0127 14:31:43.333874 4698 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod9a086c8c_4fad_47a3_a463_47ccd03a65fe.slice/crio-d0e23c8ceabd311e6dd874c52aed38265f0d34193c354e53da11d31787114037 WatchSource:0}: Error finding container d0e23c8ceabd311e6dd874c52aed38265f0d34193c354e53da11d31787114037: Status 404 returned error can't find the container with id d0e23c8ceabd311e6dd874c52aed38265f0d34193c354e53da11d31787114037 Jan 27 14:31:43 crc kubenswrapper[4698]: I0127 14:31:43.358244 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 14:31:43 crc kubenswrapper[4698]: E0127 14:31:43.358320 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 14:31:43.85830092 +0000 UTC m=+159.535078385 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 14:31:43 crc kubenswrapper[4698]: I0127 14:31:43.358671 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vz5fp\" (UID: \"fb1d6ca9-58c3-4d0f-9b6f-e9dad08632b8\") " pod="openshift-image-registry/image-registry-697d97f7c8-vz5fp" Jan 27 14:31:43 crc kubenswrapper[4698]: E0127 14:31:43.359015 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 14:31:43.859005209 +0000 UTC m=+159.535782674 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vz5fp" (UID: "fb1d6ca9-58c3-4d0f-9b6f-e9dad08632b8") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 14:31:43 crc kubenswrapper[4698]: I0127 14:31:43.459788 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 14:31:43 crc kubenswrapper[4698]: E0127 14:31:43.460002 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 14:31:43.959970619 +0000 UTC m=+159.636748084 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 14:31:43 crc kubenswrapper[4698]: I0127 14:31:43.460125 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vz5fp\" (UID: \"fb1d6ca9-58c3-4d0f-9b6f-e9dad08632b8\") " pod="openshift-image-registry/image-registry-697d97f7c8-vz5fp" Jan 27 14:31:43 crc kubenswrapper[4698]: E0127 14:31:43.460708 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 14:31:43.960695017 +0000 UTC m=+159.637472532 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vz5fp" (UID: "fb1d6ca9-58c3-4d0f-9b6f-e9dad08632b8") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 14:31:43 crc kubenswrapper[4698]: I0127 14:31:43.514167 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-mkhfh" Jan 27 14:31:43 crc kubenswrapper[4698]: I0127 14:31:43.560912 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 14:31:43 crc kubenswrapper[4698]: E0127 14:31:43.561135 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 14:31:44.061102623 +0000 UTC m=+159.737880098 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 14:31:43 crc kubenswrapper[4698]: I0127 14:31:43.561280 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vz5fp\" (UID: \"fb1d6ca9-58c3-4d0f-9b6f-e9dad08632b8\") " pod="openshift-image-registry/image-registry-697d97f7c8-vz5fp" Jan 27 14:31:43 crc kubenswrapper[4698]: E0127 14:31:43.561714 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 14:31:44.061701709 +0000 UTC m=+159.738479174 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vz5fp" (UID: "fb1d6ca9-58c3-4d0f-9b6f-e9dad08632b8") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 14:31:43 crc kubenswrapper[4698]: I0127 14:31:43.664393 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 14:31:43 crc kubenswrapper[4698]: E0127 14:31:43.664974 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 14:31:44.164959138 +0000 UTC m=+159.841736603 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 14:31:43 crc kubenswrapper[4698]: I0127 14:31:43.771331 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vz5fp\" (UID: \"fb1d6ca9-58c3-4d0f-9b6f-e9dad08632b8\") " pod="openshift-image-registry/image-registry-697d97f7c8-vz5fp" Jan 27 14:31:43 crc kubenswrapper[4698]: E0127 14:31:43.771953 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 14:31:44.271935246 +0000 UTC m=+159.948712701 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vz5fp" (UID: "fb1d6ca9-58c3-4d0f-9b6f-e9dad08632b8") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 14:31:43 crc kubenswrapper[4698]: I0127 14:31:43.878710 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 14:31:43 crc kubenswrapper[4698]: E0127 14:31:43.878896 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 14:31:44.378865231 +0000 UTC m=+160.055642696 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 14:31:43 crc kubenswrapper[4698]: I0127 14:31:43.879047 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vz5fp\" (UID: \"fb1d6ca9-58c3-4d0f-9b6f-e9dad08632b8\") " pod="openshift-image-registry/image-registry-697d97f7c8-vz5fp" Jan 27 14:31:43 crc kubenswrapper[4698]: E0127 14:31:43.879377 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 14:31:44.379368184 +0000 UTC m=+160.056145649 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vz5fp" (UID: "fb1d6ca9-58c3-4d0f-9b6f-e9dad08632b8") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 14:31:43 crc kubenswrapper[4698]: I0127 14:31:43.973691 4698 generic.go:334] "Generic (PLEG): container finished" podID="4bc4ed9d-df5c-4eee-82d1-d8c68e7bb3ff" containerID="16fe7d071419be8091a09eea3ba007ffe94a3b20a3999fe4213f622b0287d995" exitCode=0 Jan 27 14:31:43 crc kubenswrapper[4698]: I0127 14:31:43.973794 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29492070-z2sp4" event={"ID":"4bc4ed9d-df5c-4eee-82d1-d8c68e7bb3ff","Type":"ContainerDied","Data":"16fe7d071419be8091a09eea3ba007ffe94a3b20a3999fe4213f622b0287d995"} Jan 27 14:31:43 crc kubenswrapper[4698]: I0127 14:31:43.976497 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"9a086c8c-4fad-47a3-a463-47ccd03a65fe","Type":"ContainerStarted","Data":"d0e23c8ceabd311e6dd874c52aed38265f0d34193c354e53da11d31787114037"} Jan 27 14:31:43 crc kubenswrapper[4698]: I0127 14:31:43.980842 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 14:31:43 crc kubenswrapper[4698]: E0127 14:31:43.981216 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 14:31:44.481196687 +0000 UTC m=+160.157974152 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 14:31:44 crc kubenswrapper[4698]: I0127 14:31:44.084928 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vz5fp\" (UID: \"fb1d6ca9-58c3-4d0f-9b6f-e9dad08632b8\") " pod="openshift-image-registry/image-registry-697d97f7c8-vz5fp"
Jan 27 14:31:44 crc kubenswrapper[4698]: E0127 14:31:44.085502 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 14:31:44.585489864 +0000 UTC m=+160.262267329 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vz5fp" (UID: "fb1d6ca9-58c3-4d0f-9b6f-e9dad08632b8") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 14:31:44 crc kubenswrapper[4698]: I0127 14:31:44.102188 4698 patch_prober.go:28] interesting pod/router-default-5444994796-8bg4r container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 27 14:31:44 crc kubenswrapper[4698]: [-]has-synced failed: reason withheld
Jan 27 14:31:44 crc kubenswrapper[4698]: [+]process-running ok
Jan 27 14:31:44 crc kubenswrapper[4698]: healthz check failed
Jan 27 14:31:44 crc kubenswrapper[4698]: I0127 14:31:44.102238 4698 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-8bg4r" podUID="79cd2a60-54ce-46a5-96cd-53bb078fa804" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 27 14:31:44 crc kubenswrapper[4698]: I0127 14:31:44.187086 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 27 14:31:44 crc kubenswrapper[4698]: E0127 14:31:44.187241 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 14:31:44.687218393 +0000 UTC m=+160.363995878 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 14:31:44 crc kubenswrapper[4698]: I0127 14:31:44.187495 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vz5fp\" (UID: \"fb1d6ca9-58c3-4d0f-9b6f-e9dad08632b8\") " pod="openshift-image-registry/image-registry-697d97f7c8-vz5fp"
Jan 27 14:31:44 crc kubenswrapper[4698]: E0127 14:31:44.187947 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 14:31:44.687937233 +0000 UTC m=+160.364714698 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vz5fp" (UID: "fb1d6ca9-58c3-4d0f-9b6f-e9dad08632b8") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 14:31:44 crc kubenswrapper[4698]: I0127 14:31:44.288687 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 27 14:31:44 crc kubenswrapper[4698]: E0127 14:31:44.289057 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 14:31:44.789041196 +0000 UTC m=+160.465818661 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 14:31:44 crc kubenswrapper[4698]: I0127 14:31:44.289571 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-9m8xd"]
Jan 27 14:31:44 crc kubenswrapper[4698]: I0127 14:31:44.391097 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vz5fp\" (UID: \"fb1d6ca9-58c3-4d0f-9b6f-e9dad08632b8\") " pod="openshift-image-registry/image-registry-697d97f7c8-vz5fp"
Jan 27 14:31:44 crc kubenswrapper[4698]: E0127 14:31:44.391754 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 14:31:44.891737151 +0000 UTC m=+160.568514616 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vz5fp" (UID: "fb1d6ca9-58c3-4d0f-9b6f-e9dad08632b8") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 14:31:44 crc kubenswrapper[4698]: I0127 14:31:44.492518 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 27 14:31:44 crc kubenswrapper[4698]: E0127 14:31:44.492793 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 14:31:44.992772333 +0000 UTC m=+160.669549818 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 14:31:44 crc kubenswrapper[4698]: I0127 14:31:44.493122 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vz5fp\" (UID: \"fb1d6ca9-58c3-4d0f-9b6f-e9dad08632b8\") " pod="openshift-image-registry/image-registry-697d97f7c8-vz5fp"
Jan 27 14:31:44 crc kubenswrapper[4698]: E0127 14:31:44.493436 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 14:31:44.99342956 +0000 UTC m=+160.670207025 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vz5fp" (UID: "fb1d6ca9-58c3-4d0f-9b6f-e9dad08632b8") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 14:31:44 crc kubenswrapper[4698]: I0127 14:31:44.527597 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-cj2hq"
Jan 27 14:31:44 crc kubenswrapper[4698]: I0127 14:31:44.569178 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-xstml"]
Jan 27 14:31:44 crc kubenswrapper[4698]: E0127 14:31:44.569437 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3d3d75e2-1fec-4458-9cb7-3472250b0b49" containerName="controller-manager"
Jan 27 14:31:44 crc kubenswrapper[4698]: I0127 14:31:44.569451 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="3d3d75e2-1fec-4458-9cb7-3472250b0b49" containerName="controller-manager"
Jan 27 14:31:44 crc kubenswrapper[4698]: I0127 14:31:44.569563 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="3d3d75e2-1fec-4458-9cb7-3472250b0b49" containerName="controller-manager"
Jan 27 14:31:44 crc kubenswrapper[4698]: I0127 14:31:44.570046 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-xstml"
Jan 27 14:31:44 crc kubenswrapper[4698]: I0127 14:31:44.582996 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-xstml"]
Jan 27 14:31:44 crc kubenswrapper[4698]: I0127 14:31:44.594378 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3d3d75e2-1fec-4458-9cb7-3472250b0b49-client-ca\") pod \"3d3d75e2-1fec-4458-9cb7-3472250b0b49\" (UID: \"3d3d75e2-1fec-4458-9cb7-3472250b0b49\") "
Jan 27 14:31:44 crc kubenswrapper[4698]: I0127 14:31:44.597474 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/3d3d75e2-1fec-4458-9cb7-3472250b0b49-proxy-ca-bundles\") pod \"3d3d75e2-1fec-4458-9cb7-3472250b0b49\" (UID: \"3d3d75e2-1fec-4458-9cb7-3472250b0b49\") "
Jan 27 14:31:44 crc kubenswrapper[4698]: I0127 14:31:44.597518 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3d3d75e2-1fec-4458-9cb7-3472250b0b49-serving-cert\") pod \"3d3d75e2-1fec-4458-9cb7-3472250b0b49\" (UID: \"3d3d75e2-1fec-4458-9cb7-3472250b0b49\") "
Jan 27 14:31:44 crc kubenswrapper[4698]: I0127 14:31:44.597546 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3d3d75e2-1fec-4458-9cb7-3472250b0b49-config\") pod \"3d3d75e2-1fec-4458-9cb7-3472250b0b49\" (UID: \"3d3d75e2-1fec-4458-9cb7-3472250b0b49\") "
Jan 27 14:31:44 crc kubenswrapper[4698]: I0127 14:31:44.597701 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 27 14:31:44 crc kubenswrapper[4698]: I0127 14:31:44.597808 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vgtwz\" (UniqueName: \"kubernetes.io/projected/3d3d75e2-1fec-4458-9cb7-3472250b0b49-kube-api-access-vgtwz\") pod \"3d3d75e2-1fec-4458-9cb7-3472250b0b49\" (UID: \"3d3d75e2-1fec-4458-9cb7-3472250b0b49\") "
Jan 27 14:31:44 crc kubenswrapper[4698]: I0127 14:31:44.597948 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3d3d75e2-1fec-4458-9cb7-3472250b0b49-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "3d3d75e2-1fec-4458-9cb7-3472250b0b49" (UID: "3d3d75e2-1fec-4458-9cb7-3472250b0b49"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 14:31:44 crc kubenswrapper[4698]: I0127 14:31:44.598197 4698 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/3d3d75e2-1fec-4458-9cb7-3472250b0b49-proxy-ca-bundles\") on node \"crc\" DevicePath \"\""
Jan 27 14:31:44 crc kubenswrapper[4698]: E0127 14:31:44.598269 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 14:31:45.098252621 +0000 UTC m=+160.775030086 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 14:31:44 crc kubenswrapper[4698]: I0127 14:31:44.598916 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3d3d75e2-1fec-4458-9cb7-3472250b0b49-config" (OuterVolumeSpecName: "config") pod "3d3d75e2-1fec-4458-9cb7-3472250b0b49" (UID: "3d3d75e2-1fec-4458-9cb7-3472250b0b49"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 14:31:44 crc kubenswrapper[4698]: I0127 14:31:44.605835 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3d3d75e2-1fec-4458-9cb7-3472250b0b49-client-ca" (OuterVolumeSpecName: "client-ca") pod "3d3d75e2-1fec-4458-9cb7-3472250b0b49" (UID: "3d3d75e2-1fec-4458-9cb7-3472250b0b49"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 14:31:44 crc kubenswrapper[4698]: I0127 14:31:44.615817 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3d3d75e2-1fec-4458-9cb7-3472250b0b49-kube-api-access-vgtwz" (OuterVolumeSpecName: "kube-api-access-vgtwz") pod "3d3d75e2-1fec-4458-9cb7-3472250b0b49" (UID: "3d3d75e2-1fec-4458-9cb7-3472250b0b49"). InnerVolumeSpecName "kube-api-access-vgtwz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 14:31:44 crc kubenswrapper[4698]: I0127 14:31:44.615894 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-dhlmg"]
Jan 27 14:31:44 crc kubenswrapper[4698]: I0127 14:31:44.621151 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"]
Jan 27 14:31:44 crc kubenswrapper[4698]: I0127 14:31:44.623599 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-gxkvv"]
Jan 27 14:31:44 crc kubenswrapper[4698]: W0127 14:31:44.624285 4698 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode47fa643_2257_49e0_8b1e_77f9d3165c0e.slice/crio-ca952f4728f499ff42a424a345f0de50948fa25564540f8579e898b99ea1e08a WatchSource:0}: Error finding container ca952f4728f499ff42a424a345f0de50948fa25564540f8579e898b99ea1e08a: Status 404 returned error can't find the container with id ca952f4728f499ff42a424a345f0de50948fa25564540f8579e898b99ea1e08a
Jan 27 14:31:44 crc kubenswrapper[4698]: I0127 14:31:44.638206 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3d3d75e2-1fec-4458-9cb7-3472250b0b49-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "3d3d75e2-1fec-4458-9cb7-3472250b0b49" (UID: "3d3d75e2-1fec-4458-9cb7-3472250b0b49"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 14:31:44 crc kubenswrapper[4698]: I0127 14:31:44.639832 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-9t9sp"]
Jan 27 14:31:44 crc kubenswrapper[4698]: W0127 14:31:44.660952 4698 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7f32c526_aea0_4758_a1ea_d0a694af3573.slice/crio-bff9263da872d20e64c2650a7b19e969f30abbcecedfe96f76d5115af20afdaa WatchSource:0}: Error finding container bff9263da872d20e64c2650a7b19e969f30abbcecedfe96f76d5115af20afdaa: Status 404 returned error can't find the container with id bff9263da872d20e64c2650a7b19e969f30abbcecedfe96f76d5115af20afdaa
Jan 27 14:31:44 crc kubenswrapper[4698]: I0127 14:31:44.694299 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-cdp6k"]
Jan 27 14:31:44 crc kubenswrapper[4698]: I0127 14:31:44.697358 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-462jr"]
Jan 27 14:31:44 crc kubenswrapper[4698]: I0127 14:31:44.699835 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ce70fddb-4db4-40ef-a5fc-b27412e519bd-serving-cert\") pod \"controller-manager-879f6c89f-xstml\" (UID: \"ce70fddb-4db4-40ef-a5fc-b27412e519bd\") " pod="openshift-controller-manager/controller-manager-879f6c89f-xstml"
Jan 27 14:31:44 crc kubenswrapper[4698]: I0127 14:31:44.699906 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/ce70fddb-4db4-40ef-a5fc-b27412e519bd-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-xstml\" (UID: \"ce70fddb-4db4-40ef-a5fc-b27412e519bd\") " pod="openshift-controller-manager/controller-manager-879f6c89f-xstml"
Jan 27 14:31:44 crc kubenswrapper[4698]: I0127 14:31:44.699923 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ce70fddb-4db4-40ef-a5fc-b27412e519bd-client-ca\") pod \"controller-manager-879f6c89f-xstml\" (UID: \"ce70fddb-4db4-40ef-a5fc-b27412e519bd\") " pod="openshift-controller-manager/controller-manager-879f6c89f-xstml"
Jan 27 14:31:44 crc kubenswrapper[4698]: I0127 14:31:44.699953 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vz5fp\" (UID: \"fb1d6ca9-58c3-4d0f-9b6f-e9dad08632b8\") " pod="openshift-image-registry/image-registry-697d97f7c8-vz5fp"
Jan 27 14:31:44 crc kubenswrapper[4698]: I0127 14:31:44.699990 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dtkk7\" (UniqueName: \"kubernetes.io/projected/ce70fddb-4db4-40ef-a5fc-b27412e519bd-kube-api-access-dtkk7\") pod \"controller-manager-879f6c89f-xstml\" (UID: \"ce70fddb-4db4-40ef-a5fc-b27412e519bd\") " pod="openshift-controller-manager/controller-manager-879f6c89f-xstml"
Jan 27 14:31:44 crc kubenswrapper[4698]: I0127 14:31:44.700008 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ce70fddb-4db4-40ef-a5fc-b27412e519bd-config\") pod \"controller-manager-879f6c89f-xstml\" (UID: \"ce70fddb-4db4-40ef-a5fc-b27412e519bd\") " pod="openshift-controller-manager/controller-manager-879f6c89f-xstml"
Jan 27 14:31:44 crc kubenswrapper[4698]: I0127 14:31:44.700046 4698 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3d3d75e2-1fec-4458-9cb7-3472250b0b49-client-ca\") on node \"crc\" DevicePath \"\""
Jan 27 14:31:44 crc kubenswrapper[4698]: I0127 14:31:44.700056 4698 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3d3d75e2-1fec-4458-9cb7-3472250b0b49-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 27 14:31:44 crc kubenswrapper[4698]: I0127 14:31:44.700064 4698 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3d3d75e2-1fec-4458-9cb7-3472250b0b49-config\") on node \"crc\" DevicePath \"\""
Jan 27 14:31:44 crc kubenswrapper[4698]: I0127 14:31:44.700073 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vgtwz\" (UniqueName: \"kubernetes.io/projected/3d3d75e2-1fec-4458-9cb7-3472250b0b49-kube-api-access-vgtwz\") on node \"crc\" DevicePath \"\""
Jan 27 14:31:44 crc kubenswrapper[4698]: E0127 14:31:44.700356 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 14:31:45.20034384 +0000 UTC m=+160.877121305 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vz5fp" (UID: "fb1d6ca9-58c3-4d0f-9b6f-e9dad08632b8") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 14:31:44 crc kubenswrapper[4698]: I0127 14:31:44.719656 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-mkhfh"]
Jan 27 14:31:44 crc kubenswrapper[4698]: I0127 14:31:44.733216 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-qdgff"]
Jan 27 14:31:44 crc kubenswrapper[4698]: W0127 14:31:44.742964 4698 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod882a9575_2eeb_4f8e_812c_2419b499a07e.slice/crio-0f0e681c25b309ae8416547b922f312d209da69dc82d95f1000e49600f7278ca WatchSource:0}: Error finding container 0f0e681c25b309ae8416547b922f312d209da69dc82d95f1000e49600f7278ca: Status 404 returned error can't find the container with id 0f0e681c25b309ae8416547b922f312d209da69dc82d95f1000e49600f7278ca
Jan 27 14:31:44 crc kubenswrapper[4698]: I0127 14:31:44.801329 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 27 14:31:44 crc kubenswrapper[4698]: E0127 14:31:44.802038 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 14:31:45.302004278 +0000 UTC m=+160.978781743 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 14:31:44 crc kubenswrapper[4698]: I0127 14:31:44.802084 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/ce70fddb-4db4-40ef-a5fc-b27412e519bd-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-xstml\" (UID: \"ce70fddb-4db4-40ef-a5fc-b27412e519bd\") " pod="openshift-controller-manager/controller-manager-879f6c89f-xstml"
Jan 27 14:31:44 crc kubenswrapper[4698]: I0127 14:31:44.802123 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ce70fddb-4db4-40ef-a5fc-b27412e519bd-client-ca\") pod \"controller-manager-879f6c89f-xstml\" (UID: \"ce70fddb-4db4-40ef-a5fc-b27412e519bd\") " pod="openshift-controller-manager/controller-manager-879f6c89f-xstml"
Jan 27 14:31:44 crc kubenswrapper[4698]: I0127 14:31:44.802150 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vz5fp\" (UID: \"fb1d6ca9-58c3-4d0f-9b6f-e9dad08632b8\") " pod="openshift-image-registry/image-registry-697d97f7c8-vz5fp"
Jan 27 14:31:44 crc kubenswrapper[4698]: I0127 14:31:44.802205 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dtkk7\" (UniqueName: \"kubernetes.io/projected/ce70fddb-4db4-40ef-a5fc-b27412e519bd-kube-api-access-dtkk7\") pod \"controller-manager-879f6c89f-xstml\" (UID: \"ce70fddb-4db4-40ef-a5fc-b27412e519bd\") " pod="openshift-controller-manager/controller-manager-879f6c89f-xstml"
Jan 27 14:31:44 crc kubenswrapper[4698]: I0127 14:31:44.802227 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ce70fddb-4db4-40ef-a5fc-b27412e519bd-config\") pod \"controller-manager-879f6c89f-xstml\" (UID: \"ce70fddb-4db4-40ef-a5fc-b27412e519bd\") " pod="openshift-controller-manager/controller-manager-879f6c89f-xstml"
Jan 27 14:31:44 crc kubenswrapper[4698]: I0127 14:31:44.802270 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ce70fddb-4db4-40ef-a5fc-b27412e519bd-serving-cert\") pod \"controller-manager-879f6c89f-xstml\" (UID: \"ce70fddb-4db4-40ef-a5fc-b27412e519bd\") " pod="openshift-controller-manager/controller-manager-879f6c89f-xstml"
Jan 27 14:31:44 crc kubenswrapper[4698]: E0127 14:31:44.802655 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 14:31:45.302617234 +0000 UTC m=+160.979394769 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vz5fp" (UID: "fb1d6ca9-58c3-4d0f-9b6f-e9dad08632b8") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 14:31:44 crc kubenswrapper[4698]: I0127 14:31:44.803631 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ce70fddb-4db4-40ef-a5fc-b27412e519bd-client-ca\") pod \"controller-manager-879f6c89f-xstml\" (UID: \"ce70fddb-4db4-40ef-a5fc-b27412e519bd\") " pod="openshift-controller-manager/controller-manager-879f6c89f-xstml"
Jan 27 14:31:44 crc kubenswrapper[4698]: I0127 14:31:44.806829 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/ce70fddb-4db4-40ef-a5fc-b27412e519bd-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-xstml\" (UID: \"ce70fddb-4db4-40ef-a5fc-b27412e519bd\") " pod="openshift-controller-manager/controller-manager-879f6c89f-xstml"
Jan 27 14:31:44 crc kubenswrapper[4698]: I0127 14:31:44.808167 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ce70fddb-4db4-40ef-a5fc-b27412e519bd-serving-cert\") pod \"controller-manager-879f6c89f-xstml\" (UID: \"ce70fddb-4db4-40ef-a5fc-b27412e519bd\") " pod="openshift-controller-manager/controller-manager-879f6c89f-xstml"
Jan 27 14:31:44 crc kubenswrapper[4698]: I0127 14:31:44.809563 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ce70fddb-4db4-40ef-a5fc-b27412e519bd-config\") pod \"controller-manager-879f6c89f-xstml\" (UID: \"ce70fddb-4db4-40ef-a5fc-b27412e519bd\") " pod="openshift-controller-manager/controller-manager-879f6c89f-xstml"
Jan 27 14:31:44 crc kubenswrapper[4698]: I0127 14:31:44.820742 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dtkk7\" (UniqueName: \"kubernetes.io/projected/ce70fddb-4db4-40ef-a5fc-b27412e519bd-kube-api-access-dtkk7\") pod \"controller-manager-879f6c89f-xstml\" (UID: \"ce70fddb-4db4-40ef-a5fc-b27412e519bd\") " pod="openshift-controller-manager/controller-manager-879f6c89f-xstml"
Jan 27 14:31:44 crc kubenswrapper[4698]: I0127 14:31:44.911144 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 27 14:31:44 crc kubenswrapper[4698]: E0127 14:31:44.911723 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 14:31:45.411705397 +0000 UTC m=+161.088482862 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 14:31:44 crc kubenswrapper[4698]: I0127 14:31:44.911844 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-xstml"
Jan 27 14:31:44 crc kubenswrapper[4698]: I0127 14:31:44.987951 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9t9sp" event={"ID":"7f32c526-aea0-4758-a1ea-d0a694af3573","Type":"ContainerStarted","Data":"0c329e85207520291142714c82a2d50fdfb9d97a3b151c7a5f7de6b2145e6bfb"}
Jan 27 14:31:44 crc kubenswrapper[4698]: I0127 14:31:44.988000 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9t9sp" event={"ID":"7f32c526-aea0-4758-a1ea-d0a694af3573","Type":"ContainerStarted","Data":"bff9263da872d20e64c2650a7b19e969f30abbcecedfe96f76d5115af20afdaa"}
Jan 27 14:31:44 crc kubenswrapper[4698]: I0127 14:31:44.989657 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cdp6k" event={"ID":"882a9575-2eeb-4f8e-812c-2419b499a07e","Type":"ContainerStarted","Data":"0f0e681c25b309ae8416547b922f312d209da69dc82d95f1000e49600f7278ca"}
Jan 27 14:31:45 crc kubenswrapper[4698]: I0127 14:31:45.013459 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vz5fp\" (UID: \"fb1d6ca9-58c3-4d0f-9b6f-e9dad08632b8\") " pod="openshift-image-registry/image-registry-697d97f7c8-vz5fp"
Jan 27 14:31:45 crc kubenswrapper[4698]: I0127 14:31:45.013711 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-cj2hq"
Jan 27 14:31:45 crc kubenswrapper[4698]: E0127 14:31:45.014401 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 14:31:45.514378141 +0000 UTC m=+161.191155606 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vz5fp" (UID: "fb1d6ca9-58c3-4d0f-9b6f-e9dad08632b8") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 14:31:45 crc kubenswrapper[4698]: I0127 14:31:45.020259 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-cj2hq" event={"ID":"3d3d75e2-1fec-4458-9cb7-3472250b0b49","Type":"ContainerDied","Data":"6a552a249acc3a977702c02a05385405da38caf9193983b37be9595f8841853e"}
Jan 27 14:31:45 crc kubenswrapper[4698]: I0127 14:31:45.020311 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mkhfh" event={"ID":"6947cad8-3436-4bc3-8bda-c2c1a4972402","Type":"ContainerStarted","Data":"3fd1bd61792ae8f77e9d0314652d0aa159e9fe324a87fa2b68c3ea00a37810bf"}
Jan 27 14:31:45 crc kubenswrapper[4698]: I0127 14:31:45.020334 4698 scope.go:117] "RemoveContainer" containerID="333c74fc68d1eb86a37b571c39e30f2380edb16ccb8cb54acf8336f12fc0f43e"
Jan 27 14:31:45 crc kubenswrapper[4698]: I0127 14:31:45.033974 4698 generic.go:334] "Generic (PLEG): container finished" podID="d62f9471-7fdf-459f-8e3b-cadad2b6a542" containerID="83f40dcbc2a7a8786092ebfb13494b24c6155f2cbc2f5a3d748b31ff118c9308" exitCode=0
Jan 27 14:31:45 crc kubenswrapper[4698]: I0127 14:31:45.034037 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9m8xd" event={"ID":"d62f9471-7fdf-459f-8e3b-cadad2b6a542","Type":"ContainerDied","Data":"83f40dcbc2a7a8786092ebfb13494b24c6155f2cbc2f5a3d748b31ff118c9308"}
Jan 27 14:31:45 crc kubenswrapper[4698]: I0127 14:31:45.034064 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9m8xd" event={"ID":"d62f9471-7fdf-459f-8e3b-cadad2b6a542","Type":"ContainerStarted","Data":"9473241e6718a9e3c8675fd939845d44602dc29093c598900fa0553ed4dd04af"}
Jan 27 14:31:45 crc kubenswrapper[4698]: I0127 14:31:45.046082 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"2990ecf7909d0b67c2b02f21558ecc1f4995931425625def8d129d9f83b906bd"}
Jan 27 14:31:45 crc kubenswrapper[4698]: I0127 14:31:45.046215 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 27 14:31:45 crc kubenswrapper[4698]: I0127 14:31:45.051028 4698 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Jan 27 14:31:45 crc kubenswrapper[4698]: I0127 14:31:45.061526 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-462jr" event={"ID":"530c77f2-b81c-4835-989c-57b155f04d2c","Type":"ContainerStarted","Data":"893536b71487b099e6978c9e457ab6013d3c14588f79171450c1b515337aa553"}
Jan 27 14:31:45 crc kubenswrapper[4698]: I0127 14:31:45.065888 4698 patch_prober.go:28] interesting pod/router-default-5444994796-8bg4r container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 27 14:31:45 crc kubenswrapper[4698]: [-]has-synced failed: reason withheld
Jan 27 14:31:45 crc kubenswrapper[4698]: [+]process-running ok
Jan 27 14:31:45 crc kubenswrapper[4698]: healthz check failed
Jan 27 14:31:45 crc kubenswrapper[4698]: I0127 14:31:45.065951 4698 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-8bg4r" podUID="79cd2a60-54ce-46a5-96cd-53bb078fa804" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 27 14:31:45 crc kubenswrapper[4698]: I0127 14:31:45.069338 4698 generic.go:334] "Generic (PLEG): container finished" podID="9a086c8c-4fad-47a3-a463-47ccd03a65fe" containerID="87fe3b77f00b0a1413c2c1ade01d0c3978ce5f5624a6c62c162a7f6cabf422a8" exitCode=0
Jan 27 14:31:45 crc kubenswrapper[4698]: I0127 14:31:45.069423 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"9a086c8c-4fad-47a3-a463-47ccd03a65fe","Type":"ContainerDied","Data":"87fe3b77f00b0a1413c2c1ade01d0c3978ce5f5624a6c62c162a7f6cabf422a8"}
Jan 27 14:31:45 crc kubenswrapper[4698]: I0127 14:31:45.072782 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qdgff" event={"ID":"f2031067-c690-4330-98dc-ff9259ccbb2f","Type":"ContainerStarted","Data":"460549877fe9e056b079d6dfbace98b3efddcb3bb27cbc9f735e187d22d37a86"}
Jan 27 14:31:45 crc kubenswrapper[4698]: I0127 14:31:45.077366 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"31a410d2-796f-41c9-a0ed-2a214fd5d560","Type":"ContainerStarted","Data":"3b92766c5b4df228afcf517e538fdb743df4539a6bdc9aed72d9b7dd7fc63a7e"}
Jan 27 14:31:45 crc kubenswrapper[4698]: I0127 14:31:45.082306 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"c37b66273d21eef9d6c059f4615685a22c2ecdf52eccac82c5f3cfe1b338e6d0"}
Jan 27 14:31:45 crc kubenswrapper[4698]: I0127 14:31:45.084287 4698 generic.go:334] "Generic (PLEG): container finished" podID="e47fa643-2257-49e0-8b1e-77f9d3165c0e" containerID="465c065be19c59315e19f7fe279ff9934ee6025b5e28cc052feadf4aa0674e63" exitCode=0
Jan 27 14:31:45 crc kubenswrapper[4698]: I0127 14:31:45.084388 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gxkvv" event={"ID":"e47fa643-2257-49e0-8b1e-77f9d3165c0e","Type":"ContainerDied","Data":"465c065be19c59315e19f7fe279ff9934ee6025b5e28cc052feadf4aa0674e63"}
Jan 27 14:31:45 crc kubenswrapper[4698]: I0127 14:31:45.084414 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gxkvv" event={"ID":"e47fa643-2257-49e0-8b1e-77f9d3165c0e","Type":"ContainerStarted","Data":"ca952f4728f499ff42a424a345f0de50948fa25564540f8579e898b99ea1e08a"}
Jan 27 14:31:45 crc kubenswrapper[4698]: I0127 14:31:45.088517 4698 generic.go:334] "Generic (PLEG): container finished" podID="b5b88242-64d6-469e-a5e4-bc8bab680ded" containerID="4f09b91fb4fb1a6cd235db53c68e5a8ee1f9e1816e15e198986bc8cfc8d7105d" exitCode=0
Jan 27 14:31:45 crc kubenswrapper[4698]: I0127 14:31:45.088601 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dhlmg" event={"ID":"b5b88242-64d6-469e-a5e4-bc8bab680ded","Type":"ContainerDied","Data":"4f09b91fb4fb1a6cd235db53c68e5a8ee1f9e1816e15e198986bc8cfc8d7105d"}
Jan 27 14:31:45 crc kubenswrapper[4698]: I0127 14:31:45.088631 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dhlmg" event={"ID":"b5b88242-64d6-469e-a5e4-bc8bab680ded","Type":"ContainerStarted","Data":"196b0eef9b4068ba08576b5d014f6890d979d692f8f0e0c38df29bab6ad3b71b"}
Jan 27 14:31:45 crc kubenswrapper[4698]: I0127 14:31:45.094559 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-config-operator_openshift-config-operator-7777fb866f-m8slw_f4023d55-2b87-419a-be3e-bab987ba0841/openshift-config-operator/0.log"
Jan 27 14:31:45 crc kubenswrapper[4698]: I0127 14:31:45.096753 4698 generic.go:334] "Generic (PLEG): container finished" podID="f4023d55-2b87-419a-be3e-bab987ba0841" containerID="bb4730d6cb9fe944cea0e3e254194011326ee700ec716ef541e70f923f805e5f" exitCode=255
Jan 27 14:31:45 crc kubenswrapper[4698]: I0127 14:31:45.096834 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-m8slw" event={"ID":"f4023d55-2b87-419a-be3e-bab987ba0841","Type":"ContainerDied","Data":"bb4730d6cb9fe944cea0e3e254194011326ee700ec716ef541e70f923f805e5f"}
Jan 27 14:31:45 crc kubenswrapper[4698]: I0127 14:31:45.096861 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-m8slw" event={"ID":"f4023d55-2b87-419a-be3e-bab987ba0841","Type":"ContainerStarted","Data":"b05931bd923cb89b8e31127557edccbd73c7e8776fcc7adaf63837d5a4cbb68d"}
Jan 27 14:31:45 crc kubenswrapper[4698]: I0127 14:31:45.099141 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-m8slw"
Jan 27 14:31:45 crc kubenswrapper[4698]: I0127 14:31:45.101980 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"ea8e17613f435fde3232b149ae24308d521828cd95fc40f9a9eeee5bb731e886"}
Jan 27 14:31:45 crc kubenswrapper[4698]: I0127 14:31:45.118031 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 27 14:31:45 crc kubenswrapper[4698]: E0127 14:31:45.118550 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 14:31:45.618536654 +0000 UTC m=+161.295314109 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 14:31:45 crc kubenswrapper[4698]: I0127 14:31:45.174902 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-xstml"]
Jan 27 14:31:45 crc kubenswrapper[4698]: I0127 14:31:45.222711 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vz5fp\" (UID: \"fb1d6ca9-58c3-4d0f-9b6f-e9dad08632b8\") " pod="openshift-image-registry/image-registry-697d97f7c8-vz5fp"
Jan 27 14:31:45 crc kubenswrapper[4698]: E0127 14:31:45.226893 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 14:31:45.726873027 +0000 UTC m=+161.403650572 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vz5fp" (UID: "fb1d6ca9-58c3-4d0f-9b6f-e9dad08632b8") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 14:31:45 crc kubenswrapper[4698]: I0127 14:31:45.324963 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 27 14:31:45 crc kubenswrapper[4698]: E0127 14:31:45.325280 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 14:31:45.825263659 +0000 UTC m=+161.502041124 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 14:31:45 crc kubenswrapper[4698]: I0127 14:31:45.426977 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vz5fp\" (UID: \"fb1d6ca9-58c3-4d0f-9b6f-e9dad08632b8\") " pod="openshift-image-registry/image-registry-697d97f7c8-vz5fp"
Jan 27 14:31:45 crc kubenswrapper[4698]: E0127 14:31:45.427807 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 14:31:45.927785609 +0000 UTC m=+161.604563084 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vz5fp" (UID: "fb1d6ca9-58c3-4d0f-9b6f-e9dad08632b8") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 14:31:45 crc kubenswrapper[4698]: I0127 14:31:45.505954 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29492070-z2sp4"
Jan 27 14:31:45 crc kubenswrapper[4698]: I0127 14:31:45.530438 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 27 14:31:45 crc kubenswrapper[4698]: E0127 14:31:45.530582 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 14:31:46.030561317 +0000 UTC m=+161.707338782 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 14:31:45 crc kubenswrapper[4698]: I0127 14:31:45.531211 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vz5fp\" (UID: \"fb1d6ca9-58c3-4d0f-9b6f-e9dad08632b8\") " pod="openshift-image-registry/image-registry-697d97f7c8-vz5fp"
Jan 27 14:31:45 crc kubenswrapper[4698]: E0127 14:31:45.531818 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 14:31:46.03180214 +0000 UTC m=+161.708579605 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vz5fp" (UID: "fb1d6ca9-58c3-4d0f-9b6f-e9dad08632b8") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 14:31:45 crc kubenswrapper[4698]: I0127 14:31:45.632559 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 27 14:31:45 crc kubenswrapper[4698]: E0127 14:31:45.632952 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 14:31:46.132925543 +0000 UTC m=+161.809703008 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 14:31:45 crc kubenswrapper[4698]: I0127 14:31:45.633771 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4bc4ed9d-df5c-4eee-82d1-d8c68e7bb3ff-config-volume\") pod \"4bc4ed9d-df5c-4eee-82d1-d8c68e7bb3ff\" (UID: \"4bc4ed9d-df5c-4eee-82d1-d8c68e7bb3ff\") "
Jan 27 14:31:45 crc kubenswrapper[4698]: I0127 14:31:45.633899 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qhlrb\" (UniqueName: \"kubernetes.io/projected/4bc4ed9d-df5c-4eee-82d1-d8c68e7bb3ff-kube-api-access-qhlrb\") pod \"4bc4ed9d-df5c-4eee-82d1-d8c68e7bb3ff\" (UID: \"4bc4ed9d-df5c-4eee-82d1-d8c68e7bb3ff\") "
Jan 27 14:31:45 crc kubenswrapper[4698]: I0127 14:31:45.634023 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/4bc4ed9d-df5c-4eee-82d1-d8c68e7bb3ff-secret-volume\") pod \"4bc4ed9d-df5c-4eee-82d1-d8c68e7bb3ff\" (UID: \"4bc4ed9d-df5c-4eee-82d1-d8c68e7bb3ff\") "
Jan 27 14:31:45 crc kubenswrapper[4698]: I0127 14:31:45.634552 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bc4ed9d-df5c-4eee-82d1-d8c68e7bb3ff-config-volume" (OuterVolumeSpecName: "config-volume") pod "4bc4ed9d-df5c-4eee-82d1-d8c68e7bb3ff" (UID: "4bc4ed9d-df5c-4eee-82d1-d8c68e7bb3ff"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 14:31:45 crc kubenswrapper[4698]: I0127 14:31:45.634840 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vz5fp\" (UID: \"fb1d6ca9-58c3-4d0f-9b6f-e9dad08632b8\") " pod="openshift-image-registry/image-registry-697d97f7c8-vz5fp"
Jan 27 14:31:45 crc kubenswrapper[4698]: E0127 14:31:45.635177 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 14:31:46.135167822 +0000 UTC m=+161.811945277 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vz5fp" (UID: "fb1d6ca9-58c3-4d0f-9b6f-e9dad08632b8") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 14:31:45 crc kubenswrapper[4698]: I0127 14:31:45.635432 4698 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4bc4ed9d-df5c-4eee-82d1-d8c68e7bb3ff-config-volume\") on node \"crc\" DevicePath \"\""
Jan 27 14:31:45 crc kubenswrapper[4698]: I0127 14:31:45.640782 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bc4ed9d-df5c-4eee-82d1-d8c68e7bb3ff-kube-api-access-qhlrb" (OuterVolumeSpecName: "kube-api-access-qhlrb") pod "4bc4ed9d-df5c-4eee-82d1-d8c68e7bb3ff" (UID: "4bc4ed9d-df5c-4eee-82d1-d8c68e7bb3ff"). InnerVolumeSpecName "kube-api-access-qhlrb". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 14:31:45 crc kubenswrapper[4698]: I0127 14:31:45.645087 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4bc4ed9d-df5c-4eee-82d1-d8c68e7bb3ff-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "4bc4ed9d-df5c-4eee-82d1-d8c68e7bb3ff" (UID: "4bc4ed9d-df5c-4eee-82d1-d8c68e7bb3ff"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 14:31:45 crc kubenswrapper[4698]: I0127 14:31:45.736151 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 27 14:31:45 crc kubenswrapper[4698]: I0127 14:31:45.736529 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qhlrb\" (UniqueName: \"kubernetes.io/projected/4bc4ed9d-df5c-4eee-82d1-d8c68e7bb3ff-kube-api-access-qhlrb\") on node \"crc\" DevicePath \"\""
Jan 27 14:31:45 crc kubenswrapper[4698]: E0127 14:31:45.736860 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 14:31:46.23683088 +0000 UTC m=+161.913608375 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 14:31:45 crc kubenswrapper[4698]: I0127 14:31:45.737392 4698 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/4bc4ed9d-df5c-4eee-82d1-d8c68e7bb3ff-secret-volume\") on node \"crc\" DevicePath \"\""
Jan 27 14:31:45 crc kubenswrapper[4698]: I0127 14:31:45.838348 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vz5fp\" (UID: \"fb1d6ca9-58c3-4d0f-9b6f-e9dad08632b8\") " pod="openshift-image-registry/image-registry-697d97f7c8-vz5fp"
Jan 27 14:31:45 crc kubenswrapper[4698]: E0127 14:31:45.838833 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 14:31:46.338811137 +0000 UTC m=+162.015588602 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vz5fp" (UID: "fb1d6ca9-58c3-4d0f-9b6f-e9dad08632b8") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 14:31:45 crc kubenswrapper[4698]: I0127 14:31:45.939683 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 27 14:31:45 crc kubenswrapper[4698]: E0127 14:31:45.939984 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 14:31:46.439969381 +0000 UTC m=+162.116746846 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 14:31:46 crc kubenswrapper[4698]: I0127 14:31:46.042157 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vz5fp\" (UID: \"fb1d6ca9-58c3-4d0f-9b6f-e9dad08632b8\") " pod="openshift-image-registry/image-registry-697d97f7c8-vz5fp"
Jan 27 14:31:46 crc kubenswrapper[4698]: E0127 14:31:46.042761 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 14:31:46.542743449 +0000 UTC m=+162.219520914 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vz5fp" (UID: "fb1d6ca9-58c3-4d0f-9b6f-e9dad08632b8") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 14:31:46 crc kubenswrapper[4698]: I0127 14:31:46.066812 4698 patch_prober.go:28] interesting pod/router-default-5444994796-8bg4r container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 27 14:31:46 crc kubenswrapper[4698]: [-]has-synced failed: reason withheld
Jan 27 14:31:46 crc kubenswrapper[4698]: [+]process-running ok
Jan 27 14:31:46 crc kubenswrapper[4698]: healthz check failed
Jan 27 14:31:46 crc kubenswrapper[4698]: I0127 14:31:46.066892 4698 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-8bg4r" podUID="79cd2a60-54ce-46a5-96cd-53bb078fa804" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 27 14:31:46 crc kubenswrapper[4698]: I0127 14:31:46.068842 4698 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock"
Jan 27 14:31:46 crc kubenswrapper[4698]: I0127 14:31:46.135548 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"31a410d2-796f-41c9-a0ed-2a214fd5d560","Type":"ContainerStarted","Data":"27313390df8e2372fe7ea13c0b0c97bec45f3853cba089ecb8e82ddb8d96dcb7"}
Jan 27 14:31:46 crc kubenswrapper[4698]: I0127 14:31:46.143719 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 27 14:31:46 crc kubenswrapper[4698]: E0127 14:31:46.143901 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 14:31:46.643870893 +0000 UTC m=+162.320648358 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 14:31:46 crc kubenswrapper[4698]: I0127 14:31:46.143991 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vz5fp\" (UID: \"fb1d6ca9-58c3-4d0f-9b6f-e9dad08632b8\") " pod="openshift-image-registry/image-registry-697d97f7c8-vz5fp"
Jan 27 14:31:46 crc kubenswrapper[4698]: E0127 14:31:46.144368 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 14:31:46.644356555 +0000 UTC m=+162.321134100 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vz5fp" (UID: "fb1d6ca9-58c3-4d0f-9b6f-e9dad08632b8") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 14:31:46 crc kubenswrapper[4698]: I0127 14:31:46.146544 4698 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2026-01-27T14:31:46.068884742Z","Handler":null,"Name":""}
Jan 27 14:31:46 crc kubenswrapper[4698]: I0127 14:31:46.149776 4698 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0
Jan 27 14:31:46 crc kubenswrapper[4698]: I0127 14:31:46.149827 4698 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock
Jan 27 14:31:46 crc kubenswrapper[4698]: I0127 14:31:46.150663 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-zft44" event={"ID":"d758ca6f-86ae-44b7-bfcb-8e7f2e2205e8","Type":"ContainerStarted","Data":"41cee3a5c4a00ec5670418757e12f19ee94d28b217ca8c212daef9747cf84864"}
Jan 27 14:31:46 crc kubenswrapper[4698]: I0127 14:31:46.150707 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-zft44" event={"ID":"d758ca6f-86ae-44b7-bfcb-8e7f2e2205e8","Type":"ContainerStarted","Data":"894e89ea018f1b19448df4412fbd8c0a6951e2a1ffb24681d7e3b6e1604ab2f6"}
Jan 27 14:31:46 crc kubenswrapper[4698]: I0127 14:31:46.167597 4698 patch_prober.go:28] interesting pod/apiserver-76f77b778f-z5f9l container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok
Jan 27 14:31:46 crc kubenswrapper[4698]: [+]log ok
Jan 27 14:31:46 crc kubenswrapper[4698]: [+]etcd ok
Jan 27 14:31:46 crc kubenswrapper[4698]: [+]poststarthook/start-apiserver-admission-initializer ok
Jan 27 14:31:46 crc kubenswrapper[4698]: [+]poststarthook/generic-apiserver-start-informers ok
Jan 27 14:31:46 crc kubenswrapper[4698]: [+]poststarthook/max-in-flight-filter ok
Jan 27 14:31:46 crc kubenswrapper[4698]: [+]poststarthook/storage-object-count-tracker-hook ok
Jan 27 14:31:46 crc kubenswrapper[4698]: [+]poststarthook/image.openshift.io-apiserver-caches ok
Jan 27 14:31:46 crc kubenswrapper[4698]: [-]poststarthook/authorization.openshift.io-bootstrapclusterroles failed: reason withheld
Jan 27 14:31:46 crc kubenswrapper[4698]: [+]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa ok
Jan 27 14:31:46 crc kubenswrapper[4698]: [+]poststarthook/project.openshift.io-projectcache ok
Jan 27 14:31:46 crc kubenswrapper[4698]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok
Jan 27 14:31:46 crc kubenswrapper[4698]: [+]poststarthook/openshift.io-startinformers ok
Jan 27 14:31:46 crc kubenswrapper[4698]: [+]poststarthook/openshift.io-restmapperupdater ok
Jan 27 14:31:46 crc kubenswrapper[4698]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok
Jan 27 14:31:46 crc kubenswrapper[4698]: livez check failed
Jan 27 14:31:46 crc kubenswrapper[4698]: I0127 14:31:46.167669 4698 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-76f77b778f-z5f9l" podUID="f57848ff-da41-4c6a-9586-c57676b73c90" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 27 14:31:46 crc kubenswrapper[4698]: I0127 14:31:46.169996 4698 generic.go:334] "Generic (PLEG): container finished" podID="6947cad8-3436-4bc3-8bda-c2c1a4972402" containerID="5b03cd7a1f76fa594d0ac34fddbd7c9c367e4e775e079775e432568a47721756" exitCode=0
Jan 27 14:31:46 crc kubenswrapper[4698]: I0127 14:31:46.170130 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mkhfh" event={"ID":"6947cad8-3436-4bc3-8bda-c2c1a4972402","Type":"ContainerDied","Data":"5b03cd7a1f76fa594d0ac34fddbd7c9c367e4e775e079775e432568a47721756"}
Jan 27 14:31:46 crc kubenswrapper[4698]: I0127 14:31:46.176013 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29492070-z2sp4" event={"ID":"4bc4ed9d-df5c-4eee-82d1-d8c68e7bb3ff","Type":"ContainerDied","Data":"f7319e6841cd87ac2083787f601b57c0d1d48fe7f93d4dd7d26bc597a34aaf06"}
Jan 27 14:31:46 crc kubenswrapper[4698]: I0127 14:31:46.176033 4698 util.go:48] "No ready sandbox for pod can be found.
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29492070-z2sp4" Jan 27 14:31:46 crc kubenswrapper[4698]: I0127 14:31:46.176054 4698 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f7319e6841cd87ac2083787f601b57c0d1d48fe7f93d4dd7d26bc597a34aaf06" Jan 27 14:31:46 crc kubenswrapper[4698]: I0127 14:31:46.178160 4698 generic.go:334] "Generic (PLEG): container finished" podID="530c77f2-b81c-4835-989c-57b155f04d2c" containerID="7636d7a95cae5f4fceaba819ac0acf2f2e898c9c7641f2d4f94ce5a33a879272" exitCode=0 Jan 27 14:31:46 crc kubenswrapper[4698]: I0127 14:31:46.178216 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-462jr" event={"ID":"530c77f2-b81c-4835-989c-57b155f04d2c","Type":"ContainerDied","Data":"7636d7a95cae5f4fceaba819ac0acf2f2e898c9c7641f2d4f94ce5a33a879272"} Jan 27 14:31:46 crc kubenswrapper[4698]: I0127 14:31:46.185153 4698 generic.go:334] "Generic (PLEG): container finished" podID="f2031067-c690-4330-98dc-ff9259ccbb2f" containerID="823a2dd213974b2b13e2cf8ec3e90dcc631d8e187f4ee3c359a11bc59836c400" exitCode=0 Jan 27 14:31:46 crc kubenswrapper[4698]: I0127 14:31:46.185223 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qdgff" event={"ID":"f2031067-c690-4330-98dc-ff9259ccbb2f","Type":"ContainerDied","Data":"823a2dd213974b2b13e2cf8ec3e90dcc631d8e187f4ee3c359a11bc59836c400"} Jan 27 14:31:46 crc kubenswrapper[4698]: I0127 14:31:46.198302 4698 generic.go:334] "Generic (PLEG): container finished" podID="7f32c526-aea0-4758-a1ea-d0a694af3573" containerID="0c329e85207520291142714c82a2d50fdfb9d97a3b151c7a5f7de6b2145e6bfb" exitCode=0 Jan 27 14:31:46 crc kubenswrapper[4698]: I0127 14:31:46.198389 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9t9sp" event={"ID":"7f32c526-aea0-4758-a1ea-d0a694af3573","Type":"ContainerDied","Data":"0c329e85207520291142714c82a2d50fdfb9d97a3b151c7a5f7de6b2145e6bfb"} Jan 27 14:31:46 crc kubenswrapper[4698]: I0127 14:31:46.199363 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-8-crc" podStartSLOduration=9.199351363 podStartE2EDuration="9.199351363s" podCreationTimestamp="2026-01-27 14:31:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:31:46.163297541 +0000 UTC m=+161.840075006" watchObservedRunningTime="2026-01-27 14:31:46.199351363 +0000 UTC m=+161.876128828" Jan 27 14:31:46 crc kubenswrapper[4698]: I0127 14:31:46.215774 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-xstml" event={"ID":"ce70fddb-4db4-40ef-a5fc-b27412e519bd","Type":"ContainerStarted","Data":"ae701207b7d34a026b22d264cb07f69fa1c10c706f89ba2a89c0407de01c7c9e"} Jan 27 14:31:46 crc kubenswrapper[4698]: I0127 14:31:46.215826 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-xstml" event={"ID":"ce70fddb-4db4-40ef-a5fc-b27412e519bd","Type":"ContainerStarted","Data":"b12111c99d12309434e37289b056d93d49066e47b2eebb4b43c2be54453bb7ed"} Jan 27 14:31:46 crc kubenswrapper[4698]: I0127 14:31:46.216242 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-879f6c89f-xstml" Jan 27 14:31:46 crc kubenswrapper[4698]: I0127 
14:31:46.224778 4698 generic.go:334] "Generic (PLEG): container finished" podID="882a9575-2eeb-4f8e-812c-2419b499a07e" containerID="e7d38ad003696304b3629ec8a61986dfcf73fa866b4b74bd03e34a61722bd3d2" exitCode=0 Jan 27 14:31:46 crc kubenswrapper[4698]: I0127 14:31:46.224979 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cdp6k" event={"ID":"882a9575-2eeb-4f8e-812c-2419b499a07e","Type":"ContainerDied","Data":"e7d38ad003696304b3629ec8a61986dfcf73fa866b4b74bd03e34a61722bd3d2"} Jan 27 14:31:46 crc kubenswrapper[4698]: I0127 14:31:46.226102 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-879f6c89f-xstml" Jan 27 14:31:46 crc kubenswrapper[4698]: I0127 14:31:46.245241 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 14:31:46 crc kubenswrapper[4698]: I0127 14:31:46.271927 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 27 14:31:46 crc kubenswrapper[4698]: I0127 14:31:46.307959 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-879f6c89f-xstml" podStartSLOduration=12.307935702 podStartE2EDuration="12.307935702s" podCreationTimestamp="2026-01-27 14:31:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:31:46.304334918 +0000 UTC m=+161.981112383" watchObservedRunningTime="2026-01-27 14:31:46.307935702 +0000 UTC m=+161.984713167" Jan 27 14:31:46 crc kubenswrapper[4698]: I0127 14:31:46.348914 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vz5fp\" (UID: \"fb1d6ca9-58c3-4d0f-9b6f-e9dad08632b8\") " pod="openshift-image-registry/image-registry-697d97f7c8-vz5fp" Jan 27 14:31:46 crc kubenswrapper[4698]: I0127 14:31:46.360480 4698 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
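The entries above show both halves of a CSI plugin registration race at node startup: every UnmountVolume.TearDown and MountVolume.MountDevice attempt for pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 fails with "driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers" until the plugin socket is picked up from /var/lib/kubelet/plugins_registry and the driver is validated and registered, after which MountDevice proceeds (and is skipped outright, because the driver does not advertise the STAGE_UNSTAGE_VOLUME capability). While the driver is missing, each failure is parked by nestedpendingoperations behind a retry gate, visible as "No retries permitted until ... (durationBeforeRetry 500ms)". A minimal sketch of that retry gating, assuming kubelet's usual exponential backoff; the 500ms initial delay is taken from the log, while the doubling factor and the cap are assumptions, not read from it:

```python
from datetime import datetime, timedelta
from itertools import islice

def backoff_schedule(first_failure, initial=0.5, factor=2.0, cap=130.0):
    """Yield (earliest_retry_time, delay_s) after each consecutive failure
    of one volume operation, doubling the delay up to a cap."""
    delay, t = initial, first_failure
    while True:
        t += timedelta(seconds=delay)
        yield t, delay
        delay = min(delay * factor, cap)

# First-failure timestamp taken from the MountDevice error entry above.
first = datetime.fromisoformat("2026-01-27 14:31:46.042761")
for when, delay in islice(backoff_schedule(first), 4):
    print(f"no retries permitted until {when} (durationBeforeRetry {delay}s)")
```

The first line it prints (14:31:46.542761) lines up with the log's "No retries permitted until 2026-01-27 14:31:46.542743" to within the operation executor's own bookkeeping skew.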
Jan 27 14:31:46 crc kubenswrapper[4698]: I0127 14:31:46.360546 4698 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vz5fp\" (UID: \"fb1d6ca9-58c3-4d0f-9b6f-e9dad08632b8\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount\"" pod="openshift-image-registry/image-registry-697d97f7c8-vz5fp"
Jan 27 14:31:46 crc kubenswrapper[4698]: I0127 14:31:46.420710 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vz5fp\" (UID: \"fb1d6ca9-58c3-4d0f-9b6f-e9dad08632b8\") " pod="openshift-image-registry/image-registry-697d97f7c8-vz5fp"
Jan 27 14:31:46 crc kubenswrapper[4698]: I0127 14:31:46.430262 4698 patch_prober.go:28] interesting pod/downloads-7954f5f757-bdrpp container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.10:8080/\": dial tcp 10.217.0.10:8080: connect: connection refused" start-of-body=
Jan 27 14:31:46 crc kubenswrapper[4698]: I0127 14:31:46.430333 4698 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-bdrpp" podUID="64b274f6-5293-4c0e-a51a-dca8518c5a40" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.10:8080/\": dial tcp 10.217.0.10:8080: connect: connection refused"
Jan 27 14:31:46 crc kubenswrapper[4698]: I0127 14:31:46.430338 4698 patch_prober.go:28] interesting pod/downloads-7954f5f757-bdrpp container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.10:8080/\": dial tcp 10.217.0.10:8080: connect: connection refused" start-of-body=
Jan 27 14:31:46 crc kubenswrapper[4698]: I0127 14:31:46.430393 4698 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-bdrpp" podUID="64b274f6-5293-4c0e-a51a-dca8518c5a40" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.10:8080/\": dial tcp 10.217.0.10:8080: connect: connection refused"
Jan 27 14:31:46 crc kubenswrapper[4698]: I0127 14:31:46.493155 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-558db77b4-x7rj5"
Jan 27 14:31:46 crc kubenswrapper[4698]: I0127 14:31:46.536403 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-vz5fp"
Jan 27 14:31:46 crc kubenswrapper[4698]: I0127 14:31:46.660723 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc"
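The downloads-pod probes a few entries above fail with "connect: connection refused": kubelet's HTTP probe is essentially a GET against the pod IP and port, and nothing is listening on 10.217.0.10:8080 yet. A rough sketch of that check, assuming the usual pass rule (a 2xx/3xx answer passes, anything else or a transport error fails); the URL is simply the probe target from the log:

```python
import urllib.error
import urllib.request

def http_probe(url: str, timeout: float = 1.0):
    """Approximate an HTTP GET probe: 2xx/3xx passes, anything else fails."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return 200 <= resp.status < 400, f"statuscode: {resp.status}"
    except urllib.error.HTTPError as e:   # server answered with 4xx/5xx
        return False, f"statuscode: {e.code}"
    except OSError as e:                  # DNS failure, timeout, connection refused
        return False, str(e)

print(http_probe("http://10.217.0.10:8080/"))  # probe target taken from the log above
```

A refused connection and an HTTP 500 are different failures at the transport level, but identical to kubelet: both count against the probe's failure threshold.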
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 27 14:31:46 crc kubenswrapper[4698]: I0127 14:31:46.765476 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/9a086c8c-4fad-47a3-a463-47ccd03a65fe-kubelet-dir\") pod \"9a086c8c-4fad-47a3-a463-47ccd03a65fe\" (UID: \"9a086c8c-4fad-47a3-a463-47ccd03a65fe\") " Jan 27 14:31:46 crc kubenswrapper[4698]: I0127 14:31:46.765554 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9a086c8c-4fad-47a3-a463-47ccd03a65fe-kube-api-access\") pod \"9a086c8c-4fad-47a3-a463-47ccd03a65fe\" (UID: \"9a086c8c-4fad-47a3-a463-47ccd03a65fe\") " Jan 27 14:31:46 crc kubenswrapper[4698]: I0127 14:31:46.765585 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9a086c8c-4fad-47a3-a463-47ccd03a65fe-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "9a086c8c-4fad-47a3-a463-47ccd03a65fe" (UID: "9a086c8c-4fad-47a3-a463-47ccd03a65fe"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 14:31:46 crc kubenswrapper[4698]: I0127 14:31:46.765876 4698 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/9a086c8c-4fad-47a3-a463-47ccd03a65fe-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 27 14:31:46 crc kubenswrapper[4698]: I0127 14:31:46.773908 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9a086c8c-4fad-47a3-a463-47ccd03a65fe-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "9a086c8c-4fad-47a3-a463-47ccd03a65fe" (UID: "9a086c8c-4fad-47a3-a463-47ccd03a65fe"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:31:46 crc kubenswrapper[4698]: I0127 14:31:46.867003 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9a086c8c-4fad-47a3-a463-47ccd03a65fe-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 27 14:31:47 crc kubenswrapper[4698]: I0127 14:31:47.005510 4698 patch_prober.go:28] interesting pod/console-f9d7485db-cvnrn container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.19:8443/health\": dial tcp 10.217.0.19:8443: connect: connection refused" start-of-body= Jan 27 14:31:47 crc kubenswrapper[4698]: I0127 14:31:47.005564 4698 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-cvnrn" podUID="12b42d9a-df65-4a89-8961-1fa7f9b8a14b" containerName="console" probeResult="failure" output="Get \"https://10.217.0.19:8443/health\": dial tcp 10.217.0.19:8443: connect: connection refused" Jan 27 14:31:47 crc kubenswrapper[4698]: I0127 14:31:47.013016 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f668bae-612b-4b75-9490-919e737c6a3b" path="/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes" Jan 27 14:31:47 crc kubenswrapper[4698]: I0127 14:31:47.018351 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-58897d9998-bz9jw" Jan 27 14:31:47 crc kubenswrapper[4698]: I0127 14:31:47.065215 4698 patch_prober.go:28] interesting pod/router-default-5444994796-8bg4r container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 27 14:31:47 crc kubenswrapper[4698]: [-]has-synced failed: reason withheld Jan 27 14:31:47 crc kubenswrapper[4698]: [+]process-running ok Jan 27 14:31:47 crc kubenswrapper[4698]: healthz check failed Jan 27 14:31:47 crc kubenswrapper[4698]: I0127 14:31:47.065438 4698 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-8bg4r" podUID="79cd2a60-54ce-46a5-96cd-53bb078fa804" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 27 14:31:47 crc kubenswrapper[4698]: I0127 14:31:47.243261 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"9a086c8c-4fad-47a3-a463-47ccd03a65fe","Type":"ContainerDied","Data":"d0e23c8ceabd311e6dd874c52aed38265f0d34193c354e53da11d31787114037"} Jan 27 14:31:47 crc kubenswrapper[4698]: I0127 14:31:47.243315 4698 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d0e23c8ceabd311e6dd874c52aed38265f0d34193c354e53da11d31787114037" Jan 27 14:31:47 crc kubenswrapper[4698]: I0127 14:31:47.243377 4698 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 27 14:31:47 crc kubenswrapper[4698]: I0127 14:31:47.251308 4698 generic.go:334] "Generic (PLEG): container finished" podID="31a410d2-796f-41c9-a0ed-2a214fd5d560" containerID="27313390df8e2372fe7ea13c0b0c97bec45f3853cba089ecb8e82ddb8d96dcb7" exitCode=0 Jan 27 14:31:47 crc kubenswrapper[4698]: I0127 14:31:47.251395 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"31a410d2-796f-41c9-a0ed-2a214fd5d560","Type":"ContainerDied","Data":"27313390df8e2372fe7ea13c0b0c97bec45f3853cba089ecb8e82ddb8d96dcb7"} Jan 27 14:31:47 crc kubenswrapper[4698]: I0127 14:31:47.269318 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-zft44" event={"ID":"d758ca6f-86ae-44b7-bfcb-8e7f2e2205e8","Type":"ContainerStarted","Data":"d8841834748241366007199e6384971c7083d30e2247f475cbb752270850991c"} Jan 27 14:31:47 crc kubenswrapper[4698]: I0127 14:31:47.292082 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-vz5fp"] Jan 27 14:31:47 crc kubenswrapper[4698]: I0127 14:31:47.296273 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="hostpath-provisioner/csi-hostpathplugin-zft44" podStartSLOduration=23.296235043 podStartE2EDuration="23.296235043s" podCreationTimestamp="2026-01-27 14:31:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:31:47.293301686 +0000 UTC m=+162.970079171" watchObservedRunningTime="2026-01-27 14:31:47.296235043 +0000 UTC m=+162.973012528" Jan 27 14:31:47 crc kubenswrapper[4698]: W0127 14:31:47.304997 4698 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfb1d6ca9_58c3_4d0f_9b6f_e9dad08632b8.slice/crio-20dbf4987d8ba1764c6ef92fbeb85c51aefc92ad4d2fc4fecc4e5c0d4fdb463a WatchSource:0}: Error finding container 20dbf4987d8ba1764c6ef92fbeb85c51aefc92ad4d2fc4fecc4e5c0d4fdb463a: Status 404 returned error can't find the container with id 20dbf4987d8ba1764c6ef92fbeb85c51aefc92ad4d2fc4fecc4e5c0d4fdb463a Jan 27 14:31:47 crc kubenswrapper[4698]: I0127 14:31:47.377018 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-pcr8w" Jan 27 14:31:47 crc kubenswrapper[4698]: I0127 14:31:47.411927 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-qfxjx" Jan 27 14:31:47 crc kubenswrapper[4698]: I0127 14:31:47.430837 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-kwgll" Jan 27 14:31:48 crc kubenswrapper[4698]: I0127 14:31:48.066203 4698 patch_prober.go:28] interesting pod/router-default-5444994796-8bg4r container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 27 14:31:48 crc kubenswrapper[4698]: [-]has-synced failed: reason withheld Jan 27 14:31:48 crc kubenswrapper[4698]: [+]process-running ok Jan 27 14:31:48 crc kubenswrapper[4698]: healthz check failed Jan 27 14:31:48 crc kubenswrapper[4698]: I0127 14:31:48.066581 4698 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-ingress/router-default-5444994796-8bg4r" podUID="79cd2a60-54ce-46a5-96cd-53bb078fa804" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 27 14:31:48 crc kubenswrapper[4698]: I0127 14:31:48.246409 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7777fb866f-m8slw" Jan 27 14:31:48 crc kubenswrapper[4698]: I0127 14:31:48.283173 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-vz5fp" event={"ID":"fb1d6ca9-58c3-4d0f-9b6f-e9dad08632b8","Type":"ContainerStarted","Data":"7a744eda474bd4d99c2f492bb5fb18fb9a6f9209e64d0226ecc5c31160c7aebf"} Jan 27 14:31:48 crc kubenswrapper[4698]: I0127 14:31:48.283229 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-vz5fp" event={"ID":"fb1d6ca9-58c3-4d0f-9b6f-e9dad08632b8","Type":"ContainerStarted","Data":"20dbf4987d8ba1764c6ef92fbeb85c51aefc92ad4d2fc4fecc4e5c0d4fdb463a"} Jan 27 14:31:48 crc kubenswrapper[4698]: I0127 14:31:48.309732 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-697d97f7c8-vz5fp" podStartSLOduration=140.309716421 podStartE2EDuration="2m20.309716421s" podCreationTimestamp="2026-01-27 14:29:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:31:48.30315465 +0000 UTC m=+163.979932125" watchObservedRunningTime="2026-01-27 14:31:48.309716421 +0000 UTC m=+163.986493886" Jan 27 14:31:48 crc kubenswrapper[4698]: I0127 14:31:48.547281 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 27 14:31:48 crc kubenswrapper[4698]: I0127 14:31:48.708151 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/31a410d2-796f-41c9-a0ed-2a214fd5d560-kube-api-access\") pod \"31a410d2-796f-41c9-a0ed-2a214fd5d560\" (UID: \"31a410d2-796f-41c9-a0ed-2a214fd5d560\") " Jan 27 14:31:48 crc kubenswrapper[4698]: I0127 14:31:48.708299 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/31a410d2-796f-41c9-a0ed-2a214fd5d560-kubelet-dir\") pod \"31a410d2-796f-41c9-a0ed-2a214fd5d560\" (UID: \"31a410d2-796f-41c9-a0ed-2a214fd5d560\") " Jan 27 14:31:48 crc kubenswrapper[4698]: I0127 14:31:48.708502 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/31a410d2-796f-41c9-a0ed-2a214fd5d560-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "31a410d2-796f-41c9-a0ed-2a214fd5d560" (UID: "31a410d2-796f-41c9-a0ed-2a214fd5d560"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 14:31:48 crc kubenswrapper[4698]: I0127 14:31:48.721897 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31a410d2-796f-41c9-a0ed-2a214fd5d560-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "31a410d2-796f-41c9-a0ed-2a214fd5d560" (UID: "31a410d2-796f-41c9-a0ed-2a214fd5d560"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:31:48 crc kubenswrapper[4698]: I0127 14:31:48.810001 4698 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/31a410d2-796f-41c9-a0ed-2a214fd5d560-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 27 14:31:48 crc kubenswrapper[4698]: I0127 14:31:48.810037 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/31a410d2-796f-41c9-a0ed-2a214fd5d560-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 27 14:31:49 crc kubenswrapper[4698]: I0127 14:31:49.064673 4698 patch_prober.go:28] interesting pod/router-default-5444994796-8bg4r container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 27 14:31:49 crc kubenswrapper[4698]: [-]has-synced failed: reason withheld Jan 27 14:31:49 crc kubenswrapper[4698]: [+]process-running ok Jan 27 14:31:49 crc kubenswrapper[4698]: healthz check failed Jan 27 14:31:49 crc kubenswrapper[4698]: I0127 14:31:49.064747 4698 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-8bg4r" podUID="79cd2a60-54ce-46a5-96cd-53bb078fa804" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 27 14:31:49 crc kubenswrapper[4698]: I0127 14:31:49.315166 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"31a410d2-796f-41c9-a0ed-2a214fd5d560","Type":"ContainerDied","Data":"3b92766c5b4df228afcf517e538fdb743df4539a6bdc9aed72d9b7dd7fc63a7e"} Jan 27 14:31:49 crc kubenswrapper[4698]: I0127 14:31:49.315204 4698 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 27 14:31:49 crc kubenswrapper[4698]: I0127 14:31:49.315212 4698 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3b92766c5b4df228afcf517e538fdb743df4539a6bdc9aed72d9b7dd7fc63a7e" Jan 27 14:31:49 crc kubenswrapper[4698]: I0127 14:31:49.315292 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-697d97f7c8-vz5fp" Jan 27 14:31:50 crc kubenswrapper[4698]: I0127 14:31:50.063017 4698 patch_prober.go:28] interesting pod/router-default-5444994796-8bg4r container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 27 14:31:50 crc kubenswrapper[4698]: [-]has-synced failed: reason withheld Jan 27 14:31:50 crc kubenswrapper[4698]: [+]process-running ok Jan 27 14:31:50 crc kubenswrapper[4698]: healthz check failed Jan 27 14:31:50 crc kubenswrapper[4698]: I0127 14:31:50.063082 4698 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-8bg4r" podUID="79cd2a60-54ce-46a5-96cd-53bb078fa804" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 27 14:31:50 crc kubenswrapper[4698]: I0127 14:31:50.532667 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/621bb20d-2ffa-4e89-b522-d04b4764fcc3-metrics-certs\") pod \"network-metrics-daemon-lpvsw\" (UID: \"621bb20d-2ffa-4e89-b522-d04b4764fcc3\") " pod="openshift-multus/network-metrics-daemon-lpvsw" Jan 27 14:31:50 crc kubenswrapper[4698]: I0127 14:31:50.539111 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/621bb20d-2ffa-4e89-b522-d04b4764fcc3-metrics-certs\") pod \"network-metrics-daemon-lpvsw\" (UID: \"621bb20d-2ffa-4e89-b522-d04b4764fcc3\") " pod="openshift-multus/network-metrics-daemon-lpvsw" Jan 27 14:31:50 crc kubenswrapper[4698]: I0127 14:31:50.807107 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-lpvsw" Jan 27 14:31:51 crc kubenswrapper[4698]: I0127 14:31:51.064027 4698 patch_prober.go:28] interesting pod/router-default-5444994796-8bg4r container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 27 14:31:51 crc kubenswrapper[4698]: [-]has-synced failed: reason withheld Jan 27 14:31:51 crc kubenswrapper[4698]: [+]process-running ok Jan 27 14:31:51 crc kubenswrapper[4698]: healthz check failed Jan 27 14:31:51 crc kubenswrapper[4698]: I0127 14:31:51.064089 4698 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-8bg4r" podUID="79cd2a60-54ce-46a5-96cd-53bb078fa804" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 27 14:31:51 crc kubenswrapper[4698]: I0127 14:31:51.088423 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-lpvsw"] Jan 27 14:31:51 crc kubenswrapper[4698]: I0127 14:31:51.161566 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-76f77b778f-z5f9l" Jan 27 14:31:51 crc kubenswrapper[4698]: I0127 14:31:51.168968 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-76f77b778f-z5f9l" Jan 27 14:31:51 crc kubenswrapper[4698]: I0127 14:31:51.341486 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-lpvsw" event={"ID":"621bb20d-2ffa-4e89-b522-d04b4764fcc3","Type":"ContainerStarted","Data":"005a6237136b58147dc9241533df60c38e86dc2c84145221a9a95e3626497e6a"} Jan 27 14:31:52 crc kubenswrapper[4698]: I0127 14:31:52.064265 4698 patch_prober.go:28] interesting pod/router-default-5444994796-8bg4r container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 27 14:31:52 crc kubenswrapper[4698]: [-]has-synced failed: reason withheld Jan 27 14:31:52 crc kubenswrapper[4698]: [+]process-running ok Jan 27 14:31:52 crc kubenswrapper[4698]: healthz check failed Jan 27 14:31:52 crc kubenswrapper[4698]: I0127 14:31:52.064339 4698 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-8bg4r" podUID="79cd2a60-54ce-46a5-96cd-53bb078fa804" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 27 14:31:52 crc kubenswrapper[4698]: I0127 14:31:52.349145 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-lpvsw" event={"ID":"621bb20d-2ffa-4e89-b522-d04b4764fcc3","Type":"ContainerStarted","Data":"6a8b0c3189b2886a122ab0d47fa14f531bf9a99255570f6743f051d0430a14d0"} Jan 27 14:31:53 crc kubenswrapper[4698]: I0127 14:31:53.064315 4698 patch_prober.go:28] interesting pod/router-default-5444994796-8bg4r container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 27 14:31:53 crc kubenswrapper[4698]: [-]has-synced failed: reason withheld Jan 27 14:31:53 crc kubenswrapper[4698]: [+]process-running ok Jan 27 14:31:53 crc kubenswrapper[4698]: healthz check failed Jan 27 14:31:53 crc kubenswrapper[4698]: I0127 14:31:53.064820 4698 prober.go:107] "Probe failed" 
probeType="Startup" pod="openshift-ingress/router-default-5444994796-8bg4r" podUID="79cd2a60-54ce-46a5-96cd-53bb078fa804" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 27 14:31:53 crc kubenswrapper[4698]: I0127 14:31:53.755572 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-xstml"] Jan 27 14:31:53 crc kubenswrapper[4698]: I0127 14:31:53.756415 4698 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-879f6c89f-xstml" podUID="ce70fddb-4db4-40ef-a5fc-b27412e519bd" containerName="controller-manager" containerID="cri-o://ae701207b7d34a026b22d264cb07f69fa1c10c706f89ba2a89c0407de01c7c9e" gracePeriod=30 Jan 27 14:31:53 crc kubenswrapper[4698]: I0127 14:31:53.758750 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-p5fgd"] Jan 27 14:31:53 crc kubenswrapper[4698]: I0127 14:31:53.759017 4698 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-p5fgd" podUID="c8964e6d-30b9-4402-b132-105cb5a1695b" containerName="route-controller-manager" containerID="cri-o://14e848d33d0f4786239a63d3a84bfd67c1f315319adbe7d8241d81cc42d53c62" gracePeriod=30 Jan 27 14:31:54 crc kubenswrapper[4698]: I0127 14:31:54.065133 4698 patch_prober.go:28] interesting pod/router-default-5444994796-8bg4r container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 27 14:31:54 crc kubenswrapper[4698]: [-]has-synced failed: reason withheld Jan 27 14:31:54 crc kubenswrapper[4698]: [+]process-running ok Jan 27 14:31:54 crc kubenswrapper[4698]: healthz check failed Jan 27 14:31:54 crc kubenswrapper[4698]: I0127 14:31:54.065370 4698 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-8bg4r" podUID="79cd2a60-54ce-46a5-96cd-53bb078fa804" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 27 14:31:54 crc kubenswrapper[4698]: I0127 14:31:54.912972 4698 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-xstml container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.54:8443/healthz\": dial tcp 10.217.0.54:8443: connect: connection refused" start-of-body= Jan 27 14:31:54 crc kubenswrapper[4698]: I0127 14:31:54.913048 4698 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-xstml" podUID="ce70fddb-4db4-40ef-a5fc-b27412e519bd" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.54:8443/healthz\": dial tcp 10.217.0.54:8443: connect: connection refused" Jan 27 14:31:55 crc kubenswrapper[4698]: I0127 14:31:55.064013 4698 patch_prober.go:28] interesting pod/router-default-5444994796-8bg4r container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 27 14:31:55 crc kubenswrapper[4698]: [-]has-synced failed: reason withheld Jan 27 14:31:55 crc kubenswrapper[4698]: [+]process-running ok Jan 27 14:31:55 crc kubenswrapper[4698]: healthz check failed Jan 27 14:31:55 crc kubenswrapper[4698]: 
I0127 14:31:55.064140 4698 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-8bg4r" podUID="79cd2a60-54ce-46a5-96cd-53bb078fa804" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 27 14:31:55 crc kubenswrapper[4698]: I0127 14:31:55.375533 4698 generic.go:334] "Generic (PLEG): container finished" podID="c8964e6d-30b9-4402-b132-105cb5a1695b" containerID="14e848d33d0f4786239a63d3a84bfd67c1f315319adbe7d8241d81cc42d53c62" exitCode=0 Jan 27 14:31:55 crc kubenswrapper[4698]: I0127 14:31:55.375577 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-p5fgd" event={"ID":"c8964e6d-30b9-4402-b132-105cb5a1695b","Type":"ContainerDied","Data":"14e848d33d0f4786239a63d3a84bfd67c1f315319adbe7d8241d81cc42d53c62"} Jan 27 14:31:56 crc kubenswrapper[4698]: I0127 14:31:56.065165 4698 patch_prober.go:28] interesting pod/router-default-5444994796-8bg4r container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 27 14:31:56 crc kubenswrapper[4698]: [-]has-synced failed: reason withheld Jan 27 14:31:56 crc kubenswrapper[4698]: [+]process-running ok Jan 27 14:31:56 crc kubenswrapper[4698]: healthz check failed Jan 27 14:31:56 crc kubenswrapper[4698]: I0127 14:31:56.065267 4698 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-8bg4r" podUID="79cd2a60-54ce-46a5-96cd-53bb078fa804" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 27 14:31:56 crc kubenswrapper[4698]: I0127 14:31:56.385128 4698 generic.go:334] "Generic (PLEG): container finished" podID="ce70fddb-4db4-40ef-a5fc-b27412e519bd" containerID="ae701207b7d34a026b22d264cb07f69fa1c10c706f89ba2a89c0407de01c7c9e" exitCode=0 Jan 27 14:31:56 crc kubenswrapper[4698]: I0127 14:31:56.385184 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-xstml" event={"ID":"ce70fddb-4db4-40ef-a5fc-b27412e519bd","Type":"ContainerDied","Data":"ae701207b7d34a026b22d264cb07f69fa1c10c706f89ba2a89c0407de01c7c9e"} Jan 27 14:31:56 crc kubenswrapper[4698]: I0127 14:31:56.432027 4698 patch_prober.go:28] interesting pod/downloads-7954f5f757-bdrpp container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.10:8080/\": dial tcp 10.217.0.10:8080: connect: connection refused" start-of-body= Jan 27 14:31:56 crc kubenswrapper[4698]: I0127 14:31:56.432377 4698 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-bdrpp" podUID="64b274f6-5293-4c0e-a51a-dca8518c5a40" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.10:8080/\": dial tcp 10.217.0.10:8080: connect: connection refused" Jan 27 14:31:56 crc kubenswrapper[4698]: I0127 14:31:56.432031 4698 patch_prober.go:28] interesting pod/downloads-7954f5f757-bdrpp container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.10:8080/\": dial tcp 10.217.0.10:8080: connect: connection refused" start-of-body= Jan 27 14:31:56 crc kubenswrapper[4698]: I0127 14:31:56.432441 4698 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-bdrpp" podUID="64b274f6-5293-4c0e-a51a-dca8518c5a40" 
containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.10:8080/\": dial tcp 10.217.0.10:8080: connect: connection refused" Jan 27 14:31:56 crc kubenswrapper[4698]: I0127 14:31:56.432471 4698 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-console/downloads-7954f5f757-bdrpp" Jan 27 14:31:56 crc kubenswrapper[4698]: I0127 14:31:56.432927 4698 patch_prober.go:28] interesting pod/downloads-7954f5f757-bdrpp container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.10:8080/\": dial tcp 10.217.0.10:8080: connect: connection refused" start-of-body= Jan 27 14:31:56 crc kubenswrapper[4698]: I0127 14:31:56.432957 4698 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-bdrpp" podUID="64b274f6-5293-4c0e-a51a-dca8518c5a40" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.10:8080/\": dial tcp 10.217.0.10:8080: connect: connection refused" Jan 27 14:31:56 crc kubenswrapper[4698]: I0127 14:31:56.433128 4698 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="download-server" containerStatusID={"Type":"cri-o","ID":"95b352ab240f637939137882dfdfe55e57ef9176f53c2f7301858e50c6dcfdae"} pod="openshift-console/downloads-7954f5f757-bdrpp" containerMessage="Container download-server failed liveness probe, will be restarted" Jan 27 14:31:56 crc kubenswrapper[4698]: I0127 14:31:56.433169 4698 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/downloads-7954f5f757-bdrpp" podUID="64b274f6-5293-4c0e-a51a-dca8518c5a40" containerName="download-server" containerID="cri-o://95b352ab240f637939137882dfdfe55e57ef9176f53c2f7301858e50c6dcfdae" gracePeriod=2 Jan 27 14:31:56 crc kubenswrapper[4698]: I0127 14:31:56.991601 4698 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-p5fgd container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.13:8443/healthz\": dial tcp 10.217.0.13:8443: connect: connection refused" start-of-body= Jan 27 14:31:56 crc kubenswrapper[4698]: I0127 14:31:56.991675 4698 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-p5fgd" podUID="c8964e6d-30b9-4402-b132-105cb5a1695b" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.13:8443/healthz\": dial tcp 10.217.0.13:8443: connect: connection refused" Jan 27 14:31:57 crc kubenswrapper[4698]: I0127 14:31:57.005416 4698 patch_prober.go:28] interesting pod/console-f9d7485db-cvnrn container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.19:8443/health\": dial tcp 10.217.0.19:8443: connect: connection refused" start-of-body= Jan 27 14:31:57 crc kubenswrapper[4698]: I0127 14:31:57.005736 4698 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-cvnrn" podUID="12b42d9a-df65-4a89-8961-1fa7f9b8a14b" containerName="console" probeResult="failure" output="Get \"https://10.217.0.19:8443/health\": dial tcp 10.217.0.19:8443: connect: connection refused" Jan 27 14:31:57 crc kubenswrapper[4698]: I0127 14:31:57.063765 4698 patch_prober.go:28] interesting pod/router-default-5444994796-8bg4r container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 
500" start-of-body=[-]backend-http failed: reason withheld Jan 27 14:31:57 crc kubenswrapper[4698]: [-]has-synced failed: reason withheld Jan 27 14:31:57 crc kubenswrapper[4698]: [+]process-running ok Jan 27 14:31:57 crc kubenswrapper[4698]: healthz check failed Jan 27 14:31:57 crc kubenswrapper[4698]: I0127 14:31:57.063836 4698 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-8bg4r" podUID="79cd2a60-54ce-46a5-96cd-53bb078fa804" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 27 14:31:57 crc kubenswrapper[4698]: I0127 14:31:57.395457 4698 generic.go:334] "Generic (PLEG): container finished" podID="64b274f6-5293-4c0e-a51a-dca8518c5a40" containerID="95b352ab240f637939137882dfdfe55e57ef9176f53c2f7301858e50c6dcfdae" exitCode=0 Jan 27 14:31:57 crc kubenswrapper[4698]: I0127 14:31:57.395493 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-bdrpp" event={"ID":"64b274f6-5293-4c0e-a51a-dca8518c5a40","Type":"ContainerDied","Data":"95b352ab240f637939137882dfdfe55e57ef9176f53c2f7301858e50c6dcfdae"} Jan 27 14:31:57 crc kubenswrapper[4698]: I0127 14:31:57.452403 4698 patch_prober.go:28] interesting pod/machine-config-daemon-ndrd6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 14:31:57 crc kubenswrapper[4698]: I0127 14:31:57.452456 4698 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ndrd6" podUID="3e403fc5-7005-474c-8c75-b7906b481677" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 14:31:58 crc kubenswrapper[4698]: I0127 14:31:58.063777 4698 patch_prober.go:28] interesting pod/router-default-5444994796-8bg4r container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 27 14:31:58 crc kubenswrapper[4698]: [-]has-synced failed: reason withheld Jan 27 14:31:58 crc kubenswrapper[4698]: [+]process-running ok Jan 27 14:31:58 crc kubenswrapper[4698]: healthz check failed Jan 27 14:31:58 crc kubenswrapper[4698]: I0127 14:31:58.063850 4698 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-8bg4r" podUID="79cd2a60-54ce-46a5-96cd-53bb078fa804" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 27 14:31:59 crc kubenswrapper[4698]: I0127 14:31:59.064478 4698 patch_prober.go:28] interesting pod/router-default-5444994796-8bg4r container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 27 14:31:59 crc kubenswrapper[4698]: [-]has-synced failed: reason withheld Jan 27 14:31:59 crc kubenswrapper[4698]: [+]process-running ok Jan 27 14:31:59 crc kubenswrapper[4698]: healthz check failed Jan 27 14:31:59 crc kubenswrapper[4698]: I0127 14:31:59.064549 4698 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-8bg4r" podUID="79cd2a60-54ce-46a5-96cd-53bb078fa804" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 
500" Jan 27 14:32:00 crc kubenswrapper[4698]: I0127 14:32:00.063524 4698 patch_prober.go:28] interesting pod/router-default-5444994796-8bg4r container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 27 14:32:00 crc kubenswrapper[4698]: [-]has-synced failed: reason withheld Jan 27 14:32:00 crc kubenswrapper[4698]: [+]process-running ok Jan 27 14:32:00 crc kubenswrapper[4698]: healthz check failed Jan 27 14:32:00 crc kubenswrapper[4698]: I0127 14:32:00.063856 4698 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-8bg4r" podUID="79cd2a60-54ce-46a5-96cd-53bb078fa804" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 27 14:32:01 crc kubenswrapper[4698]: I0127 14:32:01.064994 4698 patch_prober.go:28] interesting pod/router-default-5444994796-8bg4r container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 27 14:32:01 crc kubenswrapper[4698]: [-]has-synced failed: reason withheld Jan 27 14:32:01 crc kubenswrapper[4698]: [+]process-running ok Jan 27 14:32:01 crc kubenswrapper[4698]: healthz check failed Jan 27 14:32:01 crc kubenswrapper[4698]: I0127 14:32:01.065059 4698 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-8bg4r" podUID="79cd2a60-54ce-46a5-96cd-53bb078fa804" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 27 14:32:02 crc kubenswrapper[4698]: I0127 14:32:02.065688 4698 patch_prober.go:28] interesting pod/router-default-5444994796-8bg4r container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 27 14:32:02 crc kubenswrapper[4698]: [-]has-synced failed: reason withheld Jan 27 14:32:02 crc kubenswrapper[4698]: [+]process-running ok Jan 27 14:32:02 crc kubenswrapper[4698]: healthz check failed Jan 27 14:32:02 crc kubenswrapper[4698]: I0127 14:32:02.066105 4698 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-8bg4r" podUID="79cd2a60-54ce-46a5-96cd-53bb078fa804" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 27 14:32:02 crc kubenswrapper[4698]: I0127 14:32:02.932331 4698 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-p5fgd" Jan 27 14:32:02 crc kubenswrapper[4698]: I0127 14:32:02.964799 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6895945445-45zqc"] Jan 27 14:32:02 crc kubenswrapper[4698]: E0127 14:32:02.967353 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9a086c8c-4fad-47a3-a463-47ccd03a65fe" containerName="pruner" Jan 27 14:32:02 crc kubenswrapper[4698]: I0127 14:32:02.967371 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="9a086c8c-4fad-47a3-a463-47ccd03a65fe" containerName="pruner" Jan 27 14:32:02 crc kubenswrapper[4698]: E0127 14:32:02.967384 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4bc4ed9d-df5c-4eee-82d1-d8c68e7bb3ff" containerName="collect-profiles" Jan 27 14:32:02 crc kubenswrapper[4698]: I0127 14:32:02.967391 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="4bc4ed9d-df5c-4eee-82d1-d8c68e7bb3ff" containerName="collect-profiles" Jan 27 14:32:02 crc kubenswrapper[4698]: E0127 14:32:02.967405 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c8964e6d-30b9-4402-b132-105cb5a1695b" containerName="route-controller-manager" Jan 27 14:32:02 crc kubenswrapper[4698]: I0127 14:32:02.967412 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="c8964e6d-30b9-4402-b132-105cb5a1695b" containerName="route-controller-manager" Jan 27 14:32:02 crc kubenswrapper[4698]: E0127 14:32:02.967421 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="31a410d2-796f-41c9-a0ed-2a214fd5d560" containerName="pruner" Jan 27 14:32:02 crc kubenswrapper[4698]: I0127 14:32:02.967429 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="31a410d2-796f-41c9-a0ed-2a214fd5d560" containerName="pruner" Jan 27 14:32:02 crc kubenswrapper[4698]: I0127 14:32:02.967553 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="c8964e6d-30b9-4402-b132-105cb5a1695b" containerName="route-controller-manager" Jan 27 14:32:02 crc kubenswrapper[4698]: I0127 14:32:02.967564 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="9a086c8c-4fad-47a3-a463-47ccd03a65fe" containerName="pruner" Jan 27 14:32:02 crc kubenswrapper[4698]: I0127 14:32:02.967575 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="4bc4ed9d-df5c-4eee-82d1-d8c68e7bb3ff" containerName="collect-profiles" Jan 27 14:32:02 crc kubenswrapper[4698]: I0127 14:32:02.967585 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="31a410d2-796f-41c9-a0ed-2a214fd5d560" containerName="pruner" Jan 27 14:32:02 crc kubenswrapper[4698]: I0127 14:32:02.968034 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6895945445-45zqc" Jan 27 14:32:02 crc kubenswrapper[4698]: I0127 14:32:02.978006 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6895945445-45zqc"] Jan 27 14:32:03 crc kubenswrapper[4698]: I0127 14:32:03.001814 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c8964e6d-30b9-4402-b132-105cb5a1695b-serving-cert\") pod \"c8964e6d-30b9-4402-b132-105cb5a1695b\" (UID: \"c8964e6d-30b9-4402-b132-105cb5a1695b\") " Jan 27 14:32:03 crc kubenswrapper[4698]: I0127 14:32:03.002059 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c8964e6d-30b9-4402-b132-105cb5a1695b-config\") pod \"c8964e6d-30b9-4402-b132-105cb5a1695b\" (UID: \"c8964e6d-30b9-4402-b132-105cb5a1695b\") " Jan 27 14:32:03 crc kubenswrapper[4698]: I0127 14:32:03.002170 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c8964e6d-30b9-4402-b132-105cb5a1695b-client-ca\") pod \"c8964e6d-30b9-4402-b132-105cb5a1695b\" (UID: \"c8964e6d-30b9-4402-b132-105cb5a1695b\") " Jan 27 14:32:03 crc kubenswrapper[4698]: I0127 14:32:03.002421 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n6622\" (UniqueName: \"kubernetes.io/projected/c8964e6d-30b9-4402-b132-105cb5a1695b-kube-api-access-n6622\") pod \"c8964e6d-30b9-4402-b132-105cb5a1695b\" (UID: \"c8964e6d-30b9-4402-b132-105cb5a1695b\") " Jan 27 14:32:03 crc kubenswrapper[4698]: I0127 14:32:03.005501 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c8964e6d-30b9-4402-b132-105cb5a1695b-client-ca" (OuterVolumeSpecName: "client-ca") pod "c8964e6d-30b9-4402-b132-105cb5a1695b" (UID: "c8964e6d-30b9-4402-b132-105cb5a1695b"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:32:03 crc kubenswrapper[4698]: I0127 14:32:03.010446 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c8964e6d-30b9-4402-b132-105cb5a1695b-kube-api-access-n6622" (OuterVolumeSpecName: "kube-api-access-n6622") pod "c8964e6d-30b9-4402-b132-105cb5a1695b" (UID: "c8964e6d-30b9-4402-b132-105cb5a1695b"). InnerVolumeSpecName "kube-api-access-n6622". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:32:03 crc kubenswrapper[4698]: I0127 14:32:03.012960 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c8964e6d-30b9-4402-b132-105cb5a1695b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "c8964e6d-30b9-4402-b132-105cb5a1695b" (UID: "c8964e6d-30b9-4402-b132-105cb5a1695b"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:32:03 crc kubenswrapper[4698]: I0127 14:32:03.045458 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c8964e6d-30b9-4402-b132-105cb5a1695b-config" (OuterVolumeSpecName: "config") pod "c8964e6d-30b9-4402-b132-105cb5a1695b" (UID: "c8964e6d-30b9-4402-b132-105cb5a1695b"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:32:03 crc kubenswrapper[4698]: I0127 14:32:03.065283 4698 patch_prober.go:28] interesting pod/router-default-5444994796-8bg4r container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 27 14:32:03 crc kubenswrapper[4698]: [-]has-synced failed: reason withheld Jan 27 14:32:03 crc kubenswrapper[4698]: [+]process-running ok Jan 27 14:32:03 crc kubenswrapper[4698]: healthz check failed Jan 27 14:32:03 crc kubenswrapper[4698]: I0127 14:32:03.065359 4698 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-8bg4r" podUID="79cd2a60-54ce-46a5-96cd-53bb078fa804" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 27 14:32:03 crc kubenswrapper[4698]: I0127 14:32:03.104889 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1ec17e53-8595-4cce-b8f3-5834e196236e-client-ca\") pod \"route-controller-manager-6895945445-45zqc\" (UID: \"1ec17e53-8595-4cce-b8f3-5834e196236e\") " pod="openshift-route-controller-manager/route-controller-manager-6895945445-45zqc" Jan 27 14:32:03 crc kubenswrapper[4698]: I0127 14:32:03.106124 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1ec17e53-8595-4cce-b8f3-5834e196236e-config\") pod \"route-controller-manager-6895945445-45zqc\" (UID: \"1ec17e53-8595-4cce-b8f3-5834e196236e\") " pod="openshift-route-controller-manager/route-controller-manager-6895945445-45zqc" Jan 27 14:32:03 crc kubenswrapper[4698]: I0127 14:32:03.106257 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hjwck\" (UniqueName: \"kubernetes.io/projected/1ec17e53-8595-4cce-b8f3-5834e196236e-kube-api-access-hjwck\") pod \"route-controller-manager-6895945445-45zqc\" (UID: \"1ec17e53-8595-4cce-b8f3-5834e196236e\") " pod="openshift-route-controller-manager/route-controller-manager-6895945445-45zqc" Jan 27 14:32:03 crc kubenswrapper[4698]: I0127 14:32:03.106420 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1ec17e53-8595-4cce-b8f3-5834e196236e-serving-cert\") pod \"route-controller-manager-6895945445-45zqc\" (UID: \"1ec17e53-8595-4cce-b8f3-5834e196236e\") " pod="openshift-route-controller-manager/route-controller-manager-6895945445-45zqc" Jan 27 14:32:03 crc kubenswrapper[4698]: I0127 14:32:03.106525 4698 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c8964e6d-30b9-4402-b132-105cb5a1695b-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 14:32:03 crc kubenswrapper[4698]: I0127 14:32:03.106665 4698 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c8964e6d-30b9-4402-b132-105cb5a1695b-config\") on node \"crc\" DevicePath \"\"" Jan 27 14:32:03 crc kubenswrapper[4698]: I0127 14:32:03.106752 4698 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c8964e6d-30b9-4402-b132-105cb5a1695b-client-ca\") on node \"crc\" DevicePath \"\"" Jan 27 14:32:03 crc kubenswrapper[4698]: I0127 14:32:03.106816 4698 
reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n6622\" (UniqueName: \"kubernetes.io/projected/c8964e6d-30b9-4402-b132-105cb5a1695b-kube-api-access-n6622\") on node \"crc\" DevicePath \"\"" Jan 27 14:32:03 crc kubenswrapper[4698]: I0127 14:32:03.207892 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1ec17e53-8595-4cce-b8f3-5834e196236e-config\") pod \"route-controller-manager-6895945445-45zqc\" (UID: \"1ec17e53-8595-4cce-b8f3-5834e196236e\") " pod="openshift-route-controller-manager/route-controller-manager-6895945445-45zqc" Jan 27 14:32:03 crc kubenswrapper[4698]: I0127 14:32:03.207953 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hjwck\" (UniqueName: \"kubernetes.io/projected/1ec17e53-8595-4cce-b8f3-5834e196236e-kube-api-access-hjwck\") pod \"route-controller-manager-6895945445-45zqc\" (UID: \"1ec17e53-8595-4cce-b8f3-5834e196236e\") " pod="openshift-route-controller-manager/route-controller-manager-6895945445-45zqc" Jan 27 14:32:03 crc kubenswrapper[4698]: I0127 14:32:03.208049 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1ec17e53-8595-4cce-b8f3-5834e196236e-serving-cert\") pod \"route-controller-manager-6895945445-45zqc\" (UID: \"1ec17e53-8595-4cce-b8f3-5834e196236e\") " pod="openshift-route-controller-manager/route-controller-manager-6895945445-45zqc" Jan 27 14:32:03 crc kubenswrapper[4698]: I0127 14:32:03.208095 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1ec17e53-8595-4cce-b8f3-5834e196236e-client-ca\") pod \"route-controller-manager-6895945445-45zqc\" (UID: \"1ec17e53-8595-4cce-b8f3-5834e196236e\") " pod="openshift-route-controller-manager/route-controller-manager-6895945445-45zqc" Jan 27 14:32:03 crc kubenswrapper[4698]: I0127 14:32:03.209267 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1ec17e53-8595-4cce-b8f3-5834e196236e-client-ca\") pod \"route-controller-manager-6895945445-45zqc\" (UID: \"1ec17e53-8595-4cce-b8f3-5834e196236e\") " pod="openshift-route-controller-manager/route-controller-manager-6895945445-45zqc" Jan 27 14:32:03 crc kubenswrapper[4698]: I0127 14:32:03.209939 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1ec17e53-8595-4cce-b8f3-5834e196236e-config\") pod \"route-controller-manager-6895945445-45zqc\" (UID: \"1ec17e53-8595-4cce-b8f3-5834e196236e\") " pod="openshift-route-controller-manager/route-controller-manager-6895945445-45zqc" Jan 27 14:32:03 crc kubenswrapper[4698]: I0127 14:32:03.214965 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1ec17e53-8595-4cce-b8f3-5834e196236e-serving-cert\") pod \"route-controller-manager-6895945445-45zqc\" (UID: \"1ec17e53-8595-4cce-b8f3-5834e196236e\") " pod="openshift-route-controller-manager/route-controller-manager-6895945445-45zqc" Jan 27 14:32:03 crc kubenswrapper[4698]: I0127 14:32:03.228727 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hjwck\" (UniqueName: \"kubernetes.io/projected/1ec17e53-8595-4cce-b8f3-5834e196236e-kube-api-access-hjwck\") pod \"route-controller-manager-6895945445-45zqc\" (UID: 
\"1ec17e53-8595-4cce-b8f3-5834e196236e\") " pod="openshift-route-controller-manager/route-controller-manager-6895945445-45zqc" Jan 27 14:32:03 crc kubenswrapper[4698]: I0127 14:32:03.302275 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6895945445-45zqc" Jan 27 14:32:03 crc kubenswrapper[4698]: I0127 14:32:03.463763 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-p5fgd" event={"ID":"c8964e6d-30b9-4402-b132-105cb5a1695b","Type":"ContainerDied","Data":"789fe8d9a233baa12ea3c30f508a67bbb920e477a703502073875796a0389bbc"} Jan 27 14:32:03 crc kubenswrapper[4698]: I0127 14:32:03.463817 4698 scope.go:117] "RemoveContainer" containerID="14e848d33d0f4786239a63d3a84bfd67c1f315319adbe7d8241d81cc42d53c62" Jan 27 14:32:03 crc kubenswrapper[4698]: I0127 14:32:03.463932 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-p5fgd" Jan 27 14:32:03 crc kubenswrapper[4698]: I0127 14:32:03.494112 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-p5fgd"] Jan 27 14:32:03 crc kubenswrapper[4698]: I0127 14:32:03.498466 4698 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-p5fgd"] Jan 27 14:32:04 crc kubenswrapper[4698]: I0127 14:32:04.063962 4698 patch_prober.go:28] interesting pod/router-default-5444994796-8bg4r container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 27 14:32:04 crc kubenswrapper[4698]: [-]has-synced failed: reason withheld Jan 27 14:32:04 crc kubenswrapper[4698]: [+]process-running ok Jan 27 14:32:04 crc kubenswrapper[4698]: healthz check failed Jan 27 14:32:04 crc kubenswrapper[4698]: I0127 14:32:04.064013 4698 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-8bg4r" podUID="79cd2a60-54ce-46a5-96cd-53bb078fa804" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 27 14:32:05 crc kubenswrapper[4698]: I0127 14:32:05.001334 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c8964e6d-30b9-4402-b132-105cb5a1695b" path="/var/lib/kubelet/pods/c8964e6d-30b9-4402-b132-105cb5a1695b/volumes" Jan 27 14:32:05 crc kubenswrapper[4698]: I0127 14:32:05.064501 4698 patch_prober.go:28] interesting pod/router-default-5444994796-8bg4r container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 27 14:32:05 crc kubenswrapper[4698]: [-]has-synced failed: reason withheld Jan 27 14:32:05 crc kubenswrapper[4698]: [+]process-running ok Jan 27 14:32:05 crc kubenswrapper[4698]: healthz check failed Jan 27 14:32:05 crc kubenswrapper[4698]: I0127 14:32:05.064557 4698 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-8bg4r" podUID="79cd2a60-54ce-46a5-96cd-53bb078fa804" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 27 14:32:05 crc kubenswrapper[4698]: I0127 14:32:05.913608 4698 patch_prober.go:28] interesting 
pod/controller-manager-879f6c89f-xstml container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.54:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 14:32:05 crc kubenswrapper[4698]: I0127 14:32:05.913700 4698 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-xstml" podUID="ce70fddb-4db4-40ef-a5fc-b27412e519bd" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.54:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 27 14:32:06 crc kubenswrapper[4698]: I0127 14:32:06.064272 4698 patch_prober.go:28] interesting pod/router-default-5444994796-8bg4r container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 27 14:32:06 crc kubenswrapper[4698]: [-]has-synced failed: reason withheld Jan 27 14:32:06 crc kubenswrapper[4698]: [+]process-running ok Jan 27 14:32:06 crc kubenswrapper[4698]: healthz check failed Jan 27 14:32:06 crc kubenswrapper[4698]: I0127 14:32:06.064378 4698 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-8bg4r" podUID="79cd2a60-54ce-46a5-96cd-53bb078fa804" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 27 14:32:06 crc kubenswrapper[4698]: I0127 14:32:06.431035 4698 patch_prober.go:28] interesting pod/downloads-7954f5f757-bdrpp container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.10:8080/\": dial tcp 10.217.0.10:8080: connect: connection refused" start-of-body= Jan 27 14:32:06 crc kubenswrapper[4698]: I0127 14:32:06.431146 4698 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-bdrpp" podUID="64b274f6-5293-4c0e-a51a-dca8518c5a40" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.10:8080/\": dial tcp 10.217.0.10:8080: connect: connection refused" Jan 27 14:32:06 crc kubenswrapper[4698]: I0127 14:32:06.542542 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-697d97f7c8-vz5fp" Jan 27 14:32:07 crc kubenswrapper[4698]: I0127 14:32:07.005814 4698 patch_prober.go:28] interesting pod/console-f9d7485db-cvnrn container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.19:8443/health\": dial tcp 10.217.0.19:8443: connect: connection refused" start-of-body= Jan 27 14:32:07 crc kubenswrapper[4698]: I0127 14:32:07.005869 4698 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-cvnrn" podUID="12b42d9a-df65-4a89-8961-1fa7f9b8a14b" containerName="console" probeResult="failure" output="Get \"https://10.217.0.19:8443/health\": dial tcp 10.217.0.19:8443: connect: connection refused" Jan 27 14:32:07 crc kubenswrapper[4698]: I0127 14:32:07.063172 4698 patch_prober.go:28] interesting pod/router-default-5444994796-8bg4r container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 27 14:32:07 crc kubenswrapper[4698]: [-]has-synced failed: 
reason withheld Jan 27 14:32:07 crc kubenswrapper[4698]: [+]process-running ok Jan 27 14:32:07 crc kubenswrapper[4698]: healthz check failed Jan 27 14:32:07 crc kubenswrapper[4698]: I0127 14:32:07.063572 4698 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-8bg4r" podUID="79cd2a60-54ce-46a5-96cd-53bb078fa804" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 27 14:32:07 crc kubenswrapper[4698]: I0127 14:32:07.405399 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-p4z9k" Jan 27 14:32:08 crc kubenswrapper[4698]: I0127 14:32:08.064096 4698 patch_prober.go:28] interesting pod/router-default-5444994796-8bg4r container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 27 14:32:08 crc kubenswrapper[4698]: [-]has-synced failed: reason withheld Jan 27 14:32:08 crc kubenswrapper[4698]: [+]process-running ok Jan 27 14:32:08 crc kubenswrapper[4698]: healthz check failed Jan 27 14:32:08 crc kubenswrapper[4698]: I0127 14:32:08.064505 4698 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-8bg4r" podUID="79cd2a60-54ce-46a5-96cd-53bb078fa804" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 27 14:32:09 crc kubenswrapper[4698]: I0127 14:32:09.064478 4698 patch_prober.go:28] interesting pod/router-default-5444994796-8bg4r container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 27 14:32:09 crc kubenswrapper[4698]: [-]has-synced failed: reason withheld Jan 27 14:32:09 crc kubenswrapper[4698]: [+]process-running ok Jan 27 14:32:09 crc kubenswrapper[4698]: healthz check failed Jan 27 14:32:09 crc kubenswrapper[4698]: I0127 14:32:09.064573 4698 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-8bg4r" podUID="79cd2a60-54ce-46a5-96cd-53bb078fa804" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 27 14:32:09 crc kubenswrapper[4698]: I0127 14:32:09.350051 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-xstml" Jan 27 14:32:09 crc kubenswrapper[4698]: I0127 14:32:09.378430 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-76c89f475f-hlbdh"] Jan 27 14:32:09 crc kubenswrapper[4698]: E0127 14:32:09.378707 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ce70fddb-4db4-40ef-a5fc-b27412e519bd" containerName="controller-manager" Jan 27 14:32:09 crc kubenswrapper[4698]: I0127 14:32:09.378722 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="ce70fddb-4db4-40ef-a5fc-b27412e519bd" containerName="controller-manager" Jan 27 14:32:09 crc kubenswrapper[4698]: I0127 14:32:09.378846 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="ce70fddb-4db4-40ef-a5fc-b27412e519bd" containerName="controller-manager" Jan 27 14:32:09 crc kubenswrapper[4698]: I0127 14:32:09.379261 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-76c89f475f-hlbdh" Jan 27 14:32:09 crc kubenswrapper[4698]: I0127 14:32:09.392506 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ce70fddb-4db4-40ef-a5fc-b27412e519bd-client-ca\") pod \"ce70fddb-4db4-40ef-a5fc-b27412e519bd\" (UID: \"ce70fddb-4db4-40ef-a5fc-b27412e519bd\") " Jan 27 14:32:09 crc kubenswrapper[4698]: I0127 14:32:09.392578 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dtkk7\" (UniqueName: \"kubernetes.io/projected/ce70fddb-4db4-40ef-a5fc-b27412e519bd-kube-api-access-dtkk7\") pod \"ce70fddb-4db4-40ef-a5fc-b27412e519bd\" (UID: \"ce70fddb-4db4-40ef-a5fc-b27412e519bd\") " Jan 27 14:32:09 crc kubenswrapper[4698]: I0127 14:32:09.392624 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ce70fddb-4db4-40ef-a5fc-b27412e519bd-config\") pod \"ce70fddb-4db4-40ef-a5fc-b27412e519bd\" (UID: \"ce70fddb-4db4-40ef-a5fc-b27412e519bd\") " Jan 27 14:32:09 crc kubenswrapper[4698]: I0127 14:32:09.392692 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ce70fddb-4db4-40ef-a5fc-b27412e519bd-serving-cert\") pod \"ce70fddb-4db4-40ef-a5fc-b27412e519bd\" (UID: \"ce70fddb-4db4-40ef-a5fc-b27412e519bd\") " Jan 27 14:32:09 crc kubenswrapper[4698]: I0127 14:32:09.392717 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/ce70fddb-4db4-40ef-a5fc-b27412e519bd-proxy-ca-bundles\") pod \"ce70fddb-4db4-40ef-a5fc-b27412e519bd\" (UID: \"ce70fddb-4db4-40ef-a5fc-b27412e519bd\") " Jan 27 14:32:09 crc kubenswrapper[4698]: I0127 14:32:09.393843 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ce70fddb-4db4-40ef-a5fc-b27412e519bd-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "ce70fddb-4db4-40ef-a5fc-b27412e519bd" (UID: "ce70fddb-4db4-40ef-a5fc-b27412e519bd"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:32:09 crc kubenswrapper[4698]: I0127 14:32:09.393927 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ce70fddb-4db4-40ef-a5fc-b27412e519bd-client-ca" (OuterVolumeSpecName: "client-ca") pod "ce70fddb-4db4-40ef-a5fc-b27412e519bd" (UID: "ce70fddb-4db4-40ef-a5fc-b27412e519bd"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:32:09 crc kubenswrapper[4698]: I0127 14:32:09.394460 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ce70fddb-4db4-40ef-a5fc-b27412e519bd-config" (OuterVolumeSpecName: "config") pod "ce70fddb-4db4-40ef-a5fc-b27412e519bd" (UID: "ce70fddb-4db4-40ef-a5fc-b27412e519bd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:32:09 crc kubenswrapper[4698]: I0127 14:32:09.399868 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ce70fddb-4db4-40ef-a5fc-b27412e519bd-kube-api-access-dtkk7" (OuterVolumeSpecName: "kube-api-access-dtkk7") pod "ce70fddb-4db4-40ef-a5fc-b27412e519bd" (UID: "ce70fddb-4db4-40ef-a5fc-b27412e519bd"). InnerVolumeSpecName "kube-api-access-dtkk7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:32:09 crc kubenswrapper[4698]: I0127 14:32:09.408983 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ce70fddb-4db4-40ef-a5fc-b27412e519bd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "ce70fddb-4db4-40ef-a5fc-b27412e519bd" (UID: "ce70fddb-4db4-40ef-a5fc-b27412e519bd"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:32:09 crc kubenswrapper[4698]: I0127 14:32:09.412831 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-76c89f475f-hlbdh"] Jan 27 14:32:09 crc kubenswrapper[4698]: I0127 14:32:09.500560 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/80dd4ea0-b68b-4d73-a851-b8d024f85590-config\") pod \"controller-manager-76c89f475f-hlbdh\" (UID: \"80dd4ea0-b68b-4d73-a851-b8d024f85590\") " pod="openshift-controller-manager/controller-manager-76c89f475f-hlbdh" Jan 27 14:32:09 crc kubenswrapper[4698]: I0127 14:32:09.500657 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/80dd4ea0-b68b-4d73-a851-b8d024f85590-serving-cert\") pod \"controller-manager-76c89f475f-hlbdh\" (UID: \"80dd4ea0-b68b-4d73-a851-b8d024f85590\") " pod="openshift-controller-manager/controller-manager-76c89f475f-hlbdh" Jan 27 14:32:09 crc kubenswrapper[4698]: I0127 14:32:09.500694 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lhs85\" (UniqueName: \"kubernetes.io/projected/80dd4ea0-b68b-4d73-a851-b8d024f85590-kube-api-access-lhs85\") pod \"controller-manager-76c89f475f-hlbdh\" (UID: \"80dd4ea0-b68b-4d73-a851-b8d024f85590\") " pod="openshift-controller-manager/controller-manager-76c89f475f-hlbdh" Jan 27 14:32:09 crc kubenswrapper[4698]: I0127 14:32:09.500745 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/80dd4ea0-b68b-4d73-a851-b8d024f85590-proxy-ca-bundles\") pod \"controller-manager-76c89f475f-hlbdh\" (UID: \"80dd4ea0-b68b-4d73-a851-b8d024f85590\") " pod="openshift-controller-manager/controller-manager-76c89f475f-hlbdh" Jan 27 14:32:09 crc kubenswrapper[4698]: I0127 14:32:09.500815 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/80dd4ea0-b68b-4d73-a851-b8d024f85590-client-ca\") pod \"controller-manager-76c89f475f-hlbdh\" (UID: \"80dd4ea0-b68b-4d73-a851-b8d024f85590\") " pod="openshift-controller-manager/controller-manager-76c89f475f-hlbdh" Jan 27 14:32:09 crc kubenswrapper[4698]: I0127 14:32:09.500901 4698 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ce70fddb-4db4-40ef-a5fc-b27412e519bd-client-ca\") on node \"crc\" DevicePath \"\"" Jan 27 14:32:09 crc kubenswrapper[4698]: I0127 14:32:09.500921 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dtkk7\" (UniqueName: \"kubernetes.io/projected/ce70fddb-4db4-40ef-a5fc-b27412e519bd-kube-api-access-dtkk7\") on node \"crc\" DevicePath \"\"" Jan 27 14:32:09 crc kubenswrapper[4698]: I0127 14:32:09.500939 4698 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/ce70fddb-4db4-40ef-a5fc-b27412e519bd-config\") on node \"crc\" DevicePath \"\"" Jan 27 14:32:09 crc kubenswrapper[4698]: I0127 14:32:09.500951 4698 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ce70fddb-4db4-40ef-a5fc-b27412e519bd-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 14:32:09 crc kubenswrapper[4698]: I0127 14:32:09.500962 4698 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/ce70fddb-4db4-40ef-a5fc-b27412e519bd-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 27 14:32:09 crc kubenswrapper[4698]: I0127 14:32:09.509461 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-xstml" event={"ID":"ce70fddb-4db4-40ef-a5fc-b27412e519bd","Type":"ContainerDied","Data":"b12111c99d12309434e37289b056d93d49066e47b2eebb4b43c2be54453bb7ed"} Jan 27 14:32:09 crc kubenswrapper[4698]: I0127 14:32:09.509545 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-xstml" Jan 27 14:32:09 crc kubenswrapper[4698]: I0127 14:32:09.532052 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-xstml"] Jan 27 14:32:09 crc kubenswrapper[4698]: I0127 14:32:09.537525 4698 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-xstml"] Jan 27 14:32:09 crc kubenswrapper[4698]: I0127 14:32:09.602511 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/80dd4ea0-b68b-4d73-a851-b8d024f85590-client-ca\") pod \"controller-manager-76c89f475f-hlbdh\" (UID: \"80dd4ea0-b68b-4d73-a851-b8d024f85590\") " pod="openshift-controller-manager/controller-manager-76c89f475f-hlbdh" Jan 27 14:32:09 crc kubenswrapper[4698]: I0127 14:32:09.602610 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/80dd4ea0-b68b-4d73-a851-b8d024f85590-config\") pod \"controller-manager-76c89f475f-hlbdh\" (UID: \"80dd4ea0-b68b-4d73-a851-b8d024f85590\") " pod="openshift-controller-manager/controller-manager-76c89f475f-hlbdh" Jan 27 14:32:09 crc kubenswrapper[4698]: I0127 14:32:09.602659 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/80dd4ea0-b68b-4d73-a851-b8d024f85590-serving-cert\") pod \"controller-manager-76c89f475f-hlbdh\" (UID: \"80dd4ea0-b68b-4d73-a851-b8d024f85590\") " pod="openshift-controller-manager/controller-manager-76c89f475f-hlbdh" Jan 27 14:32:09 crc kubenswrapper[4698]: I0127 14:32:09.602686 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lhs85\" (UniqueName: \"kubernetes.io/projected/80dd4ea0-b68b-4d73-a851-b8d024f85590-kube-api-access-lhs85\") pod \"controller-manager-76c89f475f-hlbdh\" (UID: \"80dd4ea0-b68b-4d73-a851-b8d024f85590\") " pod="openshift-controller-manager/controller-manager-76c89f475f-hlbdh" Jan 27 14:32:09 crc kubenswrapper[4698]: I0127 14:32:09.602732 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/80dd4ea0-b68b-4d73-a851-b8d024f85590-proxy-ca-bundles\") pod \"controller-manager-76c89f475f-hlbdh\" (UID: 
\"80dd4ea0-b68b-4d73-a851-b8d024f85590\") " pod="openshift-controller-manager/controller-manager-76c89f475f-hlbdh" Jan 27 14:32:09 crc kubenswrapper[4698]: I0127 14:32:09.611129 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/80dd4ea0-b68b-4d73-a851-b8d024f85590-serving-cert\") pod \"controller-manager-76c89f475f-hlbdh\" (UID: \"80dd4ea0-b68b-4d73-a851-b8d024f85590\") " pod="openshift-controller-manager/controller-manager-76c89f475f-hlbdh" Jan 27 14:32:09 crc kubenswrapper[4698]: I0127 14:32:09.619992 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lhs85\" (UniqueName: \"kubernetes.io/projected/80dd4ea0-b68b-4d73-a851-b8d024f85590-kube-api-access-lhs85\") pod \"controller-manager-76c89f475f-hlbdh\" (UID: \"80dd4ea0-b68b-4d73-a851-b8d024f85590\") " pod="openshift-controller-manager/controller-manager-76c89f475f-hlbdh" Jan 27 14:32:10 crc kubenswrapper[4698]: I0127 14:32:10.064096 4698 patch_prober.go:28] interesting pod/router-default-5444994796-8bg4r container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 27 14:32:10 crc kubenswrapper[4698]: [-]has-synced failed: reason withheld Jan 27 14:32:10 crc kubenswrapper[4698]: [+]process-running ok Jan 27 14:32:10 crc kubenswrapper[4698]: healthz check failed Jan 27 14:32:10 crc kubenswrapper[4698]: I0127 14:32:10.064438 4698 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-8bg4r" podUID="79cd2a60-54ce-46a5-96cd-53bb078fa804" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 27 14:32:10 crc kubenswrapper[4698]: I0127 14:32:10.448296 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/80dd4ea0-b68b-4d73-a851-b8d024f85590-client-ca\") pod \"controller-manager-76c89f475f-hlbdh\" (UID: \"80dd4ea0-b68b-4d73-a851-b8d024f85590\") " pod="openshift-controller-manager/controller-manager-76c89f475f-hlbdh" Jan 27 14:32:10 crc kubenswrapper[4698]: I0127 14:32:10.448767 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/80dd4ea0-b68b-4d73-a851-b8d024f85590-proxy-ca-bundles\") pod \"controller-manager-76c89f475f-hlbdh\" (UID: \"80dd4ea0-b68b-4d73-a851-b8d024f85590\") " pod="openshift-controller-manager/controller-manager-76c89f475f-hlbdh" Jan 27 14:32:10 crc kubenswrapper[4698]: I0127 14:32:10.451494 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/80dd4ea0-b68b-4d73-a851-b8d024f85590-config\") pod \"controller-manager-76c89f475f-hlbdh\" (UID: \"80dd4ea0-b68b-4d73-a851-b8d024f85590\") " pod="openshift-controller-manager/controller-manager-76c89f475f-hlbdh" Jan 27 14:32:10 crc kubenswrapper[4698]: I0127 14:32:10.621178 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-76c89f475f-hlbdh" Jan 27 14:32:11 crc kubenswrapper[4698]: I0127 14:32:11.007234 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ce70fddb-4db4-40ef-a5fc-b27412e519bd" path="/var/lib/kubelet/pods/ce70fddb-4db4-40ef-a5fc-b27412e519bd/volumes" Jan 27 14:32:11 crc kubenswrapper[4698]: I0127 14:32:11.065097 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-5444994796-8bg4r" Jan 27 14:32:11 crc kubenswrapper[4698]: I0127 14:32:11.067991 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-5444994796-8bg4r" Jan 27 14:32:13 crc kubenswrapper[4698]: I0127 14:32:13.754824 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-76c89f475f-hlbdh"] Jan 27 14:32:13 crc kubenswrapper[4698]: I0127 14:32:13.828369 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6895945445-45zqc"] Jan 27 14:32:15 crc kubenswrapper[4698]: I0127 14:32:15.173973 4698 pod_container_manager_linux.go:210] "Failed to delete cgroup paths" cgroupName=["kubepods","burstable","pod3d3d75e2-1fec-4458-9cb7-3472250b0b49"] err="unable to destroy cgroup paths for cgroup [kubepods burstable pod3d3d75e2-1fec-4458-9cb7-3472250b0b49] : Timed out while waiting for systemd to remove kubepods-burstable-pod3d3d75e2_1fec_4458_9cb7_3472250b0b49.slice" Jan 27 14:32:15 crc kubenswrapper[4698]: E0127 14:32:15.174034 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to delete cgroup paths for [kubepods burstable pod3d3d75e2-1fec-4458-9cb7-3472250b0b49] : unable to destroy cgroup paths for cgroup [kubepods burstable pod3d3d75e2-1fec-4458-9cb7-3472250b0b49] : Timed out while waiting for systemd to remove kubepods-burstable-pod3d3d75e2_1fec_4458_9cb7_3472250b0b49.slice" pod="openshift-controller-manager/controller-manager-879f6c89f-cj2hq" podUID="3d3d75e2-1fec-4458-9cb7-3472250b0b49" Jan 27 14:32:15 crc kubenswrapper[4698]: I0127 14:32:15.539568 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-cj2hq" Jan 27 14:32:15 crc kubenswrapper[4698]: I0127 14:32:15.558870 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-cj2hq"] Jan 27 14:32:15 crc kubenswrapper[4698]: I0127 14:32:15.564839 4698 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-cj2hq"] Jan 27 14:32:16 crc kubenswrapper[4698]: I0127 14:32:16.004784 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Jan 27 14:32:16 crc kubenswrapper[4698]: I0127 14:32:16.005591 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 27 14:32:16 crc kubenswrapper[4698]: I0127 14:32:16.010497 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Jan 27 14:32:16 crc kubenswrapper[4698]: I0127 14:32:16.011476 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Jan 27 14:32:16 crc kubenswrapper[4698]: I0127 14:32:16.016666 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Jan 27 14:32:16 crc kubenswrapper[4698]: I0127 14:32:16.088621 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a0e3606b-0ab2-4f0c-94a6-e06b173ecdae-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"a0e3606b-0ab2-4f0c-94a6-e06b173ecdae\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 27 14:32:16 crc kubenswrapper[4698]: I0127 14:32:16.088796 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a0e3606b-0ab2-4f0c-94a6-e06b173ecdae-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"a0e3606b-0ab2-4f0c-94a6-e06b173ecdae\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 27 14:32:16 crc kubenswrapper[4698]: I0127 14:32:16.189914 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a0e3606b-0ab2-4f0c-94a6-e06b173ecdae-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"a0e3606b-0ab2-4f0c-94a6-e06b173ecdae\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 27 14:32:16 crc kubenswrapper[4698]: I0127 14:32:16.189981 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a0e3606b-0ab2-4f0c-94a6-e06b173ecdae-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"a0e3606b-0ab2-4f0c-94a6-e06b173ecdae\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 27 14:32:16 crc kubenswrapper[4698]: I0127 14:32:16.190096 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a0e3606b-0ab2-4f0c-94a6-e06b173ecdae-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"a0e3606b-0ab2-4f0c-94a6-e06b173ecdae\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 27 14:32:16 crc kubenswrapper[4698]: I0127 14:32:16.209749 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a0e3606b-0ab2-4f0c-94a6-e06b173ecdae-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"a0e3606b-0ab2-4f0c-94a6-e06b173ecdae\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 27 14:32:16 crc kubenswrapper[4698]: I0127 14:32:16.330454 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 27 14:32:16 crc kubenswrapper[4698]: I0127 14:32:16.430119 4698 patch_prober.go:28] interesting pod/downloads-7954f5f757-bdrpp container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.10:8080/\": dial tcp 10.217.0.10:8080: connect: connection refused" start-of-body= Jan 27 14:32:16 crc kubenswrapper[4698]: I0127 14:32:16.430181 4698 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-bdrpp" podUID="64b274f6-5293-4c0e-a51a-dca8518c5a40" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.10:8080/\": dial tcp 10.217.0.10:8080: connect: connection refused" Jan 27 14:32:17 crc kubenswrapper[4698]: I0127 14:32:17.002713 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3d3d75e2-1fec-4458-9cb7-3472250b0b49" path="/var/lib/kubelet/pods/3d3d75e2-1fec-4458-9cb7-3472250b0b49/volumes" Jan 27 14:32:17 crc kubenswrapper[4698]: I0127 14:32:17.013920 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-f9d7485db-cvnrn" Jan 27 14:32:17 crc kubenswrapper[4698]: I0127 14:32:17.017702 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-f9d7485db-cvnrn" Jan 27 14:32:20 crc kubenswrapper[4698]: I0127 14:32:20.009908 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Jan 27 14:32:20 crc kubenswrapper[4698]: I0127 14:32:20.011164 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 27 14:32:20 crc kubenswrapper[4698]: I0127 14:32:20.013958 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Jan 27 14:32:20 crc kubenswrapper[4698]: I0127 14:32:20.138342 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/974a0417-d9d3-48b2-931d-c2c3830481db-kube-api-access\") pod \"installer-9-crc\" (UID: \"974a0417-d9d3-48b2-931d-c2c3830481db\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 27 14:32:20 crc kubenswrapper[4698]: I0127 14:32:20.138432 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/974a0417-d9d3-48b2-931d-c2c3830481db-kubelet-dir\") pod \"installer-9-crc\" (UID: \"974a0417-d9d3-48b2-931d-c2c3830481db\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 27 14:32:20 crc kubenswrapper[4698]: I0127 14:32:20.138470 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/974a0417-d9d3-48b2-931d-c2c3830481db-var-lock\") pod \"installer-9-crc\" (UID: \"974a0417-d9d3-48b2-931d-c2c3830481db\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 27 14:32:20 crc kubenswrapper[4698]: I0127 14:32:20.239880 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/974a0417-d9d3-48b2-931d-c2c3830481db-kube-api-access\") pod \"installer-9-crc\" (UID: \"974a0417-d9d3-48b2-931d-c2c3830481db\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 27 14:32:20 crc kubenswrapper[4698]: I0127 14:32:20.239950 4698 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/974a0417-d9d3-48b2-931d-c2c3830481db-kubelet-dir\") pod \"installer-9-crc\" (UID: \"974a0417-d9d3-48b2-931d-c2c3830481db\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 27 14:32:20 crc kubenswrapper[4698]: I0127 14:32:20.239986 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/974a0417-d9d3-48b2-931d-c2c3830481db-var-lock\") pod \"installer-9-crc\" (UID: \"974a0417-d9d3-48b2-931d-c2c3830481db\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 27 14:32:20 crc kubenswrapper[4698]: I0127 14:32:20.240106 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/974a0417-d9d3-48b2-931d-c2c3830481db-var-lock\") pod \"installer-9-crc\" (UID: \"974a0417-d9d3-48b2-931d-c2c3830481db\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 27 14:32:20 crc kubenswrapper[4698]: I0127 14:32:20.240109 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/974a0417-d9d3-48b2-931d-c2c3830481db-kubelet-dir\") pod \"installer-9-crc\" (UID: \"974a0417-d9d3-48b2-931d-c2c3830481db\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 27 14:32:20 crc kubenswrapper[4698]: I0127 14:32:20.265908 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/974a0417-d9d3-48b2-931d-c2c3830481db-kube-api-access\") pod \"installer-9-crc\" (UID: \"974a0417-d9d3-48b2-931d-c2c3830481db\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 27 14:32:20 crc kubenswrapper[4698]: I0127 14:32:20.340398 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 27 14:32:23 crc kubenswrapper[4698]: I0127 14:32:23.033084 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 14:32:24 crc kubenswrapper[4698]: E0127 14:32:24.701602 4698 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Jan 27 14:32:24 crc kubenswrapper[4698]: E0127 14:32:24.701801 4698 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mz2cm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-dhlmg_openshift-marketplace(b5b88242-64d6-469e-a5e4-bc8bab680ded): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 27 14:32:24 crc kubenswrapper[4698]: E0127 14:32:24.703016 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-dhlmg" podUID="b5b88242-64d6-469e-a5e4-bc8bab680ded" Jan 27 14:32:24 crc kubenswrapper[4698]: E0127 14:32:24.860759 4698 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Jan 27 14:32:24 crc kubenswrapper[4698]: E0127 14:32:24.860986 4698 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7jq9l,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-9t9sp_openshift-marketplace(7f32c526-aea0-4758-a1ea-d0a694af3573): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 27 14:32:24 crc kubenswrapper[4698]: E0127 14:32:24.862227 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-9t9sp" podUID="7f32c526-aea0-4758-a1ea-d0a694af3573" Jan 27 14:32:26 crc kubenswrapper[4698]: I0127 14:32:26.431491 4698 patch_prober.go:28] interesting pod/downloads-7954f5f757-bdrpp container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.10:8080/\": dial tcp 10.217.0.10:8080: connect: connection refused" start-of-body= Jan 27 14:32:26 crc kubenswrapper[4698]: I0127 14:32:26.431559 4698 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-bdrpp" podUID="64b274f6-5293-4c0e-a51a-dca8518c5a40" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.10:8080/\": dial tcp 10.217.0.10:8080: connect: connection refused" Jan 27 14:32:26 crc kubenswrapper[4698]: E0127 14:32:26.740359 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-9t9sp" podUID="7f32c526-aea0-4758-a1ea-d0a694af3573" Jan 27 14:32:26 crc kubenswrapper[4698]: E0127 14:32:26.740387 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-dhlmg" podUID="b5b88242-64d6-469e-a5e4-bc8bab680ded" Jan 27 14:32:27 crc kubenswrapper[4698]: I0127 14:32:27.452461 4698 patch_prober.go:28] 
interesting pod/machine-config-daemon-ndrd6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 14:32:27 crc kubenswrapper[4698]: I0127 14:32:27.452532 4698 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ndrd6" podUID="3e403fc5-7005-474c-8c75-b7906b481677" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 14:32:28 crc kubenswrapper[4698]: E0127 14:32:28.053541 4698 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Jan 27 14:32:28 crc kubenswrapper[4698]: E0127 14:32:28.054267 4698 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6dd6q,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-gxkvv_openshift-marketplace(e47fa643-2257-49e0-8b1e-77f9d3165c0e): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 27 14:32:28 crc kubenswrapper[4698]: E0127 14:32:28.055704 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-gxkvv" podUID="e47fa643-2257-49e0-8b1e-77f9d3165c0e" Jan 27 14:32:31 crc kubenswrapper[4698]: E0127 14:32:31.839221 4698 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" 
image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Jan 27 14:32:31 crc kubenswrapper[4698]: E0127 14:32:31.839683 4698 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mrklc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-462jr_openshift-marketplace(530c77f2-b81c-4835-989c-57b155f04d2c): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 27 14:32:31 crc kubenswrapper[4698]: E0127 14:32:31.840923 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-462jr" podUID="530c77f2-b81c-4835-989c-57b155f04d2c" Jan 27 14:32:34 crc kubenswrapper[4698]: E0127 14:32:34.126972 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-462jr" podUID="530c77f2-b81c-4835-989c-57b155f04d2c" Jan 27 14:32:34 crc kubenswrapper[4698]: E0127 14:32:34.127323 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-gxkvv" podUID="e47fa643-2257-49e0-8b1e-77f9d3165c0e" Jan 27 14:32:34 crc kubenswrapper[4698]: I0127 14:32:34.136875 4698 scope.go:117] "RemoveContainer" containerID="ae701207b7d34a026b22d264cb07f69fa1c10c706f89ba2a89c0407de01c7c9e" Jan 27 14:32:34 crc kubenswrapper[4698]: E0127 14:32:34.211758 4698 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: 
copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Jan 27 14:32:34 crc kubenswrapper[4698]: E0127 14:32:34.212414 4698 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qtfm9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-cdp6k_openshift-marketplace(882a9575-2eeb-4f8e-812c-2419b499a07e): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 27 14:32:34 crc kubenswrapper[4698]: E0127 14:32:34.213721 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-cdp6k" podUID="882a9575-2eeb-4f8e-812c-2419b499a07e" Jan 27 14:32:34 crc kubenswrapper[4698]: I0127 14:32:34.618106 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6895945445-45zqc"] Jan 27 14:32:34 crc kubenswrapper[4698]: W0127 14:32:34.621234 4698 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1ec17e53_8595_4cce_b8f3_5834e196236e.slice/crio-b60ecc8ad22e1b8e539bdd2eb546ddcea7a7b2511b92508e40c044899fda5361 WatchSource:0}: Error finding container b60ecc8ad22e1b8e539bdd2eb546ddcea7a7b2511b92508e40c044899fda5361: Status 404 returned error can't find the container with id b60ecc8ad22e1b8e539bdd2eb546ddcea7a7b2511b92508e40c044899fda5361 Jan 27 14:32:34 crc kubenswrapper[4698]: I0127 14:32:34.642725 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6895945445-45zqc" event={"ID":"1ec17e53-8595-4cce-b8f3-5834e196236e","Type":"ContainerStarted","Data":"b60ecc8ad22e1b8e539bdd2eb546ddcea7a7b2511b92508e40c044899fda5361"} Jan 27 14:32:34 crc 
kubenswrapper[4698]: E0127 14:32:34.646854 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-cdp6k" podUID="882a9575-2eeb-4f8e-812c-2419b499a07e"
Jan 27 14:32:34 crc kubenswrapper[4698]: I0127 14:32:34.679939 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"]
Jan 27 14:32:34 crc kubenswrapper[4698]: I0127 14:32:34.684175 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-76c89f475f-hlbdh"]
Jan 27 14:32:34 crc kubenswrapper[4698]: I0127 14:32:34.690224 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"]
Jan 27 14:32:34 crc kubenswrapper[4698]: W0127 14:32:34.700339 4698 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod80dd4ea0_b68b_4d73_a851_b8d024f85590.slice/crio-cf11b679938ca9efb5880a3fac58b8972e3638214ae23b643234848292dd84c0 WatchSource:0}: Error finding container cf11b679938ca9efb5880a3fac58b8972e3638214ae23b643234848292dd84c0: Status 404 returned error can't find the container with id cf11b679938ca9efb5880a3fac58b8972e3638214ae23b643234848292dd84c0
Jan 27 14:32:34 crc kubenswrapper[4698]: W0127 14:32:34.700914 4698 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-poda0e3606b_0ab2_4f0c_94a6_e06b173ecdae.slice/crio-adbf6f2d079a65aecefb2a1c1dc3f3829c8887c9e75a6880065dba9195555e45 WatchSource:0}: Error finding container adbf6f2d079a65aecefb2a1c1dc3f3829c8887c9e75a6880065dba9195555e45: Status 404 returned error can't find the container with id adbf6f2d079a65aecefb2a1c1dc3f3829c8887c9e75a6880065dba9195555e45
Jan 27 14:32:35 crc kubenswrapper[4698]: I0127 14:32:35.653007 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"a0e3606b-0ab2-4f0c-94a6-e06b173ecdae","Type":"ContainerStarted","Data":"ca771b20a439c7d3ab5305bd442a23848ff917d3d8fe22f0e7ed3b1eb9c2e7f5"}
Jan 27 14:32:35 crc kubenswrapper[4698]: I0127 14:32:35.653334 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"a0e3606b-0ab2-4f0c-94a6-e06b173ecdae","Type":"ContainerStarted","Data":"adbf6f2d079a65aecefb2a1c1dc3f3829c8887c9e75a6880065dba9195555e45"}
Jan 27 14:32:35 crc kubenswrapper[4698]: I0127 14:32:35.654908 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-lpvsw" event={"ID":"621bb20d-2ffa-4e89-b522-d04b4764fcc3","Type":"ContainerStarted","Data":"8f8b2c2b0e124af8d2c46107871625dc7c9157be53dd7145099ab625243b1202"}
Jan 27 14:32:35 crc kubenswrapper[4698]: I0127 14:32:35.657708 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-76c89f475f-hlbdh" event={"ID":"80dd4ea0-b68b-4d73-a851-b8d024f85590","Type":"ContainerStarted","Data":"0b056a6a95119b8c8067930e686bb7c1330e40af99bce2ac49bd7f57aecdbd6b"}
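The ErrImagePull and ImagePullBackOff entries above follow the kubelet's usual escalation: a pull canceled by the CRI ("context canceled") surfaces as ErrImagePull, and repeated failures move the pod into back-off. A minimal sketch (not kubelet code) for tallying these events per pod from a saved copy of this journal; the file name kubelet.log is an assumption:

```go
// Tally ErrImagePull / ImagePullBackOff occurrences per pod from a journal dump.
package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
)

var (
	failRe = regexp.MustCompile(`ErrImagePull|ImagePullBackOff`)
	podRe  = regexp.MustCompile(`pod="([^"]+)"`) // the pod="ns/name" field on pod_workers.go lines
)

func main() {
	f, err := os.Open("kubelet.log") // assumed capture of this journal
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	defer f.Close()

	counts := map[string]int{}
	sc := bufio.NewScanner(f)
	sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // container-spec dump entries are very long
	for sc.Scan() {
		line := sc.Text()
		if failRe.MatchString(line) {
			if m := podRe.FindStringSubmatch(line); m != nil {
				counts[m[1]]++
			}
		}
	}
	for pod, n := range counts {
		fmt.Printf("%d pull failures\t%s\n", n, pod)
	}
}
```

Run against this section it would report the three openshift-marketplace catalog pods (redhat-marketplace, certified-operators, redhat-operators) as the repeat offenders.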
event={"ID":"80dd4ea0-b68b-4d73-a851-b8d024f85590","Type":"ContainerStarted","Data":"cf11b679938ca9efb5880a3fac58b8972e3638214ae23b643234848292dd84c0"} Jan 27 14:32:35 crc kubenswrapper[4698]: I0127 14:32:35.659365 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6895945445-45zqc" event={"ID":"1ec17e53-8595-4cce-b8f3-5834e196236e","Type":"ContainerStarted","Data":"4a43b083e5e65cdeb327be5d59d8ec00f56bb5d37e88f28d646de26258860e29"} Jan 27 14:32:35 crc kubenswrapper[4698]: I0127 14:32:35.660756 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"974a0417-d9d3-48b2-931d-c2c3830481db","Type":"ContainerStarted","Data":"4f9e0b0dda6f88323b6e4638a308d0e1c4ec279c61f19c1cc1cfdb541fa1e908"} Jan 27 14:32:35 crc kubenswrapper[4698]: I0127 14:32:35.660785 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"974a0417-d9d3-48b2-931d-c2c3830481db","Type":"ContainerStarted","Data":"d157f36a4a4ab353ae23c0a92e2d6c643cb00bbbcbc9d11b136e85307a38c6c5"} Jan 27 14:32:35 crc kubenswrapper[4698]: I0127 14:32:35.662684 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-bdrpp" event={"ID":"64b274f6-5293-4c0e-a51a-dca8518c5a40","Type":"ContainerStarted","Data":"e8e15d8526cddc42928c0811ef9f2ec347fb5ab23ac2bb37cc015df6629df58c"} Jan 27 14:32:35 crc kubenswrapper[4698]: I0127 14:32:35.662969 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-bdrpp" Jan 27 14:32:35 crc kubenswrapper[4698]: I0127 14:32:35.663255 4698 patch_prober.go:28] interesting pod/downloads-7954f5f757-bdrpp container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.10:8080/\": dial tcp 10.217.0.10:8080: connect: connection refused" start-of-body= Jan 27 14:32:35 crc kubenswrapper[4698]: I0127 14:32:35.663301 4698 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-bdrpp" podUID="64b274f6-5293-4c0e-a51a-dca8518c5a40" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.10:8080/\": dial tcp 10.217.0.10:8080: connect: connection refused" Jan 27 14:32:36 crc kubenswrapper[4698]: I0127 14:32:36.430104 4698 patch_prober.go:28] interesting pod/downloads-7954f5f757-bdrpp container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.10:8080/\": dial tcp 10.217.0.10:8080: connect: connection refused" start-of-body= Jan 27 14:32:36 crc kubenswrapper[4698]: I0127 14:32:36.430185 4698 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-bdrpp" podUID="64b274f6-5293-4c0e-a51a-dca8518c5a40" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.10:8080/\": dial tcp 10.217.0.10:8080: connect: connection refused" Jan 27 14:32:36 crc kubenswrapper[4698]: I0127 14:32:36.430130 4698 patch_prober.go:28] interesting pod/downloads-7954f5f757-bdrpp container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.10:8080/\": dial tcp 10.217.0.10:8080: connect: connection refused" start-of-body= Jan 27 14:32:36 crc kubenswrapper[4698]: I0127 14:32:36.430245 4698 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-bdrpp" 
podUID="64b274f6-5293-4c0e-a51a-dca8518c5a40" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.10:8080/\": dial tcp 10.217.0.10:8080: connect: connection refused" Jan 27 14:32:36 crc kubenswrapper[4698]: E0127 14:32:36.518148 4698 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Jan 27 14:32:36 crc kubenswrapper[4698]: E0127 14:32:36.518296 4698 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-26hbd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-mkhfh_openshift-marketplace(6947cad8-3436-4bc3-8bda-c2c1a4972402): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 27 14:32:36 crc kubenswrapper[4698]: E0127 14:32:36.519555 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-mkhfh" podUID="6947cad8-3436-4bc3-8bda-c2c1a4972402" Jan 27 14:32:36 crc kubenswrapper[4698]: I0127 14:32:36.668684 4698 generic.go:334] "Generic (PLEG): container finished" podID="a0e3606b-0ab2-4f0c-94a6-e06b173ecdae" containerID="ca771b20a439c7d3ab5305bd442a23848ff917d3d8fe22f0e7ed3b1eb9c2e7f5" exitCode=0 Jan 27 14:32:36 crc kubenswrapper[4698]: I0127 14:32:36.669860 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"a0e3606b-0ab2-4f0c-94a6-e06b173ecdae","Type":"ContainerDied","Data":"ca771b20a439c7d3ab5305bd442a23848ff917d3d8fe22f0e7ed3b1eb9c2e7f5"} Jan 27 14:32:36 crc kubenswrapper[4698]: I0127 14:32:36.669966 4698 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-controller-manager/controller-manager-76c89f475f-hlbdh" podUID="80dd4ea0-b68b-4d73-a851-b8d024f85590" containerName="controller-manager" containerID="cri-o://0b056a6a95119b8c8067930e686bb7c1330e40af99bce2ac49bd7f57aecdbd6b" gracePeriod=30 Jan 27 14:32:36 crc kubenswrapper[4698]: I0127 14:32:36.671412 4698 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6895945445-45zqc" podUID="1ec17e53-8595-4cce-b8f3-5834e196236e" containerName="route-controller-manager" containerID="cri-o://4a43b083e5e65cdeb327be5d59d8ec00f56bb5d37e88f28d646de26258860e29" gracePeriod=30 Jan 27 14:32:36 crc kubenswrapper[4698]: I0127 14:32:36.671972 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-76c89f475f-hlbdh" Jan 27 14:32:36 crc kubenswrapper[4698]: I0127 14:32:36.671996 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6895945445-45zqc" Jan 27 14:32:36 crc kubenswrapper[4698]: I0127 14:32:36.672102 4698 patch_prober.go:28] interesting pod/downloads-7954f5f757-bdrpp container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.10:8080/\": dial tcp 10.217.0.10:8080: connect: connection refused" start-of-body= Jan 27 14:32:36 crc kubenswrapper[4698]: I0127 14:32:36.672127 4698 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-bdrpp" podUID="64b274f6-5293-4c0e-a51a-dca8518c5a40" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.10:8080/\": dial tcp 10.217.0.10:8080: connect: connection refused" Jan 27 14:32:36 crc kubenswrapper[4698]: I0127 14:32:36.680364 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-76c89f475f-hlbdh" Jan 27 14:32:36 crc kubenswrapper[4698]: I0127 14:32:36.682057 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6895945445-45zqc" Jan 27 14:32:36 crc kubenswrapper[4698]: I0127 14:32:36.732026 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-lpvsw" podStartSLOduration=188.732007365 podStartE2EDuration="3m8.732007365s" podCreationTimestamp="2026-01-27 14:29:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:32:36.722967607 +0000 UTC m=+212.399745072" watchObservedRunningTime="2026-01-27 14:32:36.732007365 +0000 UTC m=+212.408784830" Jan 27 14:32:36 crc kubenswrapper[4698]: I0127 14:32:36.740530 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-76c89f475f-hlbdh" podStartSLOduration=43.740511441 podStartE2EDuration="43.740511441s" podCreationTimestamp="2026-01-27 14:31:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:32:36.736649409 +0000 UTC m=+212.413426884" watchObservedRunningTime="2026-01-27 14:32:36.740511441 +0000 UTC m=+212.417288906" Jan 27 14:32:36 crc kubenswrapper[4698]: I0127 14:32:36.756088 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-route-controller-manager/route-controller-manager-6895945445-45zqc" podStartSLOduration=43.756065072 podStartE2EDuration="43.756065072s" podCreationTimestamp="2026-01-27 14:31:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:32:36.754030908 +0000 UTC m=+212.430808373" watchObservedRunningTime="2026-01-27 14:32:36.756065072 +0000 UTC m=+212.432842537" Jan 27 14:32:36 crc kubenswrapper[4698]: I0127 14:32:36.770261 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-9-crc" podStartSLOduration=17.770243437 podStartE2EDuration="17.770243437s" podCreationTimestamp="2026-01-27 14:32:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:32:36.768461949 +0000 UTC m=+212.445239424" watchObservedRunningTime="2026-01-27 14:32:36.770243437 +0000 UTC m=+212.447020902" Jan 27 14:32:36 crc kubenswrapper[4698]: E0127 14:32:36.888573 4698 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Jan 27 14:32:36 crc kubenswrapper[4698]: E0127 14:32:36.888793 4698 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zmm9n,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-9m8xd_openshift-marketplace(d62f9471-7fdf-459f-8e3b-cadad2b6a542): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 27 14:32:36 crc kubenswrapper[4698]: E0127 14:32:36.889968 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context 
canceled\"" pod="openshift-marketplace/redhat-operators-9m8xd" podUID="d62f9471-7fdf-459f-8e3b-cadad2b6a542" Jan 27 14:32:38 crc kubenswrapper[4698]: I0127 14:32:38.684876 4698 generic.go:334] "Generic (PLEG): container finished" podID="80dd4ea0-b68b-4d73-a851-b8d024f85590" containerID="0b056a6a95119b8c8067930e686bb7c1330e40af99bce2ac49bd7f57aecdbd6b" exitCode=0 Jan 27 14:32:38 crc kubenswrapper[4698]: I0127 14:32:38.684970 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-76c89f475f-hlbdh" event={"ID":"80dd4ea0-b68b-4d73-a851-b8d024f85590","Type":"ContainerDied","Data":"0b056a6a95119b8c8067930e686bb7c1330e40af99bce2ac49bd7f57aecdbd6b"} Jan 27 14:32:38 crc kubenswrapper[4698]: I0127 14:32:38.687260 4698 generic.go:334] "Generic (PLEG): container finished" podID="1ec17e53-8595-4cce-b8f3-5834e196236e" containerID="4a43b083e5e65cdeb327be5d59d8ec00f56bb5d37e88f28d646de26258860e29" exitCode=0 Jan 27 14:32:38 crc kubenswrapper[4698]: I0127 14:32:38.687304 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6895945445-45zqc" event={"ID":"1ec17e53-8595-4cce-b8f3-5834e196236e","Type":"ContainerDied","Data":"4a43b083e5e65cdeb327be5d59d8ec00f56bb5d37e88f28d646de26258860e29"} Jan 27 14:32:38 crc kubenswrapper[4698]: E0127 14:32:38.775400 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-mkhfh" podUID="6947cad8-3436-4bc3-8bda-c2c1a4972402" Jan 27 14:32:38 crc kubenswrapper[4698]: E0127 14:32:38.775412 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-9m8xd" podUID="d62f9471-7fdf-459f-8e3b-cadad2b6a542" Jan 27 14:32:38 crc kubenswrapper[4698]: I0127 14:32:38.835989 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 27 14:32:38 crc kubenswrapper[4698]: I0127 14:32:38.894180 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a0e3606b-0ab2-4f0c-94a6-e06b173ecdae-kubelet-dir\") pod \"a0e3606b-0ab2-4f0c-94a6-e06b173ecdae\" (UID: \"a0e3606b-0ab2-4f0c-94a6-e06b173ecdae\") " Jan 27 14:32:38 crc kubenswrapper[4698]: I0127 14:32:38.894368 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a0e3606b-0ab2-4f0c-94a6-e06b173ecdae-kube-api-access\") pod \"a0e3606b-0ab2-4f0c-94a6-e06b173ecdae\" (UID: \"a0e3606b-0ab2-4f0c-94a6-e06b173ecdae\") " Jan 27 14:32:38 crc kubenswrapper[4698]: I0127 14:32:38.894615 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a0e3606b-0ab2-4f0c-94a6-e06b173ecdae-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "a0e3606b-0ab2-4f0c-94a6-e06b173ecdae" (UID: "a0e3606b-0ab2-4f0c-94a6-e06b173ecdae"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 14:32:38 crc kubenswrapper[4698]: I0127 14:32:38.895001 4698 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a0e3606b-0ab2-4f0c-94a6-e06b173ecdae-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 27 14:32:38 crc kubenswrapper[4698]: I0127 14:32:38.900757 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0e3606b-0ab2-4f0c-94a6-e06b173ecdae-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "a0e3606b-0ab2-4f0c-94a6-e06b173ecdae" (UID: "a0e3606b-0ab2-4f0c-94a6-e06b173ecdae"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:32:38 crc kubenswrapper[4698]: I0127 14:32:38.996472 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a0e3606b-0ab2-4f0c-94a6-e06b173ecdae-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 27 14:32:39 crc kubenswrapper[4698]: I0127 14:32:39.449139 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-76c89f475f-hlbdh" Jan 27 14:32:39 crc kubenswrapper[4698]: I0127 14:32:39.488909 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6895945445-45zqc" Jan 27 14:32:39 crc kubenswrapper[4698]: I0127 14:32:39.502587 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lhs85\" (UniqueName: \"kubernetes.io/projected/80dd4ea0-b68b-4d73-a851-b8d024f85590-kube-api-access-lhs85\") pod \"80dd4ea0-b68b-4d73-a851-b8d024f85590\" (UID: \"80dd4ea0-b68b-4d73-a851-b8d024f85590\") " Jan 27 14:32:39 crc kubenswrapper[4698]: I0127 14:32:39.502663 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/80dd4ea0-b68b-4d73-a851-b8d024f85590-client-ca\") pod \"80dd4ea0-b68b-4d73-a851-b8d024f85590\" (UID: \"80dd4ea0-b68b-4d73-a851-b8d024f85590\") " Jan 27 14:32:39 crc kubenswrapper[4698]: I0127 14:32:39.502723 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/80dd4ea0-b68b-4d73-a851-b8d024f85590-serving-cert\") pod \"80dd4ea0-b68b-4d73-a851-b8d024f85590\" (UID: \"80dd4ea0-b68b-4d73-a851-b8d024f85590\") " Jan 27 14:32:39 crc kubenswrapper[4698]: I0127 14:32:39.502763 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/80dd4ea0-b68b-4d73-a851-b8d024f85590-config\") pod \"80dd4ea0-b68b-4d73-a851-b8d024f85590\" (UID: \"80dd4ea0-b68b-4d73-a851-b8d024f85590\") " Jan 27 14:32:39 crc kubenswrapper[4698]: I0127 14:32:39.502804 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/80dd4ea0-b68b-4d73-a851-b8d024f85590-proxy-ca-bundles\") pod \"80dd4ea0-b68b-4d73-a851-b8d024f85590\" (UID: \"80dd4ea0-b68b-4d73-a851-b8d024f85590\") " Jan 27 14:32:39 crc kubenswrapper[4698]: I0127 14:32:39.503917 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/80dd4ea0-b68b-4d73-a851-b8d024f85590-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "80dd4ea0-b68b-4d73-a851-b8d024f85590" (UID: 
"80dd4ea0-b68b-4d73-a851-b8d024f85590"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:32:39 crc kubenswrapper[4698]: I0127 14:32:39.504565 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/80dd4ea0-b68b-4d73-a851-b8d024f85590-config" (OuterVolumeSpecName: "config") pod "80dd4ea0-b68b-4d73-a851-b8d024f85590" (UID: "80dd4ea0-b68b-4d73-a851-b8d024f85590"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:32:39 crc kubenswrapper[4698]: I0127 14:32:39.504953 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/80dd4ea0-b68b-4d73-a851-b8d024f85590-client-ca" (OuterVolumeSpecName: "client-ca") pod "80dd4ea0-b68b-4d73-a851-b8d024f85590" (UID: "80dd4ea0-b68b-4d73-a851-b8d024f85590"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:32:39 crc kubenswrapper[4698]: I0127 14:32:39.507936 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/80dd4ea0-b68b-4d73-a851-b8d024f85590-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "80dd4ea0-b68b-4d73-a851-b8d024f85590" (UID: "80dd4ea0-b68b-4d73-a851-b8d024f85590"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:32:39 crc kubenswrapper[4698]: I0127 14:32:39.516884 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/80dd4ea0-b68b-4d73-a851-b8d024f85590-kube-api-access-lhs85" (OuterVolumeSpecName: "kube-api-access-lhs85") pod "80dd4ea0-b68b-4d73-a851-b8d024f85590" (UID: "80dd4ea0-b68b-4d73-a851-b8d024f85590"). InnerVolumeSpecName "kube-api-access-lhs85". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:32:39 crc kubenswrapper[4698]: I0127 14:32:39.604446 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1ec17e53-8595-4cce-b8f3-5834e196236e-client-ca\") pod \"1ec17e53-8595-4cce-b8f3-5834e196236e\" (UID: \"1ec17e53-8595-4cce-b8f3-5834e196236e\") " Jan 27 14:32:39 crc kubenswrapper[4698]: I0127 14:32:39.604564 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1ec17e53-8595-4cce-b8f3-5834e196236e-config\") pod \"1ec17e53-8595-4cce-b8f3-5834e196236e\" (UID: \"1ec17e53-8595-4cce-b8f3-5834e196236e\") " Jan 27 14:32:39 crc kubenswrapper[4698]: I0127 14:32:39.604594 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1ec17e53-8595-4cce-b8f3-5834e196236e-serving-cert\") pod \"1ec17e53-8595-4cce-b8f3-5834e196236e\" (UID: \"1ec17e53-8595-4cce-b8f3-5834e196236e\") " Jan 27 14:32:39 crc kubenswrapper[4698]: I0127 14:32:39.604622 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hjwck\" (UniqueName: \"kubernetes.io/projected/1ec17e53-8595-4cce-b8f3-5834e196236e-kube-api-access-hjwck\") pod \"1ec17e53-8595-4cce-b8f3-5834e196236e\" (UID: \"1ec17e53-8595-4cce-b8f3-5834e196236e\") " Jan 27 14:32:39 crc kubenswrapper[4698]: I0127 14:32:39.605001 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lhs85\" (UniqueName: \"kubernetes.io/projected/80dd4ea0-b68b-4d73-a851-b8d024f85590-kube-api-access-lhs85\") on node \"crc\" DevicePath \"\"" Jan 27 14:32:39 crc kubenswrapper[4698]: I0127 14:32:39.605030 4698 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/80dd4ea0-b68b-4d73-a851-b8d024f85590-client-ca\") on node \"crc\" DevicePath \"\"" Jan 27 14:32:39 crc kubenswrapper[4698]: I0127 14:32:39.605042 4698 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/80dd4ea0-b68b-4d73-a851-b8d024f85590-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 14:32:39 crc kubenswrapper[4698]: I0127 14:32:39.605054 4698 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/80dd4ea0-b68b-4d73-a851-b8d024f85590-config\") on node \"crc\" DevicePath \"\"" Jan 27 14:32:39 crc kubenswrapper[4698]: I0127 14:32:39.605066 4698 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/80dd4ea0-b68b-4d73-a851-b8d024f85590-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 27 14:32:39 crc kubenswrapper[4698]: I0127 14:32:39.606025 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1ec17e53-8595-4cce-b8f3-5834e196236e-config" (OuterVolumeSpecName: "config") pod "1ec17e53-8595-4cce-b8f3-5834e196236e" (UID: "1ec17e53-8595-4cce-b8f3-5834e196236e"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:32:39 crc kubenswrapper[4698]: I0127 14:32:39.606400 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1ec17e53-8595-4cce-b8f3-5834e196236e-client-ca" (OuterVolumeSpecName: "client-ca") pod "1ec17e53-8595-4cce-b8f3-5834e196236e" (UID: "1ec17e53-8595-4cce-b8f3-5834e196236e"). 
InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:32:39 crc kubenswrapper[4698]: I0127 14:32:39.608930 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1ec17e53-8595-4cce-b8f3-5834e196236e-kube-api-access-hjwck" (OuterVolumeSpecName: "kube-api-access-hjwck") pod "1ec17e53-8595-4cce-b8f3-5834e196236e" (UID: "1ec17e53-8595-4cce-b8f3-5834e196236e"). InnerVolumeSpecName "kube-api-access-hjwck". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:32:39 crc kubenswrapper[4698]: I0127 14:32:39.609747 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1ec17e53-8595-4cce-b8f3-5834e196236e-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1ec17e53-8595-4cce-b8f3-5834e196236e" (UID: "1ec17e53-8595-4cce-b8f3-5834e196236e"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:32:39 crc kubenswrapper[4698]: I0127 14:32:39.695129 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"a0e3606b-0ab2-4f0c-94a6-e06b173ecdae","Type":"ContainerDied","Data":"adbf6f2d079a65aecefb2a1c1dc3f3829c8887c9e75a6880065dba9195555e45"} Jan 27 14:32:39 crc kubenswrapper[4698]: I0127 14:32:39.695166 4698 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="adbf6f2d079a65aecefb2a1c1dc3f3829c8887c9e75a6880065dba9195555e45" Jan 27 14:32:39 crc kubenswrapper[4698]: I0127 14:32:39.695182 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 27 14:32:39 crc kubenswrapper[4698]: I0127 14:32:39.697383 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qdgff" event={"ID":"f2031067-c690-4330-98dc-ff9259ccbb2f","Type":"ContainerStarted","Data":"17085beb162035c19dea302be8e1146e17450ddd3b6ac5ab7a62cf5fc0533c7b"} Jan 27 14:32:39 crc kubenswrapper[4698]: I0127 14:32:39.699819 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-76c89f475f-hlbdh" event={"ID":"80dd4ea0-b68b-4d73-a851-b8d024f85590","Type":"ContainerDied","Data":"cf11b679938ca9efb5880a3fac58b8972e3638214ae23b643234848292dd84c0"} Jan 27 14:32:39 crc kubenswrapper[4698]: I0127 14:32:39.699844 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-76c89f475f-hlbdh" Jan 27 14:32:39 crc kubenswrapper[4698]: I0127 14:32:39.699873 4698 scope.go:117] "RemoveContainer" containerID="0b056a6a95119b8c8067930e686bb7c1330e40af99bce2ac49bd7f57aecdbd6b" Jan 27 14:32:39 crc kubenswrapper[4698]: I0127 14:32:39.705566 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6895945445-45zqc" event={"ID":"1ec17e53-8595-4cce-b8f3-5834e196236e","Type":"ContainerDied","Data":"b60ecc8ad22e1b8e539bdd2eb546ddcea7a7b2511b92508e40c044899fda5361"} Jan 27 14:32:39 crc kubenswrapper[4698]: I0127 14:32:39.705615 4698 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6895945445-45zqc"
Jan 27 14:32:39 crc kubenswrapper[4698]: I0127 14:32:39.705741 4698 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1ec17e53-8595-4cce-b8f3-5834e196236e-client-ca\") on node \"crc\" DevicePath \"\""
Jan 27 14:32:39 crc kubenswrapper[4698]: I0127 14:32:39.705770 4698 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1ec17e53-8595-4cce-b8f3-5834e196236e-config\") on node \"crc\" DevicePath \"\""
Jan 27 14:32:39 crc kubenswrapper[4698]: I0127 14:32:39.706153 4698 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1ec17e53-8595-4cce-b8f3-5834e196236e-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 27 14:32:39 crc kubenswrapper[4698]: I0127 14:32:39.706271 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hjwck\" (UniqueName: \"kubernetes.io/projected/1ec17e53-8595-4cce-b8f3-5834e196236e-kube-api-access-hjwck\") on node \"crc\" DevicePath \"\""
Jan 27 14:32:39 crc kubenswrapper[4698]: I0127 14:32:39.717176 4698 scope.go:117] "RemoveContainer" containerID="4a43b083e5e65cdeb327be5d59d8ec00f56bb5d37e88f28d646de26258860e29"
Jan 27 14:32:39 crc kubenswrapper[4698]: I0127 14:32:39.733389 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-76c89f475f-hlbdh"]
Jan 27 14:32:39 crc kubenswrapper[4698]: I0127 14:32:39.736326 4698 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-76c89f475f-hlbdh"]
Jan 27 14:32:39 crc kubenswrapper[4698]: I0127 14:32:39.746525 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6895945445-45zqc"]
Jan 27 14:32:39 crc kubenswrapper[4698]: I0127 14:32:39.749344 4698 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6895945445-45zqc"]
Jan 27 14:32:40 crc kubenswrapper[4698]: I0127 14:32:40.712514 4698 generic.go:334] "Generic (PLEG): container finished" podID="f2031067-c690-4330-98dc-ff9259ccbb2f" containerID="17085beb162035c19dea302be8e1146e17450ddd3b6ac5ab7a62cf5fc0533c7b" exitCode=0
Jan 27 14:32:40 crc kubenswrapper[4698]: I0127 14:32:40.712605 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qdgff" event={"ID":"f2031067-c690-4330-98dc-ff9259ccbb2f","Type":"ContainerDied","Data":"17085beb162035c19dea302be8e1146e17450ddd3b6ac5ab7a62cf5fc0533c7b"}
Jan 27 14:32:41 crc kubenswrapper[4698]: I0127 14:32:41.002656 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1ec17e53-8595-4cce-b8f3-5834e196236e" path="/var/lib/kubelet/pods/1ec17e53-8595-4cce-b8f3-5834e196236e/volumes"
Jan 27 14:32:41 crc kubenswrapper[4698]: I0127 14:32:41.003425 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="80dd4ea0-b68b-4d73-a851-b8d024f85590" path="/var/lib/kubelet/pods/80dd4ea0-b68b-4d73-a851-b8d024f85590/volumes"
Jan 27 14:32:41 crc kubenswrapper[4698]: I0127 14:32:41.622943 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-7d88d59755-5lcjd"]
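The "Cleaned up orphaned pod volumes dir" entries above show the kubelet removing /var/lib/kubelet/pods/<uid>/volumes for pods that no longer exist on the node. A sketch of that sweep under stated assumptions (a scratch directory standing in for the real pods dir); this is not kubelet source and should never be pointed at a live node:

```go
// Remove the volumes dir of any pod directory whose UID is no longer active.
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func cleanOrphans(podsDir string, active map[string]bool) error {
	entries, err := os.ReadDir(podsDir)
	if err != nil {
		return err
	}
	for _, e := range entries {
		if e.IsDir() && !active[e.Name()] {
			path := filepath.Join(podsDir, e.Name(), "volumes")
			if err := os.RemoveAll(path); err != nil {
				return err
			}
			fmt.Printf("Cleaned up orphaned pod volumes dir podUID=%q path=%q\n", e.Name(), path)
		}
	}
	return nil
}

func main() {
	// Assumed scratch layout mirroring /var/lib/kubelet/pods/<uid>/volumes.
	if err := cleanOrphans("./pods", map[string]bool{}); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
```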
podUID="80dd4ea0-b68b-4d73-a851-b8d024f85590" containerName="controller-manager" Jan 27 14:32:41 crc kubenswrapper[4698]: I0127 14:32:41.623211 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="80dd4ea0-b68b-4d73-a851-b8d024f85590" containerName="controller-manager" Jan 27 14:32:41 crc kubenswrapper[4698]: E0127 14:32:41.623225 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1ec17e53-8595-4cce-b8f3-5834e196236e" containerName="route-controller-manager" Jan 27 14:32:41 crc kubenswrapper[4698]: I0127 14:32:41.623232 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="1ec17e53-8595-4cce-b8f3-5834e196236e" containerName="route-controller-manager" Jan 27 14:32:41 crc kubenswrapper[4698]: E0127 14:32:41.623252 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a0e3606b-0ab2-4f0c-94a6-e06b173ecdae" containerName="pruner" Jan 27 14:32:41 crc kubenswrapper[4698]: I0127 14:32:41.623263 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="a0e3606b-0ab2-4f0c-94a6-e06b173ecdae" containerName="pruner" Jan 27 14:32:41 crc kubenswrapper[4698]: I0127 14:32:41.623373 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="80dd4ea0-b68b-4d73-a851-b8d024f85590" containerName="controller-manager" Jan 27 14:32:41 crc kubenswrapper[4698]: I0127 14:32:41.623386 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="a0e3606b-0ab2-4f0c-94a6-e06b173ecdae" containerName="pruner" Jan 27 14:32:41 crc kubenswrapper[4698]: I0127 14:32:41.623394 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="1ec17e53-8595-4cce-b8f3-5834e196236e" containerName="route-controller-manager" Jan 27 14:32:41 crc kubenswrapper[4698]: I0127 14:32:41.623861 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7d88d59755-5lcjd" Jan 27 14:32:41 crc kubenswrapper[4698]: I0127 14:32:41.626493 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 27 14:32:41 crc kubenswrapper[4698]: I0127 14:32:41.626669 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 27 14:32:41 crc kubenswrapper[4698]: I0127 14:32:41.626546 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 27 14:32:41 crc kubenswrapper[4698]: I0127 14:32:41.627071 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 27 14:32:41 crc kubenswrapper[4698]: I0127 14:32:41.627410 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 27 14:32:41 crc kubenswrapper[4698]: I0127 14:32:41.629673 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 27 14:32:41 crc kubenswrapper[4698]: I0127 14:32:41.631073 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-589f878698-vjwmk"] Jan 27 14:32:41 crc kubenswrapper[4698]: I0127 14:32:41.632392 4698 util.go:30] "No sandbox for pod can be found. 
Jan 27 14:32:41 crc kubenswrapper[4698]: I0127 14:32:41.632392 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-589f878698-vjwmk"
Jan 27 14:32:41 crc kubenswrapper[4698]: I0127 14:32:41.644001 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert"
Jan 27 14:32:41 crc kubenswrapper[4698]: I0127 14:32:41.644706 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config"
Jan 27 14:32:41 crc kubenswrapper[4698]: I0127 14:32:41.645880 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2"
Jan 27 14:32:41 crc kubenswrapper[4698]: I0127 14:32:41.646357 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt"
Jan 27 14:32:41 crc kubenswrapper[4698]: I0127 14:32:41.648206 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca"
Jan 27 14:32:41 crc kubenswrapper[4698]: I0127 14:32:41.651650 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt"
Jan 27 14:32:41 crc kubenswrapper[4698]: I0127 14:32:41.652307 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca"
Jan 27 14:32:41 crc kubenswrapper[4698]: I0127 14:32:41.663343 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-589f878698-vjwmk"]
Jan 27 14:32:41 crc kubenswrapper[4698]: I0127 14:32:41.678712 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-7d88d59755-5lcjd"]
Jan 27 14:32:41 crc kubenswrapper[4698]: I0127 14:32:41.729697 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/ba1000e3-5241-4bdd-91f4-dddc45fc0a07-proxy-ca-bundles\") pod \"controller-manager-7d88d59755-5lcjd\" (UID: \"ba1000e3-5241-4bdd-91f4-dddc45fc0a07\") " pod="openshift-controller-manager/controller-manager-7d88d59755-5lcjd"
Jan 27 14:32:41 crc kubenswrapper[4698]: I0127 14:32:41.730379 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-km6d9\" (UniqueName: \"kubernetes.io/projected/b7211c73-92a4-463e-8d9b-25638f41a7dd-kube-api-access-km6d9\") pod \"route-controller-manager-589f878698-vjwmk\" (UID: \"b7211c73-92a4-463e-8d9b-25638f41a7dd\") " pod="openshift-route-controller-manager/route-controller-manager-589f878698-vjwmk"
Jan 27 14:32:41 crc kubenswrapper[4698]: I0127 14:32:41.730513 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b7211c73-92a4-463e-8d9b-25638f41a7dd-serving-cert\") pod \"route-controller-manager-589f878698-vjwmk\" (UID: \"b7211c73-92a4-463e-8d9b-25638f41a7dd\") " pod="openshift-route-controller-manager/route-controller-manager-589f878698-vjwmk"
Jan 27 14:32:41 crc kubenswrapper[4698]: I0127 14:32:41.730622 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ba1000e3-5241-4bdd-91f4-dddc45fc0a07-config\") pod \"controller-manager-7d88d59755-5lcjd\" (UID: \"ba1000e3-5241-4bdd-91f4-dddc45fc0a07\") "
pod="openshift-controller-manager/controller-manager-7d88d59755-5lcjd" Jan 27 14:32:41 crc kubenswrapper[4698]: I0127 14:32:41.730740 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b7211c73-92a4-463e-8d9b-25638f41a7dd-client-ca\") pod \"route-controller-manager-589f878698-vjwmk\" (UID: \"b7211c73-92a4-463e-8d9b-25638f41a7dd\") " pod="openshift-route-controller-manager/route-controller-manager-589f878698-vjwmk" Jan 27 14:32:41 crc kubenswrapper[4698]: I0127 14:32:41.730852 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b7211c73-92a4-463e-8d9b-25638f41a7dd-config\") pod \"route-controller-manager-589f878698-vjwmk\" (UID: \"b7211c73-92a4-463e-8d9b-25638f41a7dd\") " pod="openshift-route-controller-manager/route-controller-manager-589f878698-vjwmk" Jan 27 14:32:41 crc kubenswrapper[4698]: I0127 14:32:41.730958 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ba1000e3-5241-4bdd-91f4-dddc45fc0a07-client-ca\") pod \"controller-manager-7d88d59755-5lcjd\" (UID: \"ba1000e3-5241-4bdd-91f4-dddc45fc0a07\") " pod="openshift-controller-manager/controller-manager-7d88d59755-5lcjd" Jan 27 14:32:41 crc kubenswrapper[4698]: I0127 14:32:41.731058 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ba1000e3-5241-4bdd-91f4-dddc45fc0a07-serving-cert\") pod \"controller-manager-7d88d59755-5lcjd\" (UID: \"ba1000e3-5241-4bdd-91f4-dddc45fc0a07\") " pod="openshift-controller-manager/controller-manager-7d88d59755-5lcjd" Jan 27 14:32:41 crc kubenswrapper[4698]: I0127 14:32:41.731161 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8q8g5\" (UniqueName: \"kubernetes.io/projected/ba1000e3-5241-4bdd-91f4-dddc45fc0a07-kube-api-access-8q8g5\") pod \"controller-manager-7d88d59755-5lcjd\" (UID: \"ba1000e3-5241-4bdd-91f4-dddc45fc0a07\") " pod="openshift-controller-manager/controller-manager-7d88d59755-5lcjd" Jan 27 14:32:41 crc kubenswrapper[4698]: I0127 14:32:41.832538 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ba1000e3-5241-4bdd-91f4-dddc45fc0a07-config\") pod \"controller-manager-7d88d59755-5lcjd\" (UID: \"ba1000e3-5241-4bdd-91f4-dddc45fc0a07\") " pod="openshift-controller-manager/controller-manager-7d88d59755-5lcjd" Jan 27 14:32:41 crc kubenswrapper[4698]: I0127 14:32:41.832621 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b7211c73-92a4-463e-8d9b-25638f41a7dd-client-ca\") pod \"route-controller-manager-589f878698-vjwmk\" (UID: \"b7211c73-92a4-463e-8d9b-25638f41a7dd\") " pod="openshift-route-controller-manager/route-controller-manager-589f878698-vjwmk" Jan 27 14:32:41 crc kubenswrapper[4698]: I0127 14:32:41.832696 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b7211c73-92a4-463e-8d9b-25638f41a7dd-config\") pod \"route-controller-manager-589f878698-vjwmk\" (UID: \"b7211c73-92a4-463e-8d9b-25638f41a7dd\") " pod="openshift-route-controller-manager/route-controller-manager-589f878698-vjwmk" Jan 27 
14:32:41 crc kubenswrapper[4698]: I0127 14:32:41.832724 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ba1000e3-5241-4bdd-91f4-dddc45fc0a07-client-ca\") pod \"controller-manager-7d88d59755-5lcjd\" (UID: \"ba1000e3-5241-4bdd-91f4-dddc45fc0a07\") " pod="openshift-controller-manager/controller-manager-7d88d59755-5lcjd" Jan 27 14:32:41 crc kubenswrapper[4698]: I0127 14:32:41.832750 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8q8g5\" (UniqueName: \"kubernetes.io/projected/ba1000e3-5241-4bdd-91f4-dddc45fc0a07-kube-api-access-8q8g5\") pod \"controller-manager-7d88d59755-5lcjd\" (UID: \"ba1000e3-5241-4bdd-91f4-dddc45fc0a07\") " pod="openshift-controller-manager/controller-manager-7d88d59755-5lcjd" Jan 27 14:32:41 crc kubenswrapper[4698]: I0127 14:32:41.832776 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ba1000e3-5241-4bdd-91f4-dddc45fc0a07-serving-cert\") pod \"controller-manager-7d88d59755-5lcjd\" (UID: \"ba1000e3-5241-4bdd-91f4-dddc45fc0a07\") " pod="openshift-controller-manager/controller-manager-7d88d59755-5lcjd" Jan 27 14:32:41 crc kubenswrapper[4698]: I0127 14:32:41.833046 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/ba1000e3-5241-4bdd-91f4-dddc45fc0a07-proxy-ca-bundles\") pod \"controller-manager-7d88d59755-5lcjd\" (UID: \"ba1000e3-5241-4bdd-91f4-dddc45fc0a07\") " pod="openshift-controller-manager/controller-manager-7d88d59755-5lcjd" Jan 27 14:32:41 crc kubenswrapper[4698]: I0127 14:32:41.833082 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-km6d9\" (UniqueName: \"kubernetes.io/projected/b7211c73-92a4-463e-8d9b-25638f41a7dd-kube-api-access-km6d9\") pod \"route-controller-manager-589f878698-vjwmk\" (UID: \"b7211c73-92a4-463e-8d9b-25638f41a7dd\") " pod="openshift-route-controller-manager/route-controller-manager-589f878698-vjwmk" Jan 27 14:32:41 crc kubenswrapper[4698]: I0127 14:32:41.833123 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b7211c73-92a4-463e-8d9b-25638f41a7dd-serving-cert\") pod \"route-controller-manager-589f878698-vjwmk\" (UID: \"b7211c73-92a4-463e-8d9b-25638f41a7dd\") " pod="openshift-route-controller-manager/route-controller-manager-589f878698-vjwmk" Jan 27 14:32:41 crc kubenswrapper[4698]: I0127 14:32:41.833926 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ba1000e3-5241-4bdd-91f4-dddc45fc0a07-client-ca\") pod \"controller-manager-7d88d59755-5lcjd\" (UID: \"ba1000e3-5241-4bdd-91f4-dddc45fc0a07\") " pod="openshift-controller-manager/controller-manager-7d88d59755-5lcjd" Jan 27 14:32:41 crc kubenswrapper[4698]: I0127 14:32:41.833952 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b7211c73-92a4-463e-8d9b-25638f41a7dd-client-ca\") pod \"route-controller-manager-589f878698-vjwmk\" (UID: \"b7211c73-92a4-463e-8d9b-25638f41a7dd\") " pod="openshift-route-controller-manager/route-controller-manager-589f878698-vjwmk" Jan 27 14:32:41 crc kubenswrapper[4698]: I0127 14:32:41.834292 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" 
(UniqueName: \"kubernetes.io/configmap/ba1000e3-5241-4bdd-91f4-dddc45fc0a07-proxy-ca-bundles\") pod \"controller-manager-7d88d59755-5lcjd\" (UID: \"ba1000e3-5241-4bdd-91f4-dddc45fc0a07\") " pod="openshift-controller-manager/controller-manager-7d88d59755-5lcjd" Jan 27 14:32:41 crc kubenswrapper[4698]: I0127 14:32:41.835213 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ba1000e3-5241-4bdd-91f4-dddc45fc0a07-config\") pod \"controller-manager-7d88d59755-5lcjd\" (UID: \"ba1000e3-5241-4bdd-91f4-dddc45fc0a07\") " pod="openshift-controller-manager/controller-manager-7d88d59755-5lcjd" Jan 27 14:32:41 crc kubenswrapper[4698]: I0127 14:32:41.837104 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b7211c73-92a4-463e-8d9b-25638f41a7dd-config\") pod \"route-controller-manager-589f878698-vjwmk\" (UID: \"b7211c73-92a4-463e-8d9b-25638f41a7dd\") " pod="openshift-route-controller-manager/route-controller-manager-589f878698-vjwmk" Jan 27 14:32:41 crc kubenswrapper[4698]: I0127 14:32:41.837950 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ba1000e3-5241-4bdd-91f4-dddc45fc0a07-serving-cert\") pod \"controller-manager-7d88d59755-5lcjd\" (UID: \"ba1000e3-5241-4bdd-91f4-dddc45fc0a07\") " pod="openshift-controller-manager/controller-manager-7d88d59755-5lcjd" Jan 27 14:32:41 crc kubenswrapper[4698]: I0127 14:32:41.843271 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b7211c73-92a4-463e-8d9b-25638f41a7dd-serving-cert\") pod \"route-controller-manager-589f878698-vjwmk\" (UID: \"b7211c73-92a4-463e-8d9b-25638f41a7dd\") " pod="openshift-route-controller-manager/route-controller-manager-589f878698-vjwmk" Jan 27 14:32:41 crc kubenswrapper[4698]: I0127 14:32:41.851600 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8q8g5\" (UniqueName: \"kubernetes.io/projected/ba1000e3-5241-4bdd-91f4-dddc45fc0a07-kube-api-access-8q8g5\") pod \"controller-manager-7d88d59755-5lcjd\" (UID: \"ba1000e3-5241-4bdd-91f4-dddc45fc0a07\") " pod="openshift-controller-manager/controller-manager-7d88d59755-5lcjd" Jan 27 14:32:41 crc kubenswrapper[4698]: I0127 14:32:41.852180 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-km6d9\" (UniqueName: \"kubernetes.io/projected/b7211c73-92a4-463e-8d9b-25638f41a7dd-kube-api-access-km6d9\") pod \"route-controller-manager-589f878698-vjwmk\" (UID: \"b7211c73-92a4-463e-8d9b-25638f41a7dd\") " pod="openshift-route-controller-manager/route-controller-manager-589f878698-vjwmk" Jan 27 14:32:41 crc kubenswrapper[4698]: I0127 14:32:41.952503 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7d88d59755-5lcjd" Jan 27 14:32:41 crc kubenswrapper[4698]: I0127 14:32:41.972216 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-589f878698-vjwmk" Jan 27 14:32:42 crc kubenswrapper[4698]: I0127 14:32:42.130187 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-7d88d59755-5lcjd"] Jan 27 14:32:42 crc kubenswrapper[4698]: I0127 14:32:42.190719 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-589f878698-vjwmk"] Jan 27 14:32:42 crc kubenswrapper[4698]: W0127 14:32:42.197490 4698 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb7211c73_92a4_463e_8d9b_25638f41a7dd.slice/crio-a0f9442e6bb5e38f722b57182ecbea33f9a4b95ed02a4b6d65684fdcecb5ef81 WatchSource:0}: Error finding container a0f9442e6bb5e38f722b57182ecbea33f9a4b95ed02a4b6d65684fdcecb5ef81: Status 404 returned error can't find the container with id a0f9442e6bb5e38f722b57182ecbea33f9a4b95ed02a4b6d65684fdcecb5ef81 Jan 27 14:32:42 crc kubenswrapper[4698]: I0127 14:32:42.728620 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-589f878698-vjwmk" event={"ID":"b7211c73-92a4-463e-8d9b-25638f41a7dd","Type":"ContainerStarted","Data":"a0f9442e6bb5e38f722b57182ecbea33f9a4b95ed02a4b6d65684fdcecb5ef81"} Jan 27 14:32:42 crc kubenswrapper[4698]: I0127 14:32:42.730181 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7d88d59755-5lcjd" event={"ID":"ba1000e3-5241-4bdd-91f4-dddc45fc0a07","Type":"ContainerStarted","Data":"7e2e41951d57973a89f5fba9a30a7d1da911dc03ae6744a73464892b94695281"} Jan 27 14:32:43 crc kubenswrapper[4698]: I0127 14:32:43.737611 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-589f878698-vjwmk" event={"ID":"b7211c73-92a4-463e-8d9b-25638f41a7dd","Type":"ContainerStarted","Data":"e74cc166a6640c1aad3b28a90e6b3cbde0f5c861ea9328ba115ea1724a6f0f37"} Jan 27 14:32:43 crc kubenswrapper[4698]: I0127 14:32:43.738123 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-589f878698-vjwmk" Jan 27 14:32:43 crc kubenswrapper[4698]: I0127 14:32:43.739771 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7d88d59755-5lcjd" event={"ID":"ba1000e3-5241-4bdd-91f4-dddc45fc0a07","Type":"ContainerStarted","Data":"4edc3a0f340cf21d0bd2016836059e07ed3ce95eee61b526756d836d954243d0"} Jan 27 14:32:43 crc kubenswrapper[4698]: I0127 14:32:43.748859 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-589f878698-vjwmk" Jan 27 14:32:43 crc kubenswrapper[4698]: I0127 14:32:43.758283 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-589f878698-vjwmk" podStartSLOduration=30.758265171 podStartE2EDuration="30.758265171s" podCreationTimestamp="2026-01-27 14:32:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:32:43.757335857 +0000 UTC m=+219.434113322" watchObservedRunningTime="2026-01-27 14:32:43.758265171 +0000 UTC m=+219.435042636" Jan 27 14:32:44 crc kubenswrapper[4698]: I0127 14:32:44.744669 4698 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-7d88d59755-5lcjd" Jan 27 14:32:44 crc kubenswrapper[4698]: I0127 14:32:44.750380 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-7d88d59755-5lcjd" Jan 27 14:32:44 crc kubenswrapper[4698]: I0127 14:32:44.768562 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-7d88d59755-5lcjd" podStartSLOduration=31.768541194 podStartE2EDuration="31.768541194s" podCreationTimestamp="2026-01-27 14:32:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:32:44.763474441 +0000 UTC m=+220.440251916" watchObservedRunningTime="2026-01-27 14:32:44.768541194 +0000 UTC m=+220.445318659" Jan 27 14:32:46 crc kubenswrapper[4698]: I0127 14:32:46.430015 4698 patch_prober.go:28] interesting pod/downloads-7954f5f757-bdrpp container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.10:8080/\": dial tcp 10.217.0.10:8080: connect: connection refused" start-of-body= Jan 27 14:32:46 crc kubenswrapper[4698]: I0127 14:32:46.430113 4698 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-bdrpp" podUID="64b274f6-5293-4c0e-a51a-dca8518c5a40" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.10:8080/\": dial tcp 10.217.0.10:8080: connect: connection refused" Jan 27 14:32:46 crc kubenswrapper[4698]: I0127 14:32:46.430045 4698 patch_prober.go:28] interesting pod/downloads-7954f5f757-bdrpp container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.10:8080/\": dial tcp 10.217.0.10:8080: connect: connection refused" start-of-body= Jan 27 14:32:46 crc kubenswrapper[4698]: I0127 14:32:46.430213 4698 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-bdrpp" podUID="64b274f6-5293-4c0e-a51a-dca8518c5a40" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.10:8080/\": dial tcp 10.217.0.10:8080: connect: connection refused" Jan 27 14:32:56 crc kubenswrapper[4698]: I0127 14:32:56.447038 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-7954f5f757-bdrpp" Jan 27 14:32:57 crc kubenswrapper[4698]: I0127 14:32:57.452034 4698 patch_prober.go:28] interesting pod/machine-config-daemon-ndrd6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 14:32:57 crc kubenswrapper[4698]: I0127 14:32:57.452367 4698 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ndrd6" podUID="3e403fc5-7005-474c-8c75-b7906b481677" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 14:32:57 crc kubenswrapper[4698]: I0127 14:32:57.452423 4698 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-ndrd6" Jan 27 14:32:57 crc kubenswrapper[4698]: I0127 14:32:57.453000 4698 
kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"c533539135e25b310541c9f0b12adcf526b4651a2f6272bce12ea9686dd39ac9"} pod="openshift-machine-config-operator/machine-config-daemon-ndrd6" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 27 14:32:57 crc kubenswrapper[4698]: I0127 14:32:57.453055 4698 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-ndrd6" podUID="3e403fc5-7005-474c-8c75-b7906b481677" containerName="machine-config-daemon" containerID="cri-o://c533539135e25b310541c9f0b12adcf526b4651a2f6272bce12ea9686dd39ac9" gracePeriod=600 Jan 27 14:32:58 crc kubenswrapper[4698]: I0127 14:32:58.811782 4698 generic.go:334] "Generic (PLEG): container finished" podID="3e403fc5-7005-474c-8c75-b7906b481677" containerID="c533539135e25b310541c9f0b12adcf526b4651a2f6272bce12ea9686dd39ac9" exitCode=0 Jan 27 14:32:58 crc kubenswrapper[4698]: I0127 14:32:58.811948 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ndrd6" event={"ID":"3e403fc5-7005-474c-8c75-b7906b481677","Type":"ContainerDied","Data":"c533539135e25b310541c9f0b12adcf526b4651a2f6272bce12ea9686dd39ac9"} Jan 27 14:32:59 crc kubenswrapper[4698]: I0127 14:32:59.837059 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gxkvv" event={"ID":"e47fa643-2257-49e0-8b1e-77f9d3165c0e","Type":"ContainerStarted","Data":"f82faca3f5636bdbc761ab89485503d874426f26edbbe73195bc9dfa132ed985"} Jan 27 14:32:59 crc kubenswrapper[4698]: I0127 14:32:59.845630 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ndrd6" event={"ID":"3e403fc5-7005-474c-8c75-b7906b481677","Type":"ContainerStarted","Data":"16cb4e9dae87be152bb5e32de522e5719275639ace44958853a7750501d682d7"} Jan 27 14:32:59 crc kubenswrapper[4698]: I0127 14:32:59.857822 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dhlmg" event={"ID":"b5b88242-64d6-469e-a5e4-bc8bab680ded","Type":"ContainerStarted","Data":"239e275db7c0fb0067627388de9c378c0a51d1f1e41784af4c60e4d6a2aba280"} Jan 27 14:32:59 crc kubenswrapper[4698]: I0127 14:32:59.864215 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-462jr" event={"ID":"530c77f2-b81c-4835-989c-57b155f04d2c","Type":"ContainerStarted","Data":"aead90a59be5d8f4c184b33b0fb72877c8ac9fa37798f84dc25c3727ca66bf83"} Jan 27 14:32:59 crc kubenswrapper[4698]: I0127 14:32:59.881281 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qdgff" event={"ID":"f2031067-c690-4330-98dc-ff9259ccbb2f","Type":"ContainerStarted","Data":"dddb12715176e9a3150ec652cdf5da005f1a1d1c36b03a4380e96779b8c745ea"} Jan 27 14:32:59 crc kubenswrapper[4698]: I0127 14:32:59.882815 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9t9sp" event={"ID":"7f32c526-aea0-4758-a1ea-d0a694af3573","Type":"ContainerStarted","Data":"425a2847620ba4922c31ce894dbf30b724a250adffce28c16f40b91d52222438"} Jan 27 14:32:59 crc kubenswrapper[4698]: I0127 14:32:59.889742 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cdp6k" 
event={"ID":"882a9575-2eeb-4f8e-812c-2419b499a07e","Type":"ContainerStarted","Data":"25f762334ef9ce469e9d104264b6dcc035ebe8e76a1dd1700288aa7fe7e2bced"} Jan 27 14:33:00 crc kubenswrapper[4698]: I0127 14:33:00.910037 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-462jr" event={"ID":"530c77f2-b81c-4835-989c-57b155f04d2c","Type":"ContainerDied","Data":"aead90a59be5d8f4c184b33b0fb72877c8ac9fa37798f84dc25c3727ca66bf83"} Jan 27 14:33:00 crc kubenswrapper[4698]: I0127 14:33:00.909928 4698 generic.go:334] "Generic (PLEG): container finished" podID="530c77f2-b81c-4835-989c-57b155f04d2c" containerID="aead90a59be5d8f4c184b33b0fb72877c8ac9fa37798f84dc25c3727ca66bf83" exitCode=0 Jan 27 14:33:00 crc kubenswrapper[4698]: I0127 14:33:00.925275 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9m8xd" event={"ID":"d62f9471-7fdf-459f-8e3b-cadad2b6a542","Type":"ContainerStarted","Data":"6a91f1c98558c985094715122b03310fbfa74ae0dcc0d5061d8a5af31d53248f"} Jan 27 14:33:00 crc kubenswrapper[4698]: I0127 14:33:00.931268 4698 generic.go:334] "Generic (PLEG): container finished" podID="882a9575-2eeb-4f8e-812c-2419b499a07e" containerID="25f762334ef9ce469e9d104264b6dcc035ebe8e76a1dd1700288aa7fe7e2bced" exitCode=0 Jan 27 14:33:00 crc kubenswrapper[4698]: I0127 14:33:00.931345 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cdp6k" event={"ID":"882a9575-2eeb-4f8e-812c-2419b499a07e","Type":"ContainerDied","Data":"25f762334ef9ce469e9d104264b6dcc035ebe8e76a1dd1700288aa7fe7e2bced"} Jan 27 14:33:00 crc kubenswrapper[4698]: I0127 14:33:00.933446 4698 generic.go:334] "Generic (PLEG): container finished" podID="e47fa643-2257-49e0-8b1e-77f9d3165c0e" containerID="f82faca3f5636bdbc761ab89485503d874426f26edbbe73195bc9dfa132ed985" exitCode=0 Jan 27 14:33:00 crc kubenswrapper[4698]: I0127 14:33:00.933484 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gxkvv" event={"ID":"e47fa643-2257-49e0-8b1e-77f9d3165c0e","Type":"ContainerDied","Data":"f82faca3f5636bdbc761ab89485503d874426f26edbbe73195bc9dfa132ed985"} Jan 27 14:33:00 crc kubenswrapper[4698]: I0127 14:33:00.935369 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mkhfh" event={"ID":"6947cad8-3436-4bc3-8bda-c2c1a4972402","Type":"ContainerStarted","Data":"b7f7dd314d3d2d41d458de4261adf0eac8fa6f9ce2b55c097411c7d0b7e11066"} Jan 27 14:33:00 crc kubenswrapper[4698]: I0127 14:33:00.941297 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-qdgff" podStartSLOduration=13.243376421 podStartE2EDuration="1m25.941283401s" podCreationTimestamp="2026-01-27 14:31:35 +0000 UTC" firstStartedPulling="2026-01-27 14:31:46.188274683 +0000 UTC m=+161.865052148" lastFinishedPulling="2026-01-27 14:32:58.886181663 +0000 UTC m=+234.562959128" observedRunningTime="2026-01-27 14:33:00.001985604 +0000 UTC m=+235.678763089" watchObservedRunningTime="2026-01-27 14:33:00.941283401 +0000 UTC m=+236.618060866" Jan 27 14:33:01 crc kubenswrapper[4698]: I0127 14:33:01.942916 4698 generic.go:334] "Generic (PLEG): container finished" podID="7f32c526-aea0-4758-a1ea-d0a694af3573" containerID="425a2847620ba4922c31ce894dbf30b724a250adffce28c16f40b91d52222438" exitCode=0 Jan 27 14:33:01 crc kubenswrapper[4698]: I0127 14:33:01.942979 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/community-operators-9t9sp" event={"ID":"7f32c526-aea0-4758-a1ea-d0a694af3573","Type":"ContainerDied","Data":"425a2847620ba4922c31ce894dbf30b724a250adffce28c16f40b91d52222438"} Jan 27 14:33:01 crc kubenswrapper[4698]: I0127 14:33:01.945200 4698 generic.go:334] "Generic (PLEG): container finished" podID="b5b88242-64d6-469e-a5e4-bc8bab680ded" containerID="239e275db7c0fb0067627388de9c378c0a51d1f1e41784af4c60e4d6a2aba280" exitCode=0 Jan 27 14:33:01 crc kubenswrapper[4698]: I0127 14:33:01.945253 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dhlmg" event={"ID":"b5b88242-64d6-469e-a5e4-bc8bab680ded","Type":"ContainerDied","Data":"239e275db7c0fb0067627388de9c378c0a51d1f1e41784af4c60e4d6a2aba280"} Jan 27 14:33:03 crc kubenswrapper[4698]: I0127 14:33:03.048049 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gxkvv" event={"ID":"e47fa643-2257-49e0-8b1e-77f9d3165c0e","Type":"ContainerStarted","Data":"06d8412c4d5bd31f7f4979e3862a04a4e5bbc3414da496ebc93f1765890e7ef0"} Jan 27 14:33:03 crc kubenswrapper[4698]: I0127 14:33:03.048617 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dhlmg" event={"ID":"b5b88242-64d6-469e-a5e4-bc8bab680ded","Type":"ContainerStarted","Data":"5f4f2b8bfea6881493931b100114c2e33da7f225d3731b773088b4c892456f39"} Jan 27 14:33:03 crc kubenswrapper[4698]: I0127 14:33:03.053413 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-462jr" event={"ID":"530c77f2-b81c-4835-989c-57b155f04d2c","Type":"ContainerStarted","Data":"c9ccbf0601ee4a6e8b2de602863d85390dab405b1bc9f10502e8e007926f8ecb"} Jan 27 14:33:03 crc kubenswrapper[4698]: I0127 14:33:03.056750 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cdp6k" event={"ID":"882a9575-2eeb-4f8e-812c-2419b499a07e","Type":"ContainerStarted","Data":"2bce0586725f7f8170882b06df948687a132d0d03cae3e7dc2944c8b7049f448"} Jan 27 14:33:03 crc kubenswrapper[4698]: I0127 14:33:03.094808 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-gxkvv" podStartSLOduration=10.150583422 podStartE2EDuration="1m27.09479114s" podCreationTimestamp="2026-01-27 14:31:36 +0000 UTC" firstStartedPulling="2026-01-27 14:31:45.086542228 +0000 UTC m=+160.763319693" lastFinishedPulling="2026-01-27 14:33:02.030749946 +0000 UTC m=+237.707527411" observedRunningTime="2026-01-27 14:33:03.093984949 +0000 UTC m=+238.770762424" watchObservedRunningTime="2026-01-27 14:33:03.09479114 +0000 UTC m=+238.771568605" Jan 27 14:33:03 crc kubenswrapper[4698]: I0127 14:33:03.096975 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-dhlmg" podStartSLOduration=11.729808008 podStartE2EDuration="1m29.096964588s" podCreationTimestamp="2026-01-27 14:31:34 +0000 UTC" firstStartedPulling="2026-01-27 14:31:45.091860706 +0000 UTC m=+160.768638201" lastFinishedPulling="2026-01-27 14:33:02.459017316 +0000 UTC m=+238.135794781" observedRunningTime="2026-01-27 14:33:03.071098274 +0000 UTC m=+238.747875739" watchObservedRunningTime="2026-01-27 14:33:03.096964588 +0000 UTC m=+238.773742053" Jan 27 14:33:03 crc kubenswrapper[4698]: I0127 14:33:03.127475 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-cdp6k" 
podStartSLOduration=12.141666741 podStartE2EDuration="1m28.127453063s" podCreationTimestamp="2026-01-27 14:31:35 +0000 UTC" firstStartedPulling="2026-01-27 14:31:46.22789257 +0000 UTC m=+161.904670035" lastFinishedPulling="2026-01-27 14:33:02.213678892 +0000 UTC m=+237.890456357" observedRunningTime="2026-01-27 14:33:03.123228072 +0000 UTC m=+238.800005547" watchObservedRunningTime="2026-01-27 14:33:03.127453063 +0000 UTC m=+238.804230538" Jan 27 14:33:03 crc kubenswrapper[4698]: I0127 14:33:03.150179 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-462jr" podStartSLOduration=10.199254996 podStartE2EDuration="1m26.150162184s" podCreationTimestamp="2026-01-27 14:31:37 +0000 UTC" firstStartedPulling="2026-01-27 14:31:46.183006266 +0000 UTC m=+161.859783741" lastFinishedPulling="2026-01-27 14:33:02.133913464 +0000 UTC m=+237.810690929" observedRunningTime="2026-01-27 14:33:03.149493576 +0000 UTC m=+238.826271041" watchObservedRunningTime="2026-01-27 14:33:03.150162184 +0000 UTC m=+238.826939659" Jan 27 14:33:04 crc kubenswrapper[4698]: I0127 14:33:04.065194 4698 generic.go:334] "Generic (PLEG): container finished" podID="d62f9471-7fdf-459f-8e3b-cadad2b6a542" containerID="6a91f1c98558c985094715122b03310fbfa74ae0dcc0d5061d8a5af31d53248f" exitCode=0 Jan 27 14:33:04 crc kubenswrapper[4698]: I0127 14:33:04.065309 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9m8xd" event={"ID":"d62f9471-7fdf-459f-8e3b-cadad2b6a542","Type":"ContainerDied","Data":"6a91f1c98558c985094715122b03310fbfa74ae0dcc0d5061d8a5af31d53248f"} Jan 27 14:33:04 crc kubenswrapper[4698]: I0127 14:33:04.070984 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9t9sp" event={"ID":"7f32c526-aea0-4758-a1ea-d0a694af3573","Type":"ContainerStarted","Data":"eb8741b773b76750d824f71d2335a9dd8415008a2a1af0cc3be54d36ce6b66d8"} Jan 27 14:33:04 crc kubenswrapper[4698]: I0127 14:33:04.075317 4698 generic.go:334] "Generic (PLEG): container finished" podID="6947cad8-3436-4bc3-8bda-c2c1a4972402" containerID="b7f7dd314d3d2d41d458de4261adf0eac8fa6f9ce2b55c097411c7d0b7e11066" exitCode=0 Jan 27 14:33:04 crc kubenswrapper[4698]: I0127 14:33:04.075381 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mkhfh" event={"ID":"6947cad8-3436-4bc3-8bda-c2c1a4972402","Type":"ContainerDied","Data":"b7f7dd314d3d2d41d458de4261adf0eac8fa6f9ce2b55c097411c7d0b7e11066"} Jan 27 14:33:05 crc kubenswrapper[4698]: I0127 14:33:05.083489 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9m8xd" event={"ID":"d62f9471-7fdf-459f-8e3b-cadad2b6a542","Type":"ContainerStarted","Data":"5beb2772304d366fb72d95e3813094d4b0581bc5fcafc053b5e547336d5c8bc3"} Jan 27 14:33:05 crc kubenswrapper[4698]: I0127 14:33:05.108021 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-9t9sp" podStartSLOduration=14.632175052000001 podStartE2EDuration="1m31.108000863s" podCreationTimestamp="2026-01-27 14:31:34 +0000 UTC" firstStartedPulling="2026-01-27 14:31:46.215890935 +0000 UTC m=+161.892668400" lastFinishedPulling="2026-01-27 14:33:02.691716746 +0000 UTC m=+238.368494211" observedRunningTime="2026-01-27 14:33:04.131105062 +0000 UTC m=+239.807882527" watchObservedRunningTime="2026-01-27 14:33:05.108000863 +0000 UTC m=+240.784778328" Jan 27 14:33:05 crc kubenswrapper[4698]: 
I0127 14:33:05.127724 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-dhlmg" Jan 27 14:33:05 crc kubenswrapper[4698]: I0127 14:33:05.127800 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-dhlmg" Jan 27 14:33:05 crc kubenswrapper[4698]: I0127 14:33:05.334330 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-9t9sp" Jan 27 14:33:05 crc kubenswrapper[4698]: I0127 14:33:05.334387 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-9t9sp" Jan 27 14:33:05 crc kubenswrapper[4698]: I0127 14:33:05.526002 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-cdp6k" Jan 27 14:33:05 crc kubenswrapper[4698]: I0127 14:33:05.526070 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-cdp6k" Jan 27 14:33:05 crc kubenswrapper[4698]: I0127 14:33:05.629953 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-9t9sp" Jan 27 14:33:05 crc kubenswrapper[4698]: I0127 14:33:05.631425 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-cdp6k" Jan 27 14:33:05 crc kubenswrapper[4698]: I0127 14:33:05.634592 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-dhlmg" Jan 27 14:33:05 crc kubenswrapper[4698]: I0127 14:33:05.657402 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-9m8xd" podStartSLOduration=9.141384169 podStartE2EDuration="1m28.657380683s" podCreationTimestamp="2026-01-27 14:31:37 +0000 UTC" firstStartedPulling="2026-01-27 14:31:45.050707191 +0000 UTC m=+160.727484656" lastFinishedPulling="2026-01-27 14:33:04.566703695 +0000 UTC m=+240.243481170" observedRunningTime="2026-01-27 14:33:05.112137272 +0000 UTC m=+240.788914737" watchObservedRunningTime="2026-01-27 14:33:05.657380683 +0000 UTC m=+241.334158148" Jan 27 14:33:05 crc kubenswrapper[4698]: I0127 14:33:05.725140 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-qdgff" Jan 27 14:33:05 crc kubenswrapper[4698]: I0127 14:33:05.725191 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-qdgff" Jan 27 14:33:05 crc kubenswrapper[4698]: I0127 14:33:05.793702 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-qdgff" Jan 27 14:33:06 crc kubenswrapper[4698]: I0127 14:33:06.093475 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mkhfh" event={"ID":"6947cad8-3436-4bc3-8bda-c2c1a4972402","Type":"ContainerStarted","Data":"e136dd8f1a1411189e758324102053c3ca2297943085efb3869e9ffe195bd70f"} Jan 27 14:33:06 crc kubenswrapper[4698]: I0127 14:33:06.113283 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-mkhfh" podStartSLOduration=9.561889925 podStartE2EDuration="1m28.113261043s" podCreationTimestamp="2026-01-27 14:31:38 +0000 UTC" firstStartedPulling="2026-01-27 14:31:46.173007365 
+0000 UTC m=+161.849784830" lastFinishedPulling="2026-01-27 14:33:04.724378483 +0000 UTC m=+240.401155948" observedRunningTime="2026-01-27 14:33:06.113210812 +0000 UTC m=+241.789988287" watchObservedRunningTime="2026-01-27 14:33:06.113261043 +0000 UTC m=+241.790038508" Jan 27 14:33:06 crc kubenswrapper[4698]: I0127 14:33:06.137185 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-qdgff" Jan 27 14:33:07 crc kubenswrapper[4698]: I0127 14:33:07.143589 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-dhlmg" Jan 27 14:33:07 crc kubenswrapper[4698]: I0127 14:33:07.329142 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-gxkvv" Jan 27 14:33:07 crc kubenswrapper[4698]: I0127 14:33:07.329483 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-gxkvv" Jan 27 14:33:07 crc kubenswrapper[4698]: I0127 14:33:07.373681 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-gxkvv" Jan 27 14:33:07 crc kubenswrapper[4698]: I0127 14:33:07.720189 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-462jr" Jan 27 14:33:07 crc kubenswrapper[4698]: I0127 14:33:07.720301 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-462jr" Jan 27 14:33:07 crc kubenswrapper[4698]: I0127 14:33:07.834859 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-462jr" Jan 27 14:33:08 crc kubenswrapper[4698]: I0127 14:33:08.148185 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-462jr" Jan 27 14:33:08 crc kubenswrapper[4698]: I0127 14:33:08.155134 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-gxkvv" Jan 27 14:33:08 crc kubenswrapper[4698]: I0127 14:33:08.330105 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-9m8xd" Jan 27 14:33:08 crc kubenswrapper[4698]: I0127 14:33:08.330208 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-9m8xd" Jan 27 14:33:09 crc kubenswrapper[4698]: I0127 14:33:09.376680 4698 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-9m8xd" podUID="d62f9471-7fdf-459f-8e3b-cadad2b6a542" containerName="registry-server" probeResult="failure" output=< Jan 27 14:33:09 crc kubenswrapper[4698]: timeout: failed to connect service ":50051" within 1s Jan 27 14:33:09 crc kubenswrapper[4698]: > Jan 27 14:33:09 crc kubenswrapper[4698]: I0127 14:33:09.886875 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-qdgff"] Jan 27 14:33:09 crc kubenswrapper[4698]: I0127 14:33:09.887557 4698 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-qdgff" podUID="f2031067-c690-4330-98dc-ff9259ccbb2f" containerName="registry-server" containerID="cri-o://dddb12715176e9a3150ec652cdf5da005f1a1d1c36b03a4380e96779b8c745ea" gracePeriod=2 Jan 27 14:33:10 crc kubenswrapper[4698]: I0127 
14:33:10.487134 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-462jr"] Jan 27 14:33:11 crc kubenswrapper[4698]: I0127 14:33:11.126911 4698 generic.go:334] "Generic (PLEG): container finished" podID="f2031067-c690-4330-98dc-ff9259ccbb2f" containerID="dddb12715176e9a3150ec652cdf5da005f1a1d1c36b03a4380e96779b8c745ea" exitCode=0 Jan 27 14:33:11 crc kubenswrapper[4698]: I0127 14:33:11.127016 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qdgff" event={"ID":"f2031067-c690-4330-98dc-ff9259ccbb2f","Type":"ContainerDied","Data":"dddb12715176e9a3150ec652cdf5da005f1a1d1c36b03a4380e96779b8c745ea"} Jan 27 14:33:11 crc kubenswrapper[4698]: I0127 14:33:11.127500 4698 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-462jr" podUID="530c77f2-b81c-4835-989c-57b155f04d2c" containerName="registry-server" containerID="cri-o://c9ccbf0601ee4a6e8b2de602863d85390dab405b1bc9f10502e8e007926f8ecb" gracePeriod=2 Jan 27 14:33:11 crc kubenswrapper[4698]: I0127 14:33:11.321964 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-qdgff" Jan 27 14:33:11 crc kubenswrapper[4698]: I0127 14:33:11.475990 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f2031067-c690-4330-98dc-ff9259ccbb2f-catalog-content\") pod \"f2031067-c690-4330-98dc-ff9259ccbb2f\" (UID: \"f2031067-c690-4330-98dc-ff9259ccbb2f\") " Jan 27 14:33:11 crc kubenswrapper[4698]: I0127 14:33:11.476078 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f2031067-c690-4330-98dc-ff9259ccbb2f-utilities\") pod \"f2031067-c690-4330-98dc-ff9259ccbb2f\" (UID: \"f2031067-c690-4330-98dc-ff9259ccbb2f\") " Jan 27 14:33:11 crc kubenswrapper[4698]: I0127 14:33:11.476158 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2jqjs\" (UniqueName: \"kubernetes.io/projected/f2031067-c690-4330-98dc-ff9259ccbb2f-kube-api-access-2jqjs\") pod \"f2031067-c690-4330-98dc-ff9259ccbb2f\" (UID: \"f2031067-c690-4330-98dc-ff9259ccbb2f\") " Jan 27 14:33:11 crc kubenswrapper[4698]: I0127 14:33:11.477283 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f2031067-c690-4330-98dc-ff9259ccbb2f-utilities" (OuterVolumeSpecName: "utilities") pod "f2031067-c690-4330-98dc-ff9259ccbb2f" (UID: "f2031067-c690-4330-98dc-ff9259ccbb2f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 14:33:11 crc kubenswrapper[4698]: I0127 14:33:11.485075 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f2031067-c690-4330-98dc-ff9259ccbb2f-kube-api-access-2jqjs" (OuterVolumeSpecName: "kube-api-access-2jqjs") pod "f2031067-c690-4330-98dc-ff9259ccbb2f" (UID: "f2031067-c690-4330-98dc-ff9259ccbb2f"). InnerVolumeSpecName "kube-api-access-2jqjs". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:33:11 crc kubenswrapper[4698]: I0127 14:33:11.537047 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f2031067-c690-4330-98dc-ff9259ccbb2f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f2031067-c690-4330-98dc-ff9259ccbb2f" (UID: "f2031067-c690-4330-98dc-ff9259ccbb2f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 14:33:11 crc kubenswrapper[4698]: I0127 14:33:11.577491 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2jqjs\" (UniqueName: \"kubernetes.io/projected/f2031067-c690-4330-98dc-ff9259ccbb2f-kube-api-access-2jqjs\") on node \"crc\" DevicePath \"\"" Jan 27 14:33:11 crc kubenswrapper[4698]: I0127 14:33:11.577535 4698 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f2031067-c690-4330-98dc-ff9259ccbb2f-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 14:33:11 crc kubenswrapper[4698]: I0127 14:33:11.577544 4698 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f2031067-c690-4330-98dc-ff9259ccbb2f-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 14:33:12 crc kubenswrapper[4698]: I0127 14:33:12.134870 4698 generic.go:334] "Generic (PLEG): container finished" podID="530c77f2-b81c-4835-989c-57b155f04d2c" containerID="c9ccbf0601ee4a6e8b2de602863d85390dab405b1bc9f10502e8e007926f8ecb" exitCode=0 Jan 27 14:33:12 crc kubenswrapper[4698]: I0127 14:33:12.135030 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-462jr" event={"ID":"530c77f2-b81c-4835-989c-57b155f04d2c","Type":"ContainerDied","Data":"c9ccbf0601ee4a6e8b2de602863d85390dab405b1bc9f10502e8e007926f8ecb"} Jan 27 14:33:12 crc kubenswrapper[4698]: I0127 14:33:12.137454 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qdgff" event={"ID":"f2031067-c690-4330-98dc-ff9259ccbb2f","Type":"ContainerDied","Data":"460549877fe9e056b079d6dfbace98b3efddcb3bb27cbc9f735e187d22d37a86"} Jan 27 14:33:12 crc kubenswrapper[4698]: I0127 14:33:12.137490 4698 scope.go:117] "RemoveContainer" containerID="dddb12715176e9a3150ec652cdf5da005f1a1d1c36b03a4380e96779b8c745ea" Jan 27 14:33:12 crc kubenswrapper[4698]: I0127 14:33:12.137522 4698 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-qdgff" Jan 27 14:33:12 crc kubenswrapper[4698]: I0127 14:33:12.159315 4698 scope.go:117] "RemoveContainer" containerID="17085beb162035c19dea302be8e1146e17450ddd3b6ac5ab7a62cf5fc0533c7b" Jan 27 14:33:12 crc kubenswrapper[4698]: I0127 14:33:12.175798 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-qdgff"] Jan 27 14:33:12 crc kubenswrapper[4698]: I0127 14:33:12.177568 4698 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-qdgff"] Jan 27 14:33:12 crc kubenswrapper[4698]: I0127 14:33:12.193009 4698 scope.go:117] "RemoveContainer" containerID="823a2dd213974b2b13e2cf8ec3e90dcc631d8e187f4ee3c359a11bc59836c400" Jan 27 14:33:13 crc kubenswrapper[4698]: I0127 14:33:12.998193 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f2031067-c690-4330-98dc-ff9259ccbb2f" path="/var/lib/kubelet/pods/f2031067-c690-4330-98dc-ff9259ccbb2f/volumes" Jan 27 14:33:13 crc kubenswrapper[4698]: I0127 14:33:13.031618 4698 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 27 14:33:13 crc kubenswrapper[4698]: E0127 14:33:13.031883 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f2031067-c690-4330-98dc-ff9259ccbb2f" containerName="extract-utilities" Jan 27 14:33:13 crc kubenswrapper[4698]: I0127 14:33:13.031895 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="f2031067-c690-4330-98dc-ff9259ccbb2f" containerName="extract-utilities" Jan 27 14:33:13 crc kubenswrapper[4698]: E0127 14:33:13.031909 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f2031067-c690-4330-98dc-ff9259ccbb2f" containerName="extract-content" Jan 27 14:33:13 crc kubenswrapper[4698]: I0127 14:33:13.031915 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="f2031067-c690-4330-98dc-ff9259ccbb2f" containerName="extract-content" Jan 27 14:33:13 crc kubenswrapper[4698]: E0127 14:33:13.031933 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f2031067-c690-4330-98dc-ff9259ccbb2f" containerName="registry-server" Jan 27 14:33:13 crc kubenswrapper[4698]: I0127 14:33:13.031941 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="f2031067-c690-4330-98dc-ff9259ccbb2f" containerName="registry-server" Jan 27 14:33:13 crc kubenswrapper[4698]: I0127 14:33:13.032051 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="f2031067-c690-4330-98dc-ff9259ccbb2f" containerName="registry-server" Jan 27 14:33:13 crc kubenswrapper[4698]: I0127 14:33:13.032347 4698 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 27 14:33:13 crc kubenswrapper[4698]: I0127 14:33:13.032594 4698 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" containerID="cri-o://dbbe49aa918c711f9ec638a3fb42ef3be38a9ed3bc0cdde639ba8bba11eab4de" gracePeriod=15 Jan 27 14:33:13 crc kubenswrapper[4698]: I0127 14:33:13.032672 4698 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" containerID="cri-o://b9ee5f5f6bb400359196a28fe605c173c38223c8982a991750c91601aeace150" gracePeriod=15 Jan 27 14:33:13 crc 
kubenswrapper[4698]: I0127 14:33:13.032618 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 27 14:33:13 crc kubenswrapper[4698]: I0127 14:33:13.032680 4698 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" containerID="cri-o://5e384a95fdadc32733a7213ba468af64a1a33e7dd08722cbffde3ad0c461d8ba" gracePeriod=15 Jan 27 14:33:13 crc kubenswrapper[4698]: I0127 14:33:13.032677 4698 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://c9525d1a6372b19bc2f30eb4e8e522833badb6319bac25ecae77b3313acf1b1a" gracePeriod=15 Jan 27 14:33:13 crc kubenswrapper[4698]: I0127 14:33:13.032702 4698 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://a526cfa90f2ba7d074aa6007c909e68f72d5e7a4b91d893c955e7460ff208cfb" gracePeriod=15 Jan 27 14:33:13 crc kubenswrapper[4698]: I0127 14:33:13.033203 4698 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 27 14:33:13 crc kubenswrapper[4698]: E0127 14:33:13.033543 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 27 14:33:13 crc kubenswrapper[4698]: I0127 14:33:13.033560 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 27 14:33:13 crc kubenswrapper[4698]: E0127 14:33:13.033572 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Jan 27 14:33:13 crc kubenswrapper[4698]: I0127 14:33:13.033579 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Jan 27 14:33:13 crc kubenswrapper[4698]: E0127 14:33:13.033593 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Jan 27 14:33:13 crc kubenswrapper[4698]: I0127 14:33:13.033601 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Jan 27 14:33:13 crc kubenswrapper[4698]: E0127 14:33:13.033613 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 27 14:33:13 crc kubenswrapper[4698]: I0127 14:33:13.033621 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 27 14:33:13 crc kubenswrapper[4698]: E0127 14:33:13.033655 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Jan 27 14:33:13 crc kubenswrapper[4698]: I0127 14:33:13.033663 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" 
Jan 27 14:33:13 crc kubenswrapper[4698]: E0127 14:33:13.033677 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Jan 27 14:33:13 crc kubenswrapper[4698]: I0127 14:33:13.033685 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Jan 27 14:33:13 crc kubenswrapper[4698]: E0127 14:33:13.033699 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Jan 27 14:33:13 crc kubenswrapper[4698]: I0127 14:33:13.033707 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Jan 27 14:33:13 crc kubenswrapper[4698]: I0127 14:33:13.033816 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 27 14:33:13 crc kubenswrapper[4698]: I0127 14:33:13.033832 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Jan 27 14:33:13 crc kubenswrapper[4698]: I0127 14:33:13.033841 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Jan 27 14:33:13 crc kubenswrapper[4698]: I0127 14:33:13.033855 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Jan 27 14:33:13 crc kubenswrapper[4698]: I0127 14:33:13.033866 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Jan 27 14:33:13 crc kubenswrapper[4698]: I0127 14:33:13.034077 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 27 14:33:13 crc kubenswrapper[4698]: I0127 14:33:13.130050 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 27 14:33:13 crc kubenswrapper[4698]: I0127 14:33:13.197543 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 27 14:33:13 crc kubenswrapper[4698]: I0127 14:33:13.197592 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 27 14:33:13 crc kubenswrapper[4698]: I0127 14:33:13.197624 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 14:33:13 crc kubenswrapper[4698]: 
I0127 14:33:13.197858 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 27 14:33:13 crc kubenswrapper[4698]: I0127 14:33:13.197893 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 14:33:13 crc kubenswrapper[4698]: I0127 14:33:13.197911 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 14:33:13 crc kubenswrapper[4698]: I0127 14:33:13.197945 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 27 14:33:13 crc kubenswrapper[4698]: I0127 14:33:13.197987 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 27 14:33:13 crc kubenswrapper[4698]: I0127 14:33:13.300767 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 27 14:33:13 crc kubenswrapper[4698]: I0127 14:33:13.300917 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 14:33:13 crc kubenswrapper[4698]: I0127 14:33:13.300948 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 14:33:13 crc kubenswrapper[4698]: I0127 14:33:13.300988 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 27 14:33:13 crc kubenswrapper[4698]: I0127 
14:33:13.301104 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 27 14:33:13 crc kubenswrapper[4698]: I0127 14:33:13.301149 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 27 14:33:13 crc kubenswrapper[4698]: I0127 14:33:13.301171 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 27 14:33:13 crc kubenswrapper[4698]: I0127 14:33:13.301231 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 14:33:13 crc kubenswrapper[4698]: I0127 14:33:13.301900 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 27 14:33:13 crc kubenswrapper[4698]: I0127 14:33:13.301926 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 27 14:33:13 crc kubenswrapper[4698]: I0127 14:33:13.301970 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 27 14:33:13 crc kubenswrapper[4698]: I0127 14:33:13.302047 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 14:33:13 crc kubenswrapper[4698]: I0127 14:33:13.301977 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 14:33:13 crc kubenswrapper[4698]: I0127 14:33:13.301977 4698 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 27 14:33:13 crc kubenswrapper[4698]: I0127 14:33:13.312137 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 27 14:33:13 crc kubenswrapper[4698]: I0127 14:33:13.301977 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 14:33:13 crc kubenswrapper[4698]: I0127 14:33:13.364365 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-462jr" Jan 27 14:33:13 crc kubenswrapper[4698]: I0127 14:33:13.365535 4698 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.212:6443: connect: connection refused" Jan 27 14:33:13 crc kubenswrapper[4698]: I0127 14:33:13.366093 4698 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.212:6443: connect: connection refused" Jan 27 14:33:13 crc kubenswrapper[4698]: I0127 14:33:13.366588 4698 status_manager.go:851] "Failed to get status for pod" podUID="530c77f2-b81c-4835-989c-57b155f04d2c" pod="openshift-marketplace/redhat-marketplace-462jr" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-462jr\": dial tcp 38.102.83.212:6443: connect: connection refused" Jan 27 14:33:13 crc kubenswrapper[4698]: I0127 14:33:13.424875 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 27 14:33:13 crc kubenswrapper[4698]: E0127 14:33:13.444244 4698 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.212:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.188e9d0f7c9a6ab1 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-27 14:33:13.443748529 +0000 UTC m=+249.120525994,LastTimestamp:2026-01-27 14:33:13.443748529 +0000 UTC m=+249.120525994,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 27 14:33:13 crc kubenswrapper[4698]: I0127 14:33:13.503450 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/530c77f2-b81c-4835-989c-57b155f04d2c-utilities\") pod \"530c77f2-b81c-4835-989c-57b155f04d2c\" (UID: \"530c77f2-b81c-4835-989c-57b155f04d2c\") " Jan 27 14:33:13 crc kubenswrapper[4698]: I0127 14:33:13.503496 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/530c77f2-b81c-4835-989c-57b155f04d2c-catalog-content\") pod \"530c77f2-b81c-4835-989c-57b155f04d2c\" (UID: \"530c77f2-b81c-4835-989c-57b155f04d2c\") " Jan 27 14:33:13 crc kubenswrapper[4698]: I0127 14:33:13.503529 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mrklc\" (UniqueName: \"kubernetes.io/projected/530c77f2-b81c-4835-989c-57b155f04d2c-kube-api-access-mrklc\") pod \"530c77f2-b81c-4835-989c-57b155f04d2c\" (UID: \"530c77f2-b81c-4835-989c-57b155f04d2c\") " Jan 27 14:33:13 crc kubenswrapper[4698]: I0127 14:33:13.504858 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/530c77f2-b81c-4835-989c-57b155f04d2c-utilities" (OuterVolumeSpecName: "utilities") pod "530c77f2-b81c-4835-989c-57b155f04d2c" (UID: "530c77f2-b81c-4835-989c-57b155f04d2c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 14:33:13 crc kubenswrapper[4698]: I0127 14:33:13.506342 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/530c77f2-b81c-4835-989c-57b155f04d2c-kube-api-access-mrklc" (OuterVolumeSpecName: "kube-api-access-mrklc") pod "530c77f2-b81c-4835-989c-57b155f04d2c" (UID: "530c77f2-b81c-4835-989c-57b155f04d2c"). InnerVolumeSpecName "kube-api-access-mrklc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:33:13 crc kubenswrapper[4698]: I0127 14:33:13.515547 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-mkhfh" Jan 27 14:33:13 crc kubenswrapper[4698]: I0127 14:33:13.515619 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-mkhfh" Jan 27 14:33:13 crc kubenswrapper[4698]: I0127 14:33:13.525381 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/530c77f2-b81c-4835-989c-57b155f04d2c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "530c77f2-b81c-4835-989c-57b155f04d2c" (UID: "530c77f2-b81c-4835-989c-57b155f04d2c"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 14:33:13 crc kubenswrapper[4698]: I0127 14:33:13.555485 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-mkhfh" Jan 27 14:33:13 crc kubenswrapper[4698]: I0127 14:33:13.556051 4698 status_manager.go:851] "Failed to get status for pod" podUID="530c77f2-b81c-4835-989c-57b155f04d2c" pod="openshift-marketplace/redhat-marketplace-462jr" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-462jr\": dial tcp 38.102.83.212:6443: connect: connection refused" Jan 27 14:33:13 crc kubenswrapper[4698]: I0127 14:33:13.556320 4698 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.212:6443: connect: connection refused" Jan 27 14:33:13 crc kubenswrapper[4698]: I0127 14:33:13.556663 4698 status_manager.go:851] "Failed to get status for pod" podUID="6947cad8-3436-4bc3-8bda-c2c1a4972402" pod="openshift-marketplace/redhat-operators-mkhfh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-mkhfh\": dial tcp 38.102.83.212:6443: connect: connection refused" Jan 27 14:33:13 crc kubenswrapper[4698]: I0127 14:33:13.557051 4698 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.212:6443: connect: connection refused" Jan 27 14:33:13 crc kubenswrapper[4698]: I0127 14:33:13.604734 4698 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/530c77f2-b81c-4835-989c-57b155f04d2c-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 14:33:13 crc kubenswrapper[4698]: I0127 14:33:13.604776 4698 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/530c77f2-b81c-4835-989c-57b155f04d2c-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 14:33:13 crc kubenswrapper[4698]: I0127 14:33:13.604790 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mrklc\" (UniqueName: \"kubernetes.io/projected/530c77f2-b81c-4835-989c-57b155f04d2c-kube-api-access-mrklc\") on node \"crc\" DevicePath \"\"" Jan 27 14:33:13 crc kubenswrapper[4698]: I0127 14:33:13.662956 4698 patch_prober.go:28] 
interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:6443/readyz\": dial tcp 192.168.126.11:6443: connect: connection refused" start-of-body= Jan 27 14:33:13 crc kubenswrapper[4698]: I0127 14:33:13.663303 4698 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="Get \"https://192.168.126.11:6443/readyz\": dial tcp 192.168.126.11:6443: connect: connection refused" Jan 27 14:33:13 crc kubenswrapper[4698]: I0127 14:33:13.954196 4698 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body= Jan 27 14:33:13 crc kubenswrapper[4698]: I0127 14:33:13.954322 4698 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" Jan 27 14:33:14 crc kubenswrapper[4698]: I0127 14:33:14.188907 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Jan 27 14:33:14 crc kubenswrapper[4698]: I0127 14:33:14.190184 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 27 14:33:14 crc kubenswrapper[4698]: I0127 14:33:14.190853 4698 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="a526cfa90f2ba7d074aa6007c909e68f72d5e7a4b91d893c955e7460ff208cfb" exitCode=0 Jan 27 14:33:14 crc kubenswrapper[4698]: I0127 14:33:14.190872 4698 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="5e384a95fdadc32733a7213ba468af64a1a33e7dd08722cbffde3ad0c461d8ba" exitCode=2 Jan 27 14:33:14 crc kubenswrapper[4698]: I0127 14:33:14.192040 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"631d5889f2180bd96cbb71a4f1cc2423b39ee2c5e25ebb4d4bef7a9ee224ff25"} Jan 27 14:33:14 crc kubenswrapper[4698]: I0127 14:33:14.193963 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-462jr" event={"ID":"530c77f2-b81c-4835-989c-57b155f04d2c","Type":"ContainerDied","Data":"893536b71487b099e6978c9e457ab6013d3c14588f79171450c1b515337aa553"} Jan 27 14:33:14 crc kubenswrapper[4698]: I0127 14:33:14.194006 4698 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-462jr" Jan 27 14:33:14 crc kubenswrapper[4698]: I0127 14:33:14.194025 4698 scope.go:117] "RemoveContainer" containerID="c9ccbf0601ee4a6e8b2de602863d85390dab405b1bc9f10502e8e007926f8ecb" Jan 27 14:33:14 crc kubenswrapper[4698]: I0127 14:33:14.194842 4698 status_manager.go:851] "Failed to get status for pod" podUID="530c77f2-b81c-4835-989c-57b155f04d2c" pod="openshift-marketplace/redhat-marketplace-462jr" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-462jr\": dial tcp 38.102.83.212:6443: connect: connection refused" Jan 27 14:33:14 crc kubenswrapper[4698]: I0127 14:33:14.195249 4698 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.212:6443: connect: connection refused" Jan 27 14:33:14 crc kubenswrapper[4698]: I0127 14:33:14.195587 4698 status_manager.go:851] "Failed to get status for pod" podUID="6947cad8-3436-4bc3-8bda-c2c1a4972402" pod="openshift-marketplace/redhat-operators-mkhfh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-mkhfh\": dial tcp 38.102.83.212:6443: connect: connection refused" Jan 27 14:33:14 crc kubenswrapper[4698]: I0127 14:33:14.196036 4698 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.212:6443: connect: connection refused" Jan 27 14:33:14 crc kubenswrapper[4698]: I0127 14:33:14.207978 4698 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.212:6443: connect: connection refused" Jan 27 14:33:14 crc kubenswrapper[4698]: I0127 14:33:14.208348 4698 status_manager.go:851] "Failed to get status for pod" podUID="530c77f2-b81c-4835-989c-57b155f04d2c" pod="openshift-marketplace/redhat-marketplace-462jr" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-462jr\": dial tcp 38.102.83.212:6443: connect: connection refused" Jan 27 14:33:14 crc kubenswrapper[4698]: I0127 14:33:14.208831 4698 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.212:6443: connect: connection refused" Jan 27 14:33:14 crc kubenswrapper[4698]: I0127 14:33:14.209146 4698 status_manager.go:851] "Failed to get status for pod" podUID="6947cad8-3436-4bc3-8bda-c2c1a4972402" pod="openshift-marketplace/redhat-operators-mkhfh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-mkhfh\": dial tcp 38.102.83.212:6443: connect: connection refused" Jan 27 14:33:14 crc kubenswrapper[4698]: I0127 14:33:14.210889 4698 scope.go:117] "RemoveContainer" 
containerID="aead90a59be5d8f4c184b33b0fb72877c8ac9fa37798f84dc25c3727ca66bf83" Jan 27 14:33:14 crc kubenswrapper[4698]: I0127 14:33:14.231062 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-mkhfh" Jan 27 14:33:14 crc kubenswrapper[4698]: I0127 14:33:14.231745 4698 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.212:6443: connect: connection refused" Jan 27 14:33:14 crc kubenswrapper[4698]: I0127 14:33:14.232073 4698 status_manager.go:851] "Failed to get status for pod" podUID="6947cad8-3436-4bc3-8bda-c2c1a4972402" pod="openshift-marketplace/redhat-operators-mkhfh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-mkhfh\": dial tcp 38.102.83.212:6443: connect: connection refused" Jan 27 14:33:14 crc kubenswrapper[4698]: I0127 14:33:14.232260 4698 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.212:6443: connect: connection refused" Jan 27 14:33:14 crc kubenswrapper[4698]: I0127 14:33:14.232400 4698 status_manager.go:851] "Failed to get status for pod" podUID="530c77f2-b81c-4835-989c-57b155f04d2c" pod="openshift-marketplace/redhat-marketplace-462jr" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-462jr\": dial tcp 38.102.83.212:6443: connect: connection refused" Jan 27 14:33:14 crc kubenswrapper[4698]: I0127 14:33:14.242970 4698 scope.go:117] "RemoveContainer" containerID="7636d7a95cae5f4fceaba819ac0acf2f2e898c9c7641f2d4f94ce5a33a879272" Jan 27 14:33:14 crc kubenswrapper[4698]: E0127 14:33:14.756243 4698 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.212:6443: connect: connection refused" Jan 27 14:33:14 crc kubenswrapper[4698]: E0127 14:33:14.756738 4698 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.212:6443: connect: connection refused" Jan 27 14:33:14 crc kubenswrapper[4698]: E0127 14:33:14.757133 4698 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.212:6443: connect: connection refused" Jan 27 14:33:14 crc kubenswrapper[4698]: E0127 14:33:14.757417 4698 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.212:6443: connect: connection refused" Jan 27 14:33:14 crc kubenswrapper[4698]: E0127 14:33:14.757850 4698 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.212:6443: connect: connection refused" Jan 27 14:33:14 crc kubenswrapper[4698]: 
I0127 14:33:14.757915 4698 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Jan 27 14:33:14 crc kubenswrapper[4698]: E0127 14:33:14.758321 4698 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.212:6443: connect: connection refused" interval="200ms" Jan 27 14:33:14 crc kubenswrapper[4698]: E0127 14:33:14.959843 4698 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.212:6443: connect: connection refused" interval="400ms" Jan 27 14:33:14 crc kubenswrapper[4698]: I0127 14:33:14.994323 4698 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.212:6443: connect: connection refused" Jan 27 14:33:14 crc kubenswrapper[4698]: I0127 14:33:14.994867 4698 status_manager.go:851] "Failed to get status for pod" podUID="6947cad8-3436-4bc3-8bda-c2c1a4972402" pod="openshift-marketplace/redhat-operators-mkhfh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-mkhfh\": dial tcp 38.102.83.212:6443: connect: connection refused" Jan 27 14:33:14 crc kubenswrapper[4698]: I0127 14:33:14.995276 4698 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.212:6443: connect: connection refused" Jan 27 14:33:14 crc kubenswrapper[4698]: I0127 14:33:14.995541 4698 status_manager.go:851] "Failed to get status for pod" podUID="530c77f2-b81c-4835-989c-57b155f04d2c" pod="openshift-marketplace/redhat-marketplace-462jr" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-462jr\": dial tcp 38.102.83.212:6443: connect: connection refused" Jan 27 14:33:15 crc kubenswrapper[4698]: I0127 14:33:15.217331 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Jan 27 14:33:15 crc kubenswrapper[4698]: I0127 14:33:15.222005 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 27 14:33:15 crc kubenswrapper[4698]: I0127 14:33:15.223136 4698 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="b9ee5f5f6bb400359196a28fe605c173c38223c8982a991750c91601aeace150" exitCode=0 Jan 27 14:33:15 crc kubenswrapper[4698]: I0127 14:33:15.223170 4698 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="c9525d1a6372b19bc2f30eb4e8e522833badb6319bac25ecae77b3313acf1b1a" exitCode=0 Jan 27 14:33:15 crc kubenswrapper[4698]: I0127 14:33:15.223240 4698 scope.go:117] "RemoveContainer" 
containerID="9e780429dcca871712b55832b0cd0b3e78b7343569cfa71a87bbb2be1bd3f129" Jan 27 14:33:15 crc kubenswrapper[4698]: I0127 14:33:15.228185 4698 generic.go:334] "Generic (PLEG): container finished" podID="974a0417-d9d3-48b2-931d-c2c3830481db" containerID="4f9e0b0dda6f88323b6e4638a308d0e1c4ec279c61f19c1cc1cfdb541fa1e908" exitCode=0 Jan 27 14:33:15 crc kubenswrapper[4698]: I0127 14:33:15.228265 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"974a0417-d9d3-48b2-931d-c2c3830481db","Type":"ContainerDied","Data":"4f9e0b0dda6f88323b6e4638a308d0e1c4ec279c61f19c1cc1cfdb541fa1e908"} Jan 27 14:33:15 crc kubenswrapper[4698]: I0127 14:33:15.228928 4698 status_manager.go:851] "Failed to get status for pod" podUID="530c77f2-b81c-4835-989c-57b155f04d2c" pod="openshift-marketplace/redhat-marketplace-462jr" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-462jr\": dial tcp 38.102.83.212:6443: connect: connection refused" Jan 27 14:33:15 crc kubenswrapper[4698]: I0127 14:33:15.229135 4698 status_manager.go:851] "Failed to get status for pod" podUID="974a0417-d9d3-48b2-931d-c2c3830481db" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.212:6443: connect: connection refused" Jan 27 14:33:15 crc kubenswrapper[4698]: I0127 14:33:15.229349 4698 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.212:6443: connect: connection refused" Jan 27 14:33:15 crc kubenswrapper[4698]: I0127 14:33:15.229725 4698 status_manager.go:851] "Failed to get status for pod" podUID="6947cad8-3436-4bc3-8bda-c2c1a4972402" pod="openshift-marketplace/redhat-operators-mkhfh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-mkhfh\": dial tcp 38.102.83.212:6443: connect: connection refused" Jan 27 14:33:15 crc kubenswrapper[4698]: I0127 14:33:15.232207 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"ed30d11309c117a633eef0e91b719a43d1f016174846fbff9eca59be72dbab08"} Jan 27 14:33:15 crc kubenswrapper[4698]: E0127 14:33:15.360375 4698 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.212:6443: connect: connection refused" interval="800ms" Jan 27 14:33:15 crc kubenswrapper[4698]: I0127 14:33:15.378794 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-9t9sp" Jan 27 14:33:15 crc kubenswrapper[4698]: I0127 14:33:15.379524 4698 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.212:6443: connect: connection refused" Jan 27 14:33:15 crc kubenswrapper[4698]: 
I0127 14:33:15.379768 4698 status_manager.go:851] "Failed to get status for pod" podUID="6947cad8-3436-4bc3-8bda-c2c1a4972402" pod="openshift-marketplace/redhat-operators-mkhfh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-mkhfh\": dial tcp 38.102.83.212:6443: connect: connection refused" Jan 27 14:33:15 crc kubenswrapper[4698]: I0127 14:33:15.379991 4698 status_manager.go:851] "Failed to get status for pod" podUID="7f32c526-aea0-4758-a1ea-d0a694af3573" pod="openshift-marketplace/community-operators-9t9sp" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-9t9sp\": dial tcp 38.102.83.212:6443: connect: connection refused" Jan 27 14:33:15 crc kubenswrapper[4698]: I0127 14:33:15.380238 4698 status_manager.go:851] "Failed to get status for pod" podUID="530c77f2-b81c-4835-989c-57b155f04d2c" pod="openshift-marketplace/redhat-marketplace-462jr" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-462jr\": dial tcp 38.102.83.212:6443: connect: connection refused" Jan 27 14:33:15 crc kubenswrapper[4698]: I0127 14:33:15.380449 4698 status_manager.go:851] "Failed to get status for pod" podUID="974a0417-d9d3-48b2-931d-c2c3830481db" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.212:6443: connect: connection refused" Jan 27 14:33:15 crc kubenswrapper[4698]: I0127 14:33:15.584964 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-cdp6k" Jan 27 14:33:15 crc kubenswrapper[4698]: I0127 14:33:15.585802 4698 status_manager.go:851] "Failed to get status for pod" podUID="6947cad8-3436-4bc3-8bda-c2c1a4972402" pod="openshift-marketplace/redhat-operators-mkhfh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-mkhfh\": dial tcp 38.102.83.212:6443: connect: connection refused" Jan 27 14:33:15 crc kubenswrapper[4698]: I0127 14:33:15.586258 4698 status_manager.go:851] "Failed to get status for pod" podUID="882a9575-2eeb-4f8e-812c-2419b499a07e" pod="openshift-marketplace/certified-operators-cdp6k" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-cdp6k\": dial tcp 38.102.83.212:6443: connect: connection refused" Jan 27 14:33:15 crc kubenswrapper[4698]: I0127 14:33:15.586602 4698 status_manager.go:851] "Failed to get status for pod" podUID="7f32c526-aea0-4758-a1ea-d0a694af3573" pod="openshift-marketplace/community-operators-9t9sp" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-9t9sp\": dial tcp 38.102.83.212:6443: connect: connection refused" Jan 27 14:33:15 crc kubenswrapper[4698]: I0127 14:33:15.587173 4698 status_manager.go:851] "Failed to get status for pod" podUID="530c77f2-b81c-4835-989c-57b155f04d2c" pod="openshift-marketplace/redhat-marketplace-462jr" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-462jr\": dial tcp 38.102.83.212:6443: connect: connection refused" Jan 27 14:33:15 crc kubenswrapper[4698]: I0127 14:33:15.587451 4698 status_manager.go:851] "Failed to get status for pod" podUID="974a0417-d9d3-48b2-931d-c2c3830481db" pod="openshift-kube-apiserver/installer-9-crc" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.212:6443: connect: connection refused" Jan 27 14:33:15 crc kubenswrapper[4698]: I0127 14:33:15.587811 4698 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.212:6443: connect: connection refused" Jan 27 14:33:16 crc kubenswrapper[4698]: E0127 14:33:16.160810 4698 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.212:6443: connect: connection refused" interval="1.6s" Jan 27 14:33:16 crc kubenswrapper[4698]: I0127 14:33:16.248032 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 27 14:33:16 crc kubenswrapper[4698]: I0127 14:33:16.249902 4698 status_manager.go:851] "Failed to get status for pod" podUID="974a0417-d9d3-48b2-931d-c2c3830481db" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.212:6443: connect: connection refused" Jan 27 14:33:16 crc kubenswrapper[4698]: I0127 14:33:16.250579 4698 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.212:6443: connect: connection refused" Jan 27 14:33:16 crc kubenswrapper[4698]: I0127 14:33:16.251170 4698 status_manager.go:851] "Failed to get status for pod" podUID="6947cad8-3436-4bc3-8bda-c2c1a4972402" pod="openshift-marketplace/redhat-operators-mkhfh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-mkhfh\": dial tcp 38.102.83.212:6443: connect: connection refused" Jan 27 14:33:16 crc kubenswrapper[4698]: I0127 14:33:16.251892 4698 status_manager.go:851] "Failed to get status for pod" podUID="882a9575-2eeb-4f8e-812c-2419b499a07e" pod="openshift-marketplace/certified-operators-cdp6k" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-cdp6k\": dial tcp 38.102.83.212:6443: connect: connection refused" Jan 27 14:33:16 crc kubenswrapper[4698]: I0127 14:33:16.252515 4698 status_manager.go:851] "Failed to get status for pod" podUID="7f32c526-aea0-4758-a1ea-d0a694af3573" pod="openshift-marketplace/community-operators-9t9sp" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-9t9sp\": dial tcp 38.102.83.212:6443: connect: connection refused" Jan 27 14:33:16 crc kubenswrapper[4698]: I0127 14:33:16.252984 4698 status_manager.go:851] "Failed to get status for pod" podUID="530c77f2-b81c-4835-989c-57b155f04d2c" pod="openshift-marketplace/redhat-marketplace-462jr" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-462jr\": dial tcp 38.102.83.212:6443: connect: connection refused" Jan 27 14:33:16 crc 
kubenswrapper[4698]: I0127 14:33:16.601852 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 27 14:33:16 crc kubenswrapper[4698]: I0127 14:33:16.602651 4698 status_manager.go:851] "Failed to get status for pod" podUID="974a0417-d9d3-48b2-931d-c2c3830481db" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.212:6443: connect: connection refused" Jan 27 14:33:16 crc kubenswrapper[4698]: I0127 14:33:16.603055 4698 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.212:6443: connect: connection refused" Jan 27 14:33:16 crc kubenswrapper[4698]: I0127 14:33:16.604238 4698 status_manager.go:851] "Failed to get status for pod" podUID="6947cad8-3436-4bc3-8bda-c2c1a4972402" pod="openshift-marketplace/redhat-operators-mkhfh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-mkhfh\": dial tcp 38.102.83.212:6443: connect: connection refused" Jan 27 14:33:16 crc kubenswrapper[4698]: I0127 14:33:16.604426 4698 status_manager.go:851] "Failed to get status for pod" podUID="882a9575-2eeb-4f8e-812c-2419b499a07e" pod="openshift-marketplace/certified-operators-cdp6k" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-cdp6k\": dial tcp 38.102.83.212:6443: connect: connection refused" Jan 27 14:33:16 crc kubenswrapper[4698]: I0127 14:33:16.604615 4698 status_manager.go:851] "Failed to get status for pod" podUID="7f32c526-aea0-4758-a1ea-d0a694af3573" pod="openshift-marketplace/community-operators-9t9sp" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-9t9sp\": dial tcp 38.102.83.212:6443: connect: connection refused" Jan 27 14:33:16 crc kubenswrapper[4698]: I0127 14:33:16.604822 4698 status_manager.go:851] "Failed to get status for pod" podUID="530c77f2-b81c-4835-989c-57b155f04d2c" pod="openshift-marketplace/redhat-marketplace-462jr" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-462jr\": dial tcp 38.102.83.212:6443: connect: connection refused" Jan 27 14:33:16 crc kubenswrapper[4698]: I0127 14:33:16.754565 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/974a0417-d9d3-48b2-931d-c2c3830481db-var-lock\") pod \"974a0417-d9d3-48b2-931d-c2c3830481db\" (UID: \"974a0417-d9d3-48b2-931d-c2c3830481db\") " Jan 27 14:33:16 crc kubenswrapper[4698]: I0127 14:33:16.754625 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/974a0417-d9d3-48b2-931d-c2c3830481db-kube-api-access\") pod \"974a0417-d9d3-48b2-931d-c2c3830481db\" (UID: \"974a0417-d9d3-48b2-931d-c2c3830481db\") " Jan 27 14:33:16 crc kubenswrapper[4698]: I0127 14:33:16.754695 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/974a0417-d9d3-48b2-931d-c2c3830481db-var-lock" (OuterVolumeSpecName: "var-lock") pod "974a0417-d9d3-48b2-931d-c2c3830481db" (UID: 
"974a0417-d9d3-48b2-931d-c2c3830481db"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 14:33:16 crc kubenswrapper[4698]: I0127 14:33:16.754769 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/974a0417-d9d3-48b2-931d-c2c3830481db-kubelet-dir\") pod \"974a0417-d9d3-48b2-931d-c2c3830481db\" (UID: \"974a0417-d9d3-48b2-931d-c2c3830481db\") " Jan 27 14:33:16 crc kubenswrapper[4698]: I0127 14:33:16.754853 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/974a0417-d9d3-48b2-931d-c2c3830481db-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "974a0417-d9d3-48b2-931d-c2c3830481db" (UID: "974a0417-d9d3-48b2-931d-c2c3830481db"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 14:33:16 crc kubenswrapper[4698]: I0127 14:33:16.755042 4698 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/974a0417-d9d3-48b2-931d-c2c3830481db-var-lock\") on node \"crc\" DevicePath \"\"" Jan 27 14:33:16 crc kubenswrapper[4698]: I0127 14:33:16.755063 4698 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/974a0417-d9d3-48b2-931d-c2c3830481db-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 27 14:33:16 crc kubenswrapper[4698]: I0127 14:33:16.759161 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/974a0417-d9d3-48b2-931d-c2c3830481db-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "974a0417-d9d3-48b2-931d-c2c3830481db" (UID: "974a0417-d9d3-48b2-931d-c2c3830481db"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:33:16 crc kubenswrapper[4698]: I0127 14:33:16.856711 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/974a0417-d9d3-48b2-931d-c2c3830481db-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 27 14:33:17 crc kubenswrapper[4698]: I0127 14:33:17.257133 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 27 14:33:17 crc kubenswrapper[4698]: I0127 14:33:17.257840 4698 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="dbbe49aa918c711f9ec638a3fb42ef3be38a9ed3bc0cdde639ba8bba11eab4de" exitCode=0 Jan 27 14:33:17 crc kubenswrapper[4698]: I0127 14:33:17.259204 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"974a0417-d9d3-48b2-931d-c2c3830481db","Type":"ContainerDied","Data":"d157f36a4a4ab353ae23c0a92e2d6c643cb00bbbcbc9d11b136e85307a38c6c5"} Jan 27 14:33:17 crc kubenswrapper[4698]: I0127 14:33:17.259240 4698 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 27 14:33:17 crc kubenswrapper[4698]: I0127 14:33:17.259251 4698 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d157f36a4a4ab353ae23c0a92e2d6c643cb00bbbcbc9d11b136e85307a38c6c5" Jan 27 14:33:17 crc kubenswrapper[4698]: I0127 14:33:17.262695 4698 status_manager.go:851] "Failed to get status for pod" podUID="6947cad8-3436-4bc3-8bda-c2c1a4972402" pod="openshift-marketplace/redhat-operators-mkhfh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-mkhfh\": dial tcp 38.102.83.212:6443: connect: connection refused" Jan 27 14:33:17 crc kubenswrapper[4698]: I0127 14:33:17.263078 4698 status_manager.go:851] "Failed to get status for pod" podUID="882a9575-2eeb-4f8e-812c-2419b499a07e" pod="openshift-marketplace/certified-operators-cdp6k" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-cdp6k\": dial tcp 38.102.83.212:6443: connect: connection refused" Jan 27 14:33:17 crc kubenswrapper[4698]: I0127 14:33:17.263251 4698 status_manager.go:851] "Failed to get status for pod" podUID="7f32c526-aea0-4758-a1ea-d0a694af3573" pod="openshift-marketplace/community-operators-9t9sp" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-9t9sp\": dial tcp 38.102.83.212:6443: connect: connection refused" Jan 27 14:33:17 crc kubenswrapper[4698]: I0127 14:33:17.263536 4698 status_manager.go:851] "Failed to get status for pod" podUID="530c77f2-b81c-4835-989c-57b155f04d2c" pod="openshift-marketplace/redhat-marketplace-462jr" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-462jr\": dial tcp 38.102.83.212:6443: connect: connection refused" Jan 27 14:33:17 crc kubenswrapper[4698]: I0127 14:33:17.263913 4698 status_manager.go:851] "Failed to get status for pod" podUID="974a0417-d9d3-48b2-931d-c2c3830481db" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.212:6443: connect: connection refused" Jan 27 14:33:17 crc kubenswrapper[4698]: I0127 14:33:17.264117 4698 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.212:6443: connect: connection refused" Jan 27 14:33:17 crc kubenswrapper[4698]: I0127 14:33:17.529431 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 27 14:33:17 crc kubenswrapper[4698]: I0127 14:33:17.530557 4698 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 14:33:17 crc kubenswrapper[4698]: I0127 14:33:17.531235 4698 status_manager.go:851] "Failed to get status for pod" podUID="530c77f2-b81c-4835-989c-57b155f04d2c" pod="openshift-marketplace/redhat-marketplace-462jr" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-462jr\": dial tcp 38.102.83.212:6443: connect: connection refused" Jan 27 14:33:17 crc kubenswrapper[4698]: I0127 14:33:17.531592 4698 status_manager.go:851] "Failed to get status for pod" podUID="974a0417-d9d3-48b2-931d-c2c3830481db" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.212:6443: connect: connection refused" Jan 27 14:33:17 crc kubenswrapper[4698]: I0127 14:33:17.531992 4698 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.212:6443: connect: connection refused" Jan 27 14:33:17 crc kubenswrapper[4698]: I0127 14:33:17.532242 4698 status_manager.go:851] "Failed to get status for pod" podUID="6947cad8-3436-4bc3-8bda-c2c1a4972402" pod="openshift-marketplace/redhat-operators-mkhfh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-mkhfh\": dial tcp 38.102.83.212:6443: connect: connection refused" Jan 27 14:33:17 crc kubenswrapper[4698]: I0127 14:33:17.532554 4698 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.212:6443: connect: connection refused" Jan 27 14:33:17 crc kubenswrapper[4698]: I0127 14:33:17.532816 4698 status_manager.go:851] "Failed to get status for pod" podUID="882a9575-2eeb-4f8e-812c-2419b499a07e" pod="openshift-marketplace/certified-operators-cdp6k" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-cdp6k\": dial tcp 38.102.83.212:6443: connect: connection refused" Jan 27 14:33:17 crc kubenswrapper[4698]: I0127 14:33:17.533082 4698 status_manager.go:851] "Failed to get status for pod" podUID="7f32c526-aea0-4758-a1ea-d0a694af3573" pod="openshift-marketplace/community-operators-9t9sp" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-9t9sp\": dial tcp 38.102.83.212:6443: connect: connection refused" Jan 27 14:33:17 crc kubenswrapper[4698]: I0127 14:33:17.668122 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Jan 27 14:33:17 crc kubenswrapper[4698]: I0127 14:33:17.668227 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Jan 27 14:33:17 crc 
kubenswrapper[4698]: I0127 14:33:17.668265 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Jan 27 14:33:17 crc kubenswrapper[4698]: I0127 14:33:17.668263 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 14:33:17 crc kubenswrapper[4698]: I0127 14:33:17.668389 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "cert-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 14:33:17 crc kubenswrapper[4698]: I0127 14:33:17.668406 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 14:33:17 crc kubenswrapper[4698]: I0127 14:33:17.668546 4698 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 27 14:33:17 crc kubenswrapper[4698]: I0127 14:33:17.668558 4698 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") on node \"crc\" DevicePath \"\"" Jan 27 14:33:17 crc kubenswrapper[4698]: I0127 14:33:17.668595 4698 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") on node \"crc\" DevicePath \"\"" Jan 27 14:33:17 crc kubenswrapper[4698]: E0127 14:33:17.762410 4698 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.212:6443: connect: connection refused" interval="3.2s" Jan 27 14:33:18 crc kubenswrapper[4698]: I0127 14:33:18.268181 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 27 14:33:18 crc kubenswrapper[4698]: I0127 14:33:18.269088 4698 scope.go:117] "RemoveContainer" containerID="b9ee5f5f6bb400359196a28fe605c173c38223c8982a991750c91601aeace150" Jan 27 14:33:18 crc kubenswrapper[4698]: I0127 14:33:18.269240 4698 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 14:33:18 crc kubenswrapper[4698]: I0127 14:33:18.288187 4698 scope.go:117] "RemoveContainer" containerID="a526cfa90f2ba7d074aa6007c909e68f72d5e7a4b91d893c955e7460ff208cfb" Jan 27 14:33:18 crc kubenswrapper[4698]: I0127 14:33:18.289022 4698 status_manager.go:851] "Failed to get status for pod" podUID="974a0417-d9d3-48b2-931d-c2c3830481db" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.212:6443: connect: connection refused" Jan 27 14:33:18 crc kubenswrapper[4698]: I0127 14:33:18.289285 4698 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.212:6443: connect: connection refused" Jan 27 14:33:18 crc kubenswrapper[4698]: I0127 14:33:18.289509 4698 status_manager.go:851] "Failed to get status for pod" podUID="6947cad8-3436-4bc3-8bda-c2c1a4972402" pod="openshift-marketplace/redhat-operators-mkhfh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-mkhfh\": dial tcp 38.102.83.212:6443: connect: connection refused" Jan 27 14:33:18 crc kubenswrapper[4698]: I0127 14:33:18.289776 4698 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.212:6443: connect: connection refused" Jan 27 14:33:18 crc kubenswrapper[4698]: I0127 14:33:18.290156 4698 status_manager.go:851] "Failed to get status for pod" podUID="882a9575-2eeb-4f8e-812c-2419b499a07e" pod="openshift-marketplace/certified-operators-cdp6k" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-cdp6k\": dial tcp 38.102.83.212:6443: connect: connection refused" Jan 27 14:33:18 crc kubenswrapper[4698]: I0127 14:33:18.290483 4698 status_manager.go:851] "Failed to get status for pod" podUID="7f32c526-aea0-4758-a1ea-d0a694af3573" pod="openshift-marketplace/community-operators-9t9sp" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-9t9sp\": dial tcp 38.102.83.212:6443: connect: connection refused" Jan 27 14:33:18 crc kubenswrapper[4698]: I0127 14:33:18.290727 4698 status_manager.go:851] "Failed to get status for pod" podUID="530c77f2-b81c-4835-989c-57b155f04d2c" pod="openshift-marketplace/redhat-marketplace-462jr" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-462jr\": dial tcp 38.102.83.212:6443: connect: connection refused" Jan 27 14:33:18 crc kubenswrapper[4698]: I0127 14:33:18.303740 4698 scope.go:117] "RemoveContainer" containerID="c9525d1a6372b19bc2f30eb4e8e522833badb6319bac25ecae77b3313acf1b1a" Jan 27 14:33:18 crc kubenswrapper[4698]: I0127 14:33:18.325386 4698 scope.go:117] "RemoveContainer" containerID="5e384a95fdadc32733a7213ba468af64a1a33e7dd08722cbffde3ad0c461d8ba" Jan 27 14:33:18 crc kubenswrapper[4698]: I0127 14:33:18.339725 4698 scope.go:117] "RemoveContainer" 
containerID="dbbe49aa918c711f9ec638a3fb42ef3be38a9ed3bc0cdde639ba8bba11eab4de" Jan 27 14:33:18 crc kubenswrapper[4698]: I0127 14:33:18.354759 4698 scope.go:117] "RemoveContainer" containerID="5e1a3e5a71b3094750c736335cd70f45977566e58702231351d9d6b195a0cb4b" Jan 27 14:33:18 crc kubenswrapper[4698]: I0127 14:33:18.372888 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-9m8xd" Jan 27 14:33:18 crc kubenswrapper[4698]: I0127 14:33:18.373430 4698 status_manager.go:851] "Failed to get status for pod" podUID="6947cad8-3436-4bc3-8bda-c2c1a4972402" pod="openshift-marketplace/redhat-operators-mkhfh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-mkhfh\": dial tcp 38.102.83.212:6443: connect: connection refused" Jan 27 14:33:18 crc kubenswrapper[4698]: I0127 14:33:18.373849 4698 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.212:6443: connect: connection refused" Jan 27 14:33:18 crc kubenswrapper[4698]: I0127 14:33:18.374129 4698 status_manager.go:851] "Failed to get status for pod" podUID="d62f9471-7fdf-459f-8e3b-cadad2b6a542" pod="openshift-marketplace/redhat-operators-9m8xd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-9m8xd\": dial tcp 38.102.83.212:6443: connect: connection refused" Jan 27 14:33:18 crc kubenswrapper[4698]: I0127 14:33:18.374420 4698 status_manager.go:851] "Failed to get status for pod" podUID="882a9575-2eeb-4f8e-812c-2419b499a07e" pod="openshift-marketplace/certified-operators-cdp6k" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-cdp6k\": dial tcp 38.102.83.212:6443: connect: connection refused" Jan 27 14:33:18 crc kubenswrapper[4698]: I0127 14:33:18.374687 4698 status_manager.go:851] "Failed to get status for pod" podUID="7f32c526-aea0-4758-a1ea-d0a694af3573" pod="openshift-marketplace/community-operators-9t9sp" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-9t9sp\": dial tcp 38.102.83.212:6443: connect: connection refused" Jan 27 14:33:18 crc kubenswrapper[4698]: I0127 14:33:18.374953 4698 status_manager.go:851] "Failed to get status for pod" podUID="530c77f2-b81c-4835-989c-57b155f04d2c" pod="openshift-marketplace/redhat-marketplace-462jr" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-462jr\": dial tcp 38.102.83.212:6443: connect: connection refused" Jan 27 14:33:18 crc kubenswrapper[4698]: I0127 14:33:18.375155 4698 status_manager.go:851] "Failed to get status for pod" podUID="974a0417-d9d3-48b2-931d-c2c3830481db" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.212:6443: connect: connection refused" Jan 27 14:33:18 crc kubenswrapper[4698]: I0127 14:33:18.375390 4698 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": 
dial tcp 38.102.83.212:6443: connect: connection refused" Jan 27 14:33:18 crc kubenswrapper[4698]: I0127 14:33:18.412102 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-9m8xd" Jan 27 14:33:18 crc kubenswrapper[4698]: I0127 14:33:18.412921 4698 status_manager.go:851] "Failed to get status for pod" podUID="6947cad8-3436-4bc3-8bda-c2c1a4972402" pod="openshift-marketplace/redhat-operators-mkhfh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-mkhfh\": dial tcp 38.102.83.212:6443: connect: connection refused" Jan 27 14:33:18 crc kubenswrapper[4698]: I0127 14:33:18.414554 4698 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.212:6443: connect: connection refused" Jan 27 14:33:18 crc kubenswrapper[4698]: I0127 14:33:18.415150 4698 status_manager.go:851] "Failed to get status for pod" podUID="d62f9471-7fdf-459f-8e3b-cadad2b6a542" pod="openshift-marketplace/redhat-operators-9m8xd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-9m8xd\": dial tcp 38.102.83.212:6443: connect: connection refused" Jan 27 14:33:18 crc kubenswrapper[4698]: I0127 14:33:18.415581 4698 status_manager.go:851] "Failed to get status for pod" podUID="882a9575-2eeb-4f8e-812c-2419b499a07e" pod="openshift-marketplace/certified-operators-cdp6k" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-cdp6k\": dial tcp 38.102.83.212:6443: connect: connection refused" Jan 27 14:33:18 crc kubenswrapper[4698]: I0127 14:33:18.415958 4698 status_manager.go:851] "Failed to get status for pod" podUID="7f32c526-aea0-4758-a1ea-d0a694af3573" pod="openshift-marketplace/community-operators-9t9sp" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-9t9sp\": dial tcp 38.102.83.212:6443: connect: connection refused" Jan 27 14:33:18 crc kubenswrapper[4698]: I0127 14:33:18.416423 4698 status_manager.go:851] "Failed to get status for pod" podUID="530c77f2-b81c-4835-989c-57b155f04d2c" pod="openshift-marketplace/redhat-marketplace-462jr" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-462jr\": dial tcp 38.102.83.212:6443: connect: connection refused" Jan 27 14:33:18 crc kubenswrapper[4698]: I0127 14:33:18.417421 4698 status_manager.go:851] "Failed to get status for pod" podUID="974a0417-d9d3-48b2-931d-c2c3830481db" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.212:6443: connect: connection refused" Jan 27 14:33:18 crc kubenswrapper[4698]: I0127 14:33:18.417839 4698 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.212:6443: connect: connection refused" Jan 27 14:33:19 crc kubenswrapper[4698]: I0127 14:33:19.001358 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="f4b27818a5e8e43d0dc095d08835c792" path="/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/volumes" Jan 27 14:33:20 crc kubenswrapper[4698]: E0127 14:33:20.010575 4698 desired_state_of_world_populator.go:312] "Error processing volume" err="error processing PVC openshift-image-registry/crc-image-registry-storage: failed to fetch PVC from API server: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/persistentvolumeclaims/crc-image-registry-storage\": dial tcp 38.102.83.212:6443: connect: connection refused" pod="openshift-image-registry/image-registry-697d97f7c8-vz5fp" volumeName="registry-storage" Jan 27 14:33:20 crc kubenswrapper[4698]: E0127 14:33:20.963725 4698 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.212:6443: connect: connection refused" interval="6.4s" Jan 27 14:33:22 crc kubenswrapper[4698]: E0127 14:33:22.984926 4698 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.212:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.188e9d0f7c9a6ab1 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-27 14:33:13.443748529 +0000 UTC m=+249.120525994,LastTimestamp:2026-01-27 14:33:13.443748529 +0000 UTC m=+249.120525994,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 27 14:33:24 crc kubenswrapper[4698]: I0127 14:33:24.992045 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 14:33:24 crc kubenswrapper[4698]: I0127 14:33:24.995845 4698 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.212:6443: connect: connection refused" Jan 27 14:33:24 crc kubenswrapper[4698]: I0127 14:33:24.996155 4698 status_manager.go:851] "Failed to get status for pod" podUID="6947cad8-3436-4bc3-8bda-c2c1a4972402" pod="openshift-marketplace/redhat-operators-mkhfh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-mkhfh\": dial tcp 38.102.83.212:6443: connect: connection refused" Jan 27 14:33:24 crc kubenswrapper[4698]: I0127 14:33:24.996380 4698 status_manager.go:851] "Failed to get status for pod" podUID="d62f9471-7fdf-459f-8e3b-cadad2b6a542" pod="openshift-marketplace/redhat-operators-9m8xd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-9m8xd\": dial tcp 38.102.83.212:6443: connect: connection refused" Jan 27 14:33:24 crc kubenswrapper[4698]: I0127 14:33:24.997529 4698 status_manager.go:851] "Failed to get status for pod" podUID="882a9575-2eeb-4f8e-812c-2419b499a07e" pod="openshift-marketplace/certified-operators-cdp6k" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-cdp6k\": dial tcp 38.102.83.212:6443: connect: connection refused" Jan 27 14:33:24 crc kubenswrapper[4698]: I0127 14:33:24.998153 4698 status_manager.go:851] "Failed to get status for pod" podUID="7f32c526-aea0-4758-a1ea-d0a694af3573" pod="openshift-marketplace/community-operators-9t9sp" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-9t9sp\": dial tcp 38.102.83.212:6443: connect: connection refused" Jan 27 14:33:24 crc kubenswrapper[4698]: I0127 14:33:24.998699 4698 status_manager.go:851] "Failed to get status for pod" podUID="530c77f2-b81c-4835-989c-57b155f04d2c" pod="openshift-marketplace/redhat-marketplace-462jr" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-462jr\": dial tcp 38.102.83.212:6443: connect: connection refused" Jan 27 14:33:24 crc kubenswrapper[4698]: I0127 14:33:24.998943 4698 status_manager.go:851] "Failed to get status for pod" podUID="974a0417-d9d3-48b2-931d-c2c3830481db" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.212:6443: connect: connection refused" Jan 27 14:33:24 crc kubenswrapper[4698]: I0127 14:33:24.999269 4698 status_manager.go:851] "Failed to get status for pod" podUID="6947cad8-3436-4bc3-8bda-c2c1a4972402" pod="openshift-marketplace/redhat-operators-mkhfh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-mkhfh\": dial tcp 38.102.83.212:6443: connect: connection refused" Jan 27 14:33:24 crc kubenswrapper[4698]: I0127 14:33:24.999463 4698 status_manager.go:851] "Failed to get status for pod" podUID="d62f9471-7fdf-459f-8e3b-cadad2b6a542" pod="openshift-marketplace/redhat-operators-9m8xd" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-9m8xd\": dial tcp 38.102.83.212:6443: connect: connection refused" Jan 27 14:33:25 crc kubenswrapper[4698]: I0127 14:33:24.999661 4698 status_manager.go:851] "Failed to get status for pod" podUID="882a9575-2eeb-4f8e-812c-2419b499a07e" pod="openshift-marketplace/certified-operators-cdp6k" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-cdp6k\": dial tcp 38.102.83.212:6443: connect: connection refused" Jan 27 14:33:25 crc kubenswrapper[4698]: I0127 14:33:25.001766 4698 status_manager.go:851] "Failed to get status for pod" podUID="7f32c526-aea0-4758-a1ea-d0a694af3573" pod="openshift-marketplace/community-operators-9t9sp" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-9t9sp\": dial tcp 38.102.83.212:6443: connect: connection refused" Jan 27 14:33:25 crc kubenswrapper[4698]: I0127 14:33:25.002004 4698 status_manager.go:851] "Failed to get status for pod" podUID="530c77f2-b81c-4835-989c-57b155f04d2c" pod="openshift-marketplace/redhat-marketplace-462jr" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-462jr\": dial tcp 38.102.83.212:6443: connect: connection refused" Jan 27 14:33:25 crc kubenswrapper[4698]: I0127 14:33:25.002303 4698 status_manager.go:851] "Failed to get status for pod" podUID="974a0417-d9d3-48b2-931d-c2c3830481db" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.212:6443: connect: connection refused" Jan 27 14:33:25 crc kubenswrapper[4698]: I0127 14:33:25.002573 4698 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.212:6443: connect: connection refused" Jan 27 14:33:25 crc kubenswrapper[4698]: I0127 14:33:25.009706 4698 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="0ce1118e-e5ad-4adb-8d50-c758116b45ec" Jan 27 14:33:25 crc kubenswrapper[4698]: I0127 14:33:25.009736 4698 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="0ce1118e-e5ad-4adb-8d50-c758116b45ec" Jan 27 14:33:25 crc kubenswrapper[4698]: E0127 14:33:25.010096 4698 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.212:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 14:33:25 crc kubenswrapper[4698]: I0127 14:33:25.010693 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 14:33:25 crc kubenswrapper[4698]: W0127 14:33:25.031687 4698 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod71bb4a3aecc4ba5b26c4b7318770ce13.slice/crio-951996c2c88118eb8fa76ed7ce16ba9c5e3e118c8d2f26ea821a4329709740a5 WatchSource:0}: Error finding container 951996c2c88118eb8fa76ed7ce16ba9c5e3e118c8d2f26ea821a4329709740a5: Status 404 returned error can't find the container with id 951996c2c88118eb8fa76ed7ce16ba9c5e3e118c8d2f26ea821a4329709740a5 Jan 27 14:33:25 crc kubenswrapper[4698]: I0127 14:33:25.309525 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"b7705a7ca52941adeb04b4af357505caf8172f6305772294ee7c46337cc44f63"} Jan 27 14:33:25 crc kubenswrapper[4698]: I0127 14:33:25.309905 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"951996c2c88118eb8fa76ed7ce16ba9c5e3e118c8d2f26ea821a4329709740a5"} Jan 27 14:33:25 crc kubenswrapper[4698]: I0127 14:33:25.310149 4698 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="0ce1118e-e5ad-4adb-8d50-c758116b45ec" Jan 27 14:33:25 crc kubenswrapper[4698]: I0127 14:33:25.310167 4698 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="0ce1118e-e5ad-4adb-8d50-c758116b45ec" Jan 27 14:33:25 crc kubenswrapper[4698]: E0127 14:33:25.310417 4698 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.212:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 14:33:25 crc kubenswrapper[4698]: I0127 14:33:25.310654 4698 status_manager.go:851] "Failed to get status for pod" podUID="d62f9471-7fdf-459f-8e3b-cadad2b6a542" pod="openshift-marketplace/redhat-operators-9m8xd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-9m8xd\": dial tcp 38.102.83.212:6443: connect: connection refused" Jan 27 14:33:25 crc kubenswrapper[4698]: I0127 14:33:25.311451 4698 status_manager.go:851] "Failed to get status for pod" podUID="882a9575-2eeb-4f8e-812c-2419b499a07e" pod="openshift-marketplace/certified-operators-cdp6k" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-cdp6k\": dial tcp 38.102.83.212:6443: connect: connection refused" Jan 27 14:33:25 crc kubenswrapper[4698]: I0127 14:33:25.311724 4698 status_manager.go:851] "Failed to get status for pod" podUID="7f32c526-aea0-4758-a1ea-d0a694af3573" pod="openshift-marketplace/community-operators-9t9sp" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-9t9sp\": dial tcp 38.102.83.212:6443: connect: connection refused" Jan 27 14:33:25 crc kubenswrapper[4698]: I0127 14:33:25.312914 4698 status_manager.go:851] "Failed to get status for pod" podUID="530c77f2-b81c-4835-989c-57b155f04d2c" pod="openshift-marketplace/redhat-marketplace-462jr" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-462jr\": dial tcp 
Jan 27 14:33:25 crc kubenswrapper[4698]: I0127 14:33:25.313444 4698 status_manager.go:851] "Failed to get status for pod" podUID="974a0417-d9d3-48b2-931d-c2c3830481db" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.212:6443: connect: connection refused"
Jan 27 14:33:25 crc kubenswrapper[4698]: I0127 14:33:25.313866 4698 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.212:6443: connect: connection refused"
Jan 27 14:33:25 crc kubenswrapper[4698]: I0127 14:33:25.314277 4698 status_manager.go:851] "Failed to get status for pod" podUID="6947cad8-3436-4bc3-8bda-c2c1a4972402" pod="openshift-marketplace/redhat-operators-mkhfh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-mkhfh\": dial tcp 38.102.83.212:6443: connect: connection refused"
Jan 27 14:33:26 crc kubenswrapper[4698]: I0127 14:33:26.316491 4698 generic.go:334] "Generic (PLEG): container finished" podID="71bb4a3aecc4ba5b26c4b7318770ce13" containerID="b7705a7ca52941adeb04b4af357505caf8172f6305772294ee7c46337cc44f63" exitCode=0
Jan 27 14:33:26 crc kubenswrapper[4698]: I0127 14:33:26.316568 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerDied","Data":"b7705a7ca52941adeb04b4af357505caf8172f6305772294ee7c46337cc44f63"}
Jan 27 14:33:26 crc kubenswrapper[4698]: I0127 14:33:26.317260 4698 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="0ce1118e-e5ad-4adb-8d50-c758116b45ec"
Jan 27 14:33:26 crc kubenswrapper[4698]: I0127 14:33:26.317294 4698 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="0ce1118e-e5ad-4adb-8d50-c758116b45ec"
Jan 27 14:33:26 crc kubenswrapper[4698]: I0127 14:33:26.318331 4698 status_manager.go:851] "Failed to get status for pod" podUID="d62f9471-7fdf-459f-8e3b-cadad2b6a542" pod="openshift-marketplace/redhat-operators-9m8xd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-9m8xd\": dial tcp 38.102.83.212:6443: connect: connection refused"
Jan 27 14:33:26 crc kubenswrapper[4698]: E0127 14:33:26.318393 4698 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.212:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 27 14:33:26 crc kubenswrapper[4698]: I0127 14:33:26.318758 4698 status_manager.go:851] "Failed to get status for pod" podUID="882a9575-2eeb-4f8e-812c-2419b499a07e" pod="openshift-marketplace/certified-operators-cdp6k" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-cdp6k\": dial tcp 38.102.83.212:6443: connect: connection refused"
Jan 27 14:33:26 crc kubenswrapper[4698]: I0127 14:33:26.319247 4698 status_manager.go:851] "Failed to get status for pod" podUID="7f32c526-aea0-4758-a1ea-d0a694af3573" pod="openshift-marketplace/community-operators-9t9sp" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-9t9sp\": dial tcp 38.102.83.212:6443: connect: connection refused"
Jan 27 14:33:26 crc kubenswrapper[4698]: I0127 14:33:26.319598 4698 status_manager.go:851] "Failed to get status for pod" podUID="530c77f2-b81c-4835-989c-57b155f04d2c" pod="openshift-marketplace/redhat-marketplace-462jr" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-462jr\": dial tcp 38.102.83.212:6443: connect: connection refused"
Jan 27 14:33:26 crc kubenswrapper[4698]: I0127 14:33:26.319925 4698 status_manager.go:851] "Failed to get status for pod" podUID="974a0417-d9d3-48b2-931d-c2c3830481db" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.212:6443: connect: connection refused"
Jan 27 14:33:26 crc kubenswrapper[4698]: I0127 14:33:26.320248 4698 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.212:6443: connect: connection refused"
Jan 27 14:33:26 crc kubenswrapper[4698]: I0127 14:33:26.320723 4698 status_manager.go:851] "Failed to get status for pod" podUID="6947cad8-3436-4bc3-8bda-c2c1a4972402" pod="openshift-marketplace/redhat-operators-mkhfh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-mkhfh\": dial tcp 38.102.83.212:6443: connect: connection refused"
Jan 27 14:33:27 crc kubenswrapper[4698]: I0127 14:33:27.328313 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"f11a07a6788dc72feabd23357ce079df4f1df3aa6ee817b6180eeb987afc8b91"}
Jan 27 14:33:27 crc kubenswrapper[4698]: I0127 14:33:27.328869 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"132d0087011db9a4461a6f5b074a21d4a7c05e60ae0420e81587443835fba3c4"}
Jan 27 14:33:27 crc kubenswrapper[4698]: I0127 14:33:27.328881 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"3c234c953b7b224c38efc84acd8cb4cd5ebbff670f6ce33ba6b05f63a3f4643e"}
Jan 27 14:33:28 crc kubenswrapper[4698]: I0127 14:33:28.353868 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log"
Jan 27 14:33:28 crc kubenswrapper[4698]: I0127 14:33:28.354213 4698 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="66b4b527d7836ceed419f9b8a1d851047efc8c052c110ac857316a6a73444333" exitCode=1
Jan 27 14:33:28 crc kubenswrapper[4698]: I0127 14:33:28.354316 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"66b4b527d7836ceed419f9b8a1d851047efc8c052c110ac857316a6a73444333"}
Jan 27 14:33:28 crc kubenswrapper[4698]: I0127 14:33:28.354836 4698 scope.go:117] "RemoveContainer" containerID="66b4b527d7836ceed419f9b8a1d851047efc8c052c110ac857316a6a73444333"
Jan 27 14:33:28 crc kubenswrapper[4698]: I0127 14:33:28.361261 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"b9969bf03a4d5040ab46412a39de1948cc4d0efe7e270a85694d5f9b2f517e5d"}
Jan 27 14:33:28 crc kubenswrapper[4698]: I0127 14:33:28.361302 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"21d8fa951258616987c537bc187d452f1b886ff67732922ccc1539842bbec6d8"}
Jan 27 14:33:28 crc kubenswrapper[4698]: I0127 14:33:28.361565 4698 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="0ce1118e-e5ad-4adb-8d50-c758116b45ec"
Jan 27 14:33:28 crc kubenswrapper[4698]: I0127 14:33:28.361588 4698 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="0ce1118e-e5ad-4adb-8d50-c758116b45ec"
Jan 27 14:33:28 crc kubenswrapper[4698]: I0127 14:33:28.361797 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 27 14:33:28 crc kubenswrapper[4698]: I0127 14:33:28.399467 4698 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 27 14:33:28 crc kubenswrapper[4698]: I0127 14:33:28.775326 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 27 14:33:29 crc kubenswrapper[4698]: I0127 14:33:29.370335 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log"
Jan 27 14:33:29 crc kubenswrapper[4698]: I0127 14:33:29.370396 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"0e6ecce289f3689cf89eb7f23ce861bb19e6941dee699b2608e23d14b57afaa2"}
Jan 27 14:33:30 crc kubenswrapper[4698]: I0127 14:33:30.010868 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 27 14:33:30 crc kubenswrapper[4698]: I0127 14:33:30.011504 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 27 14:33:30 crc kubenswrapper[4698]: I0127 14:33:30.017863 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 27 14:33:33 crc kubenswrapper[4698]: I0127 14:33:33.370678 4698 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 27 14:33:33 crc kubenswrapper[4698]: I0127 14:33:33.390616 4698 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="0ce1118e-e5ad-4adb-8d50-c758116b45ec"
Jan 27 14:33:33 crc kubenswrapper[4698]: I0127 14:33:33.390678 4698 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="0ce1118e-e5ad-4adb-8d50-c758116b45ec"
Jan 27 14:33:33 crc kubenswrapper[4698]: I0127 14:33:33.394903 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 27 14:33:34 crc kubenswrapper[4698]: I0127 14:33:34.397194 4698 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="0ce1118e-e5ad-4adb-8d50-c758116b45ec"
Jan 27 14:33:34 crc kubenswrapper[4698]: I0127 14:33:34.397250 4698 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="0ce1118e-e5ad-4adb-8d50-c758116b45ec"
Jan 27 14:33:35 crc kubenswrapper[4698]: I0127 14:33:35.026455 4698 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="5b25a832-304d-4389-9e53-fa980e0cca0e"
Jan 27 14:33:36 crc kubenswrapper[4698]: I0127 14:33:36.891848 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 27 14:33:36 crc kubenswrapper[4698]: I0127 14:33:36.895708 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 27 14:33:37 crc kubenswrapper[4698]: I0127 14:33:37.413162 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 27 14:33:38 crc kubenswrapper[4698]: I0127 14:33:38.420854 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 27 14:33:44 crc kubenswrapper[4698]: I0127 14:33:44.185730 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt"
Jan 27 14:33:44 crc kubenswrapper[4698]: I0127 14:33:44.227664 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt"
Jan 27 14:33:44 crc kubenswrapper[4698]: I0127 14:33:44.428088 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt"
Jan 27 14:33:44 crc kubenswrapper[4698]: I0127 14:33:44.682104 4698 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160
Jan 27 14:33:44 crc kubenswrapper[4698]: I0127 14:33:44.705752 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert"
Jan 27 14:33:44 crc kubenswrapper[4698]: I0127 14:33:44.798654 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config"
Jan 27 14:33:45 crc kubenswrapper[4698]: I0127 14:33:45.483112 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key"
Jan 27 14:33:45 crc kubenswrapper[4698]: I0127 14:33:45.563419 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config"
Jan 27 14:33:45 crc kubenswrapper[4698]: I0127 14:33:45.613455 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt"
Jan 27 14:33:45 crc kubenswrapper[4698]: I0127 14:33:45.624613 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh"
Jan 27 14:33:45 crc kubenswrapper[4698]: I0127 14:33:45.849492 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert"
Jan 27 14:33:46 crc kubenswrapper[4698]: I0127 14:33:46.025058 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy"
Jan 27 14:33:46 crc kubenswrapper[4698]: I0127 14:33:46.174300 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt"
Jan 27 14:33:46 crc kubenswrapper[4698]: I0127 14:33:46.187401 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls"
Jan 27 14:33:46 crc kubenswrapper[4698]: I0127 14:33:46.222583 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt"
Jan 27 14:33:46 crc kubenswrapper[4698]: I0127 14:33:46.285408 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw"
Jan 27 14:33:46 crc kubenswrapper[4698]: I0127 14:33:46.390877 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images"
Jan 27 14:33:46 crc kubenswrapper[4698]: I0127 14:33:46.439621 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert"
Jan 27 14:33:46 crc kubenswrapper[4698]: I0127 14:33:46.499433 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1"
Jan 27 14:33:46 crc kubenswrapper[4698]: I0127 14:33:46.642572 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt"
Jan 27 14:33:46 crc kubenswrapper[4698]: I0127 14:33:46.837577 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert"
Jan 27 14:33:47 crc kubenswrapper[4698]: I0127 14:33:47.005860 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r"
Jan 27 14:33:47 crc kubenswrapper[4698]: I0127 14:33:47.012558 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7"
Jan 27 14:33:47 crc kubenswrapper[4698]: I0127 14:33:47.134463 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides"
Jan 27 14:33:47 crc kubenswrapper[4698]: I0127 14:33:47.166035 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc"
Jan 27 14:33:47 crc kubenswrapper[4698]: I0127 14:33:47.260342 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig"
Jan 27 14:33:47 crc kubenswrapper[4698]: I0127 14:33:47.304280 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr"
Jan 27 14:33:47 crc kubenswrapper[4698]: I0127 14:33:47.307245 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy"
Jan 27 14:33:47 crc kubenswrapper[4698]: I0127 14:33:47.356927 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5"
Jan 27 14:33:47 crc kubenswrapper[4698]: I0127 14:33:47.539382 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff"
Jan 27 14:33:47 crc kubenswrapper[4698]: I0127 14:33:47.630668 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config"
Jan 27 14:33:47 crc kubenswrapper[4698]: I0127 14:33:47.657204 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert"
Jan 27 14:33:48 crc kubenswrapper[4698]: I0127 14:33:48.004042 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt"
Jan 27 14:33:48 crc kubenswrapper[4698]: I0127 14:33:48.033383 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert"
Jan 27 14:33:48 crc kubenswrapper[4698]: I0127 14:33:48.043952 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c"
Jan 27 14:33:48 crc kubenswrapper[4698]: I0127 14:33:48.093852 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz"
Jan 27 14:33:48 crc kubenswrapper[4698]: I0127 14:33:48.212211 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources"
Jan 27 14:33:48 crc kubenswrapper[4698]: I0127 14:33:48.424344 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4"
Jan 27 14:33:48 crc kubenswrapper[4698]: I0127 14:33:48.426397 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt"
Jan 27 14:33:48 crc kubenswrapper[4698]: I0127 14:33:48.475213 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk"
Jan 27 14:33:48 crc kubenswrapper[4698]: I0127 14:33:48.477950 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls"
Jan 27 14:33:48 crc kubenswrapper[4698]: I0127 14:33:48.586692 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87"
Jan 27 14:33:48 crc kubenswrapper[4698]: I0127 14:33:48.658539 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt"
Jan 27 14:33:48 crc kubenswrapper[4698]: I0127 14:33:48.724231 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls"
Jan 27 14:33:48 crc kubenswrapper[4698]: I0127 14:33:48.843545 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt"
Jan 27 14:33:48 crc kubenswrapper[4698]: I0127 14:33:48.858952 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config"
Jan 27 14:33:48 crc kubenswrapper[4698]: I0127 14:33:48.895297 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx"
Jan 27 14:33:48 crc kubenswrapper[4698]: I0127 14:33:48.927884 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2"
Jan 27 14:33:48 crc kubenswrapper[4698]: I0127 14:33:48.957029 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt"
Jan 27 14:33:49 crc kubenswrapper[4698]: I0127 14:33:49.056771 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert"
Jan 27 14:33:49 crc kubenswrapper[4698]: I0127 14:33:49.213836 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle"
Jan 27 14:33:49 crc kubenswrapper[4698]: I0127 14:33:49.218028 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert"
Jan 27 14:33:49 crc kubenswrapper[4698]: I0127 14:33:49.237412 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template"
Jan 27 14:33:49 crc kubenswrapper[4698]: I0127 14:33:49.254141 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn"
Jan 27 14:33:49 crc kubenswrapper[4698]: I0127 14:33:49.282860 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt"
Jan 27 14:33:49 crc kubenswrapper[4698]: I0127 14:33:49.287964 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle"
Jan 27 14:33:49 crc kubenswrapper[4698]: I0127 14:33:49.339379 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle"
Jan 27 14:33:49 crc kubenswrapper[4698]: I0127 14:33:49.404794 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt"
Jan 27 14:33:49 crc kubenswrapper[4698]: I0127 14:33:49.576786 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics"
Jan 27 14:33:49 crc kubenswrapper[4698]: I0127 14:33:49.651943 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj"
Jan 27 14:33:49 crc kubenswrapper[4698]: I0127 14:33:49.666551 4698 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66
Jan 27 14:33:49 crc kubenswrapper[4698]: I0127 14:33:49.667983 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client"
Jan 27 14:33:49 crc kubenswrapper[4698]: I0127 14:33:49.674649 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podStartSLOduration=36.674616431 podStartE2EDuration="36.674616431s" podCreationTimestamp="2026-01-27 14:33:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:33:33.472618992 +0000 UTC m=+269.149396457" watchObservedRunningTime="2026-01-27 14:33:49.674616431 +0000 UTC m=+285.351393896"
Jan 27 14:33:49 crc kubenswrapper[4698]: I0127 14:33:49.676534 4698 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc","openshift-marketplace/redhat-marketplace-462jr"]
source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc","openshift-marketplace/redhat-marketplace-462jr"] Jan 27 14:33:49 crc kubenswrapper[4698]: I0127 14:33:49.676588 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 27 14:33:49 crc kubenswrapper[4698]: I0127 14:33:49.681539 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 14:33:49 crc kubenswrapper[4698]: I0127 14:33:49.695205 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=16.695178061 podStartE2EDuration="16.695178061s" podCreationTimestamp="2026-01-27 14:33:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:33:49.693305719 +0000 UTC m=+285.370083184" watchObservedRunningTime="2026-01-27 14:33:49.695178061 +0000 UTC m=+285.371955526" Jan 27 14:33:49 crc kubenswrapper[4698]: I0127 14:33:49.722360 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Jan 27 14:33:49 crc kubenswrapper[4698]: I0127 14:33:49.731277 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Jan 27 14:33:49 crc kubenswrapper[4698]: I0127 14:33:49.817136 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Jan 27 14:33:49 crc kubenswrapper[4698]: I0127 14:33:49.823827 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Jan 27 14:33:49 crc kubenswrapper[4698]: I0127 14:33:49.890493 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 27 14:33:49 crc kubenswrapper[4698]: I0127 14:33:49.914876 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Jan 27 14:33:49 crc kubenswrapper[4698]: I0127 14:33:49.974477 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Jan 27 14:33:49 crc kubenswrapper[4698]: I0127 14:33:49.978031 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Jan 27 14:33:50 crc kubenswrapper[4698]: I0127 14:33:50.028560 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Jan 27 14:33:50 crc kubenswrapper[4698]: I0127 14:33:50.028833 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Jan 27 14:33:50 crc kubenswrapper[4698]: I0127 14:33:50.052219 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Jan 27 14:33:50 crc kubenswrapper[4698]: I0127 14:33:50.060822 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Jan 27 14:33:50 crc kubenswrapper[4698]: I0127 14:33:50.199310 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Jan 27 14:33:50 crc 
kubenswrapper[4698]: I0127 14:33:50.268552 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Jan 27 14:33:50 crc kubenswrapper[4698]: I0127 14:33:50.346038 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 27 14:33:50 crc kubenswrapper[4698]: I0127 14:33:50.376346 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Jan 27 14:33:50 crc kubenswrapper[4698]: I0127 14:33:50.385073 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Jan 27 14:33:50 crc kubenswrapper[4698]: I0127 14:33:50.418788 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Jan 27 14:33:50 crc kubenswrapper[4698]: I0127 14:33:50.440734 4698 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Jan 27 14:33:50 crc kubenswrapper[4698]: I0127 14:33:50.459710 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Jan 27 14:33:50 crc kubenswrapper[4698]: I0127 14:33:50.573262 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Jan 27 14:33:50 crc kubenswrapper[4698]: I0127 14:33:50.575723 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Jan 27 14:33:50 crc kubenswrapper[4698]: I0127 14:33:50.575849 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Jan 27 14:33:50 crc kubenswrapper[4698]: I0127 14:33:50.601743 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Jan 27 14:33:50 crc kubenswrapper[4698]: I0127 14:33:50.612539 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Jan 27 14:33:50 crc kubenswrapper[4698]: I0127 14:33:50.653207 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Jan 27 14:33:50 crc kubenswrapper[4698]: I0127 14:33:50.710669 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Jan 27 14:33:50 crc kubenswrapper[4698]: I0127 14:33:50.741484 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Jan 27 14:33:50 crc kubenswrapper[4698]: I0127 14:33:50.853818 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Jan 27 14:33:50 crc kubenswrapper[4698]: I0127 14:33:50.954127 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Jan 27 14:33:50 crc kubenswrapper[4698]: I0127 14:33:50.961728 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Jan 27 14:33:50 crc kubenswrapper[4698]: I0127 14:33:50.999151 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="530c77f2-b81c-4835-989c-57b155f04d2c" path="/var/lib/kubelet/pods/530c77f2-b81c-4835-989c-57b155f04d2c/volumes" Jan 27 14:33:51 crc kubenswrapper[4698]: I0127 
14:33:51.298820 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Jan 27 14:33:51 crc kubenswrapper[4698]: I0127 14:33:51.318080 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Jan 27 14:33:51 crc kubenswrapper[4698]: I0127 14:33:51.318278 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 27 14:33:51 crc kubenswrapper[4698]: I0127 14:33:51.351149 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Jan 27 14:33:51 crc kubenswrapper[4698]: I0127 14:33:51.368531 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Jan 27 14:33:51 crc kubenswrapper[4698]: I0127 14:33:51.464782 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Jan 27 14:33:51 crc kubenswrapper[4698]: I0127 14:33:51.469516 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Jan 27 14:33:51 crc kubenswrapper[4698]: I0127 14:33:51.538870 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Jan 27 14:33:51 crc kubenswrapper[4698]: I0127 14:33:51.544530 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Jan 27 14:33:51 crc kubenswrapper[4698]: I0127 14:33:51.557108 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Jan 27 14:33:51 crc kubenswrapper[4698]: I0127 14:33:51.565109 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Jan 27 14:33:51 crc kubenswrapper[4698]: I0127 14:33:51.579753 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Jan 27 14:33:51 crc kubenswrapper[4698]: I0127 14:33:51.616871 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Jan 27 14:33:51 crc kubenswrapper[4698]: I0127 14:33:51.730348 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Jan 27 14:33:51 crc kubenswrapper[4698]: I0127 14:33:51.768390 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Jan 27 14:33:51 crc kubenswrapper[4698]: I0127 14:33:51.861786 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Jan 27 14:33:51 crc kubenswrapper[4698]: I0127 14:33:51.868182 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Jan 27 14:33:51 crc kubenswrapper[4698]: I0127 14:33:51.951607 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 27 14:33:52 crc kubenswrapper[4698]: I0127 14:33:52.033797 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Jan 27 14:33:52 crc kubenswrapper[4698]: I0127 14:33:52.067807 4698 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Jan 27 14:33:52 crc kubenswrapper[4698]: I0127 14:33:52.116187 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Jan 27 14:33:52 crc kubenswrapper[4698]: I0127 14:33:52.145048 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Jan 27 14:33:52 crc kubenswrapper[4698]: I0127 14:33:52.242871 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Jan 27 14:33:52 crc kubenswrapper[4698]: I0127 14:33:52.332237 4698 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Jan 27 14:33:52 crc kubenswrapper[4698]: I0127 14:33:52.335479 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Jan 27 14:33:52 crc kubenswrapper[4698]: I0127 14:33:52.347357 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Jan 27 14:33:52 crc kubenswrapper[4698]: I0127 14:33:52.476201 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Jan 27 14:33:52 crc kubenswrapper[4698]: I0127 14:33:52.498158 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Jan 27 14:33:52 crc kubenswrapper[4698]: I0127 14:33:52.518341 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Jan 27 14:33:52 crc kubenswrapper[4698]: I0127 14:33:52.600551 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Jan 27 14:33:52 crc kubenswrapper[4698]: I0127 14:33:52.630769 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Jan 27 14:33:52 crc kubenswrapper[4698]: I0127 14:33:52.738131 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Jan 27 14:33:52 crc kubenswrapper[4698]: I0127 14:33:52.762334 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Jan 27 14:33:52 crc kubenswrapper[4698]: I0127 14:33:52.825129 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Jan 27 14:33:52 crc kubenswrapper[4698]: I0127 14:33:52.833511 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Jan 27 14:33:52 crc kubenswrapper[4698]: I0127 14:33:52.852579 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Jan 27 14:33:52 crc kubenswrapper[4698]: I0127 14:33:52.930078 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Jan 27 14:33:52 crc kubenswrapper[4698]: I0127 14:33:52.989132 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Jan 27 14:33:52 crc kubenswrapper[4698]: I0127 14:33:52.999411 4698 
Jan 27 14:33:53 crc kubenswrapper[4698]: I0127 14:33:53.052046 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret"
Jan 27 14:33:53 crc kubenswrapper[4698]: I0127 14:33:53.078304 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt"
Jan 27 14:33:53 crc kubenswrapper[4698]: I0127 14:33:53.111145 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert"
Jan 27 14:33:53 crc kubenswrapper[4698]: I0127 14:33:53.133603 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config"
Jan 27 14:33:53 crc kubenswrapper[4698]: I0127 14:33:53.194213 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default"
Jan 27 14:33:53 crc kubenswrapper[4698]: I0127 14:33:53.197285 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt"
Jan 27 14:33:53 crc kubenswrapper[4698]: I0127 14:33:53.324621 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt"
Jan 27 14:33:53 crc kubenswrapper[4698]: I0127 14:33:53.332213 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script"
Jan 27 14:33:53 crc kubenswrapper[4698]: I0127 14:33:53.334391 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt"
Jan 27 14:33:53 crc kubenswrapper[4698]: I0127 14:33:53.357183 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt"
Jan 27 14:33:53 crc kubenswrapper[4698]: I0127 14:33:53.426715 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt"
Jan 27 14:33:53 crc kubenswrapper[4698]: I0127 14:33:53.428938 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client"
Jan 27 14:33:53 crc kubenswrapper[4698]: I0127 14:33:53.503349 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt"
Jan 27 14:33:53 crc kubenswrapper[4698]: I0127 14:33:53.510034 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection"
Jan 27 14:33:53 crc kubenswrapper[4698]: I0127 14:33:53.525684 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt"
Jan 27 14:33:53 crc kubenswrapper[4698]: I0127 14:33:53.679490 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86"
Jan 27 14:33:53 crc kubenswrapper[4698]: I0127 14:33:53.688111 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret"
Jan 27 14:33:53 crc kubenswrapper[4698]: I0127 14:33:53.691552 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config"
Jan 27 14:33:53 crc kubenswrapper[4698]: I0127 14:33:53.739076 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl"
Jan 27 14:33:53 crc kubenswrapper[4698]: I0127 14:33:53.782353 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt"
Jan 27 14:33:53 crc kubenswrapper[4698]: I0127 14:33:53.821840 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd"
Jan 27 14:33:53 crc kubenswrapper[4698]: I0127 14:33:53.874470 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt"
Jan 27 14:33:53 crc kubenswrapper[4698]: I0127 14:33:53.942792 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config"
Jan 27 14:33:54 crc kubenswrapper[4698]: I0127 14:33:54.033567 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist"
Jan 27 14:33:54 crc kubenswrapper[4698]: I0127 14:33:54.054497 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt"
Jan 27 14:33:54 crc kubenswrapper[4698]: I0127 14:33:54.117973 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx"
Jan 27 14:33:54 crc kubenswrapper[4698]: I0127 14:33:54.251561 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt"
Jan 27 14:33:54 crc kubenswrapper[4698]: I0127 14:33:54.316986 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg"
Jan 27 14:33:54 crc kubenswrapper[4698]: I0127 14:33:54.430199 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw"
Jan 27 14:33:54 crc kubenswrapper[4698]: I0127 14:33:54.512357 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt"
Jan 27 14:33:54 crc kubenswrapper[4698]: I0127 14:33:54.532480 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt"
Jan 27 14:33:54 crc kubenswrapper[4698]: I0127 14:33:54.544376 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx"
Jan 27 14:33:54 crc kubenswrapper[4698]: I0127 14:33:54.574067 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca"
Jan 27 14:33:54 crc kubenswrapper[4698]: I0127 14:33:54.675909 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config"
Jan 27 14:33:54 crc kubenswrapper[4698]: I0127 14:33:54.799790 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt"
Jan 27 14:33:54 crc kubenswrapper[4698]: I0127 14:33:54.802861 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert"
Jan 27 14:33:54 crc kubenswrapper[4698]: I0127 14:33:54.861794 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt"
Jan 27 14:33:54 crc kubenswrapper[4698]: I0127 14:33:54.878451 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw"
Jan 27 14:33:54 crc kubenswrapper[4698]: I0127 14:33:54.888833 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4"
Jan 27 14:33:54 crc kubenswrapper[4698]: I0127 14:33:54.915845 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6"
Jan 27 14:33:54 crc kubenswrapper[4698]: I0127 14:33:54.978901 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt"
Jan 27 14:33:55 crc kubenswrapper[4698]: I0127 14:33:55.029469 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z"
Jan 27 14:33:55 crc kubenswrapper[4698]: I0127 14:33:55.065867 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt"
Jan 27 14:33:55 crc kubenswrapper[4698]: I0127 14:33:55.110810 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq"
Jan 27 14:33:55 crc kubenswrapper[4698]: I0127 14:33:55.124453 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle"
Jan 27 14:33:55 crc kubenswrapper[4698]: I0127 14:33:55.224548 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config"
Jan 27 14:33:55 crc kubenswrapper[4698]: I0127 14:33:55.346089 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert"
Jan 27 14:33:55 crc kubenswrapper[4698]: I0127 14:33:55.348981 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv"
Jan 27 14:33:55 crc kubenswrapper[4698]: I0127 14:33:55.417207 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq"
Jan 27 14:33:55 crc kubenswrapper[4698]: I0127 14:33:55.524824 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb"
Jan 27 14:33:55 crc kubenswrapper[4698]: I0127 14:33:55.555432 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt"
Jan 27 14:33:55 crc kubenswrapper[4698]: I0127 14:33:55.566829 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca"
Jan 27 14:33:55 crc kubenswrapper[4698]: I0127 14:33:55.616428 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib"
Jan 27 14:33:55 crc kubenswrapper[4698]: I0127 14:33:55.616479 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl"
Jan 27 14:33:55 crc kubenswrapper[4698]: I0127 14:33:55.624649 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca"
Jan 27 14:33:55 crc kubenswrapper[4698]: I0127 14:33:55.676671 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert"
Jan 27 14:33:55 crc kubenswrapper[4698]: I0127 14:33:55.698959 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls"
Jan 27 14:33:55 crc kubenswrapper[4698]: I0127 14:33:55.792555 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle"
Jan 27 14:33:55 crc kubenswrapper[4698]: I0127 14:33:55.823527 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt"
Jan 27 14:33:55 crc kubenswrapper[4698]: I0127 14:33:55.828429 4698 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"]
Jan 27 14:33:55 crc kubenswrapper[4698]: I0127 14:33:55.828677 4698 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" containerID="cri-o://ed30d11309c117a633eef0e91b719a43d1f016174846fbff9eca59be72dbab08" gracePeriod=5
Jan 27 14:33:55 crc kubenswrapper[4698]: I0127 14:33:55.881890 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1"
Jan 27 14:33:55 crc kubenswrapper[4698]: I0127 14:33:55.888400 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg"
Jan 27 14:33:55 crc kubenswrapper[4698]: I0127 14:33:55.907728 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert"
Jan 27 14:33:55 crc kubenswrapper[4698]: I0127 14:33:55.974454 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt"
Jan 27 14:33:56 crc kubenswrapper[4698]: I0127 14:33:56.239930 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default"
Jan 27 14:33:56 crc kubenswrapper[4698]: I0127 14:33:56.248199 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert"
Jan 27 14:33:56 crc kubenswrapper[4698]: I0127 14:33:56.330504 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt"
Jan 27 14:33:56 crc kubenswrapper[4698]: I0127 14:33:56.387023 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt"
Jan 27 14:33:56 crc kubenswrapper[4698]: I0127 14:33:56.400435 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt"
Jan 27 14:33:56 crc kubenswrapper[4698]: I0127 14:33:56.575728 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert"
Jan 27 14:33:56 crc kubenswrapper[4698]: I0127 14:33:56.603039 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle"
Jan 27 14:33:56 crc kubenswrapper[4698]: I0127 14:33:56.805252 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt"
Jan 27 14:33:56 crc kubenswrapper[4698]: I0127 14:33:56.819804 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle"
Jan 27 14:33:56 crc kubenswrapper[4698]: I0127 14:33:56.922549 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt"
Jan 27 14:33:56 crc kubenswrapper[4698]: I0127 14:33:56.930566 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt"
Jan 27 14:33:56 crc kubenswrapper[4698]: I0127 14:33:56.968311 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls"
Jan 27 14:33:57 crc kubenswrapper[4698]: I0127 14:33:57.023791 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk"
Jan 27 14:33:57 crc kubenswrapper[4698]: I0127 14:33:57.041623 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls"
Jan 27 14:33:57 crc kubenswrapper[4698]: I0127 14:33:57.232577 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt"
Jan 27 14:33:57 crc kubenswrapper[4698]: I0127 14:33:57.247306 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c"
Jan 27 14:33:57 crc kubenswrapper[4698]: I0127 14:33:57.272618 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default"
Jan 27 14:33:57 crc kubenswrapper[4698]: I0127 14:33:57.292167 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls"
Jan 27 14:33:57 crc kubenswrapper[4698]: I0127 14:33:57.378221 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config"
Jan 27 14:33:57 crc kubenswrapper[4698]: I0127 14:33:57.432054 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle"
Jan 27 14:33:57 crc kubenswrapper[4698]: I0127 14:33:57.518373 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt"
Jan 27 14:33:57 crc kubenswrapper[4698]: I0127 14:33:57.797870 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config"
Jan 27 14:33:57 crc kubenswrapper[4698]: I0127 14:33:57.801750 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt"
Jan 27 14:33:57 crc kubenswrapper[4698]: I0127 14:33:57.919984 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls"
Jan 27 14:33:57 crc kubenswrapper[4698]: I0127 14:33:57.975559 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1"
Jan 27 14:33:57 crc kubenswrapper[4698]: I0127 14:33:57.993401 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data"
Jan 27 14:33:58 crc kubenswrapper[4698]: I0127 14:33:58.151350 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt"
Jan 27 14:33:58 crc kubenswrapper[4698]: I0127 14:33:58.161740 4698 reflector.go:368] Caches populated for *v1.ConfigMap from
object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Jan 27 14:33:58 crc kubenswrapper[4698]: I0127 14:33:58.237495 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Jan 27 14:33:58 crc kubenswrapper[4698]: I0127 14:33:58.242670 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Jan 27 14:33:58 crc kubenswrapper[4698]: I0127 14:33:58.343478 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Jan 27 14:33:58 crc kubenswrapper[4698]: I0127 14:33:58.343774 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Jan 27 14:33:58 crc kubenswrapper[4698]: I0127 14:33:58.540092 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Jan 27 14:33:58 crc kubenswrapper[4698]: I0127 14:33:58.573780 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Jan 27 14:33:58 crc kubenswrapper[4698]: I0127 14:33:58.688443 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Jan 27 14:33:58 crc kubenswrapper[4698]: I0127 14:33:58.702480 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Jan 27 14:33:58 crc kubenswrapper[4698]: I0127 14:33:58.772502 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Jan 27 14:33:58 crc kubenswrapper[4698]: I0127 14:33:58.797274 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Jan 27 14:33:58 crc kubenswrapper[4698]: I0127 14:33:58.982665 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Jan 27 14:33:59 crc kubenswrapper[4698]: I0127 14:33:59.068264 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 27 14:33:59 crc kubenswrapper[4698]: I0127 14:33:59.099149 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Jan 27 14:33:59 crc kubenswrapper[4698]: I0127 14:33:59.146954 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Jan 27 14:33:59 crc kubenswrapper[4698]: I0127 14:33:59.239057 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Jan 27 14:33:59 crc kubenswrapper[4698]: I0127 14:33:59.290566 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Jan 27 14:33:59 crc kubenswrapper[4698]: I0127 14:33:59.615194 4698 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Jan 27 14:33:59 crc kubenswrapper[4698]: I0127 14:33:59.642161 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Jan 27 14:33:59 crc kubenswrapper[4698]: I0127 14:33:59.756286 4698 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Jan 27 14:33:59 crc kubenswrapper[4698]: I0127 14:33:59.790551 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Jan 27 14:33:59 crc kubenswrapper[4698]: I0127 14:33:59.802822 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Jan 27 14:33:59 crc kubenswrapper[4698]: I0127 14:33:59.861122 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Jan 27 14:33:59 crc kubenswrapper[4698]: I0127 14:33:59.870346 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Jan 27 14:33:59 crc kubenswrapper[4698]: I0127 14:33:59.900552 4698 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Jan 27 14:34:00 crc kubenswrapper[4698]: I0127 14:34:00.153331 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Jan 27 14:34:00 crc kubenswrapper[4698]: I0127 14:34:00.436823 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Jan 27 14:34:00 crc kubenswrapper[4698]: I0127 14:34:00.757227 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Jan 27 14:34:01 crc kubenswrapper[4698]: I0127 14:34:01.264991 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Jan 27 14:34:01 crc kubenswrapper[4698]: I0127 14:34:01.401302 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Jan 27 14:34:01 crc kubenswrapper[4698]: I0127 14:34:01.401382 4698 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 27 14:34:01 crc kubenswrapper[4698]: I0127 14:34:01.527074 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 27 14:34:01 crc kubenswrapper[4698]: I0127 14:34:01.527188 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 27 14:34:01 crc kubenswrapper[4698]: I0127 14:34:01.527213 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 27 14:34:01 crc kubenswrapper[4698]: I0127 14:34:01.527259 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 27 14:34:01 crc kubenswrapper[4698]: I0127 14:34:01.527275 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock" (OuterVolumeSpecName: "var-lock") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 14:34:01 crc kubenswrapper[4698]: I0127 14:34:01.527274 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests" (OuterVolumeSpecName: "manifests") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "manifests". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 14:34:01 crc kubenswrapper[4698]: I0127 14:34:01.527315 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 14:34:01 crc kubenswrapper[4698]: I0127 14:34:01.527333 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 27 14:34:01 crc kubenswrapper[4698]: I0127 14:34:01.527461 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log" (OuterVolumeSpecName: "var-log") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-log". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 14:34:01 crc kubenswrapper[4698]: I0127 14:34:01.527684 4698 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") on node \"crc\" DevicePath \"\"" Jan 27 14:34:01 crc kubenswrapper[4698]: I0127 14:34:01.527703 4698 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 27 14:34:01 crc kubenswrapper[4698]: I0127 14:34:01.527713 4698 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") on node \"crc\" DevicePath \"\"" Jan 27 14:34:01 crc kubenswrapper[4698]: I0127 14:34:01.527722 4698 reconciler_common.go:293] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") on node \"crc\" DevicePath \"\"" Jan 27 14:34:01 crc kubenswrapper[4698]: I0127 14:34:01.533901 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "pod-resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 14:34:01 crc kubenswrapper[4698]: I0127 14:34:01.550306 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Jan 27 14:34:01 crc kubenswrapper[4698]: I0127 14:34:01.550355 4698 generic.go:334] "Generic (PLEG): container finished" podID="f85e55b1a89d02b0cb034b1ea31ed45a" containerID="ed30d11309c117a633eef0e91b719a43d1f016174846fbff9eca59be72dbab08" exitCode=137 Jan 27 14:34:01 crc kubenswrapper[4698]: I0127 14:34:01.550396 4698 scope.go:117] "RemoveContainer" containerID="ed30d11309c117a633eef0e91b719a43d1f016174846fbff9eca59be72dbab08" Jan 27 14:34:01 crc kubenswrapper[4698]: I0127 14:34:01.550473 4698 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 27 14:34:01 crc kubenswrapper[4698]: I0127 14:34:01.577208 4698 scope.go:117] "RemoveContainer" containerID="ed30d11309c117a633eef0e91b719a43d1f016174846fbff9eca59be72dbab08" Jan 27 14:34:01 crc kubenswrapper[4698]: E0127 14:34:01.577937 4698 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ed30d11309c117a633eef0e91b719a43d1f016174846fbff9eca59be72dbab08\": container with ID starting with ed30d11309c117a633eef0e91b719a43d1f016174846fbff9eca59be72dbab08 not found: ID does not exist" containerID="ed30d11309c117a633eef0e91b719a43d1f016174846fbff9eca59be72dbab08" Jan 27 14:34:01 crc kubenswrapper[4698]: I0127 14:34:01.578001 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ed30d11309c117a633eef0e91b719a43d1f016174846fbff9eca59be72dbab08"} err="failed to get container status \"ed30d11309c117a633eef0e91b719a43d1f016174846fbff9eca59be72dbab08\": rpc error: code = NotFound desc = could not find container \"ed30d11309c117a633eef0e91b719a43d1f016174846fbff9eca59be72dbab08\": container with ID starting with ed30d11309c117a633eef0e91b719a43d1f016174846fbff9eca59be72dbab08 not found: ID does not exist" Jan 27 14:34:01 crc kubenswrapper[4698]: I0127 14:34:01.628572 4698 reconciler_common.go:293] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 27 14:34:02 crc kubenswrapper[4698]: I0127 14:34:02.227931 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Jan 27 14:34:02 crc kubenswrapper[4698]: I0127 14:34:02.998501 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" path="/var/lib/kubelet/pods/f85e55b1a89d02b0cb034b1ea31ed45a/volumes" Jan 27 14:34:02 crc kubenswrapper[4698]: I0127 14:34:02.998791 4698 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="" Jan 27 14:34:03 crc kubenswrapper[4698]: I0127 14:34:03.008403 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 27 14:34:03 crc kubenswrapper[4698]: I0127 14:34:03.008457 4698 kubelet.go:2649] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" mirrorPodUID="e39fa1c2-2982-42d7-b17d-1750d817ae44" Jan 27 14:34:03 crc kubenswrapper[4698]: I0127 14:34:03.011610 4698 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 27 14:34:03 crc kubenswrapper[4698]: I0127 14:34:03.011663 4698 kubelet.go:2673] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" mirrorPodUID="e39fa1c2-2982-42d7-b17d-1750d817ae44" Jan 27 14:34:04 crc kubenswrapper[4698]: I0127 14:34:04.803961 4698 cert_rotation.go:91] certificate rotation detected, shutting down client connections to start using new credentials Jan 27 14:34:14 crc kubenswrapper[4698]: I0127 14:34:14.629012 4698 generic.go:334] "Generic (PLEG): container finished" podID="537d845d-d98b-4168-b87b-d0231602f4e9" containerID="fd915629ed1ef2e612e6bedd38d370cb4c8f28262640ac77f7b59329afb0378b" exitCode=0 Jan 27 14:34:14 crc 
kubenswrapper[4698]: I0127 14:34:14.629121 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-kwgll" event={"ID":"537d845d-d98b-4168-b87b-d0231602f4e9","Type":"ContainerDied","Data":"fd915629ed1ef2e612e6bedd38d370cb4c8f28262640ac77f7b59329afb0378b"} Jan 27 14:34:14 crc kubenswrapper[4698]: I0127 14:34:14.629913 4698 scope.go:117] "RemoveContainer" containerID="fd915629ed1ef2e612e6bedd38d370cb4c8f28262640ac77f7b59329afb0378b" Jan 27 14:34:15 crc kubenswrapper[4698]: I0127 14:34:15.639053 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-kwgll" event={"ID":"537d845d-d98b-4168-b87b-d0231602f4e9","Type":"ContainerStarted","Data":"faa41bdd1ba721c1ff268715721f5c8668d8826924064ffb3ba0d483a3334beb"} Jan 27 14:34:15 crc kubenswrapper[4698]: I0127 14:34:15.640059 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-kwgll" Jan 27 14:34:15 crc kubenswrapper[4698]: I0127 14:34:15.643478 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-kwgll" Jan 27 14:35:11 crc kubenswrapper[4698]: I0127 14:35:11.695181 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-mkhfh"] Jan 27 14:35:11 crc kubenswrapper[4698]: I0127 14:35:11.696285 4698 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-mkhfh" podUID="6947cad8-3436-4bc3-8bda-c2c1a4972402" containerName="registry-server" containerID="cri-o://e136dd8f1a1411189e758324102053c3ca2297943085efb3869e9ffe195bd70f" gracePeriod=2 Jan 27 14:35:11 crc kubenswrapper[4698]: I0127 14:35:11.950076 4698 generic.go:334] "Generic (PLEG): container finished" podID="6947cad8-3436-4bc3-8bda-c2c1a4972402" containerID="e136dd8f1a1411189e758324102053c3ca2297943085efb3869e9ffe195bd70f" exitCode=0 Jan 27 14:35:11 crc kubenswrapper[4698]: I0127 14:35:11.950131 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mkhfh" event={"ID":"6947cad8-3436-4bc3-8bda-c2c1a4972402","Type":"ContainerDied","Data":"e136dd8f1a1411189e758324102053c3ca2297943085efb3869e9ffe195bd70f"} Jan 27 14:35:12 crc kubenswrapper[4698]: I0127 14:35:12.056721 4698 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-mkhfh" Jan 27 14:35:12 crc kubenswrapper[4698]: I0127 14:35:12.163327 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6947cad8-3436-4bc3-8bda-c2c1a4972402-catalog-content\") pod \"6947cad8-3436-4bc3-8bda-c2c1a4972402\" (UID: \"6947cad8-3436-4bc3-8bda-c2c1a4972402\") " Jan 27 14:35:12 crc kubenswrapper[4698]: I0127 14:35:12.163408 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6947cad8-3436-4bc3-8bda-c2c1a4972402-utilities\") pod \"6947cad8-3436-4bc3-8bda-c2c1a4972402\" (UID: \"6947cad8-3436-4bc3-8bda-c2c1a4972402\") " Jan 27 14:35:12 crc kubenswrapper[4698]: I0127 14:35:12.163500 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-26hbd\" (UniqueName: \"kubernetes.io/projected/6947cad8-3436-4bc3-8bda-c2c1a4972402-kube-api-access-26hbd\") pod \"6947cad8-3436-4bc3-8bda-c2c1a4972402\" (UID: \"6947cad8-3436-4bc3-8bda-c2c1a4972402\") " Jan 27 14:35:12 crc kubenswrapper[4698]: I0127 14:35:12.165449 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6947cad8-3436-4bc3-8bda-c2c1a4972402-utilities" (OuterVolumeSpecName: "utilities") pod "6947cad8-3436-4bc3-8bda-c2c1a4972402" (UID: "6947cad8-3436-4bc3-8bda-c2c1a4972402"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 14:35:12 crc kubenswrapper[4698]: I0127 14:35:12.171105 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6947cad8-3436-4bc3-8bda-c2c1a4972402-kube-api-access-26hbd" (OuterVolumeSpecName: "kube-api-access-26hbd") pod "6947cad8-3436-4bc3-8bda-c2c1a4972402" (UID: "6947cad8-3436-4bc3-8bda-c2c1a4972402"). InnerVolumeSpecName "kube-api-access-26hbd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:35:12 crc kubenswrapper[4698]: I0127 14:35:12.265422 4698 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6947cad8-3436-4bc3-8bda-c2c1a4972402-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 14:35:12 crc kubenswrapper[4698]: I0127 14:35:12.265462 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-26hbd\" (UniqueName: \"kubernetes.io/projected/6947cad8-3436-4bc3-8bda-c2c1a4972402-kube-api-access-26hbd\") on node \"crc\" DevicePath \"\"" Jan 27 14:35:12 crc kubenswrapper[4698]: I0127 14:35:12.291171 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6947cad8-3436-4bc3-8bda-c2c1a4972402-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6947cad8-3436-4bc3-8bda-c2c1a4972402" (UID: "6947cad8-3436-4bc3-8bda-c2c1a4972402"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 14:35:12 crc kubenswrapper[4698]: I0127 14:35:12.366862 4698 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6947cad8-3436-4bc3-8bda-c2c1a4972402-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 14:35:12 crc kubenswrapper[4698]: I0127 14:35:12.958473 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mkhfh" event={"ID":"6947cad8-3436-4bc3-8bda-c2c1a4972402","Type":"ContainerDied","Data":"3fd1bd61792ae8f77e9d0314652d0aa159e9fe324a87fa2b68c3ea00a37810bf"} Jan 27 14:35:12 crc kubenswrapper[4698]: I0127 14:35:12.958534 4698 scope.go:117] "RemoveContainer" containerID="e136dd8f1a1411189e758324102053c3ca2297943085efb3869e9ffe195bd70f" Jan 27 14:35:12 crc kubenswrapper[4698]: I0127 14:35:12.958700 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-mkhfh" Jan 27 14:35:12 crc kubenswrapper[4698]: I0127 14:35:12.986762 4698 scope.go:117] "RemoveContainer" containerID="b7f7dd314d3d2d41d458de4261adf0eac8fa6f9ce2b55c097411c7d0b7e11066" Jan 27 14:35:12 crc kubenswrapper[4698]: I0127 14:35:12.998470 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-mkhfh"] Jan 27 14:35:12 crc kubenswrapper[4698]: I0127 14:35:12.998517 4698 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-mkhfh"] Jan 27 14:35:13 crc kubenswrapper[4698]: I0127 14:35:13.030160 4698 scope.go:117] "RemoveContainer" containerID="5b03cd7a1f76fa594d0ac34fddbd7c9c367e4e775e079775e432568a47721756" Jan 27 14:35:13 crc kubenswrapper[4698]: I0127 14:35:13.494922 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-cdp6k"] Jan 27 14:35:13 crc kubenswrapper[4698]: I0127 14:35:13.495288 4698 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-cdp6k" podUID="882a9575-2eeb-4f8e-812c-2419b499a07e" containerName="registry-server" containerID="cri-o://2bce0586725f7f8170882b06df948687a132d0d03cae3e7dc2944c8b7049f448" gracePeriod=2 Jan 27 14:35:13 crc kubenswrapper[4698]: I0127 14:35:13.871084 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-cdp6k" Jan 27 14:35:13 crc kubenswrapper[4698]: I0127 14:35:13.969148 4698 generic.go:334] "Generic (PLEG): container finished" podID="882a9575-2eeb-4f8e-812c-2419b499a07e" containerID="2bce0586725f7f8170882b06df948687a132d0d03cae3e7dc2944c8b7049f448" exitCode=0 Jan 27 14:35:13 crc kubenswrapper[4698]: I0127 14:35:13.969212 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cdp6k" event={"ID":"882a9575-2eeb-4f8e-812c-2419b499a07e","Type":"ContainerDied","Data":"2bce0586725f7f8170882b06df948687a132d0d03cae3e7dc2944c8b7049f448"} Jan 27 14:35:13 crc kubenswrapper[4698]: I0127 14:35:13.969286 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cdp6k" event={"ID":"882a9575-2eeb-4f8e-812c-2419b499a07e","Type":"ContainerDied","Data":"0f0e681c25b309ae8416547b922f312d209da69dc82d95f1000e49600f7278ca"} Jan 27 14:35:13 crc kubenswrapper[4698]: I0127 14:35:13.969279 4698 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-cdp6k" Jan 27 14:35:13 crc kubenswrapper[4698]: I0127 14:35:13.969416 4698 scope.go:117] "RemoveContainer" containerID="2bce0586725f7f8170882b06df948687a132d0d03cae3e7dc2944c8b7049f448" Jan 27 14:35:13 crc kubenswrapper[4698]: I0127 14:35:13.984374 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qtfm9\" (UniqueName: \"kubernetes.io/projected/882a9575-2eeb-4f8e-812c-2419b499a07e-kube-api-access-qtfm9\") pod \"882a9575-2eeb-4f8e-812c-2419b499a07e\" (UID: \"882a9575-2eeb-4f8e-812c-2419b499a07e\") " Jan 27 14:35:13 crc kubenswrapper[4698]: I0127 14:35:13.984767 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/882a9575-2eeb-4f8e-812c-2419b499a07e-catalog-content\") pod \"882a9575-2eeb-4f8e-812c-2419b499a07e\" (UID: \"882a9575-2eeb-4f8e-812c-2419b499a07e\") " Jan 27 14:35:13 crc kubenswrapper[4698]: I0127 14:35:13.984802 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/882a9575-2eeb-4f8e-812c-2419b499a07e-utilities\") pod \"882a9575-2eeb-4f8e-812c-2419b499a07e\" (UID: \"882a9575-2eeb-4f8e-812c-2419b499a07e\") " Jan 27 14:35:13 crc kubenswrapper[4698]: I0127 14:35:13.985839 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/882a9575-2eeb-4f8e-812c-2419b499a07e-utilities" (OuterVolumeSpecName: "utilities") pod "882a9575-2eeb-4f8e-812c-2419b499a07e" (UID: "882a9575-2eeb-4f8e-812c-2419b499a07e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 14:35:13 crc kubenswrapper[4698]: I0127 14:35:13.991486 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/882a9575-2eeb-4f8e-812c-2419b499a07e-kube-api-access-qtfm9" (OuterVolumeSpecName: "kube-api-access-qtfm9") pod "882a9575-2eeb-4f8e-812c-2419b499a07e" (UID: "882a9575-2eeb-4f8e-812c-2419b499a07e"). InnerVolumeSpecName "kube-api-access-qtfm9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:35:13 crc kubenswrapper[4698]: I0127 14:35:13.991997 4698 scope.go:117] "RemoveContainer" containerID="25f762334ef9ce469e9d104264b6dcc035ebe8e76a1dd1700288aa7fe7e2bced" Jan 27 14:35:14 crc kubenswrapper[4698]: I0127 14:35:14.031676 4698 scope.go:117] "RemoveContainer" containerID="e7d38ad003696304b3629ec8a61986dfcf73fa866b4b74bd03e34a61722bd3d2" Jan 27 14:35:14 crc kubenswrapper[4698]: I0127 14:35:14.040302 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/882a9575-2eeb-4f8e-812c-2419b499a07e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "882a9575-2eeb-4f8e-812c-2419b499a07e" (UID: "882a9575-2eeb-4f8e-812c-2419b499a07e"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 14:35:14 crc kubenswrapper[4698]: I0127 14:35:14.051969 4698 scope.go:117] "RemoveContainer" containerID="2bce0586725f7f8170882b06df948687a132d0d03cae3e7dc2944c8b7049f448" Jan 27 14:35:14 crc kubenswrapper[4698]: E0127 14:35:14.052798 4698 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2bce0586725f7f8170882b06df948687a132d0d03cae3e7dc2944c8b7049f448\": container with ID starting with 2bce0586725f7f8170882b06df948687a132d0d03cae3e7dc2944c8b7049f448 not found: ID does not exist" containerID="2bce0586725f7f8170882b06df948687a132d0d03cae3e7dc2944c8b7049f448" Jan 27 14:35:14 crc kubenswrapper[4698]: I0127 14:35:14.052839 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2bce0586725f7f8170882b06df948687a132d0d03cae3e7dc2944c8b7049f448"} err="failed to get container status \"2bce0586725f7f8170882b06df948687a132d0d03cae3e7dc2944c8b7049f448\": rpc error: code = NotFound desc = could not find container \"2bce0586725f7f8170882b06df948687a132d0d03cae3e7dc2944c8b7049f448\": container with ID starting with 2bce0586725f7f8170882b06df948687a132d0d03cae3e7dc2944c8b7049f448 not found: ID does not exist" Jan 27 14:35:14 crc kubenswrapper[4698]: I0127 14:35:14.052865 4698 scope.go:117] "RemoveContainer" containerID="25f762334ef9ce469e9d104264b6dcc035ebe8e76a1dd1700288aa7fe7e2bced" Jan 27 14:35:14 crc kubenswrapper[4698]: E0127 14:35:14.053320 4698 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"25f762334ef9ce469e9d104264b6dcc035ebe8e76a1dd1700288aa7fe7e2bced\": container with ID starting with 25f762334ef9ce469e9d104264b6dcc035ebe8e76a1dd1700288aa7fe7e2bced not found: ID does not exist" containerID="25f762334ef9ce469e9d104264b6dcc035ebe8e76a1dd1700288aa7fe7e2bced" Jan 27 14:35:14 crc kubenswrapper[4698]: I0127 14:35:14.053345 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"25f762334ef9ce469e9d104264b6dcc035ebe8e76a1dd1700288aa7fe7e2bced"} err="failed to get container status \"25f762334ef9ce469e9d104264b6dcc035ebe8e76a1dd1700288aa7fe7e2bced\": rpc error: code = NotFound desc = could not find container \"25f762334ef9ce469e9d104264b6dcc035ebe8e76a1dd1700288aa7fe7e2bced\": container with ID starting with 25f762334ef9ce469e9d104264b6dcc035ebe8e76a1dd1700288aa7fe7e2bced not found: ID does not exist" Jan 27 14:35:14 crc kubenswrapper[4698]: I0127 14:35:14.053362 4698 scope.go:117] "RemoveContainer" containerID="e7d38ad003696304b3629ec8a61986dfcf73fa866b4b74bd03e34a61722bd3d2" Jan 27 14:35:14 crc kubenswrapper[4698]: E0127 14:35:14.054010 4698 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e7d38ad003696304b3629ec8a61986dfcf73fa866b4b74bd03e34a61722bd3d2\": container with ID starting with e7d38ad003696304b3629ec8a61986dfcf73fa866b4b74bd03e34a61722bd3d2 not found: ID does not exist" containerID="e7d38ad003696304b3629ec8a61986dfcf73fa866b4b74bd03e34a61722bd3d2" Jan 27 14:35:14 crc kubenswrapper[4698]: I0127 14:35:14.054090 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e7d38ad003696304b3629ec8a61986dfcf73fa866b4b74bd03e34a61722bd3d2"} err="failed to get container status \"e7d38ad003696304b3629ec8a61986dfcf73fa866b4b74bd03e34a61722bd3d2\": rpc error: code = NotFound desc = could not 
find container \"e7d38ad003696304b3629ec8a61986dfcf73fa866b4b74bd03e34a61722bd3d2\": container with ID starting with e7d38ad003696304b3629ec8a61986dfcf73fa866b4b74bd03e34a61722bd3d2 not found: ID does not exist" Jan 27 14:35:14 crc kubenswrapper[4698]: I0127 14:35:14.086692 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qtfm9\" (UniqueName: \"kubernetes.io/projected/882a9575-2eeb-4f8e-812c-2419b499a07e-kube-api-access-qtfm9\") on node \"crc\" DevicePath \"\"" Jan 27 14:35:14 crc kubenswrapper[4698]: I0127 14:35:14.086754 4698 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/882a9575-2eeb-4f8e-812c-2419b499a07e-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 14:35:14 crc kubenswrapper[4698]: I0127 14:35:14.086771 4698 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/882a9575-2eeb-4f8e-812c-2419b499a07e-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 14:35:14 crc kubenswrapper[4698]: I0127 14:35:14.304480 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-cdp6k"] Jan 27 14:35:14 crc kubenswrapper[4698]: I0127 14:35:14.309192 4698 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-cdp6k"] Jan 27 14:35:15 crc kubenswrapper[4698]: I0127 14:35:15.000172 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6947cad8-3436-4bc3-8bda-c2c1a4972402" path="/var/lib/kubelet/pods/6947cad8-3436-4bc3-8bda-c2c1a4972402/volumes" Jan 27 14:35:15 crc kubenswrapper[4698]: I0127 14:35:15.000902 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="882a9575-2eeb-4f8e-812c-2419b499a07e" path="/var/lib/kubelet/pods/882a9575-2eeb-4f8e-812c-2419b499a07e/volumes" Jan 27 14:35:27 crc kubenswrapper[4698]: I0127 14:35:27.452252 4698 patch_prober.go:28] interesting pod/machine-config-daemon-ndrd6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 14:35:27 crc kubenswrapper[4698]: I0127 14:35:27.452853 4698 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ndrd6" podUID="3e403fc5-7005-474c-8c75-b7906b481677" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 14:35:28 crc kubenswrapper[4698]: I0127 14:35:28.089217 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-x7rj5"] Jan 27 14:35:33 crc kubenswrapper[4698]: I0127 14:35:33.720158 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-7d88d59755-5lcjd"] Jan 27 14:35:33 crc kubenswrapper[4698]: I0127 14:35:33.720954 4698 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-7d88d59755-5lcjd" podUID="ba1000e3-5241-4bdd-91f4-dddc45fc0a07" containerName="controller-manager" containerID="cri-o://4edc3a0f340cf21d0bd2016836059e07ed3ce95eee61b526756d836d954243d0" gracePeriod=30 Jan 27 14:35:33 crc kubenswrapper[4698]: I0127 14:35:33.820261 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-route-controller-manager/route-controller-manager-589f878698-vjwmk"] Jan 27 14:35:33 crc kubenswrapper[4698]: I0127 14:35:33.820529 4698 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-589f878698-vjwmk" podUID="b7211c73-92a4-463e-8d9b-25638f41a7dd" containerName="route-controller-manager" containerID="cri-o://e74cc166a6640c1aad3b28a90e6b3cbde0f5c861ea9328ba115ea1724a6f0f37" gracePeriod=30 Jan 27 14:35:34 crc kubenswrapper[4698]: I0127 14:35:34.076285 4698 generic.go:334] "Generic (PLEG): container finished" podID="b7211c73-92a4-463e-8d9b-25638f41a7dd" containerID="e74cc166a6640c1aad3b28a90e6b3cbde0f5c861ea9328ba115ea1724a6f0f37" exitCode=0 Jan 27 14:35:34 crc kubenswrapper[4698]: I0127 14:35:34.076369 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-589f878698-vjwmk" event={"ID":"b7211c73-92a4-463e-8d9b-25638f41a7dd","Type":"ContainerDied","Data":"e74cc166a6640c1aad3b28a90e6b3cbde0f5c861ea9328ba115ea1724a6f0f37"} Jan 27 14:35:34 crc kubenswrapper[4698]: I0127 14:35:34.077962 4698 generic.go:334] "Generic (PLEG): container finished" podID="ba1000e3-5241-4bdd-91f4-dddc45fc0a07" containerID="4edc3a0f340cf21d0bd2016836059e07ed3ce95eee61b526756d836d954243d0" exitCode=0 Jan 27 14:35:34 crc kubenswrapper[4698]: I0127 14:35:34.077989 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7d88d59755-5lcjd" event={"ID":"ba1000e3-5241-4bdd-91f4-dddc45fc0a07","Type":"ContainerDied","Data":"4edc3a0f340cf21d0bd2016836059e07ed3ce95eee61b526756d836d954243d0"} Jan 27 14:35:34 crc kubenswrapper[4698]: I0127 14:35:34.078007 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7d88d59755-5lcjd" event={"ID":"ba1000e3-5241-4bdd-91f4-dddc45fc0a07","Type":"ContainerDied","Data":"7e2e41951d57973a89f5fba9a30a7d1da911dc03ae6744a73464892b94695281"} Jan 27 14:35:34 crc kubenswrapper[4698]: I0127 14:35:34.078020 4698 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7e2e41951d57973a89f5fba9a30a7d1da911dc03ae6744a73464892b94695281" Jan 27 14:35:34 crc kubenswrapper[4698]: I0127 14:35:34.107665 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7d88d59755-5lcjd" Jan 27 14:35:34 crc kubenswrapper[4698]: I0127 14:35:34.195431 4698 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-589f878698-vjwmk" Jan 27 14:35:34 crc kubenswrapper[4698]: I0127 14:35:34.234703 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8q8g5\" (UniqueName: \"kubernetes.io/projected/ba1000e3-5241-4bdd-91f4-dddc45fc0a07-kube-api-access-8q8g5\") pod \"ba1000e3-5241-4bdd-91f4-dddc45fc0a07\" (UID: \"ba1000e3-5241-4bdd-91f4-dddc45fc0a07\") " Jan 27 14:35:34 crc kubenswrapper[4698]: I0127 14:35:34.234777 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/ba1000e3-5241-4bdd-91f4-dddc45fc0a07-proxy-ca-bundles\") pod \"ba1000e3-5241-4bdd-91f4-dddc45fc0a07\" (UID: \"ba1000e3-5241-4bdd-91f4-dddc45fc0a07\") " Jan 27 14:35:34 crc kubenswrapper[4698]: I0127 14:35:34.234802 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ba1000e3-5241-4bdd-91f4-dddc45fc0a07-config\") pod \"ba1000e3-5241-4bdd-91f4-dddc45fc0a07\" (UID: \"ba1000e3-5241-4bdd-91f4-dddc45fc0a07\") " Jan 27 14:35:34 crc kubenswrapper[4698]: I0127 14:35:34.234859 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ba1000e3-5241-4bdd-91f4-dddc45fc0a07-serving-cert\") pod \"ba1000e3-5241-4bdd-91f4-dddc45fc0a07\" (UID: \"ba1000e3-5241-4bdd-91f4-dddc45fc0a07\") " Jan 27 14:35:34 crc kubenswrapper[4698]: I0127 14:35:34.234888 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ba1000e3-5241-4bdd-91f4-dddc45fc0a07-client-ca\") pod \"ba1000e3-5241-4bdd-91f4-dddc45fc0a07\" (UID: \"ba1000e3-5241-4bdd-91f4-dddc45fc0a07\") " Jan 27 14:35:34 crc kubenswrapper[4698]: I0127 14:35:34.235398 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ba1000e3-5241-4bdd-91f4-dddc45fc0a07-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "ba1000e3-5241-4bdd-91f4-dddc45fc0a07" (UID: "ba1000e3-5241-4bdd-91f4-dddc45fc0a07"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:35:34 crc kubenswrapper[4698]: I0127 14:35:34.235488 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ba1000e3-5241-4bdd-91f4-dddc45fc0a07-client-ca" (OuterVolumeSpecName: "client-ca") pod "ba1000e3-5241-4bdd-91f4-dddc45fc0a07" (UID: "ba1000e3-5241-4bdd-91f4-dddc45fc0a07"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:35:34 crc kubenswrapper[4698]: I0127 14:35:34.235583 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ba1000e3-5241-4bdd-91f4-dddc45fc0a07-config" (OuterVolumeSpecName: "config") pod "ba1000e3-5241-4bdd-91f4-dddc45fc0a07" (UID: "ba1000e3-5241-4bdd-91f4-dddc45fc0a07"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:35:34 crc kubenswrapper[4698]: I0127 14:35:34.235812 4698 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ba1000e3-5241-4bdd-91f4-dddc45fc0a07-client-ca\") on node \"crc\" DevicePath \"\"" Jan 27 14:35:34 crc kubenswrapper[4698]: I0127 14:35:34.235830 4698 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/ba1000e3-5241-4bdd-91f4-dddc45fc0a07-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 27 14:35:34 crc kubenswrapper[4698]: I0127 14:35:34.235849 4698 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ba1000e3-5241-4bdd-91f4-dddc45fc0a07-config\") on node \"crc\" DevicePath \"\"" Jan 27 14:35:34 crc kubenswrapper[4698]: I0127 14:35:34.240898 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ba1000e3-5241-4bdd-91f4-dddc45fc0a07-kube-api-access-8q8g5" (OuterVolumeSpecName: "kube-api-access-8q8g5") pod "ba1000e3-5241-4bdd-91f4-dddc45fc0a07" (UID: "ba1000e3-5241-4bdd-91f4-dddc45fc0a07"). InnerVolumeSpecName "kube-api-access-8q8g5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:35:34 crc kubenswrapper[4698]: I0127 14:35:34.243048 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ba1000e3-5241-4bdd-91f4-dddc45fc0a07-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "ba1000e3-5241-4bdd-91f4-dddc45fc0a07" (UID: "ba1000e3-5241-4bdd-91f4-dddc45fc0a07"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:35:34 crc kubenswrapper[4698]: I0127 14:35:34.336910 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b7211c73-92a4-463e-8d9b-25638f41a7dd-client-ca\") pod \"b7211c73-92a4-463e-8d9b-25638f41a7dd\" (UID: \"b7211c73-92a4-463e-8d9b-25638f41a7dd\") " Jan 27 14:35:34 crc kubenswrapper[4698]: I0127 14:35:34.336992 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-km6d9\" (UniqueName: \"kubernetes.io/projected/b7211c73-92a4-463e-8d9b-25638f41a7dd-kube-api-access-km6d9\") pod \"b7211c73-92a4-463e-8d9b-25638f41a7dd\" (UID: \"b7211c73-92a4-463e-8d9b-25638f41a7dd\") " Jan 27 14:35:34 crc kubenswrapper[4698]: I0127 14:35:34.337031 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b7211c73-92a4-463e-8d9b-25638f41a7dd-serving-cert\") pod \"b7211c73-92a4-463e-8d9b-25638f41a7dd\" (UID: \"b7211c73-92a4-463e-8d9b-25638f41a7dd\") " Jan 27 14:35:34 crc kubenswrapper[4698]: I0127 14:35:34.337073 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b7211c73-92a4-463e-8d9b-25638f41a7dd-config\") pod \"b7211c73-92a4-463e-8d9b-25638f41a7dd\" (UID: \"b7211c73-92a4-463e-8d9b-25638f41a7dd\") " Jan 27 14:35:34 crc kubenswrapper[4698]: I0127 14:35:34.337417 4698 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ba1000e3-5241-4bdd-91f4-dddc45fc0a07-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 14:35:34 crc kubenswrapper[4698]: I0127 14:35:34.337439 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8q8g5\" (UniqueName: 
\"kubernetes.io/projected/ba1000e3-5241-4bdd-91f4-dddc45fc0a07-kube-api-access-8q8g5\") on node \"crc\" DevicePath \"\"" Jan 27 14:35:34 crc kubenswrapper[4698]: I0127 14:35:34.338500 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b7211c73-92a4-463e-8d9b-25638f41a7dd-config" (OuterVolumeSpecName: "config") pod "b7211c73-92a4-463e-8d9b-25638f41a7dd" (UID: "b7211c73-92a4-463e-8d9b-25638f41a7dd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:35:34 crc kubenswrapper[4698]: I0127 14:35:34.338516 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b7211c73-92a4-463e-8d9b-25638f41a7dd-client-ca" (OuterVolumeSpecName: "client-ca") pod "b7211c73-92a4-463e-8d9b-25638f41a7dd" (UID: "b7211c73-92a4-463e-8d9b-25638f41a7dd"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:35:34 crc kubenswrapper[4698]: I0127 14:35:34.343686 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b7211c73-92a4-463e-8d9b-25638f41a7dd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "b7211c73-92a4-463e-8d9b-25638f41a7dd" (UID: "b7211c73-92a4-463e-8d9b-25638f41a7dd"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:35:34 crc kubenswrapper[4698]: I0127 14:35:34.343990 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b7211c73-92a4-463e-8d9b-25638f41a7dd-kube-api-access-km6d9" (OuterVolumeSpecName: "kube-api-access-km6d9") pod "b7211c73-92a4-463e-8d9b-25638f41a7dd" (UID: "b7211c73-92a4-463e-8d9b-25638f41a7dd"). InnerVolumeSpecName "kube-api-access-km6d9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:35:34 crc kubenswrapper[4698]: I0127 14:35:34.438987 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-km6d9\" (UniqueName: \"kubernetes.io/projected/b7211c73-92a4-463e-8d9b-25638f41a7dd-kube-api-access-km6d9\") on node \"crc\" DevicePath \"\"" Jan 27 14:35:34 crc kubenswrapper[4698]: I0127 14:35:34.439024 4698 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b7211c73-92a4-463e-8d9b-25638f41a7dd-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 14:35:34 crc kubenswrapper[4698]: I0127 14:35:34.439036 4698 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b7211c73-92a4-463e-8d9b-25638f41a7dd-config\") on node \"crc\" DevicePath \"\"" Jan 27 14:35:34 crc kubenswrapper[4698]: I0127 14:35:34.439045 4698 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b7211c73-92a4-463e-8d9b-25638f41a7dd-client-ca\") on node \"crc\" DevicePath \"\"" Jan 27 14:35:35 crc kubenswrapper[4698]: I0127 14:35:35.084546 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-589f878698-vjwmk" event={"ID":"b7211c73-92a4-463e-8d9b-25638f41a7dd","Type":"ContainerDied","Data":"a0f9442e6bb5e38f722b57182ecbea33f9a4b95ed02a4b6d65684fdcecb5ef81"} Jan 27 14:35:35 crc kubenswrapper[4698]: I0127 14:35:35.084607 4698 scope.go:117] "RemoveContainer" containerID="e74cc166a6640c1aad3b28a90e6b3cbde0f5c861ea9328ba115ea1724a6f0f37" Jan 27 14:35:35 crc kubenswrapper[4698]: I0127 14:35:35.084563 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-589f878698-vjwmk" Jan 27 14:35:35 crc kubenswrapper[4698]: I0127 14:35:35.084812 4698 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-7d88d59755-5lcjd" Jan 27 14:35:35 crc kubenswrapper[4698]: I0127 14:35:35.108726 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-589f878698-vjwmk"] Jan 27 14:35:35 crc kubenswrapper[4698]: I0127 14:35:35.112178 4698 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-589f878698-vjwmk"] Jan 27 14:35:35 crc kubenswrapper[4698]: I0127 14:35:35.119278 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-7d88d59755-5lcjd"] Jan 27 14:35:35 crc kubenswrapper[4698]: I0127 14:35:35.121968 4698 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-7d88d59755-5lcjd"] Jan 27 14:35:35 crc kubenswrapper[4698]: I0127 14:35:35.739079 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-5b58bbbb56-sgbbt"] Jan 27 14:35:35 crc kubenswrapper[4698]: E0127 14:35:35.739664 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="530c77f2-b81c-4835-989c-57b155f04d2c" containerName="registry-server" Jan 27 14:35:35 crc kubenswrapper[4698]: I0127 14:35:35.739680 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="530c77f2-b81c-4835-989c-57b155f04d2c" containerName="registry-server" Jan 27 14:35:35 crc kubenswrapper[4698]: E0127 14:35:35.739693 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="882a9575-2eeb-4f8e-812c-2419b499a07e" containerName="extract-utilities" Jan 27 14:35:35 crc kubenswrapper[4698]: I0127 14:35:35.739699 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="882a9575-2eeb-4f8e-812c-2419b499a07e" containerName="extract-utilities" Jan 27 14:35:35 crc kubenswrapper[4698]: E0127 14:35:35.739707 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6947cad8-3436-4bc3-8bda-c2c1a4972402" containerName="extract-content" Jan 27 14:35:35 crc kubenswrapper[4698]: I0127 14:35:35.739714 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="6947cad8-3436-4bc3-8bda-c2c1a4972402" containerName="extract-content" Jan 27 14:35:35 crc kubenswrapper[4698]: E0127 14:35:35.739722 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Jan 27 14:35:35 crc kubenswrapper[4698]: I0127 14:35:35.739729 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Jan 27 14:35:35 crc kubenswrapper[4698]: E0127 14:35:35.739735 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ba1000e3-5241-4bdd-91f4-dddc45fc0a07" containerName="controller-manager" Jan 27 14:35:35 crc kubenswrapper[4698]: I0127 14:35:35.739740 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="ba1000e3-5241-4bdd-91f4-dddc45fc0a07" containerName="controller-manager" Jan 27 14:35:35 crc kubenswrapper[4698]: E0127 14:35:35.739748 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6947cad8-3436-4bc3-8bda-c2c1a4972402" containerName="registry-server" Jan 27 14:35:35 crc kubenswrapper[4698]: I0127 14:35:35.739754 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="6947cad8-3436-4bc3-8bda-c2c1a4972402" containerName="registry-server" Jan 27 14:35:35 crc kubenswrapper[4698]: E0127 14:35:35.739765 4698 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="530c77f2-b81c-4835-989c-57b155f04d2c" containerName="extract-content" Jan 27 14:35:35 crc kubenswrapper[4698]: I0127 14:35:35.739771 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="530c77f2-b81c-4835-989c-57b155f04d2c" containerName="extract-content" Jan 27 14:35:35 crc kubenswrapper[4698]: E0127 14:35:35.739780 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="882a9575-2eeb-4f8e-812c-2419b499a07e" containerName="registry-server" Jan 27 14:35:35 crc kubenswrapper[4698]: I0127 14:35:35.739787 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="882a9575-2eeb-4f8e-812c-2419b499a07e" containerName="registry-server" Jan 27 14:35:35 crc kubenswrapper[4698]: E0127 14:35:35.739798 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="882a9575-2eeb-4f8e-812c-2419b499a07e" containerName="extract-content" Jan 27 14:35:35 crc kubenswrapper[4698]: I0127 14:35:35.739803 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="882a9575-2eeb-4f8e-812c-2419b499a07e" containerName="extract-content" Jan 27 14:35:35 crc kubenswrapper[4698]: E0127 14:35:35.739811 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="530c77f2-b81c-4835-989c-57b155f04d2c" containerName="extract-utilities" Jan 27 14:35:35 crc kubenswrapper[4698]: I0127 14:35:35.739817 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="530c77f2-b81c-4835-989c-57b155f04d2c" containerName="extract-utilities" Jan 27 14:35:35 crc kubenswrapper[4698]: E0127 14:35:35.739824 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6947cad8-3436-4bc3-8bda-c2c1a4972402" containerName="extract-utilities" Jan 27 14:35:35 crc kubenswrapper[4698]: I0127 14:35:35.739831 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="6947cad8-3436-4bc3-8bda-c2c1a4972402" containerName="extract-utilities" Jan 27 14:35:35 crc kubenswrapper[4698]: E0127 14:35:35.739838 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="974a0417-d9d3-48b2-931d-c2c3830481db" containerName="installer" Jan 27 14:35:35 crc kubenswrapper[4698]: I0127 14:35:35.739843 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="974a0417-d9d3-48b2-931d-c2c3830481db" containerName="installer" Jan 27 14:35:35 crc kubenswrapper[4698]: E0127 14:35:35.739850 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b7211c73-92a4-463e-8d9b-25638f41a7dd" containerName="route-controller-manager" Jan 27 14:35:35 crc kubenswrapper[4698]: I0127 14:35:35.739857 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="b7211c73-92a4-463e-8d9b-25638f41a7dd" containerName="route-controller-manager" Jan 27 14:35:35 crc kubenswrapper[4698]: I0127 14:35:35.739940 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="ba1000e3-5241-4bdd-91f4-dddc45fc0a07" containerName="controller-manager" Jan 27 14:35:35 crc kubenswrapper[4698]: I0127 14:35:35.739948 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="6947cad8-3436-4bc3-8bda-c2c1a4972402" containerName="registry-server" Jan 27 14:35:35 crc kubenswrapper[4698]: I0127 14:35:35.739956 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="530c77f2-b81c-4835-989c-57b155f04d2c" containerName="registry-server" Jan 27 14:35:35 crc kubenswrapper[4698]: I0127 14:35:35.739964 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="974a0417-d9d3-48b2-931d-c2c3830481db" containerName="installer" Jan 27 14:35:35 crc kubenswrapper[4698]: 
I0127 14:35:35.739970 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="b7211c73-92a4-463e-8d9b-25638f41a7dd" containerName="route-controller-manager" Jan 27 14:35:35 crc kubenswrapper[4698]: I0127 14:35:35.739978 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="882a9575-2eeb-4f8e-812c-2419b499a07e" containerName="registry-server" Jan 27 14:35:35 crc kubenswrapper[4698]: I0127 14:35:35.739988 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Jan 27 14:35:35 crc kubenswrapper[4698]: I0127 14:35:35.740372 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5b58bbbb56-sgbbt" Jan 27 14:35:35 crc kubenswrapper[4698]: I0127 14:35:35.741982 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 27 14:35:35 crc kubenswrapper[4698]: I0127 14:35:35.742997 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7b95856b87-jsw9p"] Jan 27 14:35:35 crc kubenswrapper[4698]: I0127 14:35:35.743051 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 27 14:35:35 crc kubenswrapper[4698]: I0127 14:35:35.743236 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 27 14:35:35 crc kubenswrapper[4698]: I0127 14:35:35.743803 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7b95856b87-jsw9p" Jan 27 14:35:35 crc kubenswrapper[4698]: I0127 14:35:35.745097 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 27 14:35:35 crc kubenswrapper[4698]: I0127 14:35:35.746526 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 27 14:35:35 crc kubenswrapper[4698]: I0127 14:35:35.746542 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 27 14:35:35 crc kubenswrapper[4698]: I0127 14:35:35.746782 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 27 14:35:35 crc kubenswrapper[4698]: I0127 14:35:35.746928 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 27 14:35:35 crc kubenswrapper[4698]: I0127 14:35:35.747255 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 27 14:35:35 crc kubenswrapper[4698]: I0127 14:35:35.747461 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 27 14:35:35 crc kubenswrapper[4698]: I0127 14:35:35.747956 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 27 14:35:35 crc kubenswrapper[4698]: I0127 14:35:35.747984 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 27 14:35:35 crc kubenswrapper[4698]: I0127 14:35:35.751912 4698 kubelet.go:2428] "SyncLoop 
UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7b95856b87-jsw9p"] Jan 27 14:35:35 crc kubenswrapper[4698]: I0127 14:35:35.754530 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 27 14:35:35 crc kubenswrapper[4698]: I0127 14:35:35.756553 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-5b58bbbb56-sgbbt"] Jan 27 14:35:35 crc kubenswrapper[4698]: I0127 14:35:35.853831 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e167173f-5a61-4cd0-88cb-d4b632f9c5ce-proxy-ca-bundles\") pod \"controller-manager-5b58bbbb56-sgbbt\" (UID: \"e167173f-5a61-4cd0-88cb-d4b632f9c5ce\") " pod="openshift-controller-manager/controller-manager-5b58bbbb56-sgbbt" Jan 27 14:35:35 crc kubenswrapper[4698]: I0127 14:35:35.853892 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/97785ecd-6072-42fe-826e-77a8766bcb2d-serving-cert\") pod \"route-controller-manager-7b95856b87-jsw9p\" (UID: \"97785ecd-6072-42fe-826e-77a8766bcb2d\") " pod="openshift-route-controller-manager/route-controller-manager-7b95856b87-jsw9p" Jan 27 14:35:35 crc kubenswrapper[4698]: I0127 14:35:35.853931 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e167173f-5a61-4cd0-88cb-d4b632f9c5ce-client-ca\") pod \"controller-manager-5b58bbbb56-sgbbt\" (UID: \"e167173f-5a61-4cd0-88cb-d4b632f9c5ce\") " pod="openshift-controller-manager/controller-manager-5b58bbbb56-sgbbt" Jan 27 14:35:35 crc kubenswrapper[4698]: I0127 14:35:35.853953 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hz2dk\" (UniqueName: \"kubernetes.io/projected/e167173f-5a61-4cd0-88cb-d4b632f9c5ce-kube-api-access-hz2dk\") pod \"controller-manager-5b58bbbb56-sgbbt\" (UID: \"e167173f-5a61-4cd0-88cb-d4b632f9c5ce\") " pod="openshift-controller-manager/controller-manager-5b58bbbb56-sgbbt" Jan 27 14:35:35 crc kubenswrapper[4698]: I0127 14:35:35.854147 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e167173f-5a61-4cd0-88cb-d4b632f9c5ce-config\") pod \"controller-manager-5b58bbbb56-sgbbt\" (UID: \"e167173f-5a61-4cd0-88cb-d4b632f9c5ce\") " pod="openshift-controller-manager/controller-manager-5b58bbbb56-sgbbt" Jan 27 14:35:35 crc kubenswrapper[4698]: I0127 14:35:35.854275 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/97785ecd-6072-42fe-826e-77a8766bcb2d-config\") pod \"route-controller-manager-7b95856b87-jsw9p\" (UID: \"97785ecd-6072-42fe-826e-77a8766bcb2d\") " pod="openshift-route-controller-manager/route-controller-manager-7b95856b87-jsw9p" Jan 27 14:35:35 crc kubenswrapper[4698]: I0127 14:35:35.854397 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/97785ecd-6072-42fe-826e-77a8766bcb2d-client-ca\") pod \"route-controller-manager-7b95856b87-jsw9p\" (UID: \"97785ecd-6072-42fe-826e-77a8766bcb2d\") " 
pod="openshift-route-controller-manager/route-controller-manager-7b95856b87-jsw9p" Jan 27 14:35:35 crc kubenswrapper[4698]: I0127 14:35:35.854460 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e167173f-5a61-4cd0-88cb-d4b632f9c5ce-serving-cert\") pod \"controller-manager-5b58bbbb56-sgbbt\" (UID: \"e167173f-5a61-4cd0-88cb-d4b632f9c5ce\") " pod="openshift-controller-manager/controller-manager-5b58bbbb56-sgbbt" Jan 27 14:35:35 crc kubenswrapper[4698]: I0127 14:35:35.854576 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q866d\" (UniqueName: \"kubernetes.io/projected/97785ecd-6072-42fe-826e-77a8766bcb2d-kube-api-access-q866d\") pod \"route-controller-manager-7b95856b87-jsw9p\" (UID: \"97785ecd-6072-42fe-826e-77a8766bcb2d\") " pod="openshift-route-controller-manager/route-controller-manager-7b95856b87-jsw9p" Jan 27 14:35:35 crc kubenswrapper[4698]: I0127 14:35:35.955520 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/97785ecd-6072-42fe-826e-77a8766bcb2d-serving-cert\") pod \"route-controller-manager-7b95856b87-jsw9p\" (UID: \"97785ecd-6072-42fe-826e-77a8766bcb2d\") " pod="openshift-route-controller-manager/route-controller-manager-7b95856b87-jsw9p" Jan 27 14:35:35 crc kubenswrapper[4698]: I0127 14:35:35.955604 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e167173f-5a61-4cd0-88cb-d4b632f9c5ce-client-ca\") pod \"controller-manager-5b58bbbb56-sgbbt\" (UID: \"e167173f-5a61-4cd0-88cb-d4b632f9c5ce\") " pod="openshift-controller-manager/controller-manager-5b58bbbb56-sgbbt" Jan 27 14:35:35 crc kubenswrapper[4698]: I0127 14:35:35.955671 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hz2dk\" (UniqueName: \"kubernetes.io/projected/e167173f-5a61-4cd0-88cb-d4b632f9c5ce-kube-api-access-hz2dk\") pod \"controller-manager-5b58bbbb56-sgbbt\" (UID: \"e167173f-5a61-4cd0-88cb-d4b632f9c5ce\") " pod="openshift-controller-manager/controller-manager-5b58bbbb56-sgbbt" Jan 27 14:35:35 crc kubenswrapper[4698]: I0127 14:35:35.955711 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e167173f-5a61-4cd0-88cb-d4b632f9c5ce-config\") pod \"controller-manager-5b58bbbb56-sgbbt\" (UID: \"e167173f-5a61-4cd0-88cb-d4b632f9c5ce\") " pod="openshift-controller-manager/controller-manager-5b58bbbb56-sgbbt" Jan 27 14:35:35 crc kubenswrapper[4698]: I0127 14:35:35.955751 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/97785ecd-6072-42fe-826e-77a8766bcb2d-config\") pod \"route-controller-manager-7b95856b87-jsw9p\" (UID: \"97785ecd-6072-42fe-826e-77a8766bcb2d\") " pod="openshift-route-controller-manager/route-controller-manager-7b95856b87-jsw9p" Jan 27 14:35:35 crc kubenswrapper[4698]: I0127 14:35:35.955798 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/97785ecd-6072-42fe-826e-77a8766bcb2d-client-ca\") pod \"route-controller-manager-7b95856b87-jsw9p\" (UID: \"97785ecd-6072-42fe-826e-77a8766bcb2d\") " pod="openshift-route-controller-manager/route-controller-manager-7b95856b87-jsw9p" Jan 27 14:35:35 crc 
kubenswrapper[4698]: I0127 14:35:35.955834 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e167173f-5a61-4cd0-88cb-d4b632f9c5ce-serving-cert\") pod \"controller-manager-5b58bbbb56-sgbbt\" (UID: \"e167173f-5a61-4cd0-88cb-d4b632f9c5ce\") " pod="openshift-controller-manager/controller-manager-5b58bbbb56-sgbbt" Jan 27 14:35:35 crc kubenswrapper[4698]: I0127 14:35:35.955886 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q866d\" (UniqueName: \"kubernetes.io/projected/97785ecd-6072-42fe-826e-77a8766bcb2d-kube-api-access-q866d\") pod \"route-controller-manager-7b95856b87-jsw9p\" (UID: \"97785ecd-6072-42fe-826e-77a8766bcb2d\") " pod="openshift-route-controller-manager/route-controller-manager-7b95856b87-jsw9p" Jan 27 14:35:35 crc kubenswrapper[4698]: I0127 14:35:35.955954 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e167173f-5a61-4cd0-88cb-d4b632f9c5ce-proxy-ca-bundles\") pod \"controller-manager-5b58bbbb56-sgbbt\" (UID: \"e167173f-5a61-4cd0-88cb-d4b632f9c5ce\") " pod="openshift-controller-manager/controller-manager-5b58bbbb56-sgbbt" Jan 27 14:35:35 crc kubenswrapper[4698]: I0127 14:35:35.956741 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e167173f-5a61-4cd0-88cb-d4b632f9c5ce-client-ca\") pod \"controller-manager-5b58bbbb56-sgbbt\" (UID: \"e167173f-5a61-4cd0-88cb-d4b632f9c5ce\") " pod="openshift-controller-manager/controller-manager-5b58bbbb56-sgbbt" Jan 27 14:35:35 crc kubenswrapper[4698]: I0127 14:35:35.957513 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/97785ecd-6072-42fe-826e-77a8766bcb2d-client-ca\") pod \"route-controller-manager-7b95856b87-jsw9p\" (UID: \"97785ecd-6072-42fe-826e-77a8766bcb2d\") " pod="openshift-route-controller-manager/route-controller-manager-7b95856b87-jsw9p" Jan 27 14:35:35 crc kubenswrapper[4698]: I0127 14:35:35.957555 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e167173f-5a61-4cd0-88cb-d4b632f9c5ce-config\") pod \"controller-manager-5b58bbbb56-sgbbt\" (UID: \"e167173f-5a61-4cd0-88cb-d4b632f9c5ce\") " pod="openshift-controller-manager/controller-manager-5b58bbbb56-sgbbt" Jan 27 14:35:35 crc kubenswrapper[4698]: I0127 14:35:35.957677 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e167173f-5a61-4cd0-88cb-d4b632f9c5ce-proxy-ca-bundles\") pod \"controller-manager-5b58bbbb56-sgbbt\" (UID: \"e167173f-5a61-4cd0-88cb-d4b632f9c5ce\") " pod="openshift-controller-manager/controller-manager-5b58bbbb56-sgbbt" Jan 27 14:35:35 crc kubenswrapper[4698]: I0127 14:35:35.958131 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/97785ecd-6072-42fe-826e-77a8766bcb2d-config\") pod \"route-controller-manager-7b95856b87-jsw9p\" (UID: \"97785ecd-6072-42fe-826e-77a8766bcb2d\") " pod="openshift-route-controller-manager/route-controller-manager-7b95856b87-jsw9p" Jan 27 14:35:35 crc kubenswrapper[4698]: I0127 14:35:35.962015 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/97785ecd-6072-42fe-826e-77a8766bcb2d-serving-cert\") pod \"route-controller-manager-7b95856b87-jsw9p\" (UID: \"97785ecd-6072-42fe-826e-77a8766bcb2d\") " pod="openshift-route-controller-manager/route-controller-manager-7b95856b87-jsw9p" Jan 27 14:35:35 crc kubenswrapper[4698]: I0127 14:35:35.963562 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e167173f-5a61-4cd0-88cb-d4b632f9c5ce-serving-cert\") pod \"controller-manager-5b58bbbb56-sgbbt\" (UID: \"e167173f-5a61-4cd0-88cb-d4b632f9c5ce\") " pod="openshift-controller-manager/controller-manager-5b58bbbb56-sgbbt" Jan 27 14:35:35 crc kubenswrapper[4698]: I0127 14:35:35.980165 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hz2dk\" (UniqueName: \"kubernetes.io/projected/e167173f-5a61-4cd0-88cb-d4b632f9c5ce-kube-api-access-hz2dk\") pod \"controller-manager-5b58bbbb56-sgbbt\" (UID: \"e167173f-5a61-4cd0-88cb-d4b632f9c5ce\") " pod="openshift-controller-manager/controller-manager-5b58bbbb56-sgbbt" Jan 27 14:35:36 crc kubenswrapper[4698]: I0127 14:35:36.002133 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q866d\" (UniqueName: \"kubernetes.io/projected/97785ecd-6072-42fe-826e-77a8766bcb2d-kube-api-access-q866d\") pod \"route-controller-manager-7b95856b87-jsw9p\" (UID: \"97785ecd-6072-42fe-826e-77a8766bcb2d\") " pod="openshift-route-controller-manager/route-controller-manager-7b95856b87-jsw9p" Jan 27 14:35:36 crc kubenswrapper[4698]: I0127 14:35:36.059546 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5b58bbbb56-sgbbt" Jan 27 14:35:36 crc kubenswrapper[4698]: I0127 14:35:36.072383 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7b95856b87-jsw9p" Jan 27 14:35:36 crc kubenswrapper[4698]: I0127 14:35:36.232220 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-5b58bbbb56-sgbbt"] Jan 27 14:35:36 crc kubenswrapper[4698]: I0127 14:35:36.275051 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7b95856b87-jsw9p"] Jan 27 14:35:36 crc kubenswrapper[4698]: W0127 14:35:36.278476 4698 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod97785ecd_6072_42fe_826e_77a8766bcb2d.slice/crio-08afa8d16502233900d035bde272a295c91e91f423a3ae878e7e851706c42f07 WatchSource:0}: Error finding container 08afa8d16502233900d035bde272a295c91e91f423a3ae878e7e851706c42f07: Status 404 returned error can't find the container with id 08afa8d16502233900d035bde272a295c91e91f423a3ae878e7e851706c42f07 Jan 27 14:35:36 crc kubenswrapper[4698]: I0127 14:35:36.998069 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b7211c73-92a4-463e-8d9b-25638f41a7dd" path="/var/lib/kubelet/pods/b7211c73-92a4-463e-8d9b-25638f41a7dd/volumes" Jan 27 14:35:36 crc kubenswrapper[4698]: I0127 14:35:36.999122 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ba1000e3-5241-4bdd-91f4-dddc45fc0a07" path="/var/lib/kubelet/pods/ba1000e3-5241-4bdd-91f4-dddc45fc0a07/volumes" Jan 27 14:35:37 crc kubenswrapper[4698]: I0127 14:35:37.097019 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7b95856b87-jsw9p" event={"ID":"97785ecd-6072-42fe-826e-77a8766bcb2d","Type":"ContainerStarted","Data":"89c27c8f0a3d1a49c24f1168f83e40e31395aabd4b90dc530a21d9b5570696e0"} Jan 27 14:35:37 crc kubenswrapper[4698]: I0127 14:35:37.097070 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7b95856b87-jsw9p" event={"ID":"97785ecd-6072-42fe-826e-77a8766bcb2d","Type":"ContainerStarted","Data":"08afa8d16502233900d035bde272a295c91e91f423a3ae878e7e851706c42f07"} Jan 27 14:35:37 crc kubenswrapper[4698]: I0127 14:35:37.098159 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-7b95856b87-jsw9p" Jan 27 14:35:37 crc kubenswrapper[4698]: I0127 14:35:37.099752 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5b58bbbb56-sgbbt" event={"ID":"e167173f-5a61-4cd0-88cb-d4b632f9c5ce","Type":"ContainerStarted","Data":"7c575055a885fa51d8e8f01bdba77bf1711da8914725f30acc9d2ff30ddc4a83"} Jan 27 14:35:37 crc kubenswrapper[4698]: I0127 14:35:37.099782 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5b58bbbb56-sgbbt" event={"ID":"e167173f-5a61-4cd0-88cb-d4b632f9c5ce","Type":"ContainerStarted","Data":"52db3a3c736b09dac54336e66aef64a07ef483c2f7ceb4cb933018e5442db6cc"} Jan 27 14:35:37 crc kubenswrapper[4698]: I0127 14:35:37.099998 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-5b58bbbb56-sgbbt" Jan 27 14:35:37 crc kubenswrapper[4698]: I0127 14:35:37.103397 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-route-controller-manager/route-controller-manager-7b95856b87-jsw9p" Jan 27 14:35:37 crc kubenswrapper[4698]: I0127 14:35:37.104162 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-5b58bbbb56-sgbbt" Jan 27 14:35:37 crc kubenswrapper[4698]: I0127 14:35:37.118286 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-7b95856b87-jsw9p" podStartSLOduration=4.118268657 podStartE2EDuration="4.118268657s" podCreationTimestamp="2026-01-27 14:35:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:35:37.113340946 +0000 UTC m=+392.790118411" watchObservedRunningTime="2026-01-27 14:35:37.118268657 +0000 UTC m=+392.795046122" Jan 27 14:35:37 crc kubenswrapper[4698]: I0127 14:35:37.148813 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-5b58bbbb56-sgbbt" podStartSLOduration=4.148794042 podStartE2EDuration="4.148794042s" podCreationTimestamp="2026-01-27 14:35:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:35:37.145264438 +0000 UTC m=+392.822041933" watchObservedRunningTime="2026-01-27 14:35:37.148794042 +0000 UTC m=+392.825571517" Jan 27 14:35:53 crc kubenswrapper[4698]: I0127 14:35:53.115986 4698 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-558db77b4-x7rj5" podUID="7a699460-e5aa-401d-b2c4-003604099924" containerName="oauth-openshift" containerID="cri-o://8c3c3a78cdf63a26bd61f5e6d1b218a4f221b84cbb9fe65bc04ccb1ea3958bb2" gracePeriod=15 Jan 27 14:35:53 crc kubenswrapper[4698]: I0127 14:35:53.583337 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-x7rj5" Jan 27 14:35:53 crc kubenswrapper[4698]: I0127 14:35:53.618020 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-5bb89855bb-7vwgz"] Jan 27 14:35:53 crc kubenswrapper[4698]: E0127 14:35:53.618227 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7a699460-e5aa-401d-b2c4-003604099924" containerName="oauth-openshift" Jan 27 14:35:53 crc kubenswrapper[4698]: I0127 14:35:53.618239 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="7a699460-e5aa-401d-b2c4-003604099924" containerName="oauth-openshift" Jan 27 14:35:53 crc kubenswrapper[4698]: I0127 14:35:53.618333 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="7a699460-e5aa-401d-b2c4-003604099924" containerName="oauth-openshift" Jan 27 14:35:53 crc kubenswrapper[4698]: I0127 14:35:53.618742 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-5bb89855bb-7vwgz" Jan 27 14:35:53 crc kubenswrapper[4698]: I0127 14:35:53.626757 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-5bb89855bb-7vwgz"] Jan 27 14:35:53 crc kubenswrapper[4698]: I0127 14:35:53.676055 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/7a699460-e5aa-401d-b2c4-003604099924-v4-0-config-system-session\") pod \"7a699460-e5aa-401d-b2c4-003604099924\" (UID: \"7a699460-e5aa-401d-b2c4-003604099924\") " Jan 27 14:35:53 crc kubenswrapper[4698]: I0127 14:35:53.676123 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/7a699460-e5aa-401d-b2c4-003604099924-v4-0-config-user-template-login\") pod \"7a699460-e5aa-401d-b2c4-003604099924\" (UID: \"7a699460-e5aa-401d-b2c4-003604099924\") " Jan 27 14:35:53 crc kubenswrapper[4698]: I0127 14:35:53.676178 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6zxfx\" (UniqueName: \"kubernetes.io/projected/7a699460-e5aa-401d-b2c4-003604099924-kube-api-access-6zxfx\") pod \"7a699460-e5aa-401d-b2c4-003604099924\" (UID: \"7a699460-e5aa-401d-b2c4-003604099924\") " Jan 27 14:35:53 crc kubenswrapper[4698]: I0127 14:35:53.676206 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/7a699460-e5aa-401d-b2c4-003604099924-audit-dir\") pod \"7a699460-e5aa-401d-b2c4-003604099924\" (UID: \"7a699460-e5aa-401d-b2c4-003604099924\") " Jan 27 14:35:53 crc kubenswrapper[4698]: I0127 14:35:53.676232 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/7a699460-e5aa-401d-b2c4-003604099924-v4-0-config-user-idp-0-file-data\") pod \"7a699460-e5aa-401d-b2c4-003604099924\" (UID: \"7a699460-e5aa-401d-b2c4-003604099924\") " Jan 27 14:35:53 crc kubenswrapper[4698]: I0127 14:35:53.676328 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/7a699460-e5aa-401d-b2c4-003604099924-audit-policies\") pod \"7a699460-e5aa-401d-b2c4-003604099924\" (UID: \"7a699460-e5aa-401d-b2c4-003604099924\") " Jan 27 14:35:53 crc kubenswrapper[4698]: I0127 14:35:53.676360 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/7a699460-e5aa-401d-b2c4-003604099924-v4-0-config-system-router-certs\") pod \"7a699460-e5aa-401d-b2c4-003604099924\" (UID: \"7a699460-e5aa-401d-b2c4-003604099924\") " Jan 27 14:35:53 crc kubenswrapper[4698]: I0127 14:35:53.676401 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/7a699460-e5aa-401d-b2c4-003604099924-v4-0-config-system-cliconfig\") pod \"7a699460-e5aa-401d-b2c4-003604099924\" (UID: \"7a699460-e5aa-401d-b2c4-003604099924\") " Jan 27 14:35:53 crc kubenswrapper[4698]: I0127 14:35:53.676428 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7a699460-e5aa-401d-b2c4-003604099924-v4-0-config-system-trusted-ca-bundle\") 
pod \"7a699460-e5aa-401d-b2c4-003604099924\" (UID: \"7a699460-e5aa-401d-b2c4-003604099924\") " Jan 27 14:35:53 crc kubenswrapper[4698]: I0127 14:35:53.676474 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/7a699460-e5aa-401d-b2c4-003604099924-v4-0-config-system-service-ca\") pod \"7a699460-e5aa-401d-b2c4-003604099924\" (UID: \"7a699460-e5aa-401d-b2c4-003604099924\") " Jan 27 14:35:53 crc kubenswrapper[4698]: I0127 14:35:53.676497 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/7a699460-e5aa-401d-b2c4-003604099924-v4-0-config-user-template-error\") pod \"7a699460-e5aa-401d-b2c4-003604099924\" (UID: \"7a699460-e5aa-401d-b2c4-003604099924\") " Jan 27 14:35:53 crc kubenswrapper[4698]: I0127 14:35:53.676544 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/7a699460-e5aa-401d-b2c4-003604099924-v4-0-config-system-ocp-branding-template\") pod \"7a699460-e5aa-401d-b2c4-003604099924\" (UID: \"7a699460-e5aa-401d-b2c4-003604099924\") " Jan 27 14:35:53 crc kubenswrapper[4698]: I0127 14:35:53.676567 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/7a699460-e5aa-401d-b2c4-003604099924-v4-0-config-system-serving-cert\") pod \"7a699460-e5aa-401d-b2c4-003604099924\" (UID: \"7a699460-e5aa-401d-b2c4-003604099924\") " Jan 27 14:35:53 crc kubenswrapper[4698]: I0127 14:35:53.676589 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/7a699460-e5aa-401d-b2c4-003604099924-v4-0-config-user-template-provider-selection\") pod \"7a699460-e5aa-401d-b2c4-003604099924\" (UID: \"7a699460-e5aa-401d-b2c4-003604099924\") " Jan 27 14:35:53 crc kubenswrapper[4698]: I0127 14:35:53.677978 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7a699460-e5aa-401d-b2c4-003604099924-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "7a699460-e5aa-401d-b2c4-003604099924" (UID: "7a699460-e5aa-401d-b2c4-003604099924"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 14:35:53 crc kubenswrapper[4698]: I0127 14:35:53.678527 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7a699460-e5aa-401d-b2c4-003604099924-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "7a699460-e5aa-401d-b2c4-003604099924" (UID: "7a699460-e5aa-401d-b2c4-003604099924"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:35:53 crc kubenswrapper[4698]: I0127 14:35:53.678547 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7a699460-e5aa-401d-b2c4-003604099924-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "7a699460-e5aa-401d-b2c4-003604099924" (UID: "7a699460-e5aa-401d-b2c4-003604099924"). InnerVolumeSpecName "v4-0-config-system-cliconfig". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:35:53 crc kubenswrapper[4698]: I0127 14:35:53.678880 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7a699460-e5aa-401d-b2c4-003604099924-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "7a699460-e5aa-401d-b2c4-003604099924" (UID: "7a699460-e5aa-401d-b2c4-003604099924"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:35:53 crc kubenswrapper[4698]: I0127 14:35:53.678920 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7a699460-e5aa-401d-b2c4-003604099924-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "7a699460-e5aa-401d-b2c4-003604099924" (UID: "7a699460-e5aa-401d-b2c4-003604099924"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:35:53 crc kubenswrapper[4698]: I0127 14:35:53.682593 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7a699460-e5aa-401d-b2c4-003604099924-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "7a699460-e5aa-401d-b2c4-003604099924" (UID: "7a699460-e5aa-401d-b2c4-003604099924"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:35:53 crc kubenswrapper[4698]: I0127 14:35:53.682971 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7a699460-e5aa-401d-b2c4-003604099924-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "7a699460-e5aa-401d-b2c4-003604099924" (UID: "7a699460-e5aa-401d-b2c4-003604099924"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:35:53 crc kubenswrapper[4698]: I0127 14:35:53.683304 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7a699460-e5aa-401d-b2c4-003604099924-kube-api-access-6zxfx" (OuterVolumeSpecName: "kube-api-access-6zxfx") pod "7a699460-e5aa-401d-b2c4-003604099924" (UID: "7a699460-e5aa-401d-b2c4-003604099924"). InnerVolumeSpecName "kube-api-access-6zxfx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:35:53 crc kubenswrapper[4698]: I0127 14:35:53.683385 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7a699460-e5aa-401d-b2c4-003604099924-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "7a699460-e5aa-401d-b2c4-003604099924" (UID: "7a699460-e5aa-401d-b2c4-003604099924"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:35:53 crc kubenswrapper[4698]: I0127 14:35:53.683542 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7a699460-e5aa-401d-b2c4-003604099924-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "7a699460-e5aa-401d-b2c4-003604099924" (UID: "7a699460-e5aa-401d-b2c4-003604099924"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:35:53 crc kubenswrapper[4698]: I0127 14:35:53.685257 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7a699460-e5aa-401d-b2c4-003604099924-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "7a699460-e5aa-401d-b2c4-003604099924" (UID: "7a699460-e5aa-401d-b2c4-003604099924"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:35:53 crc kubenswrapper[4698]: I0127 14:35:53.687894 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7a699460-e5aa-401d-b2c4-003604099924-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "7a699460-e5aa-401d-b2c4-003604099924" (UID: "7a699460-e5aa-401d-b2c4-003604099924"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:35:53 crc kubenswrapper[4698]: I0127 14:35:53.689150 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7a699460-e5aa-401d-b2c4-003604099924-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "7a699460-e5aa-401d-b2c4-003604099924" (UID: "7a699460-e5aa-401d-b2c4-003604099924"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:35:53 crc kubenswrapper[4698]: I0127 14:35:53.693006 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7a699460-e5aa-401d-b2c4-003604099924-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "7a699460-e5aa-401d-b2c4-003604099924" (UID: "7a699460-e5aa-401d-b2c4-003604099924"). InnerVolumeSpecName "v4-0-config-system-session". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:35:53 crc kubenswrapper[4698]: I0127 14:35:53.735385 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-5b58bbbb56-sgbbt"] Jan 27 14:35:53 crc kubenswrapper[4698]: I0127 14:35:53.735671 4698 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-5b58bbbb56-sgbbt" podUID="e167173f-5a61-4cd0-88cb-d4b632f9c5ce" containerName="controller-manager" containerID="cri-o://7c575055a885fa51d8e8f01bdba77bf1711da8914725f30acc9d2ff30ddc4a83" gracePeriod=30 Jan 27 14:35:53 crc kubenswrapper[4698]: I0127 14:35:53.766154 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7b95856b87-jsw9p"] Jan 27 14:35:53 crc kubenswrapper[4698]: I0127 14:35:53.766339 4698 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-7b95856b87-jsw9p" podUID="97785ecd-6072-42fe-826e-77a8766bcb2d" containerName="route-controller-manager" containerID="cri-o://89c27c8f0a3d1a49c24f1168f83e40e31395aabd4b90dc530a21d9b5570696e0" gracePeriod=30 Jan 27 14:35:53 crc kubenswrapper[4698]: I0127 14:35:53.778366 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/d454a95f-0b42-4ae8-8f93-26fa6b9166b6-v4-0-config-system-router-certs\") pod \"oauth-openshift-5bb89855bb-7vwgz\" (UID: \"d454a95f-0b42-4ae8-8f93-26fa6b9166b6\") " pod="openshift-authentication/oauth-openshift-5bb89855bb-7vwgz" Jan 27 14:35:53 crc kubenswrapper[4698]: I0127 14:35:53.778433 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/d454a95f-0b42-4ae8-8f93-26fa6b9166b6-v4-0-config-user-template-error\") pod \"oauth-openshift-5bb89855bb-7vwgz\" (UID: \"d454a95f-0b42-4ae8-8f93-26fa6b9166b6\") " pod="openshift-authentication/oauth-openshift-5bb89855bb-7vwgz" Jan 27 14:35:53 crc kubenswrapper[4698]: I0127 14:35:53.778514 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/d454a95f-0b42-4ae8-8f93-26fa6b9166b6-v4-0-config-system-session\") pod \"oauth-openshift-5bb89855bb-7vwgz\" (UID: \"d454a95f-0b42-4ae8-8f93-26fa6b9166b6\") " pod="openshift-authentication/oauth-openshift-5bb89855bb-7vwgz" Jan 27 14:35:53 crc kubenswrapper[4698]: I0127 14:35:53.778535 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/d454a95f-0b42-4ae8-8f93-26fa6b9166b6-audit-policies\") pod \"oauth-openshift-5bb89855bb-7vwgz\" (UID: \"d454a95f-0b42-4ae8-8f93-26fa6b9166b6\") " pod="openshift-authentication/oauth-openshift-5bb89855bb-7vwgz" Jan 27 14:35:53 crc kubenswrapper[4698]: I0127 14:35:53.778580 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/d454a95f-0b42-4ae8-8f93-26fa6b9166b6-v4-0-config-user-template-login\") pod \"oauth-openshift-5bb89855bb-7vwgz\" (UID: \"d454a95f-0b42-4ae8-8f93-26fa6b9166b6\") " pod="openshift-authentication/oauth-openshift-5bb89855bb-7vwgz" Jan 27 14:35:53 crc 
kubenswrapper[4698]: I0127 14:35:53.778599 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/d454a95f-0b42-4ae8-8f93-26fa6b9166b6-v4-0-config-system-serving-cert\") pod \"oauth-openshift-5bb89855bb-7vwgz\" (UID: \"d454a95f-0b42-4ae8-8f93-26fa6b9166b6\") " pod="openshift-authentication/oauth-openshift-5bb89855bb-7vwgz" Jan 27 14:35:53 crc kubenswrapper[4698]: I0127 14:35:53.778701 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/d454a95f-0b42-4ae8-8f93-26fa6b9166b6-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-5bb89855bb-7vwgz\" (UID: \"d454a95f-0b42-4ae8-8f93-26fa6b9166b6\") " pod="openshift-authentication/oauth-openshift-5bb89855bb-7vwgz" Jan 27 14:35:53 crc kubenswrapper[4698]: I0127 14:35:53.778758 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/d454a95f-0b42-4ae8-8f93-26fa6b9166b6-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-5bb89855bb-7vwgz\" (UID: \"d454a95f-0b42-4ae8-8f93-26fa6b9166b6\") " pod="openshift-authentication/oauth-openshift-5bb89855bb-7vwgz" Jan 27 14:35:53 crc kubenswrapper[4698]: I0127 14:35:53.778791 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/d454a95f-0b42-4ae8-8f93-26fa6b9166b6-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-5bb89855bb-7vwgz\" (UID: \"d454a95f-0b42-4ae8-8f93-26fa6b9166b6\") " pod="openshift-authentication/oauth-openshift-5bb89855bb-7vwgz" Jan 27 14:35:53 crc kubenswrapper[4698]: I0127 14:35:53.778854 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d454a95f-0b42-4ae8-8f93-26fa6b9166b6-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-5bb89855bb-7vwgz\" (UID: \"d454a95f-0b42-4ae8-8f93-26fa6b9166b6\") " pod="openshift-authentication/oauth-openshift-5bb89855bb-7vwgz" Jan 27 14:35:53 crc kubenswrapper[4698]: I0127 14:35:53.778878 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-drl7x\" (UniqueName: \"kubernetes.io/projected/d454a95f-0b42-4ae8-8f93-26fa6b9166b6-kube-api-access-drl7x\") pod \"oauth-openshift-5bb89855bb-7vwgz\" (UID: \"d454a95f-0b42-4ae8-8f93-26fa6b9166b6\") " pod="openshift-authentication/oauth-openshift-5bb89855bb-7vwgz" Jan 27 14:35:53 crc kubenswrapper[4698]: I0127 14:35:53.778923 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/d454a95f-0b42-4ae8-8f93-26fa6b9166b6-v4-0-config-system-service-ca\") pod \"oauth-openshift-5bb89855bb-7vwgz\" (UID: \"d454a95f-0b42-4ae8-8f93-26fa6b9166b6\") " pod="openshift-authentication/oauth-openshift-5bb89855bb-7vwgz" Jan 27 14:35:53 crc kubenswrapper[4698]: I0127 14:35:53.778942 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: 
\"kubernetes.io/configmap/d454a95f-0b42-4ae8-8f93-26fa6b9166b6-v4-0-config-system-cliconfig\") pod \"oauth-openshift-5bb89855bb-7vwgz\" (UID: \"d454a95f-0b42-4ae8-8f93-26fa6b9166b6\") " pod="openshift-authentication/oauth-openshift-5bb89855bb-7vwgz" Jan 27 14:35:53 crc kubenswrapper[4698]: I0127 14:35:53.778993 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/d454a95f-0b42-4ae8-8f93-26fa6b9166b6-audit-dir\") pod \"oauth-openshift-5bb89855bb-7vwgz\" (UID: \"d454a95f-0b42-4ae8-8f93-26fa6b9166b6\") " pod="openshift-authentication/oauth-openshift-5bb89855bb-7vwgz" Jan 27 14:35:53 crc kubenswrapper[4698]: I0127 14:35:53.779073 4698 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/7a699460-e5aa-401d-b2c4-003604099924-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Jan 27 14:35:53 crc kubenswrapper[4698]: I0127 14:35:53.779086 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6zxfx\" (UniqueName: \"kubernetes.io/projected/7a699460-e5aa-401d-b2c4-003604099924-kube-api-access-6zxfx\") on node \"crc\" DevicePath \"\"" Jan 27 14:35:53 crc kubenswrapper[4698]: I0127 14:35:53.779120 4698 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/7a699460-e5aa-401d-b2c4-003604099924-audit-dir\") on node \"crc\" DevicePath \"\"" Jan 27 14:35:53 crc kubenswrapper[4698]: I0127 14:35:53.779129 4698 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/7a699460-e5aa-401d-b2c4-003604099924-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Jan 27 14:35:53 crc kubenswrapper[4698]: I0127 14:35:53.779137 4698 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/7a699460-e5aa-401d-b2c4-003604099924-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 27 14:35:53 crc kubenswrapper[4698]: I0127 14:35:53.779146 4698 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/7a699460-e5aa-401d-b2c4-003604099924-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Jan 27 14:35:53 crc kubenswrapper[4698]: I0127 14:35:53.779156 4698 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/7a699460-e5aa-401d-b2c4-003604099924-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Jan 27 14:35:53 crc kubenswrapper[4698]: I0127 14:35:53.779164 4698 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7a699460-e5aa-401d-b2c4-003604099924-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 14:35:53 crc kubenswrapper[4698]: I0127 14:35:53.779175 4698 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/7a699460-e5aa-401d-b2c4-003604099924-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Jan 27 14:35:53 crc kubenswrapper[4698]: I0127 14:35:53.779184 4698 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/7a699460-e5aa-401d-b2c4-003604099924-v4-0-config-system-service-ca\") on 
node \"crc\" DevicePath \"\"" Jan 27 14:35:53 crc kubenswrapper[4698]: I0127 14:35:53.779193 4698 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/7a699460-e5aa-401d-b2c4-003604099924-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Jan 27 14:35:53 crc kubenswrapper[4698]: I0127 14:35:53.779201 4698 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/7a699460-e5aa-401d-b2c4-003604099924-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 14:35:53 crc kubenswrapper[4698]: I0127 14:35:53.779210 4698 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/7a699460-e5aa-401d-b2c4-003604099924-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Jan 27 14:35:53 crc kubenswrapper[4698]: I0127 14:35:53.779219 4698 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/7a699460-e5aa-401d-b2c4-003604099924-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Jan 27 14:35:53 crc kubenswrapper[4698]: I0127 14:35:53.879963 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/d454a95f-0b42-4ae8-8f93-26fa6b9166b6-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-5bb89855bb-7vwgz\" (UID: \"d454a95f-0b42-4ae8-8f93-26fa6b9166b6\") " pod="openshift-authentication/oauth-openshift-5bb89855bb-7vwgz" Jan 27 14:35:53 crc kubenswrapper[4698]: I0127 14:35:53.880004 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d454a95f-0b42-4ae8-8f93-26fa6b9166b6-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-5bb89855bb-7vwgz\" (UID: \"d454a95f-0b42-4ae8-8f93-26fa6b9166b6\") " pod="openshift-authentication/oauth-openshift-5bb89855bb-7vwgz" Jan 27 14:35:53 crc kubenswrapper[4698]: I0127 14:35:53.880028 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-drl7x\" (UniqueName: \"kubernetes.io/projected/d454a95f-0b42-4ae8-8f93-26fa6b9166b6-kube-api-access-drl7x\") pod \"oauth-openshift-5bb89855bb-7vwgz\" (UID: \"d454a95f-0b42-4ae8-8f93-26fa6b9166b6\") " pod="openshift-authentication/oauth-openshift-5bb89855bb-7vwgz" Jan 27 14:35:53 crc kubenswrapper[4698]: I0127 14:35:53.880051 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/d454a95f-0b42-4ae8-8f93-26fa6b9166b6-v4-0-config-system-service-ca\") pod \"oauth-openshift-5bb89855bb-7vwgz\" (UID: \"d454a95f-0b42-4ae8-8f93-26fa6b9166b6\") " pod="openshift-authentication/oauth-openshift-5bb89855bb-7vwgz" Jan 27 14:35:53 crc kubenswrapper[4698]: I0127 14:35:53.880079 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/d454a95f-0b42-4ae8-8f93-26fa6b9166b6-v4-0-config-system-cliconfig\") pod \"oauth-openshift-5bb89855bb-7vwgz\" (UID: \"d454a95f-0b42-4ae8-8f93-26fa6b9166b6\") " pod="openshift-authentication/oauth-openshift-5bb89855bb-7vwgz" Jan 27 14:35:53 crc kubenswrapper[4698]: I0127 
14:35:53.880103 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/d454a95f-0b42-4ae8-8f93-26fa6b9166b6-audit-dir\") pod \"oauth-openshift-5bb89855bb-7vwgz\" (UID: \"d454a95f-0b42-4ae8-8f93-26fa6b9166b6\") " pod="openshift-authentication/oauth-openshift-5bb89855bb-7vwgz" Jan 27 14:35:53 crc kubenswrapper[4698]: I0127 14:35:53.880126 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/d454a95f-0b42-4ae8-8f93-26fa6b9166b6-v4-0-config-system-router-certs\") pod \"oauth-openshift-5bb89855bb-7vwgz\" (UID: \"d454a95f-0b42-4ae8-8f93-26fa6b9166b6\") " pod="openshift-authentication/oauth-openshift-5bb89855bb-7vwgz" Jan 27 14:35:53 crc kubenswrapper[4698]: I0127 14:35:53.880145 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/d454a95f-0b42-4ae8-8f93-26fa6b9166b6-v4-0-config-user-template-error\") pod \"oauth-openshift-5bb89855bb-7vwgz\" (UID: \"d454a95f-0b42-4ae8-8f93-26fa6b9166b6\") " pod="openshift-authentication/oauth-openshift-5bb89855bb-7vwgz" Jan 27 14:35:53 crc kubenswrapper[4698]: I0127 14:35:53.880173 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/d454a95f-0b42-4ae8-8f93-26fa6b9166b6-v4-0-config-system-session\") pod \"oauth-openshift-5bb89855bb-7vwgz\" (UID: \"d454a95f-0b42-4ae8-8f93-26fa6b9166b6\") " pod="openshift-authentication/oauth-openshift-5bb89855bb-7vwgz" Jan 27 14:35:53 crc kubenswrapper[4698]: I0127 14:35:53.880190 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/d454a95f-0b42-4ae8-8f93-26fa6b9166b6-audit-policies\") pod \"oauth-openshift-5bb89855bb-7vwgz\" (UID: \"d454a95f-0b42-4ae8-8f93-26fa6b9166b6\") " pod="openshift-authentication/oauth-openshift-5bb89855bb-7vwgz" Jan 27 14:35:53 crc kubenswrapper[4698]: I0127 14:35:53.880210 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/d454a95f-0b42-4ae8-8f93-26fa6b9166b6-v4-0-config-user-template-login\") pod \"oauth-openshift-5bb89855bb-7vwgz\" (UID: \"d454a95f-0b42-4ae8-8f93-26fa6b9166b6\") " pod="openshift-authentication/oauth-openshift-5bb89855bb-7vwgz" Jan 27 14:35:53 crc kubenswrapper[4698]: I0127 14:35:53.880229 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/d454a95f-0b42-4ae8-8f93-26fa6b9166b6-v4-0-config-system-serving-cert\") pod \"oauth-openshift-5bb89855bb-7vwgz\" (UID: \"d454a95f-0b42-4ae8-8f93-26fa6b9166b6\") " pod="openshift-authentication/oauth-openshift-5bb89855bb-7vwgz" Jan 27 14:35:53 crc kubenswrapper[4698]: I0127 14:35:53.880252 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/d454a95f-0b42-4ae8-8f93-26fa6b9166b6-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-5bb89855bb-7vwgz\" (UID: \"d454a95f-0b42-4ae8-8f93-26fa6b9166b6\") " pod="openshift-authentication/oauth-openshift-5bb89855bb-7vwgz" Jan 27 14:35:53 crc kubenswrapper[4698]: I0127 14:35:53.880267 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/d454a95f-0b42-4ae8-8f93-26fa6b9166b6-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-5bb89855bb-7vwgz\" (UID: \"d454a95f-0b42-4ae8-8f93-26fa6b9166b6\") " pod="openshift-authentication/oauth-openshift-5bb89855bb-7vwgz" Jan 27 14:35:53 crc kubenswrapper[4698]: I0127 14:35:53.881362 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/d454a95f-0b42-4ae8-8f93-26fa6b9166b6-v4-0-config-system-service-ca\") pod \"oauth-openshift-5bb89855bb-7vwgz\" (UID: \"d454a95f-0b42-4ae8-8f93-26fa6b9166b6\") " pod="openshift-authentication/oauth-openshift-5bb89855bb-7vwgz" Jan 27 14:35:53 crc kubenswrapper[4698]: I0127 14:35:53.882100 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d454a95f-0b42-4ae8-8f93-26fa6b9166b6-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-5bb89855bb-7vwgz\" (UID: \"d454a95f-0b42-4ae8-8f93-26fa6b9166b6\") " pod="openshift-authentication/oauth-openshift-5bb89855bb-7vwgz" Jan 27 14:35:53 crc kubenswrapper[4698]: I0127 14:35:53.882255 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/d454a95f-0b42-4ae8-8f93-26fa6b9166b6-audit-policies\") pod \"oauth-openshift-5bb89855bb-7vwgz\" (UID: \"d454a95f-0b42-4ae8-8f93-26fa6b9166b6\") " pod="openshift-authentication/oauth-openshift-5bb89855bb-7vwgz" Jan 27 14:35:53 crc kubenswrapper[4698]: I0127 14:35:53.882320 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/d454a95f-0b42-4ae8-8f93-26fa6b9166b6-audit-dir\") pod \"oauth-openshift-5bb89855bb-7vwgz\" (UID: \"d454a95f-0b42-4ae8-8f93-26fa6b9166b6\") " pod="openshift-authentication/oauth-openshift-5bb89855bb-7vwgz" Jan 27 14:35:53 crc kubenswrapper[4698]: I0127 14:35:53.882879 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/d454a95f-0b42-4ae8-8f93-26fa6b9166b6-v4-0-config-system-cliconfig\") pod \"oauth-openshift-5bb89855bb-7vwgz\" (UID: \"d454a95f-0b42-4ae8-8f93-26fa6b9166b6\") " pod="openshift-authentication/oauth-openshift-5bb89855bb-7vwgz" Jan 27 14:35:53 crc kubenswrapper[4698]: I0127 14:35:53.884775 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/d454a95f-0b42-4ae8-8f93-26fa6b9166b6-v4-0-config-system-session\") pod \"oauth-openshift-5bb89855bb-7vwgz\" (UID: \"d454a95f-0b42-4ae8-8f93-26fa6b9166b6\") " pod="openshift-authentication/oauth-openshift-5bb89855bb-7vwgz" Jan 27 14:35:53 crc kubenswrapper[4698]: I0127 14:35:53.885250 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/d454a95f-0b42-4ae8-8f93-26fa6b9166b6-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-5bb89855bb-7vwgz\" (UID: \"d454a95f-0b42-4ae8-8f93-26fa6b9166b6\") " pod="openshift-authentication/oauth-openshift-5bb89855bb-7vwgz" Jan 27 14:35:53 crc kubenswrapper[4698]: I0127 14:35:53.885753 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: 
\"kubernetes.io/secret/d454a95f-0b42-4ae8-8f93-26fa6b9166b6-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-5bb89855bb-7vwgz\" (UID: \"d454a95f-0b42-4ae8-8f93-26fa6b9166b6\") " pod="openshift-authentication/oauth-openshift-5bb89855bb-7vwgz" Jan 27 14:35:53 crc kubenswrapper[4698]: I0127 14:35:53.885950 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/d454a95f-0b42-4ae8-8f93-26fa6b9166b6-v4-0-config-user-template-login\") pod \"oauth-openshift-5bb89855bb-7vwgz\" (UID: \"d454a95f-0b42-4ae8-8f93-26fa6b9166b6\") " pod="openshift-authentication/oauth-openshift-5bb89855bb-7vwgz" Jan 27 14:35:53 crc kubenswrapper[4698]: I0127 14:35:53.886306 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/d454a95f-0b42-4ae8-8f93-26fa6b9166b6-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-5bb89855bb-7vwgz\" (UID: \"d454a95f-0b42-4ae8-8f93-26fa6b9166b6\") " pod="openshift-authentication/oauth-openshift-5bb89855bb-7vwgz" Jan 27 14:35:53 crc kubenswrapper[4698]: I0127 14:35:53.886448 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/d454a95f-0b42-4ae8-8f93-26fa6b9166b6-v4-0-config-system-router-certs\") pod \"oauth-openshift-5bb89855bb-7vwgz\" (UID: \"d454a95f-0b42-4ae8-8f93-26fa6b9166b6\") " pod="openshift-authentication/oauth-openshift-5bb89855bb-7vwgz" Jan 27 14:35:53 crc kubenswrapper[4698]: I0127 14:35:53.889133 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/d454a95f-0b42-4ae8-8f93-26fa6b9166b6-v4-0-config-system-serving-cert\") pod \"oauth-openshift-5bb89855bb-7vwgz\" (UID: \"d454a95f-0b42-4ae8-8f93-26fa6b9166b6\") " pod="openshift-authentication/oauth-openshift-5bb89855bb-7vwgz" Jan 27 14:35:53 crc kubenswrapper[4698]: I0127 14:35:53.890566 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/d454a95f-0b42-4ae8-8f93-26fa6b9166b6-v4-0-config-user-template-error\") pod \"oauth-openshift-5bb89855bb-7vwgz\" (UID: \"d454a95f-0b42-4ae8-8f93-26fa6b9166b6\") " pod="openshift-authentication/oauth-openshift-5bb89855bb-7vwgz" Jan 27 14:35:53 crc kubenswrapper[4698]: I0127 14:35:53.900235 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-drl7x\" (UniqueName: \"kubernetes.io/projected/d454a95f-0b42-4ae8-8f93-26fa6b9166b6-kube-api-access-drl7x\") pod \"oauth-openshift-5bb89855bb-7vwgz\" (UID: \"d454a95f-0b42-4ae8-8f93-26fa6b9166b6\") " pod="openshift-authentication/oauth-openshift-5bb89855bb-7vwgz" Jan 27 14:35:53 crc kubenswrapper[4698]: I0127 14:35:53.932867 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-5bb89855bb-7vwgz"
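
The run of records above shows the kubelet's volume manager bringing up the replacement oauth-openshift pod: reconciler_common.go logs "operationExecutor.MountVolume started" for every volume in the pod spec, operation_generator.go confirms each with "MountVolume.SetUp succeeded", and only once the volumes are in place does util.go note that no sandbox exists for the pod and start one. A minimal Go sketch of the desired-versus-actual reconciliation shape behind these messages (illustrative only, with invented names; not the kubelet's actual code):

    package main

    import (
        "fmt"
        "time"
    )

    // Toy model of the volume manager's reconcile step: compare the volumes the
    // pod spec wants against the volumes actually mounted, mount the missing
    // ones, and unmount the orphans. Invented names; not the kubelet source.
    func reconcile(desired, actual map[string]bool) {
        for vol := range desired {
            if !actual[vol] {
                fmt.Printf("operationExecutor.MountVolume started for volume %q\n", vol)
                actual[vol] = true // stands in for the real SetUp call
                fmt.Printf("MountVolume.SetUp succeeded for volume %q\n", vol)
            }
        }
        for vol := range actual {
            if !desired[vol] {
                fmt.Printf("operationExecutor.UnmountVolume started for volume %q\n", vol)
                delete(actual, vol)
            }
        }
    }

    func main() {
        desired := map[string]bool{"audit-dir": true, "v4-0-config-system-session": true}
        actual := map[string]bool{}
        for i := 0; i < 3; i++ {
            reconcile(desired, actual)
            time.Sleep(100 * time.Millisecond) // the real loop runs on a short fixed period
        }
    }

The real reconciler also verifies controller attachment first and retries failed mounts with backoff; the loop above only mirrors the shape of the logged transitions.
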
Jan 27 14:35:54 crc kubenswrapper[4698]: I0127 14:35:54.186036 4698 generic.go:334] "Generic (PLEG): container finished" podID="e167173f-5a61-4cd0-88cb-d4b632f9c5ce" containerID="7c575055a885fa51d8e8f01bdba77bf1711da8914725f30acc9d2ff30ddc4a83" exitCode=0 Jan 27 14:35:54 crc kubenswrapper[4698]: I0127 14:35:54.186083 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5b58bbbb56-sgbbt" event={"ID":"e167173f-5a61-4cd0-88cb-d4b632f9c5ce","Type":"ContainerDied","Data":"7c575055a885fa51d8e8f01bdba77bf1711da8914725f30acc9d2ff30ddc4a83"} Jan 27 14:35:54 crc kubenswrapper[4698]: I0127 14:35:54.188529 4698 generic.go:334] "Generic (PLEG): container finished" podID="97785ecd-6072-42fe-826e-77a8766bcb2d" containerID="89c27c8f0a3d1a49c24f1168f83e40e31395aabd4b90dc530a21d9b5570696e0" exitCode=0 Jan 27 14:35:54 crc kubenswrapper[4698]: I0127 14:35:54.188717 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7b95856b87-jsw9p" event={"ID":"97785ecd-6072-42fe-826e-77a8766bcb2d","Type":"ContainerDied","Data":"89c27c8f0a3d1a49c24f1168f83e40e31395aabd4b90dc530a21d9b5570696e0"} Jan 27 14:35:54 crc kubenswrapper[4698]: I0127 14:35:54.190003 4698 generic.go:334] "Generic (PLEG): container finished" podID="7a699460-e5aa-401d-b2c4-003604099924" containerID="8c3c3a78cdf63a26bd61f5e6d1b218a4f221b84cbb9fe65bc04ccb1ea3958bb2" exitCode=0 Jan 27 14:35:54 crc kubenswrapper[4698]: I0127 14:35:54.190046 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-x7rj5" event={"ID":"7a699460-e5aa-401d-b2c4-003604099924","Type":"ContainerDied","Data":"8c3c3a78cdf63a26bd61f5e6d1b218a4f221b84cbb9fe65bc04ccb1ea3958bb2"} Jan 27 14:35:54 crc kubenswrapper[4698]: I0127 14:35:54.190076 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-x7rj5" event={"ID":"7a699460-e5aa-401d-b2c4-003604099924","Type":"ContainerDied","Data":"1fd4755c4a0ab20d0cd1d5db99b985ee96ab005a5ff4026c22647173f90cfd55"} Jan 27 14:35:54 crc kubenswrapper[4698]: I0127 14:35:54.190096 4698 scope.go:117] "RemoveContainer" containerID="8c3c3a78cdf63a26bd61f5e6d1b218a4f221b84cbb9fe65bc04ccb1ea3958bb2" Jan 27 14:35:54 crc kubenswrapper[4698]: I0127 14:35:54.190208 4698 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-x7rj5" Jan 27 14:35:54 crc kubenswrapper[4698]: I0127 14:35:54.219619 4698 scope.go:117] "RemoveContainer" containerID="8c3c3a78cdf63a26bd61f5e6d1b218a4f221b84cbb9fe65bc04ccb1ea3958bb2" Jan 27 14:35:54 crc kubenswrapper[4698]: E0127 14:35:54.220149 4698 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8c3c3a78cdf63a26bd61f5e6d1b218a4f221b84cbb9fe65bc04ccb1ea3958bb2\": container with ID starting with 8c3c3a78cdf63a26bd61f5e6d1b218a4f221b84cbb9fe65bc04ccb1ea3958bb2 not found: ID does not exist" containerID="8c3c3a78cdf63a26bd61f5e6d1b218a4f221b84cbb9fe65bc04ccb1ea3958bb2" Jan 27 14:35:54 crc kubenswrapper[4698]: I0127 14:35:54.220181 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8c3c3a78cdf63a26bd61f5e6d1b218a4f221b84cbb9fe65bc04ccb1ea3958bb2"} err="failed to get container status \"8c3c3a78cdf63a26bd61f5e6d1b218a4f221b84cbb9fe65bc04ccb1ea3958bb2\": rpc error: code = NotFound desc = could not find container \"8c3c3a78cdf63a26bd61f5e6d1b218a4f221b84cbb9fe65bc04ccb1ea3958bb2\": container with ID starting with 8c3c3a78cdf63a26bd61f5e6d1b218a4f221b84cbb9fe65bc04ccb1ea3958bb2 not found: ID does not exist" Jan 27 14:35:54 crc kubenswrapper[4698]: I0127 14:35:54.241965 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-x7rj5"] Jan 27 14:35:54 crc kubenswrapper[4698]: I0127 14:35:54.256299 4698 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-x7rj5"] Jan 27 14:35:54 crc kubenswrapper[4698]: I0127 14:35:54.351920 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7b95856b87-jsw9p" Jan 27 14:35:54 crc kubenswrapper[4698]: I0127 14:35:54.355211 4698 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-5b58bbbb56-sgbbt" Jan 27 14:35:54 crc kubenswrapper[4698]: I0127 14:35:54.426544 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-5bb89855bb-7vwgz"] Jan 27 14:35:54 crc kubenswrapper[4698]: I0127 14:35:54.486813 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e167173f-5a61-4cd0-88cb-d4b632f9c5ce-serving-cert\") pod \"e167173f-5a61-4cd0-88cb-d4b632f9c5ce\" (UID: \"e167173f-5a61-4cd0-88cb-d4b632f9c5ce\") " Jan 27 14:35:54 crc kubenswrapper[4698]: I0127 14:35:54.486923 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e167173f-5a61-4cd0-88cb-d4b632f9c5ce-config\") pod \"e167173f-5a61-4cd0-88cb-d4b632f9c5ce\" (UID: \"e167173f-5a61-4cd0-88cb-d4b632f9c5ce\") " Jan 27 14:35:54 crc kubenswrapper[4698]: I0127 14:35:54.487008 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hz2dk\" (UniqueName: \"kubernetes.io/projected/e167173f-5a61-4cd0-88cb-d4b632f9c5ce-kube-api-access-hz2dk\") pod \"e167173f-5a61-4cd0-88cb-d4b632f9c5ce\" (UID: \"e167173f-5a61-4cd0-88cb-d4b632f9c5ce\") " Jan 27 14:35:54 crc kubenswrapper[4698]: I0127 14:35:54.487047 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/97785ecd-6072-42fe-826e-77a8766bcb2d-client-ca\") pod \"97785ecd-6072-42fe-826e-77a8766bcb2d\" (UID: \"97785ecd-6072-42fe-826e-77a8766bcb2d\") " Jan 27 14:35:54 crc kubenswrapper[4698]: I0127 14:35:54.487084 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e167173f-5a61-4cd0-88cb-d4b632f9c5ce-proxy-ca-bundles\") pod \"e167173f-5a61-4cd0-88cb-d4b632f9c5ce\" (UID: \"e167173f-5a61-4cd0-88cb-d4b632f9c5ce\") " Jan 27 14:35:54 crc kubenswrapper[4698]: I0127 14:35:54.487113 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q866d\" (UniqueName: \"kubernetes.io/projected/97785ecd-6072-42fe-826e-77a8766bcb2d-kube-api-access-q866d\") pod \"97785ecd-6072-42fe-826e-77a8766bcb2d\" (UID: \"97785ecd-6072-42fe-826e-77a8766bcb2d\") " Jan 27 14:35:54 crc kubenswrapper[4698]: I0127 14:35:54.487142 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/97785ecd-6072-42fe-826e-77a8766bcb2d-config\") pod \"97785ecd-6072-42fe-826e-77a8766bcb2d\" (UID: \"97785ecd-6072-42fe-826e-77a8766bcb2d\") " Jan 27 14:35:54 crc kubenswrapper[4698]: I0127 14:35:54.487190 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e167173f-5a61-4cd0-88cb-d4b632f9c5ce-client-ca\") pod \"e167173f-5a61-4cd0-88cb-d4b632f9c5ce\" (UID: \"e167173f-5a61-4cd0-88cb-d4b632f9c5ce\") " Jan 27 14:35:54 crc kubenswrapper[4698]: I0127 14:35:54.487217 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/97785ecd-6072-42fe-826e-77a8766bcb2d-serving-cert\") pod \"97785ecd-6072-42fe-826e-77a8766bcb2d\" (UID: \"97785ecd-6072-42fe-826e-77a8766bcb2d\") " Jan 27 14:35:54 crc kubenswrapper[4698]: I0127 14:35:54.488512 4698 operation_generator.go:803] UnmountVolume.TearDown 
succeeded for volume "kubernetes.io/configmap/97785ecd-6072-42fe-826e-77a8766bcb2d-client-ca" (OuterVolumeSpecName: "client-ca") pod "97785ecd-6072-42fe-826e-77a8766bcb2d" (UID: "97785ecd-6072-42fe-826e-77a8766bcb2d"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:35:54 crc kubenswrapper[4698]: I0127 14:35:54.488783 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/97785ecd-6072-42fe-826e-77a8766bcb2d-config" (OuterVolumeSpecName: "config") pod "97785ecd-6072-42fe-826e-77a8766bcb2d" (UID: "97785ecd-6072-42fe-826e-77a8766bcb2d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:35:54 crc kubenswrapper[4698]: I0127 14:35:54.489034 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e167173f-5a61-4cd0-88cb-d4b632f9c5ce-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "e167173f-5a61-4cd0-88cb-d4b632f9c5ce" (UID: "e167173f-5a61-4cd0-88cb-d4b632f9c5ce"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:35:54 crc kubenswrapper[4698]: I0127 14:35:54.489051 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e167173f-5a61-4cd0-88cb-d4b632f9c5ce-client-ca" (OuterVolumeSpecName: "client-ca") pod "e167173f-5a61-4cd0-88cb-d4b632f9c5ce" (UID: "e167173f-5a61-4cd0-88cb-d4b632f9c5ce"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:35:54 crc kubenswrapper[4698]: I0127 14:35:54.489761 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e167173f-5a61-4cd0-88cb-d4b632f9c5ce-config" (OuterVolumeSpecName: "config") pod "e167173f-5a61-4cd0-88cb-d4b632f9c5ce" (UID: "e167173f-5a61-4cd0-88cb-d4b632f9c5ce"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:35:54 crc kubenswrapper[4698]: I0127 14:35:54.493815 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/97785ecd-6072-42fe-826e-77a8766bcb2d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "97785ecd-6072-42fe-826e-77a8766bcb2d" (UID: "97785ecd-6072-42fe-826e-77a8766bcb2d"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:35:54 crc kubenswrapper[4698]: I0127 14:35:54.493933 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/97785ecd-6072-42fe-826e-77a8766bcb2d-kube-api-access-q866d" (OuterVolumeSpecName: "kube-api-access-q866d") pod "97785ecd-6072-42fe-826e-77a8766bcb2d" (UID: "97785ecd-6072-42fe-826e-77a8766bcb2d"). InnerVolumeSpecName "kube-api-access-q866d". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:35:54 crc kubenswrapper[4698]: I0127 14:35:54.494190 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e167173f-5a61-4cd0-88cb-d4b632f9c5ce-kube-api-access-hz2dk" (OuterVolumeSpecName: "kube-api-access-hz2dk") pod "e167173f-5a61-4cd0-88cb-d4b632f9c5ce" (UID: "e167173f-5a61-4cd0-88cb-d4b632f9c5ce"). InnerVolumeSpecName "kube-api-access-hz2dk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:35:54 crc kubenswrapper[4698]: I0127 14:35:54.494389 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e167173f-5a61-4cd0-88cb-d4b632f9c5ce-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e167173f-5a61-4cd0-88cb-d4b632f9c5ce" (UID: "e167173f-5a61-4cd0-88cb-d4b632f9c5ce"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:35:54 crc kubenswrapper[4698]: I0127 14:35:54.588504 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q866d\" (UniqueName: \"kubernetes.io/projected/97785ecd-6072-42fe-826e-77a8766bcb2d-kube-api-access-q866d\") on node \"crc\" DevicePath \"\"" Jan 27 14:35:54 crc kubenswrapper[4698]: I0127 14:35:54.588853 4698 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/97785ecd-6072-42fe-826e-77a8766bcb2d-config\") on node \"crc\" DevicePath \"\"" Jan 27 14:35:54 crc kubenswrapper[4698]: I0127 14:35:54.588871 4698 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e167173f-5a61-4cd0-88cb-d4b632f9c5ce-client-ca\") on node \"crc\" DevicePath \"\"" Jan 27 14:35:54 crc kubenswrapper[4698]: I0127 14:35:54.588886 4698 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/97785ecd-6072-42fe-826e-77a8766bcb2d-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 14:35:54 crc kubenswrapper[4698]: I0127 14:35:54.588899 4698 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e167173f-5a61-4cd0-88cb-d4b632f9c5ce-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 14:35:54 crc kubenswrapper[4698]: I0127 14:35:54.588910 4698 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e167173f-5a61-4cd0-88cb-d4b632f9c5ce-config\") on node \"crc\" DevicePath \"\"" Jan 27 14:35:54 crc kubenswrapper[4698]: I0127 14:35:54.588922 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hz2dk\" (UniqueName: \"kubernetes.io/projected/e167173f-5a61-4cd0-88cb-d4b632f9c5ce-kube-api-access-hz2dk\") on node \"crc\" DevicePath \"\"" Jan 27 14:35:54 crc kubenswrapper[4698]: I0127 14:35:54.588965 4698 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/97785ecd-6072-42fe-826e-77a8766bcb2d-client-ca\") on node \"crc\" DevicePath \"\"" Jan 27 14:35:54 crc kubenswrapper[4698]: I0127 14:35:54.588980 4698 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e167173f-5a61-4cd0-88cb-d4b632f9c5ce-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 27 14:35:55 crc kubenswrapper[4698]: I0127 14:35:55.001328 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7a699460-e5aa-401d-b2c4-003604099924" path="/var/lib/kubelet/pods/7a699460-e5aa-401d-b2c4-003604099924/volumes" Jan 27 14:35:55 crc kubenswrapper[4698]: I0127 14:35:55.194762 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7b95856b87-jsw9p" event={"ID":"97785ecd-6072-42fe-826e-77a8766bcb2d","Type":"ContainerDied","Data":"08afa8d16502233900d035bde272a295c91e91f423a3ae878e7e851706c42f07"} Jan 27 14:35:55 crc kubenswrapper[4698]: I0127 14:35:55.194823 4698 scope.go:117] 
"RemoveContainer" containerID="89c27c8f0a3d1a49c24f1168f83e40e31395aabd4b90dc530a21d9b5570696e0" Jan 27 14:35:55 crc kubenswrapper[4698]: I0127 14:35:55.194780 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7b95856b87-jsw9p" Jan 27 14:35:55 crc kubenswrapper[4698]: I0127 14:35:55.199314 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-5bb89855bb-7vwgz" event={"ID":"d454a95f-0b42-4ae8-8f93-26fa6b9166b6","Type":"ContainerStarted","Data":"1d7fd4bd74d992e05a44064c12c48e43d83480282fdcfff8ecd71b72bbb54c2a"} Jan 27 14:35:55 crc kubenswrapper[4698]: I0127 14:35:55.199351 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-5bb89855bb-7vwgz" event={"ID":"d454a95f-0b42-4ae8-8f93-26fa6b9166b6","Type":"ContainerStarted","Data":"22187ab96ed75a8156f2d66e3912286769c1970cba297ac1473e54fbd17f2567"} Jan 27 14:35:55 crc kubenswrapper[4698]: I0127 14:35:55.199717 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-5bb89855bb-7vwgz" Jan 27 14:35:55 crc kubenswrapper[4698]: I0127 14:35:55.200940 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5b58bbbb56-sgbbt" event={"ID":"e167173f-5a61-4cd0-88cb-d4b632f9c5ce","Type":"ContainerDied","Data":"52db3a3c736b09dac54336e66aef64a07ef483c2f7ceb4cb933018e5442db6cc"} Jan 27 14:35:55 crc kubenswrapper[4698]: I0127 14:35:55.201021 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5b58bbbb56-sgbbt" Jan 27 14:35:55 crc kubenswrapper[4698]: I0127 14:35:55.205438 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-5bb89855bb-7vwgz" Jan 27 14:35:55 crc kubenswrapper[4698]: I0127 14:35:55.215282 4698 scope.go:117] "RemoveContainer" containerID="7c575055a885fa51d8e8f01bdba77bf1711da8914725f30acc9d2ff30ddc4a83" Jan 27 14:35:55 crc kubenswrapper[4698]: I0127 14:35:55.217415 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7b95856b87-jsw9p"] Jan 27 14:35:55 crc kubenswrapper[4698]: I0127 14:35:55.221173 4698 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7b95856b87-jsw9p"] Jan 27 14:35:55 crc kubenswrapper[4698]: I0127 14:35:55.243081 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-5bb89855bb-7vwgz" podStartSLOduration=27.243055127 podStartE2EDuration="27.243055127s" podCreationTimestamp="2026-01-27 14:35:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:35:55.234089488 +0000 UTC m=+410.910866963" watchObservedRunningTime="2026-01-27 14:35:55.243055127 +0000 UTC m=+410.919832602" Jan 27 14:35:55 crc kubenswrapper[4698]: I0127 14:35:55.252465 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-5b58bbbb56-sgbbt"] Jan 27 14:35:55 crc kubenswrapper[4698]: I0127 14:35:55.260688 4698 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-5b58bbbb56-sgbbt"] Jan 27 14:35:55 crc kubenswrapper[4698]: I0127 
14:35:55.758634 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-6c9cd865c7-64tsb"] Jan 27 14:35:55 crc kubenswrapper[4698]: E0127 14:35:55.759179 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="97785ecd-6072-42fe-826e-77a8766bcb2d" containerName="route-controller-manager" Jan 27 14:35:55 crc kubenswrapper[4698]: I0127 14:35:55.759216 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="97785ecd-6072-42fe-826e-77a8766bcb2d" containerName="route-controller-manager" Jan 27 14:35:55 crc kubenswrapper[4698]: E0127 14:35:55.759437 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e167173f-5a61-4cd0-88cb-d4b632f9c5ce" containerName="controller-manager" Jan 27 14:35:55 crc kubenswrapper[4698]: I0127 14:35:55.759450 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="e167173f-5a61-4cd0-88cb-d4b632f9c5ce" containerName="controller-manager" Jan 27 14:35:55 crc kubenswrapper[4698]: I0127 14:35:55.759613 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="97785ecd-6072-42fe-826e-77a8766bcb2d" containerName="route-controller-manager" Jan 27 14:35:55 crc kubenswrapper[4698]: I0127 14:35:55.759653 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="e167173f-5a61-4cd0-88cb-d4b632f9c5ce" containerName="controller-manager" Jan 27 14:35:55 crc kubenswrapper[4698]: I0127 14:35:55.760176 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6c9cd865c7-64tsb" Jan 27 14:35:55 crc kubenswrapper[4698]: I0127 14:35:55.762725 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 27 14:35:55 crc kubenswrapper[4698]: I0127 14:35:55.763257 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 27 14:35:55 crc kubenswrapper[4698]: I0127 14:35:55.763541 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 27 14:35:55 crc kubenswrapper[4698]: I0127 14:35:55.764075 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 27 14:35:55 crc kubenswrapper[4698]: I0127 14:35:55.765967 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 27 14:35:55 crc kubenswrapper[4698]: I0127 14:35:55.766371 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 27 14:35:55 crc kubenswrapper[4698]: I0127 14:35:55.766559 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7dc6864f84-62xh8"] Jan 27 14:35:55 crc kubenswrapper[4698]: I0127 14:35:55.767439 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7dc6864f84-62xh8" Jan 27 14:35:55 crc kubenswrapper[4698]: I0127 14:35:55.774855 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 27 14:35:55 crc kubenswrapper[4698]: I0127 14:35:55.776328 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-6c9cd865c7-64tsb"] Jan 27 14:35:55 crc kubenswrapper[4698]: I0127 14:35:55.777951 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 27 14:35:55 crc kubenswrapper[4698]: I0127 14:35:55.778464 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 27 14:35:55 crc kubenswrapper[4698]: I0127 14:35:55.778683 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 27 14:35:55 crc kubenswrapper[4698]: I0127 14:35:55.778830 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 27 14:35:55 crc kubenswrapper[4698]: I0127 14:35:55.779026 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 27 14:35:55 crc kubenswrapper[4698]: I0127 14:35:55.779165 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 27 14:35:55 crc kubenswrapper[4698]: I0127 14:35:55.779944 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7dc6864f84-62xh8"] Jan 27 14:35:55 crc kubenswrapper[4698]: I0127 14:35:55.906439 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e0ce10c7-ec3a-4efa-ab6e-097be5ca26b9-config\") pod \"route-controller-manager-7dc6864f84-62xh8\" (UID: \"e0ce10c7-ec3a-4efa-ab6e-097be5ca26b9\") " pod="openshift-route-controller-manager/route-controller-manager-7dc6864f84-62xh8" Jan 27 14:35:55 crc kubenswrapper[4698]: I0127 14:35:55.906497 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e0ce10c7-ec3a-4efa-ab6e-097be5ca26b9-serving-cert\") pod \"route-controller-manager-7dc6864f84-62xh8\" (UID: \"e0ce10c7-ec3a-4efa-ab6e-097be5ca26b9\") " pod="openshift-route-controller-manager/route-controller-manager-7dc6864f84-62xh8" Jan 27 14:35:55 crc kubenswrapper[4698]: I0127 14:35:55.906529 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1de0d63e-2d53-4109-8cfc-dce3d728603c-client-ca\") pod \"controller-manager-6c9cd865c7-64tsb\" (UID: \"1de0d63e-2d53-4109-8cfc-dce3d728603c\") " pod="openshift-controller-manager/controller-manager-6c9cd865c7-64tsb" Jan 27 14:35:55 crc kubenswrapper[4698]: I0127 14:35:55.906551 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1de0d63e-2d53-4109-8cfc-dce3d728603c-serving-cert\") pod \"controller-manager-6c9cd865c7-64tsb\" (UID: \"1de0d63e-2d53-4109-8cfc-dce3d728603c\") " 
pod="openshift-controller-manager/controller-manager-6c9cd865c7-64tsb" Jan 27 14:35:55 crc kubenswrapper[4698]: I0127 14:35:55.906591 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1de0d63e-2d53-4109-8cfc-dce3d728603c-proxy-ca-bundles\") pod \"controller-manager-6c9cd865c7-64tsb\" (UID: \"1de0d63e-2d53-4109-8cfc-dce3d728603c\") " pod="openshift-controller-manager/controller-manager-6c9cd865c7-64tsb" Jan 27 14:35:55 crc kubenswrapper[4698]: I0127 14:35:55.906785 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1de0d63e-2d53-4109-8cfc-dce3d728603c-config\") pod \"controller-manager-6c9cd865c7-64tsb\" (UID: \"1de0d63e-2d53-4109-8cfc-dce3d728603c\") " pod="openshift-controller-manager/controller-manager-6c9cd865c7-64tsb" Jan 27 14:35:55 crc kubenswrapper[4698]: I0127 14:35:55.906840 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cl6km\" (UniqueName: \"kubernetes.io/projected/1de0d63e-2d53-4109-8cfc-dce3d728603c-kube-api-access-cl6km\") pod \"controller-manager-6c9cd865c7-64tsb\" (UID: \"1de0d63e-2d53-4109-8cfc-dce3d728603c\") " pod="openshift-controller-manager/controller-manager-6c9cd865c7-64tsb" Jan 27 14:35:55 crc kubenswrapper[4698]: I0127 14:35:55.907035 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e0ce10c7-ec3a-4efa-ab6e-097be5ca26b9-client-ca\") pod \"route-controller-manager-7dc6864f84-62xh8\" (UID: \"e0ce10c7-ec3a-4efa-ab6e-097be5ca26b9\") " pod="openshift-route-controller-manager/route-controller-manager-7dc6864f84-62xh8" Jan 27 14:35:55 crc kubenswrapper[4698]: I0127 14:35:55.907169 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tmg2p\" (UniqueName: \"kubernetes.io/projected/e0ce10c7-ec3a-4efa-ab6e-097be5ca26b9-kube-api-access-tmg2p\") pod \"route-controller-manager-7dc6864f84-62xh8\" (UID: \"e0ce10c7-ec3a-4efa-ab6e-097be5ca26b9\") " pod="openshift-route-controller-manager/route-controller-manager-7dc6864f84-62xh8" Jan 27 14:35:56 crc kubenswrapper[4698]: I0127 14:35:56.008520 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1de0d63e-2d53-4109-8cfc-dce3d728603c-proxy-ca-bundles\") pod \"controller-manager-6c9cd865c7-64tsb\" (UID: \"1de0d63e-2d53-4109-8cfc-dce3d728603c\") " pod="openshift-controller-manager/controller-manager-6c9cd865c7-64tsb" Jan 27 14:35:56 crc kubenswrapper[4698]: I0127 14:35:56.008597 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1de0d63e-2d53-4109-8cfc-dce3d728603c-config\") pod \"controller-manager-6c9cd865c7-64tsb\" (UID: \"1de0d63e-2d53-4109-8cfc-dce3d728603c\") " pod="openshift-controller-manager/controller-manager-6c9cd865c7-64tsb" Jan 27 14:35:56 crc kubenswrapper[4698]: I0127 14:35:56.008792 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cl6km\" (UniqueName: \"kubernetes.io/projected/1de0d63e-2d53-4109-8cfc-dce3d728603c-kube-api-access-cl6km\") pod \"controller-manager-6c9cd865c7-64tsb\" (UID: \"1de0d63e-2d53-4109-8cfc-dce3d728603c\") " 
pod="openshift-controller-manager/controller-manager-6c9cd865c7-64tsb" Jan 27 14:35:56 crc kubenswrapper[4698]: I0127 14:35:56.008850 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e0ce10c7-ec3a-4efa-ab6e-097be5ca26b9-client-ca\") pod \"route-controller-manager-7dc6864f84-62xh8\" (UID: \"e0ce10c7-ec3a-4efa-ab6e-097be5ca26b9\") " pod="openshift-route-controller-manager/route-controller-manager-7dc6864f84-62xh8" Jan 27 14:35:56 crc kubenswrapper[4698]: I0127 14:35:56.008888 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tmg2p\" (UniqueName: \"kubernetes.io/projected/e0ce10c7-ec3a-4efa-ab6e-097be5ca26b9-kube-api-access-tmg2p\") pod \"route-controller-manager-7dc6864f84-62xh8\" (UID: \"e0ce10c7-ec3a-4efa-ab6e-097be5ca26b9\") " pod="openshift-route-controller-manager/route-controller-manager-7dc6864f84-62xh8" Jan 27 14:35:56 crc kubenswrapper[4698]: I0127 14:35:56.008921 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e0ce10c7-ec3a-4efa-ab6e-097be5ca26b9-config\") pod \"route-controller-manager-7dc6864f84-62xh8\" (UID: \"e0ce10c7-ec3a-4efa-ab6e-097be5ca26b9\") " pod="openshift-route-controller-manager/route-controller-manager-7dc6864f84-62xh8" Jan 27 14:35:56 crc kubenswrapper[4698]: I0127 14:35:56.008948 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e0ce10c7-ec3a-4efa-ab6e-097be5ca26b9-serving-cert\") pod \"route-controller-manager-7dc6864f84-62xh8\" (UID: \"e0ce10c7-ec3a-4efa-ab6e-097be5ca26b9\") " pod="openshift-route-controller-manager/route-controller-manager-7dc6864f84-62xh8" Jan 27 14:35:56 crc kubenswrapper[4698]: I0127 14:35:56.008967 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1de0d63e-2d53-4109-8cfc-dce3d728603c-client-ca\") pod \"controller-manager-6c9cd865c7-64tsb\" (UID: \"1de0d63e-2d53-4109-8cfc-dce3d728603c\") " pod="openshift-controller-manager/controller-manager-6c9cd865c7-64tsb" Jan 27 14:35:56 crc kubenswrapper[4698]: I0127 14:35:56.008985 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1de0d63e-2d53-4109-8cfc-dce3d728603c-serving-cert\") pod \"controller-manager-6c9cd865c7-64tsb\" (UID: \"1de0d63e-2d53-4109-8cfc-dce3d728603c\") " pod="openshift-controller-manager/controller-manager-6c9cd865c7-64tsb" Jan 27 14:35:56 crc kubenswrapper[4698]: I0127 14:35:56.010022 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1de0d63e-2d53-4109-8cfc-dce3d728603c-client-ca\") pod \"controller-manager-6c9cd865c7-64tsb\" (UID: \"1de0d63e-2d53-4109-8cfc-dce3d728603c\") " pod="openshift-controller-manager/controller-manager-6c9cd865c7-64tsb" Jan 27 14:35:56 crc kubenswrapper[4698]: I0127 14:35:56.010048 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1de0d63e-2d53-4109-8cfc-dce3d728603c-proxy-ca-bundles\") pod \"controller-manager-6c9cd865c7-64tsb\" (UID: \"1de0d63e-2d53-4109-8cfc-dce3d728603c\") " pod="openshift-controller-manager/controller-manager-6c9cd865c7-64tsb" Jan 27 14:35:56 crc kubenswrapper[4698]: I0127 14:35:56.010722 4698 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1de0d63e-2d53-4109-8cfc-dce3d728603c-config\") pod \"controller-manager-6c9cd865c7-64tsb\" (UID: \"1de0d63e-2d53-4109-8cfc-dce3d728603c\") " pod="openshift-controller-manager/controller-manager-6c9cd865c7-64tsb" Jan 27 14:35:56 crc kubenswrapper[4698]: I0127 14:35:56.011495 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e0ce10c7-ec3a-4efa-ab6e-097be5ca26b9-config\") pod \"route-controller-manager-7dc6864f84-62xh8\" (UID: \"e0ce10c7-ec3a-4efa-ab6e-097be5ca26b9\") " pod="openshift-route-controller-manager/route-controller-manager-7dc6864f84-62xh8" Jan 27 14:35:56 crc kubenswrapper[4698]: I0127 14:35:56.012137 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e0ce10c7-ec3a-4efa-ab6e-097be5ca26b9-client-ca\") pod \"route-controller-manager-7dc6864f84-62xh8\" (UID: \"e0ce10c7-ec3a-4efa-ab6e-097be5ca26b9\") " pod="openshift-route-controller-manager/route-controller-manager-7dc6864f84-62xh8" Jan 27 14:35:56 crc kubenswrapper[4698]: I0127 14:35:56.015232 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1de0d63e-2d53-4109-8cfc-dce3d728603c-serving-cert\") pod \"controller-manager-6c9cd865c7-64tsb\" (UID: \"1de0d63e-2d53-4109-8cfc-dce3d728603c\") " pod="openshift-controller-manager/controller-manager-6c9cd865c7-64tsb" Jan 27 14:35:56 crc kubenswrapper[4698]: I0127 14:35:56.016445 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e0ce10c7-ec3a-4efa-ab6e-097be5ca26b9-serving-cert\") pod \"route-controller-manager-7dc6864f84-62xh8\" (UID: \"e0ce10c7-ec3a-4efa-ab6e-097be5ca26b9\") " pod="openshift-route-controller-manager/route-controller-manager-7dc6864f84-62xh8" Jan 27 14:35:56 crc kubenswrapper[4698]: I0127 14:35:56.030335 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cl6km\" (UniqueName: \"kubernetes.io/projected/1de0d63e-2d53-4109-8cfc-dce3d728603c-kube-api-access-cl6km\") pod \"controller-manager-6c9cd865c7-64tsb\" (UID: \"1de0d63e-2d53-4109-8cfc-dce3d728603c\") " pod="openshift-controller-manager/controller-manager-6c9cd865c7-64tsb" Jan 27 14:35:56 crc kubenswrapper[4698]: I0127 14:35:56.030587 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tmg2p\" (UniqueName: \"kubernetes.io/projected/e0ce10c7-ec3a-4efa-ab6e-097be5ca26b9-kube-api-access-tmg2p\") pod \"route-controller-manager-7dc6864f84-62xh8\" (UID: \"e0ce10c7-ec3a-4efa-ab6e-097be5ca26b9\") " pod="openshift-route-controller-manager/route-controller-manager-7dc6864f84-62xh8" Jan 27 14:35:56 crc kubenswrapper[4698]: I0127 14:35:56.089495 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6c9cd865c7-64tsb" Jan 27 14:35:56 crc kubenswrapper[4698]: I0127 14:35:56.097690 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7dc6864f84-62xh8" Jan 27 14:35:56 crc kubenswrapper[4698]: I0127 14:35:56.300047 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-6c9cd865c7-64tsb"] Jan 27 14:35:56 crc kubenswrapper[4698]: I0127 14:35:56.342247 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7dc6864f84-62xh8"] Jan 27 14:35:57 crc kubenswrapper[4698]: I0127 14:35:57.000427 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="97785ecd-6072-42fe-826e-77a8766bcb2d" path="/var/lib/kubelet/pods/97785ecd-6072-42fe-826e-77a8766bcb2d/volumes" Jan 27 14:35:57 crc kubenswrapper[4698]: I0127 14:35:57.001305 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e167173f-5a61-4cd0-88cb-d4b632f9c5ce" path="/var/lib/kubelet/pods/e167173f-5a61-4cd0-88cb-d4b632f9c5ce/volumes" Jan 27 14:35:57 crc kubenswrapper[4698]: I0127 14:35:57.216509 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6c9cd865c7-64tsb" event={"ID":"1de0d63e-2d53-4109-8cfc-dce3d728603c","Type":"ContainerStarted","Data":"e7a23c28ec7c44b6366ab45e7fa35ad5de738adfb25027ec4b5a64754500a738"} Jan 27 14:35:57 crc kubenswrapper[4698]: I0127 14:35:57.216584 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6c9cd865c7-64tsb" event={"ID":"1de0d63e-2d53-4109-8cfc-dce3d728603c","Type":"ContainerStarted","Data":"14beebea168dd74c64388f78daeaaf62491d0ff780837f5d7a139074093bf74f"} Jan 27 14:35:57 crc kubenswrapper[4698]: I0127 14:35:57.216993 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-6c9cd865c7-64tsb" Jan 27 14:35:57 crc kubenswrapper[4698]: I0127 14:35:57.218553 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7dc6864f84-62xh8" event={"ID":"e0ce10c7-ec3a-4efa-ab6e-097be5ca26b9","Type":"ContainerStarted","Data":"2c8858ab1a5f071346cfbf0f82099ce185ddabca97e78972196dbf5dc352cdc0"} Jan 27 14:35:57 crc kubenswrapper[4698]: I0127 14:35:57.218598 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7dc6864f84-62xh8" event={"ID":"e0ce10c7-ec3a-4efa-ab6e-097be5ca26b9","Type":"ContainerStarted","Data":"5036ace32c2d1cabea35c10c6ccb5a8153d6b18b58c7cbee4af5ed8a09ef03fb"} Jan 27 14:35:57 crc kubenswrapper[4698]: I0127 14:35:57.221880 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-6c9cd865c7-64tsb" Jan 27 14:35:57 crc kubenswrapper[4698]: I0127 14:35:57.237915 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-6c9cd865c7-64tsb" podStartSLOduration=4.237887207 podStartE2EDuration="4.237887207s" podCreationTimestamp="2026-01-27 14:35:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:35:57.236448479 +0000 UTC m=+412.913225944" watchObservedRunningTime="2026-01-27 14:35:57.237887207 +0000 UTC m=+412.914664672" Jan 27 14:35:57 crc kubenswrapper[4698]: I0127 14:35:57.279239 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-route-controller-manager/route-controller-manager-7dc6864f84-62xh8" podStartSLOduration=4.279205231 podStartE2EDuration="4.279205231s" podCreationTimestamp="2026-01-27 14:35:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:35:57.271140045 +0000 UTC m=+412.947917510" watchObservedRunningTime="2026-01-27 14:35:57.279205231 +0000 UTC m=+412.955982696" Jan 27 14:35:57 crc kubenswrapper[4698]: I0127 14:35:57.452110 4698 patch_prober.go:28] interesting pod/machine-config-daemon-ndrd6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 14:35:57 crc kubenswrapper[4698]: I0127 14:35:57.452515 4698 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ndrd6" podUID="3e403fc5-7005-474c-8c75-b7906b481677" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 14:35:58 crc kubenswrapper[4698]: I0127 14:35:58.224218 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-7dc6864f84-62xh8" Jan 27 14:35:58 crc kubenswrapper[4698]: I0127 14:35:58.228752 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-7dc6864f84-62xh8" Jan 27 14:36:27 crc kubenswrapper[4698]: I0127 14:36:27.451748 4698 patch_prober.go:28] interesting pod/machine-config-daemon-ndrd6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 14:36:27 crc kubenswrapper[4698]: I0127 14:36:27.452278 4698 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ndrd6" podUID="3e403fc5-7005-474c-8c75-b7906b481677" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 14:36:27 crc kubenswrapper[4698]: I0127 14:36:27.452329 4698 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-ndrd6" Jan 27 14:36:27 crc kubenswrapper[4698]: I0127 14:36:27.452856 4698 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"16cb4e9dae87be152bb5e32de522e5719275639ace44958853a7750501d682d7"} pod="openshift-machine-config-operator/machine-config-daemon-ndrd6" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 27 14:36:27 crc kubenswrapper[4698]: I0127 14:36:27.452900 4698 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-ndrd6" podUID="3e403fc5-7005-474c-8c75-b7906b481677" containerName="machine-config-daemon" containerID="cri-o://16cb4e9dae87be152bb5e32de522e5719275639ace44958853a7750501d682d7" gracePeriod=600 Jan 27 14:36:28 crc kubenswrapper[4698]: I0127 14:36:28.406120 4698 generic.go:334] "Generic (PLEG): container finished" 
podID="3e403fc5-7005-474c-8c75-b7906b481677" containerID="16cb4e9dae87be152bb5e32de522e5719275639ace44958853a7750501d682d7" exitCode=0 Jan 27 14:36:28 crc kubenswrapper[4698]: I0127 14:36:28.406213 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ndrd6" event={"ID":"3e403fc5-7005-474c-8c75-b7906b481677","Type":"ContainerDied","Data":"16cb4e9dae87be152bb5e32de522e5719275639ace44958853a7750501d682d7"} Jan 27 14:36:28 crc kubenswrapper[4698]: I0127 14:36:28.407100 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ndrd6" event={"ID":"3e403fc5-7005-474c-8c75-b7906b481677","Type":"ContainerStarted","Data":"e23d02dca02c560d54d4580f818ec0b2d4b146297f0df6a3d6670a06f9ad5cdd"} Jan 27 14:36:28 crc kubenswrapper[4698]: I0127 14:36:28.407194 4698 scope.go:117] "RemoveContainer" containerID="c533539135e25b310541c9f0b12adcf526b4651a2f6272bce12ea9686dd39ac9"
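
The records above capture a complete liveness-probe restart: the prober's GET against http://127.0.0.1:8798/health is refused, the probe flips to "unhealthy", kuberuntime kills machine-config-daemon with the pod's termination grace period (gracePeriod=600), PLEG then reports ContainerDied followed by ContainerStarted for the fresh instance, and an earlier dead instance is pruned via "RemoveContainer". The check itself is an ordinary HTTP GET in which any transport error or status outside 200-399 counts as a failure; a simplified Go sketch follows (the 30-second period matches the spacing of the failures logged at 14:35:57 and 14:36:27; the threshold of 3 is an assumed typical setting, not visible in this log):

    package main

    import (
        "fmt"
        "net/http"
        "time"
    )

    // One HTTP liveness check of the kind the kubelet prober runs: any transport
    // error (such as "connect: connection refused") or a status outside 200-399
    // counts as a probe failure.
    func probe(url string) error {
        client := &http.Client{Timeout: time.Second}
        resp, err := client.Get(url)
        if err != nil {
            return err
        }
        defer resp.Body.Close()
        if resp.StatusCode < 200 || resp.StatusCode >= 400 {
            return fmt.Errorf("unexpected status %d", resp.StatusCode)
        }
        return nil
    }

    func main() {
        failures := 0
        for failures < 3 { // assumed failureThreshold: 3 consecutive misses
            if err := probe("http://127.0.0.1:8798/health"); err != nil {
                failures++
                fmt.Println("Probe failed:", err)
            } else {
                failures = 0
            }
            time.Sleep(30 * time.Second) // assumed periodSeconds, matching the log spacing
        }
        fmt.Println("failed liveness probe, will be restarted")
    }
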
Jan 27 14:36:32 crc kubenswrapper[4698]: I0127 14:36:32.107280 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-dhlmg"] Jan 27 14:36:32 crc kubenswrapper[4698]: I0127 14:36:32.108004 4698 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-dhlmg" podUID="b5b88242-64d6-469e-a5e4-bc8bab680ded" containerName="registry-server" containerID="cri-o://5f4f2b8bfea6881493931b100114c2e33da7f225d3731b773088b4c892456f39" gracePeriod=30 Jan 27 14:36:32 crc kubenswrapper[4698]: I0127 14:36:32.128192 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-9t9sp"] Jan 27 14:36:32 crc kubenswrapper[4698]: I0127 14:36:32.128588 4698 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-9t9sp" podUID="7f32c526-aea0-4758-a1ea-d0a694af3573" containerName="registry-server" containerID="cri-o://eb8741b773b76750d824f71d2335a9dd8415008a2a1af0cc3be54d36ce6b66d8" gracePeriod=30 Jan 27 14:36:32 crc kubenswrapper[4698]: I0127 14:36:32.135367 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-kwgll"] Jan 27 14:36:32 crc kubenswrapper[4698]: I0127 14:36:32.135624 4698 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-79b997595-kwgll" podUID="537d845d-d98b-4168-b87b-d0231602f4e9" containerName="marketplace-operator" containerID="cri-o://faa41bdd1ba721c1ff268715721f5c8668d8826924064ffb3ba0d483a3334beb" gracePeriod=30 Jan 27 14:36:32 crc kubenswrapper[4698]: I0127 14:36:32.153834 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-gxkvv"] Jan 27 14:36:32 crc kubenswrapper[4698]: I0127 14:36:32.154197 4698 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-gxkvv" podUID="e47fa643-2257-49e0-8b1e-77f9d3165c0e" containerName="registry-server" containerID="cri-o://06d8412c4d5bd31f7f4979e3862a04a4e5bbc3414da496ebc93f1765890e7ef0" gracePeriod=30 Jan 27 14:36:32 crc kubenswrapper[4698]: I0127 14:36:32.169845 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-9m8xd"] Jan 27 14:36:32 crc kubenswrapper[4698]: I0127 14:36:32.170144 4698 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-9m8xd" 
podUID="d62f9471-7fdf-459f-8e3b-cadad2b6a542" containerName="registry-server" containerID="cri-o://5beb2772304d366fb72d95e3813094d4b0581bc5fcafc053b5e547336d5c8bc3" gracePeriod=30 Jan 27 14:36:32 crc kubenswrapper[4698]: I0127 14:36:32.174253 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-8zkn8"] Jan 27 14:36:32 crc kubenswrapper[4698]: I0127 14:36:32.175135 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-8zkn8" Jan 27 14:36:32 crc kubenswrapper[4698]: I0127 14:36:32.188308 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-8zkn8"] Jan 27 14:36:32 crc kubenswrapper[4698]: I0127 14:36:32.268596 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jkpfc\" (UniqueName: \"kubernetes.io/projected/287c4642-565c-4085-a7e0-31be12d876fe-kube-api-access-jkpfc\") pod \"marketplace-operator-79b997595-8zkn8\" (UID: \"287c4642-565c-4085-a7e0-31be12d876fe\") " pod="openshift-marketplace/marketplace-operator-79b997595-8zkn8" Jan 27 14:36:32 crc kubenswrapper[4698]: I0127 14:36:32.269024 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/287c4642-565c-4085-a7e0-31be12d876fe-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-8zkn8\" (UID: \"287c4642-565c-4085-a7e0-31be12d876fe\") " pod="openshift-marketplace/marketplace-operator-79b997595-8zkn8" Jan 27 14:36:32 crc kubenswrapper[4698]: I0127 14:36:32.269071 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/287c4642-565c-4085-a7e0-31be12d876fe-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-8zkn8\" (UID: \"287c4642-565c-4085-a7e0-31be12d876fe\") " pod="openshift-marketplace/marketplace-operator-79b997595-8zkn8" Jan 27 14:36:32 crc kubenswrapper[4698]: I0127 14:36:32.369788 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jkpfc\" (UniqueName: \"kubernetes.io/projected/287c4642-565c-4085-a7e0-31be12d876fe-kube-api-access-jkpfc\") pod \"marketplace-operator-79b997595-8zkn8\" (UID: \"287c4642-565c-4085-a7e0-31be12d876fe\") " pod="openshift-marketplace/marketplace-operator-79b997595-8zkn8" Jan 27 14:36:32 crc kubenswrapper[4698]: I0127 14:36:32.369861 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/287c4642-565c-4085-a7e0-31be12d876fe-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-8zkn8\" (UID: \"287c4642-565c-4085-a7e0-31be12d876fe\") " pod="openshift-marketplace/marketplace-operator-79b997595-8zkn8" Jan 27 14:36:32 crc kubenswrapper[4698]: I0127 14:36:32.369905 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/287c4642-565c-4085-a7e0-31be12d876fe-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-8zkn8\" (UID: \"287c4642-565c-4085-a7e0-31be12d876fe\") " pod="openshift-marketplace/marketplace-operator-79b997595-8zkn8" Jan 27 14:36:32 crc kubenswrapper[4698]: I0127 14:36:32.371440 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/287c4642-565c-4085-a7e0-31be12d876fe-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-8zkn8\" (UID: \"287c4642-565c-4085-a7e0-31be12d876fe\") " pod="openshift-marketplace/marketplace-operator-79b997595-8zkn8" Jan 27 14:36:32 crc kubenswrapper[4698]: I0127 14:36:32.378665 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/287c4642-565c-4085-a7e0-31be12d876fe-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-8zkn8\" (UID: \"287c4642-565c-4085-a7e0-31be12d876fe\") " pod="openshift-marketplace/marketplace-operator-79b997595-8zkn8" Jan 27 14:36:32 crc kubenswrapper[4698]: I0127 14:36:32.391504 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jkpfc\" (UniqueName: \"kubernetes.io/projected/287c4642-565c-4085-a7e0-31be12d876fe-kube-api-access-jkpfc\") pod \"marketplace-operator-79b997595-8zkn8\" (UID: \"287c4642-565c-4085-a7e0-31be12d876fe\") " pod="openshift-marketplace/marketplace-operator-79b997595-8zkn8" Jan 27 14:36:32 crc kubenswrapper[4698]: I0127 14:36:32.444819 4698 generic.go:334] "Generic (PLEG): container finished" podID="b5b88242-64d6-469e-a5e4-bc8bab680ded" containerID="5f4f2b8bfea6881493931b100114c2e33da7f225d3731b773088b4c892456f39" exitCode=0 Jan 27 14:36:32 crc kubenswrapper[4698]: I0127 14:36:32.444904 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dhlmg" event={"ID":"b5b88242-64d6-469e-a5e4-bc8bab680ded","Type":"ContainerDied","Data":"5f4f2b8bfea6881493931b100114c2e33da7f225d3731b773088b4c892456f39"} Jan 27 14:36:32 crc kubenswrapper[4698]: I0127 14:36:32.447050 4698 generic.go:334] "Generic (PLEG): container finished" podID="537d845d-d98b-4168-b87b-d0231602f4e9" containerID="faa41bdd1ba721c1ff268715721f5c8668d8826924064ffb3ba0d483a3334beb" exitCode=0 Jan 27 14:36:32 crc kubenswrapper[4698]: I0127 14:36:32.447120 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-kwgll" event={"ID":"537d845d-d98b-4168-b87b-d0231602f4e9","Type":"ContainerDied","Data":"faa41bdd1ba721c1ff268715721f5c8668d8826924064ffb3ba0d483a3334beb"} Jan 27 14:36:32 crc kubenswrapper[4698]: I0127 14:36:32.447191 4698 scope.go:117] "RemoveContainer" containerID="fd915629ed1ef2e612e6bedd38d370cb4c8f28262640ac77f7b59329afb0378b" Jan 27 14:36:32 crc kubenswrapper[4698]: I0127 14:36:32.450740 4698 generic.go:334] "Generic (PLEG): container finished" podID="d62f9471-7fdf-459f-8e3b-cadad2b6a542" containerID="5beb2772304d366fb72d95e3813094d4b0581bc5fcafc053b5e547336d5c8bc3" exitCode=0 Jan 27 14:36:32 crc kubenswrapper[4698]: I0127 14:36:32.450836 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9m8xd" event={"ID":"d62f9471-7fdf-459f-8e3b-cadad2b6a542","Type":"ContainerDied","Data":"5beb2772304d366fb72d95e3813094d4b0581bc5fcafc053b5e547336d5c8bc3"} Jan 27 14:36:32 crc kubenswrapper[4698]: I0127 14:36:32.453320 4698 generic.go:334] "Generic (PLEG): container finished" podID="7f32c526-aea0-4758-a1ea-d0a694af3573" containerID="eb8741b773b76750d824f71d2335a9dd8415008a2a1af0cc3be54d36ce6b66d8" exitCode=0 Jan 27 14:36:32 crc kubenswrapper[4698]: I0127 14:36:32.453399 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9t9sp" 
event={"ID":"7f32c526-aea0-4758-a1ea-d0a694af3573","Type":"ContainerDied","Data":"eb8741b773b76750d824f71d2335a9dd8415008a2a1af0cc3be54d36ce6b66d8"} Jan 27 14:36:32 crc kubenswrapper[4698]: I0127 14:36:32.455454 4698 generic.go:334] "Generic (PLEG): container finished" podID="e47fa643-2257-49e0-8b1e-77f9d3165c0e" containerID="06d8412c4d5bd31f7f4979e3862a04a4e5bbc3414da496ebc93f1765890e7ef0" exitCode=0 Jan 27 14:36:32 crc kubenswrapper[4698]: I0127 14:36:32.455487 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gxkvv" event={"ID":"e47fa643-2257-49e0-8b1e-77f9d3165c0e","Type":"ContainerDied","Data":"06d8412c4d5bd31f7f4979e3862a04a4e5bbc3414da496ebc93f1765890e7ef0"}
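
All five marketplace pods above were deleted through the API ("SyncLoop DELETE" source="api"), and for each one the kubelet killed the registry-server or marketplace-operator container with the 30-second grace period carried on the delete; the ContainerDied events that follow show every container exiting cleanly (exitCode=0) before the volume manager starts unmounting. The same deletion can be issued programmatically with client-go; a sketch assuming in-cluster credentials with permission to delete pods in openshift-marketplace:

    package main

    import (
        "context"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/rest"
    )

    func main() {
        // Assumes this runs inside the cluster with RBAC to delete pods.
        cfg, err := rest.InClusterConfig()
        if err != nil {
            panic(err)
        }
        client, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        grace := int64(30) // same grace period the kubelet logged for these pods
        err = client.CoreV1().Pods("openshift-marketplace").Delete(
            context.TODO(), "certified-operators-dhlmg",
            metav1.DeleteOptions{GracePeriodSeconds: &grace},
        )
        if err != nil {
            panic(err)
        }
    }

From the command line, kubectl delete pod certified-operators-dhlmg -n openshift-marketplace --grace-period=30 produces the equivalent API call.
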
Jan 27 14:36:32 crc kubenswrapper[4698]: I0127 14:36:32.633374 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-8zkn8" Jan 27 14:36:32 crc kubenswrapper[4698]: I0127 14:36:32.679135 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-dhlmg" Jan 27 14:36:32 crc kubenswrapper[4698]: I0127 14:36:32.722593 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-gxkvv" Jan 27 14:36:32 crc kubenswrapper[4698]: I0127 14:36:32.732808 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-9t9sp" Jan 27 14:36:32 crc kubenswrapper[4698]: I0127 14:36:32.745059 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-kwgll" Jan 27 14:36:32 crc kubenswrapper[4698]: I0127 14:36:32.750003 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-9m8xd" Jan 27 14:36:32 crc kubenswrapper[4698]: I0127 14:36:32.776434 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b5b88242-64d6-469e-a5e4-bc8bab680ded-catalog-content\") pod \"b5b88242-64d6-469e-a5e4-bc8bab680ded\" (UID: \"b5b88242-64d6-469e-a5e4-bc8bab680ded\") " Jan 27 14:36:32 crc kubenswrapper[4698]: I0127 14:36:32.777090 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mz2cm\" (UniqueName: \"kubernetes.io/projected/b5b88242-64d6-469e-a5e4-bc8bab680ded-kube-api-access-mz2cm\") pod \"b5b88242-64d6-469e-a5e4-bc8bab680ded\" (UID: \"b5b88242-64d6-469e-a5e4-bc8bab680ded\") " Jan 27 14:36:32 crc kubenswrapper[4698]: I0127 14:36:32.777157 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b5b88242-64d6-469e-a5e4-bc8bab680ded-utilities\") pod \"b5b88242-64d6-469e-a5e4-bc8bab680ded\" (UID: \"b5b88242-64d6-469e-a5e4-bc8bab680ded\") " Jan 27 14:36:32 crc kubenswrapper[4698]: I0127 14:36:32.781328 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b5b88242-64d6-469e-a5e4-bc8bab680ded-utilities" (OuterVolumeSpecName: "utilities") pod "b5b88242-64d6-469e-a5e4-bc8bab680ded" (UID: "b5b88242-64d6-469e-a5e4-bc8bab680ded"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 14:36:32 crc kubenswrapper[4698]: I0127 14:36:32.782028 4698 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b5b88242-64d6-469e-a5e4-bc8bab680ded-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 14:36:32 crc kubenswrapper[4698]: I0127 14:36:32.795252 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b5b88242-64d6-469e-a5e4-bc8bab680ded-kube-api-access-mz2cm" (OuterVolumeSpecName: "kube-api-access-mz2cm") pod "b5b88242-64d6-469e-a5e4-bc8bab680ded" (UID: "b5b88242-64d6-469e-a5e4-bc8bab680ded"). InnerVolumeSpecName "kube-api-access-mz2cm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:36:32 crc kubenswrapper[4698]: I0127 14:36:32.836004 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b5b88242-64d6-469e-a5e4-bc8bab680ded-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b5b88242-64d6-469e-a5e4-bc8bab680ded" (UID: "b5b88242-64d6-469e-a5e4-bc8bab680ded"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 14:36:32 crc kubenswrapper[4698]: I0127 14:36:32.883370 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7f32c526-aea0-4758-a1ea-d0a694af3573-catalog-content\") pod \"7f32c526-aea0-4758-a1ea-d0a694af3573\" (UID: \"7f32c526-aea0-4758-a1ea-d0a694af3573\") " Jan 27 14:36:32 crc kubenswrapper[4698]: I0127 14:36:32.883427 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/537d845d-d98b-4168-b87b-d0231602f4e9-marketplace-trusted-ca\") pod \"537d845d-d98b-4168-b87b-d0231602f4e9\" (UID: \"537d845d-d98b-4168-b87b-d0231602f4e9\") " Jan 27 14:36:32 crc kubenswrapper[4698]: I0127 14:36:32.883457 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e47fa643-2257-49e0-8b1e-77f9d3165c0e-utilities\") pod \"e47fa643-2257-49e0-8b1e-77f9d3165c0e\" (UID: \"e47fa643-2257-49e0-8b1e-77f9d3165c0e\") " Jan 27 14:36:32 crc kubenswrapper[4698]: I0127 14:36:32.883506 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zmm9n\" (UniqueName: \"kubernetes.io/projected/d62f9471-7fdf-459f-8e3b-cadad2b6a542-kube-api-access-zmm9n\") pod \"d62f9471-7fdf-459f-8e3b-cadad2b6a542\" (UID: \"d62f9471-7fdf-459f-8e3b-cadad2b6a542\") " Jan 27 14:36:32 crc kubenswrapper[4698]: I0127 14:36:32.883550 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7f32c526-aea0-4758-a1ea-d0a694af3573-utilities\") pod \"7f32c526-aea0-4758-a1ea-d0a694af3573\" (UID: \"7f32c526-aea0-4758-a1ea-d0a694af3573\") " Jan 27 14:36:32 crc kubenswrapper[4698]: I0127 14:36:32.883571 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d62f9471-7fdf-459f-8e3b-cadad2b6a542-utilities\") pod \"d62f9471-7fdf-459f-8e3b-cadad2b6a542\" (UID: \"d62f9471-7fdf-459f-8e3b-cadad2b6a542\") " Jan 27 14:36:32 crc kubenswrapper[4698]: I0127 14:36:32.883667 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w98kp\" (UniqueName: 
\"kubernetes.io/projected/537d845d-d98b-4168-b87b-d0231602f4e9-kube-api-access-w98kp\") pod \"537d845d-d98b-4168-b87b-d0231602f4e9\" (UID: \"537d845d-d98b-4168-b87b-d0231602f4e9\") " Jan 27 14:36:32 crc kubenswrapper[4698]: I0127 14:36:32.883685 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6dd6q\" (UniqueName: \"kubernetes.io/projected/e47fa643-2257-49e0-8b1e-77f9d3165c0e-kube-api-access-6dd6q\") pod \"e47fa643-2257-49e0-8b1e-77f9d3165c0e\" (UID: \"e47fa643-2257-49e0-8b1e-77f9d3165c0e\") " Jan 27 14:36:32 crc kubenswrapper[4698]: I0127 14:36:32.883718 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7jq9l\" (UniqueName: \"kubernetes.io/projected/7f32c526-aea0-4758-a1ea-d0a694af3573-kube-api-access-7jq9l\") pod \"7f32c526-aea0-4758-a1ea-d0a694af3573\" (UID: \"7f32c526-aea0-4758-a1ea-d0a694af3573\") " Jan 27 14:36:32 crc kubenswrapper[4698]: I0127 14:36:32.883740 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d62f9471-7fdf-459f-8e3b-cadad2b6a542-catalog-content\") pod \"d62f9471-7fdf-459f-8e3b-cadad2b6a542\" (UID: \"d62f9471-7fdf-459f-8e3b-cadad2b6a542\") " Jan 27 14:36:32 crc kubenswrapper[4698]: I0127 14:36:32.883766 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/537d845d-d98b-4168-b87b-d0231602f4e9-marketplace-operator-metrics\") pod \"537d845d-d98b-4168-b87b-d0231602f4e9\" (UID: \"537d845d-d98b-4168-b87b-d0231602f4e9\") " Jan 27 14:36:32 crc kubenswrapper[4698]: I0127 14:36:32.883790 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e47fa643-2257-49e0-8b1e-77f9d3165c0e-catalog-content\") pod \"e47fa643-2257-49e0-8b1e-77f9d3165c0e\" (UID: \"e47fa643-2257-49e0-8b1e-77f9d3165c0e\") " Jan 27 14:36:32 crc kubenswrapper[4698]: I0127 14:36:32.883983 4698 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b5b88242-64d6-469e-a5e4-bc8bab680ded-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 14:36:32 crc kubenswrapper[4698]: I0127 14:36:32.883995 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mz2cm\" (UniqueName: \"kubernetes.io/projected/b5b88242-64d6-469e-a5e4-bc8bab680ded-kube-api-access-mz2cm\") on node \"crc\" DevicePath \"\"" Jan 27 14:36:32 crc kubenswrapper[4698]: I0127 14:36:32.886546 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d62f9471-7fdf-459f-8e3b-cadad2b6a542-utilities" (OuterVolumeSpecName: "utilities") pod "d62f9471-7fdf-459f-8e3b-cadad2b6a542" (UID: "d62f9471-7fdf-459f-8e3b-cadad2b6a542"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 14:36:32 crc kubenswrapper[4698]: I0127 14:36:32.888249 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7f32c526-aea0-4758-a1ea-d0a694af3573-kube-api-access-7jq9l" (OuterVolumeSpecName: "kube-api-access-7jq9l") pod "7f32c526-aea0-4758-a1ea-d0a694af3573" (UID: "7f32c526-aea0-4758-a1ea-d0a694af3573"). InnerVolumeSpecName "kube-api-access-7jq9l". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:36:32 crc kubenswrapper[4698]: I0127 14:36:32.890773 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e47fa643-2257-49e0-8b1e-77f9d3165c0e-kube-api-access-6dd6q" (OuterVolumeSpecName: "kube-api-access-6dd6q") pod "e47fa643-2257-49e0-8b1e-77f9d3165c0e" (UID: "e47fa643-2257-49e0-8b1e-77f9d3165c0e"). InnerVolumeSpecName "kube-api-access-6dd6q". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:36:32 crc kubenswrapper[4698]: I0127 14:36:32.891583 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e47fa643-2257-49e0-8b1e-77f9d3165c0e-utilities" (OuterVolumeSpecName: "utilities") pod "e47fa643-2257-49e0-8b1e-77f9d3165c0e" (UID: "e47fa643-2257-49e0-8b1e-77f9d3165c0e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 14:36:32 crc kubenswrapper[4698]: I0127 14:36:32.892851 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/537d845d-d98b-4168-b87b-d0231602f4e9-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "537d845d-d98b-4168-b87b-d0231602f4e9" (UID: "537d845d-d98b-4168-b87b-d0231602f4e9"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:36:32 crc kubenswrapper[4698]: I0127 14:36:32.896294 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d62f9471-7fdf-459f-8e3b-cadad2b6a542-kube-api-access-zmm9n" (OuterVolumeSpecName: "kube-api-access-zmm9n") pod "d62f9471-7fdf-459f-8e3b-cadad2b6a542" (UID: "d62f9471-7fdf-459f-8e3b-cadad2b6a542"). InnerVolumeSpecName "kube-api-access-zmm9n". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:36:32 crc kubenswrapper[4698]: I0127 14:36:32.897266 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7f32c526-aea0-4758-a1ea-d0a694af3573-utilities" (OuterVolumeSpecName: "utilities") pod "7f32c526-aea0-4758-a1ea-d0a694af3573" (UID: "7f32c526-aea0-4758-a1ea-d0a694af3573"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 14:36:32 crc kubenswrapper[4698]: I0127 14:36:32.901372 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/537d845d-d98b-4168-b87b-d0231602f4e9-kube-api-access-w98kp" (OuterVolumeSpecName: "kube-api-access-w98kp") pod "537d845d-d98b-4168-b87b-d0231602f4e9" (UID: "537d845d-d98b-4168-b87b-d0231602f4e9"). InnerVolumeSpecName "kube-api-access-w98kp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:36:32 crc kubenswrapper[4698]: I0127 14:36:32.902127 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/537d845d-d98b-4168-b87b-d0231602f4e9-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "537d845d-d98b-4168-b87b-d0231602f4e9" (UID: "537d845d-d98b-4168-b87b-d0231602f4e9"). InnerVolumeSpecName "marketplace-operator-metrics". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:36:32 crc kubenswrapper[4698]: I0127 14:36:32.913309 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e47fa643-2257-49e0-8b1e-77f9d3165c0e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e47fa643-2257-49e0-8b1e-77f9d3165c0e" (UID: "e47fa643-2257-49e0-8b1e-77f9d3165c0e"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 14:36:32 crc kubenswrapper[4698]: I0127 14:36:32.952798 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7f32c526-aea0-4758-a1ea-d0a694af3573-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7f32c526-aea0-4758-a1ea-d0a694af3573" (UID: "7f32c526-aea0-4758-a1ea-d0a694af3573"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 14:36:32 crc kubenswrapper[4698]: I0127 14:36:32.985332 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7jq9l\" (UniqueName: \"kubernetes.io/projected/7f32c526-aea0-4758-a1ea-d0a694af3573-kube-api-access-7jq9l\") on node \"crc\" DevicePath \"\"" Jan 27 14:36:32 crc kubenswrapper[4698]: I0127 14:36:32.985372 4698 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/537d845d-d98b-4168-b87b-d0231602f4e9-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Jan 27 14:36:32 crc kubenswrapper[4698]: I0127 14:36:32.985389 4698 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e47fa643-2257-49e0-8b1e-77f9d3165c0e-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 14:36:32 crc kubenswrapper[4698]: I0127 14:36:32.985402 4698 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7f32c526-aea0-4758-a1ea-d0a694af3573-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 14:36:32 crc kubenswrapper[4698]: I0127 14:36:32.985414 4698 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/537d845d-d98b-4168-b87b-d0231602f4e9-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 27 14:36:32 crc kubenswrapper[4698]: I0127 14:36:32.985425 4698 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e47fa643-2257-49e0-8b1e-77f9d3165c0e-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 14:36:32 crc kubenswrapper[4698]: I0127 14:36:32.985438 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zmm9n\" (UniqueName: \"kubernetes.io/projected/d62f9471-7fdf-459f-8e3b-cadad2b6a542-kube-api-access-zmm9n\") on node \"crc\" DevicePath \"\"" Jan 27 14:36:32 crc kubenswrapper[4698]: I0127 14:36:32.985449 4698 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7f32c526-aea0-4758-a1ea-d0a694af3573-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 14:36:32 crc kubenswrapper[4698]: I0127 14:36:32.985459 4698 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d62f9471-7fdf-459f-8e3b-cadad2b6a542-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 14:36:32 crc kubenswrapper[4698]: I0127 14:36:32.985469 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6dd6q\" 
(UniqueName: \"kubernetes.io/projected/e47fa643-2257-49e0-8b1e-77f9d3165c0e-kube-api-access-6dd6q\") on node \"crc\" DevicePath \"\"" Jan 27 14:36:32 crc kubenswrapper[4698]: I0127 14:36:32.985482 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w98kp\" (UniqueName: \"kubernetes.io/projected/537d845d-d98b-4168-b87b-d0231602f4e9-kube-api-access-w98kp\") on node \"crc\" DevicePath \"\"" Jan 27 14:36:33 crc kubenswrapper[4698]: I0127 14:36:33.032335 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d62f9471-7fdf-459f-8e3b-cadad2b6a542-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d62f9471-7fdf-459f-8e3b-cadad2b6a542" (UID: "d62f9471-7fdf-459f-8e3b-cadad2b6a542"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 14:36:33 crc kubenswrapper[4698]: I0127 14:36:33.087006 4698 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d62f9471-7fdf-459f-8e3b-cadad2b6a542-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 14:36:33 crc kubenswrapper[4698]: I0127 14:36:33.140909 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-8zkn8"] Jan 27 14:36:33 crc kubenswrapper[4698]: I0127 14:36:33.461826 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-8zkn8" event={"ID":"287c4642-565c-4085-a7e0-31be12d876fe","Type":"ContainerStarted","Data":"728243417eafb39530efe76a436dc2e713010c6dabefa11c7b8d6352f9e785bb"} Jan 27 14:36:33 crc kubenswrapper[4698]: I0127 14:36:33.462199 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-8zkn8" Jan 27 14:36:33 crc kubenswrapper[4698]: I0127 14:36:33.462239 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-8zkn8" event={"ID":"287c4642-565c-4085-a7e0-31be12d876fe","Type":"ContainerStarted","Data":"ea483af2c1a50451de0f44a68d81146931ef41fe26c5ae67d157ad1da594ff92"} Jan 27 14:36:33 crc kubenswrapper[4698]: I0127 14:36:33.463422 4698 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-8zkn8 container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.67:8080/healthz\": dial tcp 10.217.0.67:8080: connect: connection refused" start-of-body= Jan 27 14:36:33 crc kubenswrapper[4698]: I0127 14:36:33.463484 4698 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-8zkn8" podUID="287c4642-565c-4085-a7e0-31be12d876fe" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.67:8080/healthz\": dial tcp 10.217.0.67:8080: connect: connection refused" Jan 27 14:36:33 crc kubenswrapper[4698]: I0127 14:36:33.464982 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9m8xd" event={"ID":"d62f9471-7fdf-459f-8e3b-cadad2b6a542","Type":"ContainerDied","Data":"9473241e6718a9e3c8675fd939845d44602dc29093c598900fa0553ed4dd04af"} Jan 27 14:36:33 crc kubenswrapper[4698]: I0127 14:36:33.465033 4698 scope.go:117] "RemoveContainer" containerID="5beb2772304d366fb72d95e3813094d4b0581bc5fcafc053b5e547336d5c8bc3" Jan 27 14:36:33 crc kubenswrapper[4698]: I0127 14:36:33.465076 4698 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-9m8xd" Jan 27 14:36:33 crc kubenswrapper[4698]: I0127 14:36:33.467242 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9t9sp" event={"ID":"7f32c526-aea0-4758-a1ea-d0a694af3573","Type":"ContainerDied","Data":"bff9263da872d20e64c2650a7b19e969f30abbcecedfe96f76d5115af20afdaa"} Jan 27 14:36:33 crc kubenswrapper[4698]: I0127 14:36:33.467381 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-9t9sp" Jan 27 14:36:33 crc kubenswrapper[4698]: I0127 14:36:33.470909 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gxkvv" event={"ID":"e47fa643-2257-49e0-8b1e-77f9d3165c0e","Type":"ContainerDied","Data":"ca952f4728f499ff42a424a345f0de50948fa25564540f8579e898b99ea1e08a"} Jan 27 14:36:33 crc kubenswrapper[4698]: I0127 14:36:33.470977 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-gxkvv" Jan 27 14:36:33 crc kubenswrapper[4698]: I0127 14:36:33.474101 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dhlmg" event={"ID":"b5b88242-64d6-469e-a5e4-bc8bab680ded","Type":"ContainerDied","Data":"196b0eef9b4068ba08576b5d014f6890d979d692f8f0e0c38df29bab6ad3b71b"} Jan 27 14:36:33 crc kubenswrapper[4698]: I0127 14:36:33.474053 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-dhlmg" Jan 27 14:36:33 crc kubenswrapper[4698]: I0127 14:36:33.476484 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-kwgll" event={"ID":"537d845d-d98b-4168-b87b-d0231602f4e9","Type":"ContainerDied","Data":"e6fcac00a750d4b79ea5d32624f8dbc0f8e09cdda4314d3af7ff767400b882ea"} Jan 27 14:36:33 crc kubenswrapper[4698]: I0127 14:36:33.476570 4698 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-kwgll" Jan 27 14:36:33 crc kubenswrapper[4698]: I0127 14:36:33.482539 4698 scope.go:117] "RemoveContainer" containerID="6a91f1c98558c985094715122b03310fbfa74ae0dcc0d5061d8a5af31d53248f" Jan 27 14:36:33 crc kubenswrapper[4698]: I0127 14:36:33.487519 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-8zkn8" podStartSLOduration=1.487502488 podStartE2EDuration="1.487502488s" podCreationTimestamp="2026-01-27 14:36:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:36:33.487257942 +0000 UTC m=+449.164035427" watchObservedRunningTime="2026-01-27 14:36:33.487502488 +0000 UTC m=+449.164279953" Jan 27 14:36:33 crc kubenswrapper[4698]: I0127 14:36:33.507957 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-9t9sp"] Jan 27 14:36:33 crc kubenswrapper[4698]: I0127 14:36:33.518739 4698 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-9t9sp"] Jan 27 14:36:33 crc kubenswrapper[4698]: I0127 14:36:33.525756 4698 scope.go:117] "RemoveContainer" containerID="83f40dcbc2a7a8786092ebfb13494b24c6155f2cbc2f5a3d748b31ff118c9308" Jan 27 14:36:33 crc kubenswrapper[4698]: I0127 14:36:33.529767 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-gxkvv"] Jan 27 14:36:33 crc kubenswrapper[4698]: I0127 14:36:33.537191 4698 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-gxkvv"] Jan 27 14:36:33 crc kubenswrapper[4698]: I0127 14:36:33.544451 4698 scope.go:117] "RemoveContainer" containerID="eb8741b773b76750d824f71d2335a9dd8415008a2a1af0cc3be54d36ce6b66d8" Jan 27 14:36:33 crc kubenswrapper[4698]: I0127 14:36:33.549722 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-dhlmg"] Jan 27 14:36:33 crc kubenswrapper[4698]: I0127 14:36:33.556146 4698 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-dhlmg"] Jan 27 14:36:33 crc kubenswrapper[4698]: I0127 14:36:33.560767 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-kwgll"] Jan 27 14:36:33 crc kubenswrapper[4698]: I0127 14:36:33.564824 4698 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-kwgll"] Jan 27 14:36:33 crc kubenswrapper[4698]: I0127 14:36:33.565587 4698 scope.go:117] "RemoveContainer" containerID="425a2847620ba4922c31ce894dbf30b724a250adffce28c16f40b91d52222438" Jan 27 14:36:33 crc kubenswrapper[4698]: I0127 14:36:33.569684 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-9m8xd"] Jan 27 14:36:33 crc kubenswrapper[4698]: I0127 14:36:33.574078 4698 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-9m8xd"] Jan 27 14:36:33 crc kubenswrapper[4698]: I0127 14:36:33.584748 4698 scope.go:117] "RemoveContainer" containerID="0c329e85207520291142714c82a2d50fdfb9d97a3b151c7a5f7de6b2145e6bfb" Jan 27 14:36:33 crc kubenswrapper[4698]: I0127 14:36:33.604090 4698 scope.go:117] "RemoveContainer" containerID="06d8412c4d5bd31f7f4979e3862a04a4e5bbc3414da496ebc93f1765890e7ef0" Jan 27 14:36:33 crc kubenswrapper[4698]: I0127 
14:36:33.624943 4698 scope.go:117] "RemoveContainer" containerID="f82faca3f5636bdbc761ab89485503d874426f26edbbe73195bc9dfa132ed985" Jan 27 14:36:33 crc kubenswrapper[4698]: I0127 14:36:33.651182 4698 scope.go:117] "RemoveContainer" containerID="465c065be19c59315e19f7fe279ff9934ee6025b5e28cc052feadf4aa0674e63" Jan 27 14:36:33 crc kubenswrapper[4698]: I0127 14:36:33.681080 4698 scope.go:117] "RemoveContainer" containerID="5f4f2b8bfea6881493931b100114c2e33da7f225d3731b773088b4c892456f39" Jan 27 14:36:33 crc kubenswrapper[4698]: I0127 14:36:33.699355 4698 scope.go:117] "RemoveContainer" containerID="239e275db7c0fb0067627388de9c378c0a51d1f1e41784af4c60e4d6a2aba280" Jan 27 14:36:33 crc kubenswrapper[4698]: I0127 14:36:33.719316 4698 scope.go:117] "RemoveContainer" containerID="4f09b91fb4fb1a6cd235db53c68e5a8ee1f9e1816e15e198986bc8cfc8d7105d" Jan 27 14:36:33 crc kubenswrapper[4698]: I0127 14:36:33.737809 4698 scope.go:117] "RemoveContainer" containerID="faa41bdd1ba721c1ff268715721f5c8668d8826924064ffb3ba0d483a3334beb" Jan 27 14:36:34 crc kubenswrapper[4698]: I0127 14:36:34.326056 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-npzqg"] Jan 27 14:36:34 crc kubenswrapper[4698]: E0127 14:36:34.327380 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="537d845d-d98b-4168-b87b-d0231602f4e9" containerName="marketplace-operator" Jan 27 14:36:34 crc kubenswrapper[4698]: I0127 14:36:34.327394 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="537d845d-d98b-4168-b87b-d0231602f4e9" containerName="marketplace-operator" Jan 27 14:36:34 crc kubenswrapper[4698]: E0127 14:36:34.327407 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d62f9471-7fdf-459f-8e3b-cadad2b6a542" containerName="registry-server" Jan 27 14:36:34 crc kubenswrapper[4698]: I0127 14:36:34.327414 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="d62f9471-7fdf-459f-8e3b-cadad2b6a542" containerName="registry-server" Jan 27 14:36:34 crc kubenswrapper[4698]: E0127 14:36:34.327424 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="537d845d-d98b-4168-b87b-d0231602f4e9" containerName="marketplace-operator" Jan 27 14:36:34 crc kubenswrapper[4698]: I0127 14:36:34.327430 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="537d845d-d98b-4168-b87b-d0231602f4e9" containerName="marketplace-operator" Jan 27 14:36:34 crc kubenswrapper[4698]: E0127 14:36:34.327438 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b5b88242-64d6-469e-a5e4-bc8bab680ded" containerName="registry-server" Jan 27 14:36:34 crc kubenswrapper[4698]: I0127 14:36:34.327445 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="b5b88242-64d6-469e-a5e4-bc8bab680ded" containerName="registry-server" Jan 27 14:36:34 crc kubenswrapper[4698]: E0127 14:36:34.327453 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d62f9471-7fdf-459f-8e3b-cadad2b6a542" containerName="extract-utilities" Jan 27 14:36:34 crc kubenswrapper[4698]: I0127 14:36:34.327461 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="d62f9471-7fdf-459f-8e3b-cadad2b6a542" containerName="extract-utilities" Jan 27 14:36:34 crc kubenswrapper[4698]: E0127 14:36:34.327474 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e47fa643-2257-49e0-8b1e-77f9d3165c0e" containerName="registry-server" Jan 27 14:36:34 crc kubenswrapper[4698]: I0127 14:36:34.327480 4698 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="e47fa643-2257-49e0-8b1e-77f9d3165c0e" containerName="registry-server" Jan 27 14:36:34 crc kubenswrapper[4698]: E0127 14:36:34.327491 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b5b88242-64d6-469e-a5e4-bc8bab680ded" containerName="extract-content" Jan 27 14:36:34 crc kubenswrapper[4698]: I0127 14:36:34.327498 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="b5b88242-64d6-469e-a5e4-bc8bab680ded" containerName="extract-content" Jan 27 14:36:34 crc kubenswrapper[4698]: E0127 14:36:34.327509 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d62f9471-7fdf-459f-8e3b-cadad2b6a542" containerName="extract-content" Jan 27 14:36:34 crc kubenswrapper[4698]: I0127 14:36:34.327541 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="d62f9471-7fdf-459f-8e3b-cadad2b6a542" containerName="extract-content" Jan 27 14:36:34 crc kubenswrapper[4698]: E0127 14:36:34.327550 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e47fa643-2257-49e0-8b1e-77f9d3165c0e" containerName="extract-content" Jan 27 14:36:34 crc kubenswrapper[4698]: I0127 14:36:34.327558 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="e47fa643-2257-49e0-8b1e-77f9d3165c0e" containerName="extract-content" Jan 27 14:36:34 crc kubenswrapper[4698]: E0127 14:36:34.327566 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b5b88242-64d6-469e-a5e4-bc8bab680ded" containerName="extract-utilities" Jan 27 14:36:34 crc kubenswrapper[4698]: I0127 14:36:34.327574 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="b5b88242-64d6-469e-a5e4-bc8bab680ded" containerName="extract-utilities" Jan 27 14:36:34 crc kubenswrapper[4698]: E0127 14:36:34.327584 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7f32c526-aea0-4758-a1ea-d0a694af3573" containerName="registry-server" Jan 27 14:36:34 crc kubenswrapper[4698]: I0127 14:36:34.327590 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="7f32c526-aea0-4758-a1ea-d0a694af3573" containerName="registry-server" Jan 27 14:36:34 crc kubenswrapper[4698]: E0127 14:36:34.327600 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7f32c526-aea0-4758-a1ea-d0a694af3573" containerName="extract-utilities" Jan 27 14:36:34 crc kubenswrapper[4698]: I0127 14:36:34.327605 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="7f32c526-aea0-4758-a1ea-d0a694af3573" containerName="extract-utilities" Jan 27 14:36:34 crc kubenswrapper[4698]: E0127 14:36:34.327614 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7f32c526-aea0-4758-a1ea-d0a694af3573" containerName="extract-content" Jan 27 14:36:34 crc kubenswrapper[4698]: I0127 14:36:34.327620 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="7f32c526-aea0-4758-a1ea-d0a694af3573" containerName="extract-content" Jan 27 14:36:34 crc kubenswrapper[4698]: E0127 14:36:34.327627 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e47fa643-2257-49e0-8b1e-77f9d3165c0e" containerName="extract-utilities" Jan 27 14:36:34 crc kubenswrapper[4698]: I0127 14:36:34.327653 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="e47fa643-2257-49e0-8b1e-77f9d3165c0e" containerName="extract-utilities" Jan 27 14:36:34 crc kubenswrapper[4698]: I0127 14:36:34.327781 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="537d845d-d98b-4168-b87b-d0231602f4e9" containerName="marketplace-operator" Jan 27 14:36:34 crc kubenswrapper[4698]: I0127 14:36:34.327823 4698 
memory_manager.go:354] "RemoveStaleState removing state" podUID="e47fa643-2257-49e0-8b1e-77f9d3165c0e" containerName="registry-server" Jan 27 14:36:34 crc kubenswrapper[4698]: I0127 14:36:34.327836 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="7f32c526-aea0-4758-a1ea-d0a694af3573" containerName="registry-server" Jan 27 14:36:34 crc kubenswrapper[4698]: I0127 14:36:34.327846 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="d62f9471-7fdf-459f-8e3b-cadad2b6a542" containerName="registry-server" Jan 27 14:36:34 crc kubenswrapper[4698]: I0127 14:36:34.327858 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="b5b88242-64d6-469e-a5e4-bc8bab680ded" containerName="registry-server" Jan 27 14:36:34 crc kubenswrapper[4698]: I0127 14:36:34.328016 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="537d845d-d98b-4168-b87b-d0231602f4e9" containerName="marketplace-operator" Jan 27 14:36:34 crc kubenswrapper[4698]: I0127 14:36:34.328593 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-npzqg" Jan 27 14:36:34 crc kubenswrapper[4698]: I0127 14:36:34.332697 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Jan 27 14:36:34 crc kubenswrapper[4698]: I0127 14:36:34.338135 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-npzqg"] Jan 27 14:36:34 crc kubenswrapper[4698]: I0127 14:36:34.407981 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bb36f5f4-2a47-4b47-873c-5029fcffc7f5-catalog-content\") pod \"redhat-marketplace-npzqg\" (UID: \"bb36f5f4-2a47-4b47-873c-5029fcffc7f5\") " pod="openshift-marketplace/redhat-marketplace-npzqg" Jan 27 14:36:34 crc kubenswrapper[4698]: I0127 14:36:34.408021 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-htxdc\" (UniqueName: \"kubernetes.io/projected/bb36f5f4-2a47-4b47-873c-5029fcffc7f5-kube-api-access-htxdc\") pod \"redhat-marketplace-npzqg\" (UID: \"bb36f5f4-2a47-4b47-873c-5029fcffc7f5\") " pod="openshift-marketplace/redhat-marketplace-npzqg" Jan 27 14:36:34 crc kubenswrapper[4698]: I0127 14:36:34.408063 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bb36f5f4-2a47-4b47-873c-5029fcffc7f5-utilities\") pod \"redhat-marketplace-npzqg\" (UID: \"bb36f5f4-2a47-4b47-873c-5029fcffc7f5\") " pod="openshift-marketplace/redhat-marketplace-npzqg" Jan 27 14:36:34 crc kubenswrapper[4698]: I0127 14:36:34.485916 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-8zkn8" Jan 27 14:36:34 crc kubenswrapper[4698]: I0127 14:36:34.509478 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bb36f5f4-2a47-4b47-873c-5029fcffc7f5-utilities\") pod \"redhat-marketplace-npzqg\" (UID: \"bb36f5f4-2a47-4b47-873c-5029fcffc7f5\") " pod="openshift-marketplace/redhat-marketplace-npzqg" Jan 27 14:36:34 crc kubenswrapper[4698]: I0127 14:36:34.509575 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/bb36f5f4-2a47-4b47-873c-5029fcffc7f5-catalog-content\") pod \"redhat-marketplace-npzqg\" (UID: \"bb36f5f4-2a47-4b47-873c-5029fcffc7f5\") " pod="openshift-marketplace/redhat-marketplace-npzqg" Jan 27 14:36:34 crc kubenswrapper[4698]: I0127 14:36:34.509597 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-htxdc\" (UniqueName: \"kubernetes.io/projected/bb36f5f4-2a47-4b47-873c-5029fcffc7f5-kube-api-access-htxdc\") pod \"redhat-marketplace-npzqg\" (UID: \"bb36f5f4-2a47-4b47-873c-5029fcffc7f5\") " pod="openshift-marketplace/redhat-marketplace-npzqg" Jan 27 14:36:34 crc kubenswrapper[4698]: I0127 14:36:34.510323 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bb36f5f4-2a47-4b47-873c-5029fcffc7f5-utilities\") pod \"redhat-marketplace-npzqg\" (UID: \"bb36f5f4-2a47-4b47-873c-5029fcffc7f5\") " pod="openshift-marketplace/redhat-marketplace-npzqg" Jan 27 14:36:34 crc kubenswrapper[4698]: I0127 14:36:34.510352 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bb36f5f4-2a47-4b47-873c-5029fcffc7f5-catalog-content\") pod \"redhat-marketplace-npzqg\" (UID: \"bb36f5f4-2a47-4b47-873c-5029fcffc7f5\") " pod="openshift-marketplace/redhat-marketplace-npzqg" Jan 27 14:36:34 crc kubenswrapper[4698]: I0127 14:36:34.522998 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-v7n4t"] Jan 27 14:36:34 crc kubenswrapper[4698]: I0127 14:36:34.524452 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-v7n4t" Jan 27 14:36:34 crc kubenswrapper[4698]: I0127 14:36:34.528522 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Jan 27 14:36:34 crc kubenswrapper[4698]: I0127 14:36:34.539346 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-htxdc\" (UniqueName: \"kubernetes.io/projected/bb36f5f4-2a47-4b47-873c-5029fcffc7f5-kube-api-access-htxdc\") pod \"redhat-marketplace-npzqg\" (UID: \"bb36f5f4-2a47-4b47-873c-5029fcffc7f5\") " pod="openshift-marketplace/redhat-marketplace-npzqg" Jan 27 14:36:34 crc kubenswrapper[4698]: I0127 14:36:34.542949 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-v7n4t"] Jan 27 14:36:34 crc kubenswrapper[4698]: I0127 14:36:34.610476 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e5abf3ba-ee72-4598-be31-5ab117f9b58b-catalog-content\") pod \"certified-operators-v7n4t\" (UID: \"e5abf3ba-ee72-4598-be31-5ab117f9b58b\") " pod="openshift-marketplace/certified-operators-v7n4t" Jan 27 14:36:34 crc kubenswrapper[4698]: I0127 14:36:34.610520 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e5abf3ba-ee72-4598-be31-5ab117f9b58b-utilities\") pod \"certified-operators-v7n4t\" (UID: \"e5abf3ba-ee72-4598-be31-5ab117f9b58b\") " pod="openshift-marketplace/certified-operators-v7n4t" Jan 27 14:36:34 crc kubenswrapper[4698]: I0127 14:36:34.610660 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q6vps\" (UniqueName: 
\"kubernetes.io/projected/e5abf3ba-ee72-4598-be31-5ab117f9b58b-kube-api-access-q6vps\") pod \"certified-operators-v7n4t\" (UID: \"e5abf3ba-ee72-4598-be31-5ab117f9b58b\") " pod="openshift-marketplace/certified-operators-v7n4t" Jan 27 14:36:34 crc kubenswrapper[4698]: I0127 14:36:34.656630 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-npzqg" Jan 27 14:36:34 crc kubenswrapper[4698]: I0127 14:36:34.712452 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q6vps\" (UniqueName: \"kubernetes.io/projected/e5abf3ba-ee72-4598-be31-5ab117f9b58b-kube-api-access-q6vps\") pod \"certified-operators-v7n4t\" (UID: \"e5abf3ba-ee72-4598-be31-5ab117f9b58b\") " pod="openshift-marketplace/certified-operators-v7n4t" Jan 27 14:36:34 crc kubenswrapper[4698]: I0127 14:36:34.712542 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e5abf3ba-ee72-4598-be31-5ab117f9b58b-catalog-content\") pod \"certified-operators-v7n4t\" (UID: \"e5abf3ba-ee72-4598-be31-5ab117f9b58b\") " pod="openshift-marketplace/certified-operators-v7n4t" Jan 27 14:36:34 crc kubenswrapper[4698]: I0127 14:36:34.712572 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e5abf3ba-ee72-4598-be31-5ab117f9b58b-utilities\") pod \"certified-operators-v7n4t\" (UID: \"e5abf3ba-ee72-4598-be31-5ab117f9b58b\") " pod="openshift-marketplace/certified-operators-v7n4t" Jan 27 14:36:34 crc kubenswrapper[4698]: I0127 14:36:34.713090 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e5abf3ba-ee72-4598-be31-5ab117f9b58b-utilities\") pod \"certified-operators-v7n4t\" (UID: \"e5abf3ba-ee72-4598-be31-5ab117f9b58b\") " pod="openshift-marketplace/certified-operators-v7n4t" Jan 27 14:36:34 crc kubenswrapper[4698]: I0127 14:36:34.713121 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e5abf3ba-ee72-4598-be31-5ab117f9b58b-catalog-content\") pod \"certified-operators-v7n4t\" (UID: \"e5abf3ba-ee72-4598-be31-5ab117f9b58b\") " pod="openshift-marketplace/certified-operators-v7n4t" Jan 27 14:36:34 crc kubenswrapper[4698]: I0127 14:36:34.734401 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q6vps\" (UniqueName: \"kubernetes.io/projected/e5abf3ba-ee72-4598-be31-5ab117f9b58b-kube-api-access-q6vps\") pod \"certified-operators-v7n4t\" (UID: \"e5abf3ba-ee72-4598-be31-5ab117f9b58b\") " pod="openshift-marketplace/certified-operators-v7n4t" Jan 27 14:36:34 crc kubenswrapper[4698]: I0127 14:36:34.860157 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-v7n4t" Jan 27 14:36:35 crc kubenswrapper[4698]: I0127 14:36:35.000887 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="537d845d-d98b-4168-b87b-d0231602f4e9" path="/var/lib/kubelet/pods/537d845d-d98b-4168-b87b-d0231602f4e9/volumes" Jan 27 14:36:35 crc kubenswrapper[4698]: I0127 14:36:35.001455 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7f32c526-aea0-4758-a1ea-d0a694af3573" path="/var/lib/kubelet/pods/7f32c526-aea0-4758-a1ea-d0a694af3573/volumes" Jan 27 14:36:35 crc kubenswrapper[4698]: I0127 14:36:35.002134 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b5b88242-64d6-469e-a5e4-bc8bab680ded" path="/var/lib/kubelet/pods/b5b88242-64d6-469e-a5e4-bc8bab680ded/volumes" Jan 27 14:36:35 crc kubenswrapper[4698]: I0127 14:36:35.003149 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d62f9471-7fdf-459f-8e3b-cadad2b6a542" path="/var/lib/kubelet/pods/d62f9471-7fdf-459f-8e3b-cadad2b6a542/volumes" Jan 27 14:36:35 crc kubenswrapper[4698]: I0127 14:36:35.006556 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e47fa643-2257-49e0-8b1e-77f9d3165c0e" path="/var/lib/kubelet/pods/e47fa643-2257-49e0-8b1e-77f9d3165c0e/volumes" Jan 27 14:36:35 crc kubenswrapper[4698]: I0127 14:36:35.050034 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-npzqg"] Jan 27 14:36:35 crc kubenswrapper[4698]: W0127 14:36:35.057073 4698 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbb36f5f4_2a47_4b47_873c_5029fcffc7f5.slice/crio-b0925e17952908fba6077b32f3c74c132ad79d00ba9cf0a04effebb5e8bd5b01 WatchSource:0}: Error finding container b0925e17952908fba6077b32f3c74c132ad79d00ba9cf0a04effebb5e8bd5b01: Status 404 returned error can't find the container with id b0925e17952908fba6077b32f3c74c132ad79d00ba9cf0a04effebb5e8bd5b01 Jan 27 14:36:35 crc kubenswrapper[4698]: I0127 14:36:35.243913 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-v7n4t"] Jan 27 14:36:35 crc kubenswrapper[4698]: I0127 14:36:35.487470 4698 generic.go:334] "Generic (PLEG): container finished" podID="bb36f5f4-2a47-4b47-873c-5029fcffc7f5" containerID="80a36d8662b051918ed1f10f563f07023c172a973211788c470aa167bee0244a" exitCode=0 Jan 27 14:36:35 crc kubenswrapper[4698]: I0127 14:36:35.487527 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-npzqg" event={"ID":"bb36f5f4-2a47-4b47-873c-5029fcffc7f5","Type":"ContainerDied","Data":"80a36d8662b051918ed1f10f563f07023c172a973211788c470aa167bee0244a"} Jan 27 14:36:35 crc kubenswrapper[4698]: I0127 14:36:35.487578 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-npzqg" event={"ID":"bb36f5f4-2a47-4b47-873c-5029fcffc7f5","Type":"ContainerStarted","Data":"b0925e17952908fba6077b32f3c74c132ad79d00ba9cf0a04effebb5e8bd5b01"} Jan 27 14:36:35 crc kubenswrapper[4698]: I0127 14:36:35.495294 4698 generic.go:334] "Generic (PLEG): container finished" podID="e5abf3ba-ee72-4598-be31-5ab117f9b58b" containerID="f2005dc0a8ad2ee2efcd5350bcdb013603ad403601a025b08cfcdc4d2d63da72" exitCode=0 Jan 27 14:36:35 crc kubenswrapper[4698]: I0127 14:36:35.495348 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-v7n4t" 
event={"ID":"e5abf3ba-ee72-4598-be31-5ab117f9b58b","Type":"ContainerDied","Data":"f2005dc0a8ad2ee2efcd5350bcdb013603ad403601a025b08cfcdc4d2d63da72"} Jan 27 14:36:35 crc kubenswrapper[4698]: I0127 14:36:35.495691 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-v7n4t" event={"ID":"e5abf3ba-ee72-4598-be31-5ab117f9b58b","Type":"ContainerStarted","Data":"c3c92598e78f51981b285d5d6f14b198767de1f7bc395e0b0c1cfdcbc84995e7"} Jan 27 14:36:36 crc kubenswrapper[4698]: I0127 14:36:36.723731 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-ccjnf"] Jan 27 14:36:36 crc kubenswrapper[4698]: I0127 14:36:36.724963 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-ccjnf" Jan 27 14:36:36 crc kubenswrapper[4698]: I0127 14:36:36.730690 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Jan 27 14:36:36 crc kubenswrapper[4698]: I0127 14:36:36.738585 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-ccjnf"] Jan 27 14:36:36 crc kubenswrapper[4698]: I0127 14:36:36.865381 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3521f3f3-5cfa-4614-9345-7a78f03ed2ce-utilities\") pod \"redhat-operators-ccjnf\" (UID: \"3521f3f3-5cfa-4614-9345-7a78f03ed2ce\") " pod="openshift-marketplace/redhat-operators-ccjnf" Jan 27 14:36:36 crc kubenswrapper[4698]: I0127 14:36:36.865910 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ctgrd\" (UniqueName: \"kubernetes.io/projected/3521f3f3-5cfa-4614-9345-7a78f03ed2ce-kube-api-access-ctgrd\") pod \"redhat-operators-ccjnf\" (UID: \"3521f3f3-5cfa-4614-9345-7a78f03ed2ce\") " pod="openshift-marketplace/redhat-operators-ccjnf" Jan 27 14:36:36 crc kubenswrapper[4698]: I0127 14:36:36.865996 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3521f3f3-5cfa-4614-9345-7a78f03ed2ce-catalog-content\") pod \"redhat-operators-ccjnf\" (UID: \"3521f3f3-5cfa-4614-9345-7a78f03ed2ce\") " pod="openshift-marketplace/redhat-operators-ccjnf" Jan 27 14:36:36 crc kubenswrapper[4698]: I0127 14:36:36.933450 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-5kvvv"] Jan 27 14:36:36 crc kubenswrapper[4698]: I0127 14:36:36.934833 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-5kvvv" Jan 27 14:36:36 crc kubenswrapper[4698]: I0127 14:36:36.937813 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-5kvvv"] Jan 27 14:36:36 crc kubenswrapper[4698]: I0127 14:36:36.938717 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Jan 27 14:36:36 crc kubenswrapper[4698]: I0127 14:36:36.966717 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3521f3f3-5cfa-4614-9345-7a78f03ed2ce-utilities\") pod \"redhat-operators-ccjnf\" (UID: \"3521f3f3-5cfa-4614-9345-7a78f03ed2ce\") " pod="openshift-marketplace/redhat-operators-ccjnf" Jan 27 14:36:36 crc kubenswrapper[4698]: I0127 14:36:36.966772 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ctgrd\" (UniqueName: \"kubernetes.io/projected/3521f3f3-5cfa-4614-9345-7a78f03ed2ce-kube-api-access-ctgrd\") pod \"redhat-operators-ccjnf\" (UID: \"3521f3f3-5cfa-4614-9345-7a78f03ed2ce\") " pod="openshift-marketplace/redhat-operators-ccjnf" Jan 27 14:36:36 crc kubenswrapper[4698]: I0127 14:36:36.966832 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3521f3f3-5cfa-4614-9345-7a78f03ed2ce-catalog-content\") pod \"redhat-operators-ccjnf\" (UID: \"3521f3f3-5cfa-4614-9345-7a78f03ed2ce\") " pod="openshift-marketplace/redhat-operators-ccjnf" Jan 27 14:36:36 crc kubenswrapper[4698]: I0127 14:36:36.967288 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3521f3f3-5cfa-4614-9345-7a78f03ed2ce-catalog-content\") pod \"redhat-operators-ccjnf\" (UID: \"3521f3f3-5cfa-4614-9345-7a78f03ed2ce\") " pod="openshift-marketplace/redhat-operators-ccjnf" Jan 27 14:36:36 crc kubenswrapper[4698]: I0127 14:36:36.967577 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3521f3f3-5cfa-4614-9345-7a78f03ed2ce-utilities\") pod \"redhat-operators-ccjnf\" (UID: \"3521f3f3-5cfa-4614-9345-7a78f03ed2ce\") " pod="openshift-marketplace/redhat-operators-ccjnf" Jan 27 14:36:36 crc kubenswrapper[4698]: I0127 14:36:36.985997 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ctgrd\" (UniqueName: \"kubernetes.io/projected/3521f3f3-5cfa-4614-9345-7a78f03ed2ce-kube-api-access-ctgrd\") pod \"redhat-operators-ccjnf\" (UID: \"3521f3f3-5cfa-4614-9345-7a78f03ed2ce\") " pod="openshift-marketplace/redhat-operators-ccjnf" Jan 27 14:36:37 crc kubenswrapper[4698]: I0127 14:36:37.042502 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-ccjnf" Jan 27 14:36:37 crc kubenswrapper[4698]: I0127 14:36:37.068077 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c4c9ebc3-a042-4277-a3d4-f4b1f1c6a674-catalog-content\") pod \"community-operators-5kvvv\" (UID: \"c4c9ebc3-a042-4277-a3d4-f4b1f1c6a674\") " pod="openshift-marketplace/community-operators-5kvvv" Jan 27 14:36:37 crc kubenswrapper[4698]: I0127 14:36:37.068134 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c4c9ebc3-a042-4277-a3d4-f4b1f1c6a674-utilities\") pod \"community-operators-5kvvv\" (UID: \"c4c9ebc3-a042-4277-a3d4-f4b1f1c6a674\") " pod="openshift-marketplace/community-operators-5kvvv" Jan 27 14:36:37 crc kubenswrapper[4698]: I0127 14:36:37.068185 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nfsxl\" (UniqueName: \"kubernetes.io/projected/c4c9ebc3-a042-4277-a3d4-f4b1f1c6a674-kube-api-access-nfsxl\") pod \"community-operators-5kvvv\" (UID: \"c4c9ebc3-a042-4277-a3d4-f4b1f1c6a674\") " pod="openshift-marketplace/community-operators-5kvvv" Jan 27 14:36:37 crc kubenswrapper[4698]: I0127 14:36:37.169424 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c4c9ebc3-a042-4277-a3d4-f4b1f1c6a674-catalog-content\") pod \"community-operators-5kvvv\" (UID: \"c4c9ebc3-a042-4277-a3d4-f4b1f1c6a674\") " pod="openshift-marketplace/community-operators-5kvvv" Jan 27 14:36:37 crc kubenswrapper[4698]: I0127 14:36:37.169562 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c4c9ebc3-a042-4277-a3d4-f4b1f1c6a674-utilities\") pod \"community-operators-5kvvv\" (UID: \"c4c9ebc3-a042-4277-a3d4-f4b1f1c6a674\") " pod="openshift-marketplace/community-operators-5kvvv" Jan 27 14:36:37 crc kubenswrapper[4698]: I0127 14:36:37.170113 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c4c9ebc3-a042-4277-a3d4-f4b1f1c6a674-catalog-content\") pod \"community-operators-5kvvv\" (UID: \"c4c9ebc3-a042-4277-a3d4-f4b1f1c6a674\") " pod="openshift-marketplace/community-operators-5kvvv" Jan 27 14:36:37 crc kubenswrapper[4698]: I0127 14:36:37.170170 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c4c9ebc3-a042-4277-a3d4-f4b1f1c6a674-utilities\") pod \"community-operators-5kvvv\" (UID: \"c4c9ebc3-a042-4277-a3d4-f4b1f1c6a674\") " pod="openshift-marketplace/community-operators-5kvvv" Jan 27 14:36:37 crc kubenswrapper[4698]: I0127 14:36:37.170608 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nfsxl\" (UniqueName: \"kubernetes.io/projected/c4c9ebc3-a042-4277-a3d4-f4b1f1c6a674-kube-api-access-nfsxl\") pod \"community-operators-5kvvv\" (UID: \"c4c9ebc3-a042-4277-a3d4-f4b1f1c6a674\") " pod="openshift-marketplace/community-operators-5kvvv" Jan 27 14:36:37 crc kubenswrapper[4698]: I0127 14:36:37.188809 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nfsxl\" (UniqueName: \"kubernetes.io/projected/c4c9ebc3-a042-4277-a3d4-f4b1f1c6a674-kube-api-access-nfsxl\") pod 
\"community-operators-5kvvv\" (UID: \"c4c9ebc3-a042-4277-a3d4-f4b1f1c6a674\") " pod="openshift-marketplace/community-operators-5kvvv" Jan 27 14:36:37 crc kubenswrapper[4698]: I0127 14:36:37.267038 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-5kvvv" Jan 27 14:36:37 crc kubenswrapper[4698]: I0127 14:36:37.728954 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-ccjnf"] Jan 27 14:36:37 crc kubenswrapper[4698]: W0127 14:36:37.739185 4698 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3521f3f3_5cfa_4614_9345_7a78f03ed2ce.slice/crio-3aca4a87edd136e9220e7295683019a359cb12a4fb91013a7c3a3f9a4723f7f8 WatchSource:0}: Error finding container 3aca4a87edd136e9220e7295683019a359cb12a4fb91013a7c3a3f9a4723f7f8: Status 404 returned error can't find the container with id 3aca4a87edd136e9220e7295683019a359cb12a4fb91013a7c3a3f9a4723f7f8 Jan 27 14:36:37 crc kubenswrapper[4698]: I0127 14:36:37.881011 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-5kvvv"] Jan 27 14:36:37 crc kubenswrapper[4698]: W0127 14:36:37.891443 4698 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc4c9ebc3_a042_4277_a3d4_f4b1f1c6a674.slice/crio-54e9d70336ebbb79259947b1686c17207818558b1b9aed7b7d9dd0e5ea1a48cb WatchSource:0}: Error finding container 54e9d70336ebbb79259947b1686c17207818558b1b9aed7b7d9dd0e5ea1a48cb: Status 404 returned error can't find the container with id 54e9d70336ebbb79259947b1686c17207818558b1b9aed7b7d9dd0e5ea1a48cb Jan 27 14:36:38 crc kubenswrapper[4698]: I0127 14:36:38.511548 4698 generic.go:334] "Generic (PLEG): container finished" podID="c4c9ebc3-a042-4277-a3d4-f4b1f1c6a674" containerID="ce27bbd964142f1f80ccebb0e6943b9a284ce9b7b7e32190d1226ee6d65e576f" exitCode=0 Jan 27 14:36:38 crc kubenswrapper[4698]: I0127 14:36:38.511756 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5kvvv" event={"ID":"c4c9ebc3-a042-4277-a3d4-f4b1f1c6a674","Type":"ContainerDied","Data":"ce27bbd964142f1f80ccebb0e6943b9a284ce9b7b7e32190d1226ee6d65e576f"} Jan 27 14:36:38 crc kubenswrapper[4698]: I0127 14:36:38.511807 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5kvvv" event={"ID":"c4c9ebc3-a042-4277-a3d4-f4b1f1c6a674","Type":"ContainerStarted","Data":"54e9d70336ebbb79259947b1686c17207818558b1b9aed7b7d9dd0e5ea1a48cb"} Jan 27 14:36:38 crc kubenswrapper[4698]: I0127 14:36:38.526696 4698 generic.go:334] "Generic (PLEG): container finished" podID="bb36f5f4-2a47-4b47-873c-5029fcffc7f5" containerID="69bfd486eb38034a7e29da635fe28a3e302526f43b73846575a3894ae7a14fcf" exitCode=0 Jan 27 14:36:38 crc kubenswrapper[4698]: I0127 14:36:38.526773 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-npzqg" event={"ID":"bb36f5f4-2a47-4b47-873c-5029fcffc7f5","Type":"ContainerDied","Data":"69bfd486eb38034a7e29da635fe28a3e302526f43b73846575a3894ae7a14fcf"} Jan 27 14:36:38 crc kubenswrapper[4698]: I0127 14:36:38.529784 4698 generic.go:334] "Generic (PLEG): container finished" podID="3521f3f3-5cfa-4614-9345-7a78f03ed2ce" containerID="d39d6d9087812af2f3d3ddf56700d13ce75126f10d7c00115dd8eb3771c0a446" exitCode=0 Jan 27 14:36:38 crc kubenswrapper[4698]: I0127 14:36:38.529811 4698 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ccjnf" event={"ID":"3521f3f3-5cfa-4614-9345-7a78f03ed2ce","Type":"ContainerDied","Data":"d39d6d9087812af2f3d3ddf56700d13ce75126f10d7c00115dd8eb3771c0a446"} Jan 27 14:36:38 crc kubenswrapper[4698]: I0127 14:36:38.529829 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ccjnf" event={"ID":"3521f3f3-5cfa-4614-9345-7a78f03ed2ce","Type":"ContainerStarted","Data":"3aca4a87edd136e9220e7295683019a359cb12a4fb91013a7c3a3f9a4723f7f8"} Jan 27 14:36:40 crc kubenswrapper[4698]: I0127 14:36:40.540243 4698 generic.go:334] "Generic (PLEG): container finished" podID="c4c9ebc3-a042-4277-a3d4-f4b1f1c6a674" containerID="ca3853b3393d2b1e77c50bbfe6abf51c9dc9ecba4b2de46b9175ef723558f63d" exitCode=0 Jan 27 14:36:40 crc kubenswrapper[4698]: I0127 14:36:40.540415 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5kvvv" event={"ID":"c4c9ebc3-a042-4277-a3d4-f4b1f1c6a674","Type":"ContainerDied","Data":"ca3853b3393d2b1e77c50bbfe6abf51c9dc9ecba4b2de46b9175ef723558f63d"} Jan 27 14:36:40 crc kubenswrapper[4698]: I0127 14:36:40.550253 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-npzqg" event={"ID":"bb36f5f4-2a47-4b47-873c-5029fcffc7f5","Type":"ContainerStarted","Data":"c859d84ace8b6b071e72d27724bfd13124696ea7f795ff4dd075554fc7466cbd"} Jan 27 14:36:40 crc kubenswrapper[4698]: I0127 14:36:40.552074 4698 generic.go:334] "Generic (PLEG): container finished" podID="e5abf3ba-ee72-4598-be31-5ab117f9b58b" containerID="c3f7ce9c7909b4de507b913b2c2449366df6d4de5363c433b0b945785495a107" exitCode=0 Jan 27 14:36:40 crc kubenswrapper[4698]: I0127 14:36:40.552161 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-v7n4t" event={"ID":"e5abf3ba-ee72-4598-be31-5ab117f9b58b","Type":"ContainerDied","Data":"c3f7ce9c7909b4de507b913b2c2449366df6d4de5363c433b0b945785495a107"} Jan 27 14:36:40 crc kubenswrapper[4698]: I0127 14:36:40.556963 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ccjnf" event={"ID":"3521f3f3-5cfa-4614-9345-7a78f03ed2ce","Type":"ContainerStarted","Data":"8e6393859ab406928409f26b6ef117f891f10658ad44f818282cb8cafb6e4be1"} Jan 27 14:36:40 crc kubenswrapper[4698]: I0127 14:36:40.618802 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-npzqg" podStartSLOduration=2.170678993 podStartE2EDuration="6.61878368s" podCreationTimestamp="2026-01-27 14:36:34 +0000 UTC" firstStartedPulling="2026-01-27 14:36:35.492993993 +0000 UTC m=+451.169771458" lastFinishedPulling="2026-01-27 14:36:39.94109869 +0000 UTC m=+455.617876145" observedRunningTime="2026-01-27 14:36:40.615898534 +0000 UTC m=+456.292675999" watchObservedRunningTime="2026-01-27 14:36:40.61878368 +0000 UTC m=+456.295561145" Jan 27 14:36:41 crc kubenswrapper[4698]: I0127 14:36:41.563431 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5kvvv" event={"ID":"c4c9ebc3-a042-4277-a3d4-f4b1f1c6a674","Type":"ContainerStarted","Data":"7c564ebf8b12c16bb058ad940661a8c549700671dd641d9507496784c48013bc"} Jan 27 14:36:41 crc kubenswrapper[4698]: I0127 14:36:41.565955 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-v7n4t" 
event={"ID":"e5abf3ba-ee72-4598-be31-5ab117f9b58b","Type":"ContainerStarted","Data":"1de89a786c4067c4e9848069e88658ff3f58041874264b210507c23934bededf"} Jan 27 14:36:41 crc kubenswrapper[4698]: I0127 14:36:41.569225 4698 generic.go:334] "Generic (PLEG): container finished" podID="3521f3f3-5cfa-4614-9345-7a78f03ed2ce" containerID="8e6393859ab406928409f26b6ef117f891f10658ad44f818282cb8cafb6e4be1" exitCode=0 Jan 27 14:36:41 crc kubenswrapper[4698]: I0127 14:36:41.570053 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ccjnf" event={"ID":"3521f3f3-5cfa-4614-9345-7a78f03ed2ce","Type":"ContainerDied","Data":"8e6393859ab406928409f26b6ef117f891f10658ad44f818282cb8cafb6e4be1"} Jan 27 14:36:41 crc kubenswrapper[4698]: I0127 14:36:41.587549 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-5kvvv" podStartSLOduration=3.8817820640000003 podStartE2EDuration="5.587529136s" podCreationTimestamp="2026-01-27 14:36:36 +0000 UTC" firstStartedPulling="2026-01-27 14:36:39.418082429 +0000 UTC m=+455.094859894" lastFinishedPulling="2026-01-27 14:36:41.123829501 +0000 UTC m=+456.800606966" observedRunningTime="2026-01-27 14:36:41.58423103 +0000 UTC m=+457.261008495" watchObservedRunningTime="2026-01-27 14:36:41.587529136 +0000 UTC m=+457.264306601" Jan 27 14:36:44 crc kubenswrapper[4698]: I0127 14:36:44.592745 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ccjnf" event={"ID":"3521f3f3-5cfa-4614-9345-7a78f03ed2ce","Type":"ContainerStarted","Data":"bfa0f390ce4579951ab93176ca816bd10054e35481941f714ec2acc2e4b3433a"} Jan 27 14:36:44 crc kubenswrapper[4698]: I0127 14:36:44.608243 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-ccjnf" podStartSLOduration=4.807444681 podStartE2EDuration="8.608224408s" podCreationTimestamp="2026-01-27 14:36:36 +0000 UTC" firstStartedPulling="2026-01-27 14:36:39.41814286 +0000 UTC m=+455.094920325" lastFinishedPulling="2026-01-27 14:36:43.218922597 +0000 UTC m=+458.895700052" observedRunningTime="2026-01-27 14:36:44.607515309 +0000 UTC m=+460.284292784" watchObservedRunningTime="2026-01-27 14:36:44.608224408 +0000 UTC m=+460.285001873" Jan 27 14:36:44 crc kubenswrapper[4698]: I0127 14:36:44.609967 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-v7n4t" podStartSLOduration=4.860019963 podStartE2EDuration="10.609955283s" podCreationTimestamp="2026-01-27 14:36:34 +0000 UTC" firstStartedPulling="2026-01-27 14:36:35.500919154 +0000 UTC m=+451.177696619" lastFinishedPulling="2026-01-27 14:36:41.250854464 +0000 UTC m=+456.927631939" observedRunningTime="2026-01-27 14:36:41.62920868 +0000 UTC m=+457.305986165" watchObservedRunningTime="2026-01-27 14:36:44.609955283 +0000 UTC m=+460.286732748" Jan 27 14:36:44 crc kubenswrapper[4698]: I0127 14:36:44.657384 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-npzqg" Jan 27 14:36:44 crc kubenswrapper[4698]: I0127 14:36:44.658483 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-npzqg" Jan 27 14:36:44 crc kubenswrapper[4698]: I0127 14:36:44.698253 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-npzqg" Jan 27 14:36:44 crc kubenswrapper[4698]: 
I0127 14:36:44.860745 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-v7n4t" Jan 27 14:36:44 crc kubenswrapper[4698]: I0127 14:36:44.861042 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-v7n4t" Jan 27 14:36:44 crc kubenswrapper[4698]: I0127 14:36:44.899464 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-v7n4t" Jan 27 14:36:45 crc kubenswrapper[4698]: I0127 14:36:45.639486 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-npzqg" Jan 27 14:36:46 crc kubenswrapper[4698]: I0127 14:36:46.636331 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-v7n4t" Jan 27 14:36:47 crc kubenswrapper[4698]: I0127 14:36:47.042764 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-ccjnf" Jan 27 14:36:47 crc kubenswrapper[4698]: I0127 14:36:47.042819 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-ccjnf" Jan 27 14:36:47 crc kubenswrapper[4698]: I0127 14:36:47.267375 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-5kvvv" Jan 27 14:36:47 crc kubenswrapper[4698]: I0127 14:36:47.267434 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-5kvvv" Jan 27 14:36:47 crc kubenswrapper[4698]: I0127 14:36:47.301423 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-5kvvv" Jan 27 14:36:47 crc kubenswrapper[4698]: I0127 14:36:47.650627 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-5kvvv" Jan 27 14:36:48 crc kubenswrapper[4698]: I0127 14:36:48.078314 4698 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-ccjnf" podUID="3521f3f3-5cfa-4614-9345-7a78f03ed2ce" containerName="registry-server" probeResult="failure" output=< Jan 27 14:36:48 crc kubenswrapper[4698]: timeout: failed to connect service ":50051" within 1s Jan 27 14:36:48 crc kubenswrapper[4698]: > Jan 27 14:36:57 crc kubenswrapper[4698]: I0127 14:36:57.079269 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-ccjnf" Jan 27 14:36:57 crc kubenswrapper[4698]: I0127 14:36:57.116660 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-ccjnf" Jan 27 14:38:27 crc kubenswrapper[4698]: I0127 14:38:27.451886 4698 patch_prober.go:28] interesting pod/machine-config-daemon-ndrd6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 14:38:27 crc kubenswrapper[4698]: I0127 14:38:27.452460 4698 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ndrd6" podUID="3e403fc5-7005-474c-8c75-b7906b481677" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: 
connect: connection refused" Jan 27 14:38:57 crc kubenswrapper[4698]: I0127 14:38:57.451742 4698 patch_prober.go:28] interesting pod/machine-config-daemon-ndrd6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 14:38:57 crc kubenswrapper[4698]: I0127 14:38:57.452392 4698 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ndrd6" podUID="3e403fc5-7005-474c-8c75-b7906b481677" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 14:39:27 crc kubenswrapper[4698]: I0127 14:39:27.452060 4698 patch_prober.go:28] interesting pod/machine-config-daemon-ndrd6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 14:39:27 crc kubenswrapper[4698]: I0127 14:39:27.452707 4698 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ndrd6" podUID="3e403fc5-7005-474c-8c75-b7906b481677" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 14:39:27 crc kubenswrapper[4698]: I0127 14:39:27.452763 4698 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-ndrd6" Jan 27 14:39:27 crc kubenswrapper[4698]: I0127 14:39:27.453549 4698 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"e23d02dca02c560d54d4580f818ec0b2d4b146297f0df6a3d6670a06f9ad5cdd"} pod="openshift-machine-config-operator/machine-config-daemon-ndrd6" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 27 14:39:27 crc kubenswrapper[4698]: I0127 14:39:27.453608 4698 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-ndrd6" podUID="3e403fc5-7005-474c-8c75-b7906b481677" containerName="machine-config-daemon" containerID="cri-o://e23d02dca02c560d54d4580f818ec0b2d4b146297f0df6a3d6670a06f9ad5cdd" gracePeriod=600 Jan 27 14:39:28 crc kubenswrapper[4698]: I0127 14:39:28.469332 4698 generic.go:334] "Generic (PLEG): container finished" podID="3e403fc5-7005-474c-8c75-b7906b481677" containerID="e23d02dca02c560d54d4580f818ec0b2d4b146297f0df6a3d6670a06f9ad5cdd" exitCode=0 Jan 27 14:39:28 crc kubenswrapper[4698]: I0127 14:39:28.469403 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ndrd6" event={"ID":"3e403fc5-7005-474c-8c75-b7906b481677","Type":"ContainerDied","Data":"e23d02dca02c560d54d4580f818ec0b2d4b146297f0df6a3d6670a06f9ad5cdd"} Jan 27 14:39:28 crc kubenswrapper[4698]: I0127 14:39:28.469713 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ndrd6" event={"ID":"3e403fc5-7005-474c-8c75-b7906b481677","Type":"ContainerStarted","Data":"6582ea9fc85bfcf7cda9ed10da113c6bdd3405f16aaba0460f6cb69b57c13ba5"} Jan 27 14:39:28 crc kubenswrapper[4698]: I0127 14:39:28.469739 4698 
scope.go:117] "RemoveContainer" containerID="16cb4e9dae87be152bb5e32de522e5719275639ace44958853a7750501d682d7" Jan 27 14:39:34 crc kubenswrapper[4698]: I0127 14:39:34.409231 4698 scope.go:117] "RemoveContainer" containerID="4edc3a0f340cf21d0bd2016836059e07ed3ce95eee61b526756d836d954243d0" Jan 27 14:40:20 crc kubenswrapper[4698]: I0127 14:40:20.863775 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-qdxjp"] Jan 27 14:40:20 crc kubenswrapper[4698]: I0127 14:40:20.865136 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-qdxjp" Jan 27 14:40:20 crc kubenswrapper[4698]: I0127 14:40:20.880821 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-qdxjp"] Jan 27 14:40:20 crc kubenswrapper[4698]: I0127 14:40:20.920942 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qxgss\" (UniqueName: \"kubernetes.io/projected/983d241c-675f-495a-92e5-c84dbc6bc183-kube-api-access-qxgss\") pod \"image-registry-66df7c8f76-qdxjp\" (UID: \"983d241c-675f-495a-92e5-c84dbc6bc183\") " pod="openshift-image-registry/image-registry-66df7c8f76-qdxjp" Jan 27 14:40:20 crc kubenswrapper[4698]: I0127 14:40:20.920995 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/983d241c-675f-495a-92e5-c84dbc6bc183-registry-tls\") pod \"image-registry-66df7c8f76-qdxjp\" (UID: \"983d241c-675f-495a-92e5-c84dbc6bc183\") " pod="openshift-image-registry/image-registry-66df7c8f76-qdxjp" Jan 27 14:40:20 crc kubenswrapper[4698]: I0127 14:40:20.921055 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/983d241c-675f-495a-92e5-c84dbc6bc183-ca-trust-extracted\") pod \"image-registry-66df7c8f76-qdxjp\" (UID: \"983d241c-675f-495a-92e5-c84dbc6bc183\") " pod="openshift-image-registry/image-registry-66df7c8f76-qdxjp" Jan 27 14:40:20 crc kubenswrapper[4698]: I0127 14:40:20.921091 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/983d241c-675f-495a-92e5-c84dbc6bc183-installation-pull-secrets\") pod \"image-registry-66df7c8f76-qdxjp\" (UID: \"983d241c-675f-495a-92e5-c84dbc6bc183\") " pod="openshift-image-registry/image-registry-66df7c8f76-qdxjp" Jan 27 14:40:20 crc kubenswrapper[4698]: I0127 14:40:20.921117 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/983d241c-675f-495a-92e5-c84dbc6bc183-trusted-ca\") pod \"image-registry-66df7c8f76-qdxjp\" (UID: \"983d241c-675f-495a-92e5-c84dbc6bc183\") " pod="openshift-image-registry/image-registry-66df7c8f76-qdxjp" Jan 27 14:40:20 crc kubenswrapper[4698]: I0127 14:40:20.921164 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/983d241c-675f-495a-92e5-c84dbc6bc183-registry-certificates\") pod \"image-registry-66df7c8f76-qdxjp\" (UID: \"983d241c-675f-495a-92e5-c84dbc6bc183\") " pod="openshift-image-registry/image-registry-66df7c8f76-qdxjp" Jan 27 14:40:20 crc kubenswrapper[4698]: I0127 14:40:20.921205 4698 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-qdxjp\" (UID: \"983d241c-675f-495a-92e5-c84dbc6bc183\") " pod="openshift-image-registry/image-registry-66df7c8f76-qdxjp" Jan 27 14:40:20 crc kubenswrapper[4698]: I0127 14:40:20.921230 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/983d241c-675f-495a-92e5-c84dbc6bc183-bound-sa-token\") pod \"image-registry-66df7c8f76-qdxjp\" (UID: \"983d241c-675f-495a-92e5-c84dbc6bc183\") " pod="openshift-image-registry/image-registry-66df7c8f76-qdxjp" Jan 27 14:40:20 crc kubenswrapper[4698]: I0127 14:40:20.942261 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-qdxjp\" (UID: \"983d241c-675f-495a-92e5-c84dbc6bc183\") " pod="openshift-image-registry/image-registry-66df7c8f76-qdxjp" Jan 27 14:40:21 crc kubenswrapper[4698]: I0127 14:40:21.022343 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qxgss\" (UniqueName: \"kubernetes.io/projected/983d241c-675f-495a-92e5-c84dbc6bc183-kube-api-access-qxgss\") pod \"image-registry-66df7c8f76-qdxjp\" (UID: \"983d241c-675f-495a-92e5-c84dbc6bc183\") " pod="openshift-image-registry/image-registry-66df7c8f76-qdxjp" Jan 27 14:40:21 crc kubenswrapper[4698]: I0127 14:40:21.022401 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/983d241c-675f-495a-92e5-c84dbc6bc183-ca-trust-extracted\") pod \"image-registry-66df7c8f76-qdxjp\" (UID: \"983d241c-675f-495a-92e5-c84dbc6bc183\") " pod="openshift-image-registry/image-registry-66df7c8f76-qdxjp" Jan 27 14:40:21 crc kubenswrapper[4698]: I0127 14:40:21.022425 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/983d241c-675f-495a-92e5-c84dbc6bc183-registry-tls\") pod \"image-registry-66df7c8f76-qdxjp\" (UID: \"983d241c-675f-495a-92e5-c84dbc6bc183\") " pod="openshift-image-registry/image-registry-66df7c8f76-qdxjp" Jan 27 14:40:21 crc kubenswrapper[4698]: I0127 14:40:21.022447 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/983d241c-675f-495a-92e5-c84dbc6bc183-installation-pull-secrets\") pod \"image-registry-66df7c8f76-qdxjp\" (UID: \"983d241c-675f-495a-92e5-c84dbc6bc183\") " pod="openshift-image-registry/image-registry-66df7c8f76-qdxjp" Jan 27 14:40:21 crc kubenswrapper[4698]: I0127 14:40:21.022464 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/983d241c-675f-495a-92e5-c84dbc6bc183-trusted-ca\") pod \"image-registry-66df7c8f76-qdxjp\" (UID: \"983d241c-675f-495a-92e5-c84dbc6bc183\") " pod="openshift-image-registry/image-registry-66df7c8f76-qdxjp" Jan 27 14:40:21 crc kubenswrapper[4698]: I0127 14:40:21.022496 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: 
\"kubernetes.io/configmap/983d241c-675f-495a-92e5-c84dbc6bc183-registry-certificates\") pod \"image-registry-66df7c8f76-qdxjp\" (UID: \"983d241c-675f-495a-92e5-c84dbc6bc183\") " pod="openshift-image-registry/image-registry-66df7c8f76-qdxjp" Jan 27 14:40:21 crc kubenswrapper[4698]: I0127 14:40:21.022522 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/983d241c-675f-495a-92e5-c84dbc6bc183-bound-sa-token\") pod \"image-registry-66df7c8f76-qdxjp\" (UID: \"983d241c-675f-495a-92e5-c84dbc6bc183\") " pod="openshift-image-registry/image-registry-66df7c8f76-qdxjp" Jan 27 14:40:21 crc kubenswrapper[4698]: I0127 14:40:21.023381 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/983d241c-675f-495a-92e5-c84dbc6bc183-ca-trust-extracted\") pod \"image-registry-66df7c8f76-qdxjp\" (UID: \"983d241c-675f-495a-92e5-c84dbc6bc183\") " pod="openshift-image-registry/image-registry-66df7c8f76-qdxjp" Jan 27 14:40:21 crc kubenswrapper[4698]: I0127 14:40:21.024253 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/983d241c-675f-495a-92e5-c84dbc6bc183-registry-certificates\") pod \"image-registry-66df7c8f76-qdxjp\" (UID: \"983d241c-675f-495a-92e5-c84dbc6bc183\") " pod="openshift-image-registry/image-registry-66df7c8f76-qdxjp" Jan 27 14:40:21 crc kubenswrapper[4698]: I0127 14:40:21.024616 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/983d241c-675f-495a-92e5-c84dbc6bc183-trusted-ca\") pod \"image-registry-66df7c8f76-qdxjp\" (UID: \"983d241c-675f-495a-92e5-c84dbc6bc183\") " pod="openshift-image-registry/image-registry-66df7c8f76-qdxjp" Jan 27 14:40:21 crc kubenswrapper[4698]: I0127 14:40:21.028581 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/983d241c-675f-495a-92e5-c84dbc6bc183-installation-pull-secrets\") pod \"image-registry-66df7c8f76-qdxjp\" (UID: \"983d241c-675f-495a-92e5-c84dbc6bc183\") " pod="openshift-image-registry/image-registry-66df7c8f76-qdxjp" Jan 27 14:40:21 crc kubenswrapper[4698]: I0127 14:40:21.028581 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/983d241c-675f-495a-92e5-c84dbc6bc183-registry-tls\") pod \"image-registry-66df7c8f76-qdxjp\" (UID: \"983d241c-675f-495a-92e5-c84dbc6bc183\") " pod="openshift-image-registry/image-registry-66df7c8f76-qdxjp" Jan 27 14:40:21 crc kubenswrapper[4698]: I0127 14:40:21.041760 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/983d241c-675f-495a-92e5-c84dbc6bc183-bound-sa-token\") pod \"image-registry-66df7c8f76-qdxjp\" (UID: \"983d241c-675f-495a-92e5-c84dbc6bc183\") " pod="openshift-image-registry/image-registry-66df7c8f76-qdxjp" Jan 27 14:40:21 crc kubenswrapper[4698]: I0127 14:40:21.043982 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qxgss\" (UniqueName: \"kubernetes.io/projected/983d241c-675f-495a-92e5-c84dbc6bc183-kube-api-access-qxgss\") pod \"image-registry-66df7c8f76-qdxjp\" (UID: \"983d241c-675f-495a-92e5-c84dbc6bc183\") " pod="openshift-image-registry/image-registry-66df7c8f76-qdxjp" Jan 27 14:40:21 crc kubenswrapper[4698]: I0127 
14:40:21.180518 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-qdxjp" Jan 27 14:40:21 crc kubenswrapper[4698]: I0127 14:40:21.564109 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-qdxjp"] Jan 27 14:40:21 crc kubenswrapper[4698]: I0127 14:40:21.759573 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-qdxjp" event={"ID":"983d241c-675f-495a-92e5-c84dbc6bc183","Type":"ContainerStarted","Data":"8ebdb781ef60c39677bb1fa4d9caa2aa47e6b166f7f92026a9d9710fdde6aef3"} Jan 27 14:40:22 crc kubenswrapper[4698]: I0127 14:40:22.770351 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-qdxjp" event={"ID":"983d241c-675f-495a-92e5-c84dbc6bc183","Type":"ContainerStarted","Data":"5561bcb2ef22bb9b4ce1c0ef65d95594838e9ee074cb6520c7ea8425e4308765"} Jan 27 14:40:22 crc kubenswrapper[4698]: I0127 14:40:22.770801 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-66df7c8f76-qdxjp" Jan 27 14:40:22 crc kubenswrapper[4698]: I0127 14:40:22.800624 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-66df7c8f76-qdxjp" podStartSLOduration=2.800605644 podStartE2EDuration="2.800605644s" podCreationTimestamp="2026-01-27 14:40:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:40:22.796784864 +0000 UTC m=+678.473562329" watchObservedRunningTime="2026-01-27 14:40:22.800605644 +0000 UTC m=+678.477383109" Jan 27 14:40:41 crc kubenswrapper[4698]: I0127 14:40:41.185845 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66df7c8f76-qdxjp" Jan 27 14:40:41 crc kubenswrapper[4698]: I0127 14:40:41.261480 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-vz5fp"] Jan 27 14:41:06 crc kubenswrapper[4698]: I0127 14:41:06.295728 4698 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-image-registry/image-registry-697d97f7c8-vz5fp" podUID="fb1d6ca9-58c3-4d0f-9b6f-e9dad08632b8" containerName="registry" containerID="cri-o://7a744eda474bd4d99c2f492bb5fb18fb9a6f9209e64d0226ecc5c31160c7aebf" gracePeriod=30 Jan 27 14:41:06 crc kubenswrapper[4698]: I0127 14:41:06.654258 4698 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-vz5fp" Jan 27 14:41:06 crc kubenswrapper[4698]: I0127 14:41:06.721535 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gfwth\" (UniqueName: \"kubernetes.io/projected/fb1d6ca9-58c3-4d0f-9b6f-e9dad08632b8-kube-api-access-gfwth\") pod \"fb1d6ca9-58c3-4d0f-9b6f-e9dad08632b8\" (UID: \"fb1d6ca9-58c3-4d0f-9b6f-e9dad08632b8\") " Jan 27 14:41:06 crc kubenswrapper[4698]: I0127 14:41:06.721788 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/fb1d6ca9-58c3-4d0f-9b6f-e9dad08632b8-installation-pull-secrets\") pod \"fb1d6ca9-58c3-4d0f-9b6f-e9dad08632b8\" (UID: \"fb1d6ca9-58c3-4d0f-9b6f-e9dad08632b8\") " Jan 27 14:41:06 crc kubenswrapper[4698]: I0127 14:41:06.722032 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"fb1d6ca9-58c3-4d0f-9b6f-e9dad08632b8\" (UID: \"fb1d6ca9-58c3-4d0f-9b6f-e9dad08632b8\") " Jan 27 14:41:06 crc kubenswrapper[4698]: I0127 14:41:06.722078 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/fb1d6ca9-58c3-4d0f-9b6f-e9dad08632b8-trusted-ca\") pod \"fb1d6ca9-58c3-4d0f-9b6f-e9dad08632b8\" (UID: \"fb1d6ca9-58c3-4d0f-9b6f-e9dad08632b8\") " Jan 27 14:41:06 crc kubenswrapper[4698]: I0127 14:41:06.722112 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/fb1d6ca9-58c3-4d0f-9b6f-e9dad08632b8-registry-tls\") pod \"fb1d6ca9-58c3-4d0f-9b6f-e9dad08632b8\" (UID: \"fb1d6ca9-58c3-4d0f-9b6f-e9dad08632b8\") " Jan 27 14:41:06 crc kubenswrapper[4698]: I0127 14:41:06.722161 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/fb1d6ca9-58c3-4d0f-9b6f-e9dad08632b8-registry-certificates\") pod \"fb1d6ca9-58c3-4d0f-9b6f-e9dad08632b8\" (UID: \"fb1d6ca9-58c3-4d0f-9b6f-e9dad08632b8\") " Jan 27 14:41:06 crc kubenswrapper[4698]: I0127 14:41:06.722208 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/fb1d6ca9-58c3-4d0f-9b6f-e9dad08632b8-bound-sa-token\") pod \"fb1d6ca9-58c3-4d0f-9b6f-e9dad08632b8\" (UID: \"fb1d6ca9-58c3-4d0f-9b6f-e9dad08632b8\") " Jan 27 14:41:06 crc kubenswrapper[4698]: I0127 14:41:06.722287 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/fb1d6ca9-58c3-4d0f-9b6f-e9dad08632b8-ca-trust-extracted\") pod \"fb1d6ca9-58c3-4d0f-9b6f-e9dad08632b8\" (UID: \"fb1d6ca9-58c3-4d0f-9b6f-e9dad08632b8\") " Jan 27 14:41:06 crc kubenswrapper[4698]: I0127 14:41:06.722848 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fb1d6ca9-58c3-4d0f-9b6f-e9dad08632b8-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "fb1d6ca9-58c3-4d0f-9b6f-e9dad08632b8" (UID: "fb1d6ca9-58c3-4d0f-9b6f-e9dad08632b8"). InnerVolumeSpecName "registry-certificates". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:41:06 crc kubenswrapper[4698]: I0127 14:41:06.722872 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fb1d6ca9-58c3-4d0f-9b6f-e9dad08632b8-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "fb1d6ca9-58c3-4d0f-9b6f-e9dad08632b8" (UID: "fb1d6ca9-58c3-4d0f-9b6f-e9dad08632b8"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:41:06 crc kubenswrapper[4698]: I0127 14:41:06.727675 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fb1d6ca9-58c3-4d0f-9b6f-e9dad08632b8-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "fb1d6ca9-58c3-4d0f-9b6f-e9dad08632b8" (UID: "fb1d6ca9-58c3-4d0f-9b6f-e9dad08632b8"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:41:06 crc kubenswrapper[4698]: I0127 14:41:06.728575 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fb1d6ca9-58c3-4d0f-9b6f-e9dad08632b8-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "fb1d6ca9-58c3-4d0f-9b6f-e9dad08632b8" (UID: "fb1d6ca9-58c3-4d0f-9b6f-e9dad08632b8"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:41:06 crc kubenswrapper[4698]: I0127 14:41:06.735792 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fb1d6ca9-58c3-4d0f-9b6f-e9dad08632b8-kube-api-access-gfwth" (OuterVolumeSpecName: "kube-api-access-gfwth") pod "fb1d6ca9-58c3-4d0f-9b6f-e9dad08632b8" (UID: "fb1d6ca9-58c3-4d0f-9b6f-e9dad08632b8"). InnerVolumeSpecName "kube-api-access-gfwth". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:41:06 crc kubenswrapper[4698]: I0127 14:41:06.735950 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fb1d6ca9-58c3-4d0f-9b6f-e9dad08632b8-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "fb1d6ca9-58c3-4d0f-9b6f-e9dad08632b8" (UID: "fb1d6ca9-58c3-4d0f-9b6f-e9dad08632b8"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:41:06 crc kubenswrapper[4698]: I0127 14:41:06.736334 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "registry-storage") pod "fb1d6ca9-58c3-4d0f-9b6f-e9dad08632b8" (UID: "fb1d6ca9-58c3-4d0f-9b6f-e9dad08632b8"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 27 14:41:06 crc kubenswrapper[4698]: I0127 14:41:06.741380 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fb1d6ca9-58c3-4d0f-9b6f-e9dad08632b8-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "fb1d6ca9-58c3-4d0f-9b6f-e9dad08632b8" (UID: "fb1d6ca9-58c3-4d0f-9b6f-e9dad08632b8"). InnerVolumeSpecName "ca-trust-extracted". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 14:41:06 crc kubenswrapper[4698]: I0127 14:41:06.824045 4698 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/fb1d6ca9-58c3-4d0f-9b6f-e9dad08632b8-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 27 14:41:06 crc kubenswrapper[4698]: I0127 14:41:06.824099 4698 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/fb1d6ca9-58c3-4d0f-9b6f-e9dad08632b8-registry-tls\") on node \"crc\" DevicePath \"\"" Jan 27 14:41:06 crc kubenswrapper[4698]: I0127 14:41:06.824110 4698 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/fb1d6ca9-58c3-4d0f-9b6f-e9dad08632b8-registry-certificates\") on node \"crc\" DevicePath \"\"" Jan 27 14:41:06 crc kubenswrapper[4698]: I0127 14:41:06.824121 4698 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/fb1d6ca9-58c3-4d0f-9b6f-e9dad08632b8-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 27 14:41:06 crc kubenswrapper[4698]: I0127 14:41:06.824130 4698 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/fb1d6ca9-58c3-4d0f-9b6f-e9dad08632b8-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Jan 27 14:41:06 crc kubenswrapper[4698]: I0127 14:41:06.824139 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gfwth\" (UniqueName: \"kubernetes.io/projected/fb1d6ca9-58c3-4d0f-9b6f-e9dad08632b8-kube-api-access-gfwth\") on node \"crc\" DevicePath \"\"" Jan 27 14:41:06 crc kubenswrapper[4698]: I0127 14:41:06.824149 4698 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/fb1d6ca9-58c3-4d0f-9b6f-e9dad08632b8-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Jan 27 14:41:07 crc kubenswrapper[4698]: I0127 14:41:07.000311 4698 generic.go:334] "Generic (PLEG): container finished" podID="fb1d6ca9-58c3-4d0f-9b6f-e9dad08632b8" containerID="7a744eda474bd4d99c2f492bb5fb18fb9a6f9209e64d0226ecc5c31160c7aebf" exitCode=0 Jan 27 14:41:07 crc kubenswrapper[4698]: I0127 14:41:07.000376 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-vz5fp" event={"ID":"fb1d6ca9-58c3-4d0f-9b6f-e9dad08632b8","Type":"ContainerDied","Data":"7a744eda474bd4d99c2f492bb5fb18fb9a6f9209e64d0226ecc5c31160c7aebf"} Jan 27 14:41:07 crc kubenswrapper[4698]: I0127 14:41:07.000412 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-vz5fp" event={"ID":"fb1d6ca9-58c3-4d0f-9b6f-e9dad08632b8","Type":"ContainerDied","Data":"20dbf4987d8ba1764c6ef92fbeb85c51aefc92ad4d2fc4fecc4e5c0d4fdb463a"} Jan 27 14:41:07 crc kubenswrapper[4698]: I0127 14:41:07.000445 4698 scope.go:117] "RemoveContainer" containerID="7a744eda474bd4d99c2f492bb5fb18fb9a6f9209e64d0226ecc5c31160c7aebf" Jan 27 14:41:07 crc kubenswrapper[4698]: I0127 14:41:07.000943 4698 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-vz5fp" Jan 27 14:41:07 crc kubenswrapper[4698]: I0127 14:41:07.015921 4698 scope.go:117] "RemoveContainer" containerID="7a744eda474bd4d99c2f492bb5fb18fb9a6f9209e64d0226ecc5c31160c7aebf" Jan 27 14:41:07 crc kubenswrapper[4698]: E0127 14:41:07.016653 4698 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7a744eda474bd4d99c2f492bb5fb18fb9a6f9209e64d0226ecc5c31160c7aebf\": container with ID starting with 7a744eda474bd4d99c2f492bb5fb18fb9a6f9209e64d0226ecc5c31160c7aebf not found: ID does not exist" containerID="7a744eda474bd4d99c2f492bb5fb18fb9a6f9209e64d0226ecc5c31160c7aebf" Jan 27 14:41:07 crc kubenswrapper[4698]: I0127 14:41:07.016695 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7a744eda474bd4d99c2f492bb5fb18fb9a6f9209e64d0226ecc5c31160c7aebf"} err="failed to get container status \"7a744eda474bd4d99c2f492bb5fb18fb9a6f9209e64d0226ecc5c31160c7aebf\": rpc error: code = NotFound desc = could not find container \"7a744eda474bd4d99c2f492bb5fb18fb9a6f9209e64d0226ecc5c31160c7aebf\": container with ID starting with 7a744eda474bd4d99c2f492bb5fb18fb9a6f9209e64d0226ecc5c31160c7aebf not found: ID does not exist" Jan 27 14:41:07 crc kubenswrapper[4698]: I0127 14:41:07.038733 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-vz5fp"] Jan 27 14:41:07 crc kubenswrapper[4698]: I0127 14:41:07.042047 4698 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-vz5fp"] Jan 27 14:41:09 crc kubenswrapper[4698]: I0127 14:41:09.000924 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fb1d6ca9-58c3-4d0f-9b6f-e9dad08632b8" path="/var/lib/kubelet/pods/fb1d6ca9-58c3-4d0f-9b6f-e9dad08632b8/volumes" Jan 27 14:41:11 crc kubenswrapper[4698]: I0127 14:41:11.540764 4698 patch_prober.go:28] interesting pod/image-registry-697d97f7c8-vz5fp container/registry namespace/openshift-image-registry: Readiness probe status=failure output="Get \"https://10.217.0.24:5000/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 14:41:11 crc kubenswrapper[4698]: I0127 14:41:11.541149 4698 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-image-registry/image-registry-697d97f7c8-vz5fp" podUID="fb1d6ca9-58c3-4d0f-9b6f-e9dad08632b8" containerName="registry" probeResult="failure" output="Get \"https://10.217.0.24:5000/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 27 14:41:27 crc kubenswrapper[4698]: I0127 14:41:27.451735 4698 patch_prober.go:28] interesting pod/machine-config-daemon-ndrd6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 14:41:27 crc kubenswrapper[4698]: I0127 14:41:27.452364 4698 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ndrd6" podUID="3e403fc5-7005-474c-8c75-b7906b481677" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 14:41:49 crc kubenswrapper[4698]: 
I0127 14:41:49.102234 4698 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Jan 27 14:41:57 crc kubenswrapper[4698]: I0127 14:41:57.452330 4698 patch_prober.go:28] interesting pod/machine-config-daemon-ndrd6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 14:41:57 crc kubenswrapper[4698]: I0127 14:41:57.453271 4698 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ndrd6" podUID="3e403fc5-7005-474c-8c75-b7906b481677" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 14:42:21 crc kubenswrapper[4698]: I0127 14:42:21.272915 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-h58jw"] Jan 27 14:42:21 crc kubenswrapper[4698]: E0127 14:42:21.274545 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fb1d6ca9-58c3-4d0f-9b6f-e9dad08632b8" containerName="registry" Jan 27 14:42:21 crc kubenswrapper[4698]: I0127 14:42:21.274632 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="fb1d6ca9-58c3-4d0f-9b6f-e9dad08632b8" containerName="registry" Jan 27 14:42:21 crc kubenswrapper[4698]: I0127 14:42:21.274815 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="fb1d6ca9-58c3-4d0f-9b6f-e9dad08632b8" containerName="registry" Jan 27 14:42:21 crc kubenswrapper[4698]: I0127 14:42:21.275267 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-cf98fcc89-h58jw" Jan 27 14:42:21 crc kubenswrapper[4698]: I0127 14:42:21.279624 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-858654f9db-xw65b"] Jan 27 14:42:21 crc kubenswrapper[4698]: I0127 14:42:21.280276 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-858654f9db-xw65b" Jan 27 14:42:21 crc kubenswrapper[4698]: I0127 14:42:21.280674 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"openshift-service-ca.crt" Jan 27 14:42:21 crc kubenswrapper[4698]: I0127 14:42:21.280881 4698 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-cainjector-dockercfg-njb5q" Jan 27 14:42:21 crc kubenswrapper[4698]: I0127 14:42:21.282848 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"kube-root-ca.crt" Jan 27 14:42:21 crc kubenswrapper[4698]: I0127 14:42:21.287074 4698 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-dockercfg-7tz8s" Jan 27 14:42:21 crc kubenswrapper[4698]: I0127 14:42:21.288054 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-h58jw"] Jan 27 14:42:21 crc kubenswrapper[4698]: I0127 14:42:21.299878 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858654f9db-xw65b"] Jan 27 14:42:21 crc kubenswrapper[4698]: I0127 14:42:21.317045 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-25xhd"] Jan 27 14:42:21 crc kubenswrapper[4698]: I0127 14:42:21.317846 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-webhook-687f57d79b-25xhd" Jan 27 14:42:21 crc kubenswrapper[4698]: I0127 14:42:21.320104 4698 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-webhook-dockercfg-xvcz4" Jan 27 14:42:21 crc kubenswrapper[4698]: I0127 14:42:21.332786 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-25xhd"] Jan 27 14:42:21 crc kubenswrapper[4698]: I0127 14:42:21.379350 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g9nmc\" (UniqueName: \"kubernetes.io/projected/1c7fda0e-3d43-4d0d-a649-71f9117493c1-kube-api-access-g9nmc\") pod \"cert-manager-cainjector-cf98fcc89-h58jw\" (UID: \"1c7fda0e-3d43-4d0d-a649-71f9117493c1\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-h58jw" Jan 27 14:42:21 crc kubenswrapper[4698]: I0127 14:42:21.379421 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pkxsg\" (UniqueName: \"kubernetes.io/projected/7c7b72cb-94ea-407b-b24c-b9b2d7b33be1-kube-api-access-pkxsg\") pod \"cert-manager-webhook-687f57d79b-25xhd\" (UID: \"7c7b72cb-94ea-407b-b24c-b9b2d7b33be1\") " pod="cert-manager/cert-manager-webhook-687f57d79b-25xhd" Jan 27 14:42:21 crc kubenswrapper[4698]: I0127 14:42:21.379480 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fdm57\" (UniqueName: \"kubernetes.io/projected/4a6256b9-f95d-4bee-970e-f903645456ba-kube-api-access-fdm57\") pod \"cert-manager-858654f9db-xw65b\" (UID: \"4a6256b9-f95d-4bee-970e-f903645456ba\") " pod="cert-manager/cert-manager-858654f9db-xw65b" Jan 27 14:42:21 crc kubenswrapper[4698]: I0127 14:42:21.480428 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g9nmc\" (UniqueName: \"kubernetes.io/projected/1c7fda0e-3d43-4d0d-a649-71f9117493c1-kube-api-access-g9nmc\") pod \"cert-manager-cainjector-cf98fcc89-h58jw\" (UID: \"1c7fda0e-3d43-4d0d-a649-71f9117493c1\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-h58jw" Jan 27 14:42:21 crc kubenswrapper[4698]: I0127 14:42:21.480813 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pkxsg\" (UniqueName: \"kubernetes.io/projected/7c7b72cb-94ea-407b-b24c-b9b2d7b33be1-kube-api-access-pkxsg\") pod \"cert-manager-webhook-687f57d79b-25xhd\" (UID: \"7c7b72cb-94ea-407b-b24c-b9b2d7b33be1\") " pod="cert-manager/cert-manager-webhook-687f57d79b-25xhd" Jan 27 14:42:21 crc kubenswrapper[4698]: I0127 14:42:21.480963 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fdm57\" (UniqueName: \"kubernetes.io/projected/4a6256b9-f95d-4bee-970e-f903645456ba-kube-api-access-fdm57\") pod \"cert-manager-858654f9db-xw65b\" (UID: \"4a6256b9-f95d-4bee-970e-f903645456ba\") " pod="cert-manager/cert-manager-858654f9db-xw65b" Jan 27 14:42:21 crc kubenswrapper[4698]: I0127 14:42:21.499695 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g9nmc\" (UniqueName: \"kubernetes.io/projected/1c7fda0e-3d43-4d0d-a649-71f9117493c1-kube-api-access-g9nmc\") pod \"cert-manager-cainjector-cf98fcc89-h58jw\" (UID: \"1c7fda0e-3d43-4d0d-a649-71f9117493c1\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-h58jw" Jan 27 14:42:21 crc kubenswrapper[4698]: I0127 14:42:21.499722 4698 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-pkxsg\" (UniqueName: \"kubernetes.io/projected/7c7b72cb-94ea-407b-b24c-b9b2d7b33be1-kube-api-access-pkxsg\") pod \"cert-manager-webhook-687f57d79b-25xhd\" (UID: \"7c7b72cb-94ea-407b-b24c-b9b2d7b33be1\") " pod="cert-manager/cert-manager-webhook-687f57d79b-25xhd" Jan 27 14:42:21 crc kubenswrapper[4698]: I0127 14:42:21.502163 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fdm57\" (UniqueName: \"kubernetes.io/projected/4a6256b9-f95d-4bee-970e-f903645456ba-kube-api-access-fdm57\") pod \"cert-manager-858654f9db-xw65b\" (UID: \"4a6256b9-f95d-4bee-970e-f903645456ba\") " pod="cert-manager/cert-manager-858654f9db-xw65b" Jan 27 14:42:21 crc kubenswrapper[4698]: I0127 14:42:21.595020 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-cf98fcc89-h58jw" Jan 27 14:42:21 crc kubenswrapper[4698]: I0127 14:42:21.606581 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-858654f9db-xw65b" Jan 27 14:42:21 crc kubenswrapper[4698]: I0127 14:42:21.635904 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-687f57d79b-25xhd" Jan 27 14:42:21 crc kubenswrapper[4698]: I0127 14:42:21.811535 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858654f9db-xw65b"] Jan 27 14:42:21 crc kubenswrapper[4698]: W0127 14:42:21.821341 4698 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4a6256b9_f95d_4bee_970e_f903645456ba.slice/crio-db19fad5ab2604e2853b62a34d89814ad74741b6bc9a355a00c6f49791807a09 WatchSource:0}: Error finding container db19fad5ab2604e2853b62a34d89814ad74741b6bc9a355a00c6f49791807a09: Status 404 returned error can't find the container with id db19fad5ab2604e2853b62a34d89814ad74741b6bc9a355a00c6f49791807a09 Jan 27 14:42:21 crc kubenswrapper[4698]: I0127 14:42:21.825038 4698 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 27 14:42:22 crc kubenswrapper[4698]: I0127 14:42:22.047524 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-h58jw"] Jan 27 14:42:22 crc kubenswrapper[4698]: I0127 14:42:22.114829 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-25xhd"] Jan 27 14:42:22 crc kubenswrapper[4698]: W0127 14:42:22.119812 4698 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7c7b72cb_94ea_407b_b24c_b9b2d7b33be1.slice/crio-12221dc8acdff14b52d1a3ae9a6dca078eeefcfaa121f9428084c1b34db6879c WatchSource:0}: Error finding container 12221dc8acdff14b52d1a3ae9a6dca078eeefcfaa121f9428084c1b34db6879c: Status 404 returned error can't find the container with id 12221dc8acdff14b52d1a3ae9a6dca078eeefcfaa121f9428084c1b34db6879c Jan 27 14:42:22 crc kubenswrapper[4698]: I0127 14:42:22.389821 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858654f9db-xw65b" event={"ID":"4a6256b9-f95d-4bee-970e-f903645456ba","Type":"ContainerStarted","Data":"db19fad5ab2604e2853b62a34d89814ad74741b6bc9a355a00c6f49791807a09"} Jan 27 14:42:22 crc kubenswrapper[4698]: I0127 14:42:22.390975 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-cf98fcc89-h58jw" 
event={"ID":"1c7fda0e-3d43-4d0d-a649-71f9117493c1","Type":"ContainerStarted","Data":"a27fd2661e3dfd2611e6308e42ea561af82a9da49f6144e832486ef18c93e370"} Jan 27 14:42:22 crc kubenswrapper[4698]: I0127 14:42:22.392058 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-687f57d79b-25xhd" event={"ID":"7c7b72cb-94ea-407b-b24c-b9b2d7b33be1","Type":"ContainerStarted","Data":"12221dc8acdff14b52d1a3ae9a6dca078eeefcfaa121f9428084c1b34db6879c"} Jan 27 14:42:25 crc kubenswrapper[4698]: I0127 14:42:25.415252 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858654f9db-xw65b" event={"ID":"4a6256b9-f95d-4bee-970e-f903645456ba","Type":"ContainerStarted","Data":"ed328e4014e033b040aba6ea9b5f54ad4fd3a60b954aa975a45315ca0edd937f"} Jan 27 14:42:25 crc kubenswrapper[4698]: I0127 14:42:25.433281 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-858654f9db-xw65b" podStartSLOduration=1.9350100220000002 podStartE2EDuration="4.433262898s" podCreationTimestamp="2026-01-27 14:42:21 +0000 UTC" firstStartedPulling="2026-01-27 14:42:21.824820476 +0000 UTC m=+797.501597941" lastFinishedPulling="2026-01-27 14:42:24.323073352 +0000 UTC m=+799.999850817" observedRunningTime="2026-01-27 14:42:25.429083939 +0000 UTC m=+801.105861424" watchObservedRunningTime="2026-01-27 14:42:25.433262898 +0000 UTC m=+801.110040373" Jan 27 14:42:27 crc kubenswrapper[4698]: I0127 14:42:27.426690 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-687f57d79b-25xhd" event={"ID":"7c7b72cb-94ea-407b-b24c-b9b2d7b33be1","Type":"ContainerStarted","Data":"e49ca9d4faa9203b4b810dfd9c993b858529e9e2bd298f59f8d31de90e3b9940"} Jan 27 14:42:27 crc kubenswrapper[4698]: I0127 14:42:27.427019 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cert-manager/cert-manager-webhook-687f57d79b-25xhd" Jan 27 14:42:27 crc kubenswrapper[4698]: I0127 14:42:27.440472 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-webhook-687f57d79b-25xhd" podStartSLOduration=1.8880531980000002 podStartE2EDuration="6.440454429s" podCreationTimestamp="2026-01-27 14:42:21 +0000 UTC" firstStartedPulling="2026-01-27 14:42:22.122039727 +0000 UTC m=+797.798817192" lastFinishedPulling="2026-01-27 14:42:26.674440958 +0000 UTC m=+802.351218423" observedRunningTime="2026-01-27 14:42:27.439846503 +0000 UTC m=+803.116623978" watchObservedRunningTime="2026-01-27 14:42:27.440454429 +0000 UTC m=+803.117231894" Jan 27 14:42:27 crc kubenswrapper[4698]: I0127 14:42:27.452196 4698 patch_prober.go:28] interesting pod/machine-config-daemon-ndrd6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 14:42:27 crc kubenswrapper[4698]: I0127 14:42:27.452247 4698 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ndrd6" podUID="3e403fc5-7005-474c-8c75-b7906b481677" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 14:42:27 crc kubenswrapper[4698]: I0127 14:42:27.452287 4698 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-ndrd6" Jan 27 
14:42:27 crc kubenswrapper[4698]: I0127 14:42:27.452870 4698 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"6582ea9fc85bfcf7cda9ed10da113c6bdd3405f16aaba0460f6cb69b57c13ba5"} pod="openshift-machine-config-operator/machine-config-daemon-ndrd6" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 27 14:42:27 crc kubenswrapper[4698]: I0127 14:42:27.452919 4698 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-ndrd6" podUID="3e403fc5-7005-474c-8c75-b7906b481677" containerName="machine-config-daemon" containerID="cri-o://6582ea9fc85bfcf7cda9ed10da113c6bdd3405f16aaba0460f6cb69b57c13ba5" gracePeriod=600 Jan 27 14:42:28 crc kubenswrapper[4698]: I0127 14:42:28.433858 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-cf98fcc89-h58jw" event={"ID":"1c7fda0e-3d43-4d0d-a649-71f9117493c1","Type":"ContainerStarted","Data":"23f51425b83d0531a12ae47b2d9e7e85a5784a6098e1f4c001d6ca713237d3e8"} Jan 27 14:42:28 crc kubenswrapper[4698]: I0127 14:42:28.438345 4698 generic.go:334] "Generic (PLEG): container finished" podID="3e403fc5-7005-474c-8c75-b7906b481677" containerID="6582ea9fc85bfcf7cda9ed10da113c6bdd3405f16aaba0460f6cb69b57c13ba5" exitCode=0 Jan 27 14:42:28 crc kubenswrapper[4698]: I0127 14:42:28.438405 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ndrd6" event={"ID":"3e403fc5-7005-474c-8c75-b7906b481677","Type":"ContainerDied","Data":"6582ea9fc85bfcf7cda9ed10da113c6bdd3405f16aaba0460f6cb69b57c13ba5"} Jan 27 14:42:28 crc kubenswrapper[4698]: I0127 14:42:28.438744 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ndrd6" event={"ID":"3e403fc5-7005-474c-8c75-b7906b481677","Type":"ContainerStarted","Data":"ebfd43abe434a69d79a515882ba43f2e73b9ebc9b44891f2eec4f138ba47c9b0"} Jan 27 14:42:28 crc kubenswrapper[4698]: I0127 14:42:28.438769 4698 scope.go:117] "RemoveContainer" containerID="e23d02dca02c560d54d4580f818ec0b2d4b146297f0df6a3d6670a06f9ad5cdd" Jan 27 14:42:28 crc kubenswrapper[4698]: I0127 14:42:28.454156 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-cf98fcc89-h58jw" podStartSLOduration=1.850352888 podStartE2EDuration="7.454139119s" podCreationTimestamp="2026-01-27 14:42:21 +0000 UTC" firstStartedPulling="2026-01-27 14:42:22.050567149 +0000 UTC m=+797.727344614" lastFinishedPulling="2026-01-27 14:42:27.65435338 +0000 UTC m=+803.331130845" observedRunningTime="2026-01-27 14:42:28.450674718 +0000 UTC m=+804.127452193" watchObservedRunningTime="2026-01-27 14:42:28.454139119 +0000 UTC m=+804.130916574" Jan 27 14:42:30 crc kubenswrapper[4698]: I0127 14:42:30.863788 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-xmpm6"] Jan 27 14:42:30 crc kubenswrapper[4698]: I0127 14:42:30.865803 4698 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-xmpm6" podUID="c59a9d01-79ce-42d9-a41d-39d7d73cb03e" containerName="nbdb" containerID="cri-o://c98fd6e3005c979f95419686ec2143aa282fa56d7306ecaff766759e34b68b2c" gracePeriod=30 Jan 27 14:42:30 crc kubenswrapper[4698]: I0127 14:42:30.865939 4698 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-ovn-kubernetes/ovnkube-node-xmpm6" podUID="c59a9d01-79ce-42d9-a41d-39d7d73cb03e" containerName="sbdb" containerID="cri-o://0356c9c90e9c8e65b9452361aa486ffe95757bd69803077b32fe7ba0223ae1da" gracePeriod=30 Jan 27 14:42:30 crc kubenswrapper[4698]: I0127 14:42:30.865942 4698 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-xmpm6" podUID="c59a9d01-79ce-42d9-a41d-39d7d73cb03e" containerName="northd" containerID="cri-o://7e38585fe9db52e6e7cc5c3a41e23a0bb761977a3b25fd281d86458f422e3791" gracePeriod=30 Jan 27 14:42:30 crc kubenswrapper[4698]: I0127 14:42:30.865997 4698 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-xmpm6" podUID="c59a9d01-79ce-42d9-a41d-39d7d73cb03e" containerName="ovn-controller" containerID="cri-o://87e7499ab247dff50eee2a5e113ed36ff00c467e8e1bcdfee45e8835815d4b81" gracePeriod=30 Jan 27 14:42:30 crc kubenswrapper[4698]: I0127 14:42:30.865954 4698 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-xmpm6" podUID="c59a9d01-79ce-42d9-a41d-39d7d73cb03e" containerName="kube-rbac-proxy-node" containerID="cri-o://d81778ae40167de932debf7266f3c793d77edeace2116220c21cf89e15e4cc98" gracePeriod=30 Jan 27 14:42:30 crc kubenswrapper[4698]: I0127 14:42:30.866009 4698 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-xmpm6" podUID="c59a9d01-79ce-42d9-a41d-39d7d73cb03e" containerName="ovn-acl-logging" containerID="cri-o://b80c023f92c9854f8c16b6b50c10dac58f968938615375019ba2875d9cd69e5b" gracePeriod=30 Jan 27 14:42:30 crc kubenswrapper[4698]: I0127 14:42:30.865946 4698 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-xmpm6" podUID="c59a9d01-79ce-42d9-a41d-39d7d73cb03e" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://ea6d17f5861064198539f0b84b559f9d28cfa5decaa89a9059469af5a3e12e09" gracePeriod=30 Jan 27 14:42:30 crc kubenswrapper[4698]: I0127 14:42:30.896389 4698 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-xmpm6" podUID="c59a9d01-79ce-42d9-a41d-39d7d73cb03e" containerName="ovnkube-controller" containerID="cri-o://b43e70d356eae3b4cf67c3dae3a670888693ad5e68e6fc6027a2e42f9a1618d7" gracePeriod=30 Jan 27 14:42:31 crc kubenswrapper[4698]: I0127 14:42:31.134468 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-xmpm6_c59a9d01-79ce-42d9-a41d-39d7d73cb03e/ovnkube-controller/3.log" Jan 27 14:42:31 crc kubenswrapper[4698]: I0127 14:42:31.136853 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-xmpm6_c59a9d01-79ce-42d9-a41d-39d7d73cb03e/ovn-acl-logging/0.log" Jan 27 14:42:31 crc kubenswrapper[4698]: I0127 14:42:31.137325 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-xmpm6_c59a9d01-79ce-42d9-a41d-39d7d73cb03e/ovn-controller/0.log" Jan 27 14:42:31 crc kubenswrapper[4698]: I0127 14:42:31.137752 4698 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-xmpm6" Jan 27 14:42:31 crc kubenswrapper[4698]: I0127 14:42:31.190187 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-t6z2j"] Jan 27 14:42:31 crc kubenswrapper[4698]: E0127 14:42:31.190467 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c59a9d01-79ce-42d9-a41d-39d7d73cb03e" containerName="ovnkube-controller" Jan 27 14:42:31 crc kubenswrapper[4698]: I0127 14:42:31.190482 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="c59a9d01-79ce-42d9-a41d-39d7d73cb03e" containerName="ovnkube-controller" Jan 27 14:42:31 crc kubenswrapper[4698]: E0127 14:42:31.190491 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c59a9d01-79ce-42d9-a41d-39d7d73cb03e" containerName="ovnkube-controller" Jan 27 14:42:31 crc kubenswrapper[4698]: I0127 14:42:31.190499 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="c59a9d01-79ce-42d9-a41d-39d7d73cb03e" containerName="ovnkube-controller" Jan 27 14:42:31 crc kubenswrapper[4698]: E0127 14:42:31.190508 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c59a9d01-79ce-42d9-a41d-39d7d73cb03e" containerName="kube-rbac-proxy-node" Jan 27 14:42:31 crc kubenswrapper[4698]: I0127 14:42:31.190515 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="c59a9d01-79ce-42d9-a41d-39d7d73cb03e" containerName="kube-rbac-proxy-node" Jan 27 14:42:31 crc kubenswrapper[4698]: E0127 14:42:31.190525 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c59a9d01-79ce-42d9-a41d-39d7d73cb03e" containerName="ovnkube-controller" Jan 27 14:42:31 crc kubenswrapper[4698]: I0127 14:42:31.190533 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="c59a9d01-79ce-42d9-a41d-39d7d73cb03e" containerName="ovnkube-controller" Jan 27 14:42:31 crc kubenswrapper[4698]: E0127 14:42:31.190542 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c59a9d01-79ce-42d9-a41d-39d7d73cb03e" containerName="ovn-controller" Jan 27 14:42:31 crc kubenswrapper[4698]: I0127 14:42:31.190549 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="c59a9d01-79ce-42d9-a41d-39d7d73cb03e" containerName="ovn-controller" Jan 27 14:42:31 crc kubenswrapper[4698]: E0127 14:42:31.190561 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c59a9d01-79ce-42d9-a41d-39d7d73cb03e" containerName="nbdb" Jan 27 14:42:31 crc kubenswrapper[4698]: I0127 14:42:31.190568 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="c59a9d01-79ce-42d9-a41d-39d7d73cb03e" containerName="nbdb" Jan 27 14:42:31 crc kubenswrapper[4698]: E0127 14:42:31.190580 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c59a9d01-79ce-42d9-a41d-39d7d73cb03e" containerName="ovn-acl-logging" Jan 27 14:42:31 crc kubenswrapper[4698]: I0127 14:42:31.190587 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="c59a9d01-79ce-42d9-a41d-39d7d73cb03e" containerName="ovn-acl-logging" Jan 27 14:42:31 crc kubenswrapper[4698]: E0127 14:42:31.190596 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c59a9d01-79ce-42d9-a41d-39d7d73cb03e" containerName="northd" Jan 27 14:42:31 crc kubenswrapper[4698]: I0127 14:42:31.190602 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="c59a9d01-79ce-42d9-a41d-39d7d73cb03e" containerName="northd" Jan 27 14:42:31 crc kubenswrapper[4698]: E0127 14:42:31.190611 4698 cpu_manager.go:410] "RemoveStaleState: removing container" 
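Each journal payload in this section is a klog header followed by a structured message: a severity letter (I/E/W/F) fused with the month and day, the wall-clock time, the kubelet PID (4698 here), and the emitting source file and line. The paired cpu_manager/state_mem entries above walk every container of the deleted pod UID and drop its CPUSet assignment. A small sketch of a parser for that header, assuming lines captured verbatim from this journal:

    import re

    # klog header: severity letter + MMDD, wall clock, PID, then file:line].
    KLOG = re.compile(
        r'^(?P<sev>[IWEF])(?P<mmdd>\d{4}) '
        r'(?P<time>\d{2}:\d{2}:\d{2}\.\d{6}) +'
        r'(?P<pid>\d+) (?P<src>[\w.]+:\d+)\] (?P<msg>.*)$')

    line = ('E0127 14:42:31.190467 4698 cpu_manager.go:410] '
            '"RemoveStaleState: removing container" containerName="nbdb"')
    m = KLOG.match(line)
    assert m and m["sev"] == "E" and m["src"] == "cpu_manager.go:410"
    print(m["mmdd"], m["time"], m["msg"])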
podUID="c59a9d01-79ce-42d9-a41d-39d7d73cb03e" containerName="kubecfg-setup" Jan 27 14:42:31 crc kubenswrapper[4698]: I0127 14:42:31.190616 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="c59a9d01-79ce-42d9-a41d-39d7d73cb03e" containerName="kubecfg-setup" Jan 27 14:42:31 crc kubenswrapper[4698]: E0127 14:42:31.190625 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c59a9d01-79ce-42d9-a41d-39d7d73cb03e" containerName="kube-rbac-proxy-ovn-metrics" Jan 27 14:42:31 crc kubenswrapper[4698]: I0127 14:42:31.190630 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="c59a9d01-79ce-42d9-a41d-39d7d73cb03e" containerName="kube-rbac-proxy-ovn-metrics" Jan 27 14:42:31 crc kubenswrapper[4698]: E0127 14:42:31.190662 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c59a9d01-79ce-42d9-a41d-39d7d73cb03e" containerName="sbdb" Jan 27 14:42:31 crc kubenswrapper[4698]: I0127 14:42:31.190669 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="c59a9d01-79ce-42d9-a41d-39d7d73cb03e" containerName="sbdb" Jan 27 14:42:31 crc kubenswrapper[4698]: I0127 14:42:31.190805 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="c59a9d01-79ce-42d9-a41d-39d7d73cb03e" containerName="ovnkube-controller" Jan 27 14:42:31 crc kubenswrapper[4698]: I0127 14:42:31.190818 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="c59a9d01-79ce-42d9-a41d-39d7d73cb03e" containerName="ovnkube-controller" Jan 27 14:42:31 crc kubenswrapper[4698]: I0127 14:42:31.190825 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="c59a9d01-79ce-42d9-a41d-39d7d73cb03e" containerName="nbdb" Jan 27 14:42:31 crc kubenswrapper[4698]: I0127 14:42:31.190836 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="c59a9d01-79ce-42d9-a41d-39d7d73cb03e" containerName="sbdb" Jan 27 14:42:31 crc kubenswrapper[4698]: I0127 14:42:31.190844 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="c59a9d01-79ce-42d9-a41d-39d7d73cb03e" containerName="ovnkube-controller" Jan 27 14:42:31 crc kubenswrapper[4698]: I0127 14:42:31.190853 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="c59a9d01-79ce-42d9-a41d-39d7d73cb03e" containerName="ovn-acl-logging" Jan 27 14:42:31 crc kubenswrapper[4698]: I0127 14:42:31.190864 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="c59a9d01-79ce-42d9-a41d-39d7d73cb03e" containerName="ovnkube-controller" Jan 27 14:42:31 crc kubenswrapper[4698]: I0127 14:42:31.190874 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="c59a9d01-79ce-42d9-a41d-39d7d73cb03e" containerName="ovn-controller" Jan 27 14:42:31 crc kubenswrapper[4698]: I0127 14:42:31.190882 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="c59a9d01-79ce-42d9-a41d-39d7d73cb03e" containerName="northd" Jan 27 14:42:31 crc kubenswrapper[4698]: I0127 14:42:31.190889 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="c59a9d01-79ce-42d9-a41d-39d7d73cb03e" containerName="kube-rbac-proxy-node" Jan 27 14:42:31 crc kubenswrapper[4698]: I0127 14:42:31.190896 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="c59a9d01-79ce-42d9-a41d-39d7d73cb03e" containerName="kube-rbac-proxy-ovn-metrics" Jan 27 14:42:31 crc kubenswrapper[4698]: E0127 14:42:31.190995 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c59a9d01-79ce-42d9-a41d-39d7d73cb03e" containerName="ovnkube-controller" Jan 27 14:42:31 crc kubenswrapper[4698]: I0127 
14:42:31.191004 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="c59a9d01-79ce-42d9-a41d-39d7d73cb03e" containerName="ovnkube-controller" Jan 27 14:42:31 crc kubenswrapper[4698]: I0127 14:42:31.191114 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="c59a9d01-79ce-42d9-a41d-39d7d73cb03e" containerName="ovnkube-controller" Jan 27 14:42:31 crc kubenswrapper[4698]: E0127 14:42:31.191219 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c59a9d01-79ce-42d9-a41d-39d7d73cb03e" containerName="ovnkube-controller" Jan 27 14:42:31 crc kubenswrapper[4698]: I0127 14:42:31.191227 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="c59a9d01-79ce-42d9-a41d-39d7d73cb03e" containerName="ovnkube-controller" Jan 27 14:42:31 crc kubenswrapper[4698]: I0127 14:42:31.193077 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-t6z2j" Jan 27 14:42:31 crc kubenswrapper[4698]: I0127 14:42:31.206292 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c59a9d01-79ce-42d9-a41d-39d7d73cb03e-etc-openvswitch\") pod \"c59a9d01-79ce-42d9-a41d-39d7d73cb03e\" (UID: \"c59a9d01-79ce-42d9-a41d-39d7d73cb03e\") " Jan 27 14:42:31 crc kubenswrapper[4698]: I0127 14:42:31.206356 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r7gpg\" (UniqueName: \"kubernetes.io/projected/c59a9d01-79ce-42d9-a41d-39d7d73cb03e-kube-api-access-r7gpg\") pod \"c59a9d01-79ce-42d9-a41d-39d7d73cb03e\" (UID: \"c59a9d01-79ce-42d9-a41d-39d7d73cb03e\") " Jan 27 14:42:31 crc kubenswrapper[4698]: I0127 14:42:31.206380 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/c59a9d01-79ce-42d9-a41d-39d7d73cb03e-systemd-units\") pod \"c59a9d01-79ce-42d9-a41d-39d7d73cb03e\" (UID: \"c59a9d01-79ce-42d9-a41d-39d7d73cb03e\") " Jan 27 14:42:31 crc kubenswrapper[4698]: I0127 14:42:31.206410 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/c59a9d01-79ce-42d9-a41d-39d7d73cb03e-host-cni-bin\") pod \"c59a9d01-79ce-42d9-a41d-39d7d73cb03e\" (UID: \"c59a9d01-79ce-42d9-a41d-39d7d73cb03e\") " Jan 27 14:42:31 crc kubenswrapper[4698]: I0127 14:42:31.206428 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/c59a9d01-79ce-42d9-a41d-39d7d73cb03e-host-run-netns\") pod \"c59a9d01-79ce-42d9-a41d-39d7d73cb03e\" (UID: \"c59a9d01-79ce-42d9-a41d-39d7d73cb03e\") " Jan 27 14:42:31 crc kubenswrapper[4698]: I0127 14:42:31.206452 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/c59a9d01-79ce-42d9-a41d-39d7d73cb03e-host-kubelet\") pod \"c59a9d01-79ce-42d9-a41d-39d7d73cb03e\" (UID: \"c59a9d01-79ce-42d9-a41d-39d7d73cb03e\") " Jan 27 14:42:31 crc kubenswrapper[4698]: I0127 14:42:31.206487 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/c59a9d01-79ce-42d9-a41d-39d7d73cb03e-run-systemd\") pod \"c59a9d01-79ce-42d9-a41d-39d7d73cb03e\" (UID: \"c59a9d01-79ce-42d9-a41d-39d7d73cb03e\") " Jan 27 14:42:31 crc kubenswrapper[4698]: I0127 14:42:31.206512 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/c59a9d01-79ce-42d9-a41d-39d7d73cb03e-ovnkube-config\") pod \"c59a9d01-79ce-42d9-a41d-39d7d73cb03e\" (UID: \"c59a9d01-79ce-42d9-a41d-39d7d73cb03e\") " Jan 27 14:42:31 crc kubenswrapper[4698]: I0127 14:42:31.206534 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c59a9d01-79ce-42d9-a41d-39d7d73cb03e-host-var-lib-cni-networks-ovn-kubernetes\") pod \"c59a9d01-79ce-42d9-a41d-39d7d73cb03e\" (UID: \"c59a9d01-79ce-42d9-a41d-39d7d73cb03e\") " Jan 27 14:42:31 crc kubenswrapper[4698]: I0127 14:42:31.206478 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c59a9d01-79ce-42d9-a41d-39d7d73cb03e-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "c59a9d01-79ce-42d9-a41d-39d7d73cb03e" (UID: "c59a9d01-79ce-42d9-a41d-39d7d73cb03e"). InnerVolumeSpecName "etc-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 14:42:31 crc kubenswrapper[4698]: I0127 14:42:31.206575 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/c59a9d01-79ce-42d9-a41d-39d7d73cb03e-ovnkube-script-lib\") pod \"c59a9d01-79ce-42d9-a41d-39d7d73cb03e\" (UID: \"c59a9d01-79ce-42d9-a41d-39d7d73cb03e\") " Jan 27 14:42:31 crc kubenswrapper[4698]: I0127 14:42:31.206609 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c59a9d01-79ce-42d9-a41d-39d7d73cb03e-run-openvswitch\") pod \"c59a9d01-79ce-42d9-a41d-39d7d73cb03e\" (UID: \"c59a9d01-79ce-42d9-a41d-39d7d73cb03e\") " Jan 27 14:42:31 crc kubenswrapper[4698]: I0127 14:42:31.206611 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c59a9d01-79ce-42d9-a41d-39d7d73cb03e-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "c59a9d01-79ce-42d9-a41d-39d7d73cb03e" (UID: "c59a9d01-79ce-42d9-a41d-39d7d73cb03e"). InnerVolumeSpecName "host-run-netns". 
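The reconciler entries above record each unmount in two phases: reconciler_common logs "UnmountVolume started" with the volume's short name, and operation_generator later confirms "UnmountVolume.TearDown succeeded", echoing the short name as OuterVolumeSpecName. A sketch that cross-checks the two phases; it assumes a hypothetical verbatim capture of this journal in a file named kubelet.log, and that phase-1 lines escape their quotes (\"etc-openvswitch\") while phase-2 lines do not, as seen here:

    import re

    START = re.compile(r'UnmountVolume started for volume \\"([^"\\]+)\\"')
    DONE  = re.compile(r'TearDown succeeded for volume .*'
                       r'\(OuterVolumeSpecName: "([^"]+)"\)')

    started, done = set(), set()
    with open("kubelet.log") as journal:      # assumed capture of this log
        for line in journal:
            if (m := START.search(line)):
                started.add(m.group(1))
            if (m := DONE.search(line)):
                done.add(m.group(1))

    print("started but never torn down:", sorted(started - done) or "none")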
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 14:42:31 crc kubenswrapper[4698]: I0127 14:42:31.206658 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/c59a9d01-79ce-42d9-a41d-39d7d73cb03e-ovn-node-metrics-cert\") pod \"c59a9d01-79ce-42d9-a41d-39d7d73cb03e\" (UID: \"c59a9d01-79ce-42d9-a41d-39d7d73cb03e\") " Jan 27 14:42:31 crc kubenswrapper[4698]: I0127 14:42:31.206684 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c59a9d01-79ce-42d9-a41d-39d7d73cb03e-var-lib-openvswitch\") pod \"c59a9d01-79ce-42d9-a41d-39d7d73cb03e\" (UID: \"c59a9d01-79ce-42d9-a41d-39d7d73cb03e\") " Jan 27 14:42:31 crc kubenswrapper[4698]: I0127 14:42:31.206704 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/c59a9d01-79ce-42d9-a41d-39d7d73cb03e-node-log\") pod \"c59a9d01-79ce-42d9-a41d-39d7d73cb03e\" (UID: \"c59a9d01-79ce-42d9-a41d-39d7d73cb03e\") " Jan 27 14:42:31 crc kubenswrapper[4698]: I0127 14:42:31.206721 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/c59a9d01-79ce-42d9-a41d-39d7d73cb03e-log-socket\") pod \"c59a9d01-79ce-42d9-a41d-39d7d73cb03e\" (UID: \"c59a9d01-79ce-42d9-a41d-39d7d73cb03e\") " Jan 27 14:42:31 crc kubenswrapper[4698]: I0127 14:42:31.206737 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c59a9d01-79ce-42d9-a41d-39d7d73cb03e-host-cni-netd\") pod \"c59a9d01-79ce-42d9-a41d-39d7d73cb03e\" (UID: \"c59a9d01-79ce-42d9-a41d-39d7d73cb03e\") " Jan 27 14:42:31 crc kubenswrapper[4698]: I0127 14:42:31.206760 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/c59a9d01-79ce-42d9-a41d-39d7d73cb03e-run-ovn\") pod \"c59a9d01-79ce-42d9-a41d-39d7d73cb03e\" (UID: \"c59a9d01-79ce-42d9-a41d-39d7d73cb03e\") " Jan 27 14:42:31 crc kubenswrapper[4698]: I0127 14:42:31.206795 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c59a9d01-79ce-42d9-a41d-39d7d73cb03e-host-run-ovn-kubernetes\") pod \"c59a9d01-79ce-42d9-a41d-39d7d73cb03e\" (UID: \"c59a9d01-79ce-42d9-a41d-39d7d73cb03e\") " Jan 27 14:42:31 crc kubenswrapper[4698]: I0127 14:42:31.206812 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/c59a9d01-79ce-42d9-a41d-39d7d73cb03e-env-overrides\") pod \"c59a9d01-79ce-42d9-a41d-39d7d73cb03e\" (UID: \"c59a9d01-79ce-42d9-a41d-39d7d73cb03e\") " Jan 27 14:42:31 crc kubenswrapper[4698]: I0127 14:42:31.206828 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/c59a9d01-79ce-42d9-a41d-39d7d73cb03e-host-slash\") pod \"c59a9d01-79ce-42d9-a41d-39d7d73cb03e\" (UID: \"c59a9d01-79ce-42d9-a41d-39d7d73cb03e\") " Jan 27 14:42:31 crc kubenswrapper[4698]: I0127 14:42:31.207000 4698 reconciler_common.go:293] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c59a9d01-79ce-42d9-a41d-39d7d73cb03e-etc-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 27 14:42:31 crc kubenswrapper[4698]: I0127 
14:42:31.207010 4698 reconciler_common.go:293] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/c59a9d01-79ce-42d9-a41d-39d7d73cb03e-host-run-netns\") on node \"crc\" DevicePath \"\"" Jan 27 14:42:31 crc kubenswrapper[4698]: I0127 14:42:31.207071 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c59a9d01-79ce-42d9-a41d-39d7d73cb03e-host-slash" (OuterVolumeSpecName: "host-slash") pod "c59a9d01-79ce-42d9-a41d-39d7d73cb03e" (UID: "c59a9d01-79ce-42d9-a41d-39d7d73cb03e"). InnerVolumeSpecName "host-slash". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 14:42:31 crc kubenswrapper[4698]: I0127 14:42:31.207100 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c59a9d01-79ce-42d9-a41d-39d7d73cb03e-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "c59a9d01-79ce-42d9-a41d-39d7d73cb03e" (UID: "c59a9d01-79ce-42d9-a41d-39d7d73cb03e"). InnerVolumeSpecName "host-kubelet". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 14:42:31 crc kubenswrapper[4698]: I0127 14:42:31.207786 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c59a9d01-79ce-42d9-a41d-39d7d73cb03e-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "c59a9d01-79ce-42d9-a41d-39d7d73cb03e" (UID: "c59a9d01-79ce-42d9-a41d-39d7d73cb03e"). InnerVolumeSpecName "systemd-units". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 14:42:31 crc kubenswrapper[4698]: I0127 14:42:31.208248 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c59a9d01-79ce-42d9-a41d-39d7d73cb03e-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "c59a9d01-79ce-42d9-a41d-39d7d73cb03e" (UID: "c59a9d01-79ce-42d9-a41d-39d7d73cb03e"). InnerVolumeSpecName "host-cni-bin". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 14:42:31 crc kubenswrapper[4698]: I0127 14:42:31.208369 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c59a9d01-79ce-42d9-a41d-39d7d73cb03e-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "c59a9d01-79ce-42d9-a41d-39d7d73cb03e" (UID: "c59a9d01-79ce-42d9-a41d-39d7d73cb03e"). InnerVolumeSpecName "var-lib-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 14:42:31 crc kubenswrapper[4698]: I0127 14:42:31.208978 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c59a9d01-79ce-42d9-a41d-39d7d73cb03e-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "c59a9d01-79ce-42d9-a41d-39d7d73cb03e" (UID: "c59a9d01-79ce-42d9-a41d-39d7d73cb03e"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:42:31 crc kubenswrapper[4698]: I0127 14:42:31.209016 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c59a9d01-79ce-42d9-a41d-39d7d73cb03e-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "c59a9d01-79ce-42d9-a41d-39d7d73cb03e" (UID: "c59a9d01-79ce-42d9-a41d-39d7d73cb03e"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 14:42:31 crc kubenswrapper[4698]: I0127 14:42:31.209255 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c59a9d01-79ce-42d9-a41d-39d7d73cb03e-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "c59a9d01-79ce-42d9-a41d-39d7d73cb03e" (UID: "c59a9d01-79ce-42d9-a41d-39d7d73cb03e"). InnerVolumeSpecName "run-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 14:42:31 crc kubenswrapper[4698]: I0127 14:42:31.209328 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c59a9d01-79ce-42d9-a41d-39d7d73cb03e-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "c59a9d01-79ce-42d9-a41d-39d7d73cb03e" (UID: "c59a9d01-79ce-42d9-a41d-39d7d73cb03e"). InnerVolumeSpecName "host-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 14:42:31 crc kubenswrapper[4698]: I0127 14:42:31.209380 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c59a9d01-79ce-42d9-a41d-39d7d73cb03e-node-log" (OuterVolumeSpecName: "node-log") pod "c59a9d01-79ce-42d9-a41d-39d7d73cb03e" (UID: "c59a9d01-79ce-42d9-a41d-39d7d73cb03e"). InnerVolumeSpecName "node-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 14:42:31 crc kubenswrapper[4698]: I0127 14:42:31.209403 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c59a9d01-79ce-42d9-a41d-39d7d73cb03e-log-socket" (OuterVolumeSpecName: "log-socket") pod "c59a9d01-79ce-42d9-a41d-39d7d73cb03e" (UID: "c59a9d01-79ce-42d9-a41d-39d7d73cb03e"). InnerVolumeSpecName "log-socket". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 14:42:31 crc kubenswrapper[4698]: I0127 14:42:31.209426 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c59a9d01-79ce-42d9-a41d-39d7d73cb03e-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "c59a9d01-79ce-42d9-a41d-39d7d73cb03e" (UID: "c59a9d01-79ce-42d9-a41d-39d7d73cb03e"). InnerVolumeSpecName "host-run-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 14:42:31 crc kubenswrapper[4698]: I0127 14:42:31.209450 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c59a9d01-79ce-42d9-a41d-39d7d73cb03e-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "c59a9d01-79ce-42d9-a41d-39d7d73cb03e" (UID: "c59a9d01-79ce-42d9-a41d-39d7d73cb03e"). InnerVolumeSpecName "run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 14:42:31 crc kubenswrapper[4698]: I0127 14:42:31.209841 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c59a9d01-79ce-42d9-a41d-39d7d73cb03e-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "c59a9d01-79ce-42d9-a41d-39d7d73cb03e" (UID: "c59a9d01-79ce-42d9-a41d-39d7d73cb03e"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:42:31 crc kubenswrapper[4698]: I0127 14:42:31.210414 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c59a9d01-79ce-42d9-a41d-39d7d73cb03e-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "c59a9d01-79ce-42d9-a41d-39d7d73cb03e" (UID: "c59a9d01-79ce-42d9-a41d-39d7d73cb03e"). InnerVolumeSpecName "ovnkube-script-lib". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:42:31 crc kubenswrapper[4698]: I0127 14:42:31.213413 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c59a9d01-79ce-42d9-a41d-39d7d73cb03e-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "c59a9d01-79ce-42d9-a41d-39d7d73cb03e" (UID: "c59a9d01-79ce-42d9-a41d-39d7d73cb03e"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:42:31 crc kubenswrapper[4698]: I0127 14:42:31.213789 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c59a9d01-79ce-42d9-a41d-39d7d73cb03e-kube-api-access-r7gpg" (OuterVolumeSpecName: "kube-api-access-r7gpg") pod "c59a9d01-79ce-42d9-a41d-39d7d73cb03e" (UID: "c59a9d01-79ce-42d9-a41d-39d7d73cb03e"). InnerVolumeSpecName "kube-api-access-r7gpg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:42:31 crc kubenswrapper[4698]: I0127 14:42:31.222741 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c59a9d01-79ce-42d9-a41d-39d7d73cb03e-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "c59a9d01-79ce-42d9-a41d-39d7d73cb03e" (UID: "c59a9d01-79ce-42d9-a41d-39d7d73cb03e"). InnerVolumeSpecName "run-systemd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 14:42:31 crc kubenswrapper[4698]: I0127 14:42:31.308568 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/863708bd-6c5a-45cb-9770-a3c78eef7c2d-host-slash\") pod \"ovnkube-node-t6z2j\" (UID: \"863708bd-6c5a-45cb-9770-a3c78eef7c2d\") " pod="openshift-ovn-kubernetes/ovnkube-node-t6z2j" Jan 27 14:42:31 crc kubenswrapper[4698]: I0127 14:42:31.308731 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/863708bd-6c5a-45cb-9770-a3c78eef7c2d-host-kubelet\") pod \"ovnkube-node-t6z2j\" (UID: \"863708bd-6c5a-45cb-9770-a3c78eef7c2d\") " pod="openshift-ovn-kubernetes/ovnkube-node-t6z2j" Jan 27 14:42:31 crc kubenswrapper[4698]: I0127 14:42:31.308765 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/863708bd-6c5a-45cb-9770-a3c78eef7c2d-ovnkube-script-lib\") pod \"ovnkube-node-t6z2j\" (UID: \"863708bd-6c5a-45cb-9770-a3c78eef7c2d\") " pod="openshift-ovn-kubernetes/ovnkube-node-t6z2j" Jan 27 14:42:31 crc kubenswrapper[4698]: I0127 14:42:31.308794 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/863708bd-6c5a-45cb-9770-a3c78eef7c2d-run-systemd\") pod \"ovnkube-node-t6z2j\" (UID: \"863708bd-6c5a-45cb-9770-a3c78eef7c2d\") " pod="openshift-ovn-kubernetes/ovnkube-node-t6z2j" Jan 27 14:42:31 crc kubenswrapper[4698]: I0127 14:42:31.308815 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/863708bd-6c5a-45cb-9770-a3c78eef7c2d-run-openvswitch\") pod \"ovnkube-node-t6z2j\" (UID: \"863708bd-6c5a-45cb-9770-a3c78eef7c2d\") " pod="openshift-ovn-kubernetes/ovnkube-node-t6z2j" Jan 27 14:42:31 crc kubenswrapper[4698]: I0127 14:42:31.308871 4698 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/863708bd-6c5a-45cb-9770-a3c78eef7c2d-run-ovn\") pod \"ovnkube-node-t6z2j\" (UID: \"863708bd-6c5a-45cb-9770-a3c78eef7c2d\") " pod="openshift-ovn-kubernetes/ovnkube-node-t6z2j" Jan 27 14:42:31 crc kubenswrapper[4698]: I0127 14:42:31.309029 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/863708bd-6c5a-45cb-9770-a3c78eef7c2d-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-t6z2j\" (UID: \"863708bd-6c5a-45cb-9770-a3c78eef7c2d\") " pod="openshift-ovn-kubernetes/ovnkube-node-t6z2j" Jan 27 14:42:31 crc kubenswrapper[4698]: I0127 14:42:31.309165 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/863708bd-6c5a-45cb-9770-a3c78eef7c2d-log-socket\") pod \"ovnkube-node-t6z2j\" (UID: \"863708bd-6c5a-45cb-9770-a3c78eef7c2d\") " pod="openshift-ovn-kubernetes/ovnkube-node-t6z2j" Jan 27 14:42:31 crc kubenswrapper[4698]: I0127 14:42:31.309206 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/863708bd-6c5a-45cb-9770-a3c78eef7c2d-ovn-node-metrics-cert\") pod \"ovnkube-node-t6z2j\" (UID: \"863708bd-6c5a-45cb-9770-a3c78eef7c2d\") " pod="openshift-ovn-kubernetes/ovnkube-node-t6z2j" Jan 27 14:42:31 crc kubenswrapper[4698]: I0127 14:42:31.309237 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-msltt\" (UniqueName: \"kubernetes.io/projected/863708bd-6c5a-45cb-9770-a3c78eef7c2d-kube-api-access-msltt\") pod \"ovnkube-node-t6z2j\" (UID: \"863708bd-6c5a-45cb-9770-a3c78eef7c2d\") " pod="openshift-ovn-kubernetes/ovnkube-node-t6z2j" Jan 27 14:42:31 crc kubenswrapper[4698]: I0127 14:42:31.309282 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/863708bd-6c5a-45cb-9770-a3c78eef7c2d-host-run-ovn-kubernetes\") pod \"ovnkube-node-t6z2j\" (UID: \"863708bd-6c5a-45cb-9770-a3c78eef7c2d\") " pod="openshift-ovn-kubernetes/ovnkube-node-t6z2j" Jan 27 14:42:31 crc kubenswrapper[4698]: I0127 14:42:31.309306 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/863708bd-6c5a-45cb-9770-a3c78eef7c2d-host-run-netns\") pod \"ovnkube-node-t6z2j\" (UID: \"863708bd-6c5a-45cb-9770-a3c78eef7c2d\") " pod="openshift-ovn-kubernetes/ovnkube-node-t6z2j" Jan 27 14:42:31 crc kubenswrapper[4698]: I0127 14:42:31.309536 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/863708bd-6c5a-45cb-9770-a3c78eef7c2d-node-log\") pod \"ovnkube-node-t6z2j\" (UID: \"863708bd-6c5a-45cb-9770-a3c78eef7c2d\") " pod="openshift-ovn-kubernetes/ovnkube-node-t6z2j" Jan 27 14:42:31 crc kubenswrapper[4698]: I0127 14:42:31.309602 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/863708bd-6c5a-45cb-9770-a3c78eef7c2d-host-cni-netd\") pod \"ovnkube-node-t6z2j\" (UID: \"863708bd-6c5a-45cb-9770-a3c78eef7c2d\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-t6z2j" Jan 27 14:42:31 crc kubenswrapper[4698]: I0127 14:42:31.309688 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/863708bd-6c5a-45cb-9770-a3c78eef7c2d-etc-openvswitch\") pod \"ovnkube-node-t6z2j\" (UID: \"863708bd-6c5a-45cb-9770-a3c78eef7c2d\") " pod="openshift-ovn-kubernetes/ovnkube-node-t6z2j" Jan 27 14:42:31 crc kubenswrapper[4698]: I0127 14:42:31.309737 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/863708bd-6c5a-45cb-9770-a3c78eef7c2d-var-lib-openvswitch\") pod \"ovnkube-node-t6z2j\" (UID: \"863708bd-6c5a-45cb-9770-a3c78eef7c2d\") " pod="openshift-ovn-kubernetes/ovnkube-node-t6z2j" Jan 27 14:42:31 crc kubenswrapper[4698]: I0127 14:42:31.309790 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/863708bd-6c5a-45cb-9770-a3c78eef7c2d-env-overrides\") pod \"ovnkube-node-t6z2j\" (UID: \"863708bd-6c5a-45cb-9770-a3c78eef7c2d\") " pod="openshift-ovn-kubernetes/ovnkube-node-t6z2j" Jan 27 14:42:31 crc kubenswrapper[4698]: I0127 14:42:31.309878 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/863708bd-6c5a-45cb-9770-a3c78eef7c2d-host-cni-bin\") pod \"ovnkube-node-t6z2j\" (UID: \"863708bd-6c5a-45cb-9770-a3c78eef7c2d\") " pod="openshift-ovn-kubernetes/ovnkube-node-t6z2j" Jan 27 14:42:31 crc kubenswrapper[4698]: I0127 14:42:31.309928 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/863708bd-6c5a-45cb-9770-a3c78eef7c2d-ovnkube-config\") pod \"ovnkube-node-t6z2j\" (UID: \"863708bd-6c5a-45cb-9770-a3c78eef7c2d\") " pod="openshift-ovn-kubernetes/ovnkube-node-t6z2j" Jan 27 14:42:31 crc kubenswrapper[4698]: I0127 14:42:31.309955 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/863708bd-6c5a-45cb-9770-a3c78eef7c2d-systemd-units\") pod \"ovnkube-node-t6z2j\" (UID: \"863708bd-6c5a-45cb-9770-a3c78eef7c2d\") " pod="openshift-ovn-kubernetes/ovnkube-node-t6z2j" Jan 27 14:42:31 crc kubenswrapper[4698]: I0127 14:42:31.310092 4698 reconciler_common.go:293] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/c59a9d01-79ce-42d9-a41d-39d7d73cb03e-run-systemd\") on node \"crc\" DevicePath \"\"" Jan 27 14:42:31 crc kubenswrapper[4698]: I0127 14:42:31.310153 4698 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/c59a9d01-79ce-42d9-a41d-39d7d73cb03e-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 27 14:42:31 crc kubenswrapper[4698]: I0127 14:42:31.310193 4698 reconciler_common.go:293] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c59a9d01-79ce-42d9-a41d-39d7d73cb03e-host-var-lib-cni-networks-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Jan 27 14:42:31 crc kubenswrapper[4698]: I0127 14:42:31.310215 4698 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: 
\"kubernetes.io/configmap/c59a9d01-79ce-42d9-a41d-39d7d73cb03e-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Jan 27 14:42:31 crc kubenswrapper[4698]: I0127 14:42:31.310232 4698 reconciler_common.go:293] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c59a9d01-79ce-42d9-a41d-39d7d73cb03e-run-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 27 14:42:31 crc kubenswrapper[4698]: I0127 14:42:31.310247 4698 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/c59a9d01-79ce-42d9-a41d-39d7d73cb03e-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 27 14:42:31 crc kubenswrapper[4698]: I0127 14:42:31.310269 4698 reconciler_common.go:293] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c59a9d01-79ce-42d9-a41d-39d7d73cb03e-var-lib-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 27 14:42:31 crc kubenswrapper[4698]: I0127 14:42:31.310284 4698 reconciler_common.go:293] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/c59a9d01-79ce-42d9-a41d-39d7d73cb03e-node-log\") on node \"crc\" DevicePath \"\"" Jan 27 14:42:31 crc kubenswrapper[4698]: I0127 14:42:31.310300 4698 reconciler_common.go:293] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/c59a9d01-79ce-42d9-a41d-39d7d73cb03e-log-socket\") on node \"crc\" DevicePath \"\"" Jan 27 14:42:31 crc kubenswrapper[4698]: I0127 14:42:31.310316 4698 reconciler_common.go:293] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c59a9d01-79ce-42d9-a41d-39d7d73cb03e-host-cni-netd\") on node \"crc\" DevicePath \"\"" Jan 27 14:42:31 crc kubenswrapper[4698]: I0127 14:42:31.310335 4698 reconciler_common.go:293] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/c59a9d01-79ce-42d9-a41d-39d7d73cb03e-run-ovn\") on node \"crc\" DevicePath \"\"" Jan 27 14:42:31 crc kubenswrapper[4698]: I0127 14:42:31.310352 4698 reconciler_common.go:293] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c59a9d01-79ce-42d9-a41d-39d7d73cb03e-host-run-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Jan 27 14:42:31 crc kubenswrapper[4698]: I0127 14:42:31.310365 4698 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/c59a9d01-79ce-42d9-a41d-39d7d73cb03e-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 27 14:42:31 crc kubenswrapper[4698]: I0127 14:42:31.310381 4698 reconciler_common.go:293] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/c59a9d01-79ce-42d9-a41d-39d7d73cb03e-host-slash\") on node \"crc\" DevicePath \"\"" Jan 27 14:42:31 crc kubenswrapper[4698]: I0127 14:42:31.310396 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r7gpg\" (UniqueName: \"kubernetes.io/projected/c59a9d01-79ce-42d9-a41d-39d7d73cb03e-kube-api-access-r7gpg\") on node \"crc\" DevicePath \"\"" Jan 27 14:42:31 crc kubenswrapper[4698]: I0127 14:42:31.310413 4698 reconciler_common.go:293] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/c59a9d01-79ce-42d9-a41d-39d7d73cb03e-systemd-units\") on node \"crc\" DevicePath \"\"" Jan 27 14:42:31 crc kubenswrapper[4698]: I0127 14:42:31.310430 4698 reconciler_common.go:293] "Volume detached for volume \"host-cni-bin\" (UniqueName: 
\"kubernetes.io/host-path/c59a9d01-79ce-42d9-a41d-39d7d73cb03e-host-cni-bin\") on node \"crc\" DevicePath \"\"" Jan 27 14:42:31 crc kubenswrapper[4698]: I0127 14:42:31.310442 4698 reconciler_common.go:293] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/c59a9d01-79ce-42d9-a41d-39d7d73cb03e-host-kubelet\") on node \"crc\" DevicePath \"\"" Jan 27 14:42:31 crc kubenswrapper[4698]: I0127 14:42:31.411116 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/863708bd-6c5a-45cb-9770-a3c78eef7c2d-systemd-units\") pod \"ovnkube-node-t6z2j\" (UID: \"863708bd-6c5a-45cb-9770-a3c78eef7c2d\") " pod="openshift-ovn-kubernetes/ovnkube-node-t6z2j" Jan 27 14:42:31 crc kubenswrapper[4698]: I0127 14:42:31.411210 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/863708bd-6c5a-45cb-9770-a3c78eef7c2d-host-slash\") pod \"ovnkube-node-t6z2j\" (UID: \"863708bd-6c5a-45cb-9770-a3c78eef7c2d\") " pod="openshift-ovn-kubernetes/ovnkube-node-t6z2j" Jan 27 14:42:31 crc kubenswrapper[4698]: I0127 14:42:31.411276 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/863708bd-6c5a-45cb-9770-a3c78eef7c2d-systemd-units\") pod \"ovnkube-node-t6z2j\" (UID: \"863708bd-6c5a-45cb-9770-a3c78eef7c2d\") " pod="openshift-ovn-kubernetes/ovnkube-node-t6z2j" Jan 27 14:42:31 crc kubenswrapper[4698]: I0127 14:42:31.411322 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/863708bd-6c5a-45cb-9770-a3c78eef7c2d-host-slash\") pod \"ovnkube-node-t6z2j\" (UID: \"863708bd-6c5a-45cb-9770-a3c78eef7c2d\") " pod="openshift-ovn-kubernetes/ovnkube-node-t6z2j" Jan 27 14:42:31 crc kubenswrapper[4698]: I0127 14:42:31.411288 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/863708bd-6c5a-45cb-9770-a3c78eef7c2d-host-kubelet\") pod \"ovnkube-node-t6z2j\" (UID: \"863708bd-6c5a-45cb-9770-a3c78eef7c2d\") " pod="openshift-ovn-kubernetes/ovnkube-node-t6z2j" Jan 27 14:42:31 crc kubenswrapper[4698]: I0127 14:42:31.411426 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/863708bd-6c5a-45cb-9770-a3c78eef7c2d-ovnkube-script-lib\") pod \"ovnkube-node-t6z2j\" (UID: \"863708bd-6c5a-45cb-9770-a3c78eef7c2d\") " pod="openshift-ovn-kubernetes/ovnkube-node-t6z2j" Jan 27 14:42:31 crc kubenswrapper[4698]: I0127 14:42:31.411450 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/863708bd-6c5a-45cb-9770-a3c78eef7c2d-run-systemd\") pod \"ovnkube-node-t6z2j\" (UID: \"863708bd-6c5a-45cb-9770-a3c78eef7c2d\") " pod="openshift-ovn-kubernetes/ovnkube-node-t6z2j" Jan 27 14:42:31 crc kubenswrapper[4698]: I0127 14:42:31.411470 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/863708bd-6c5a-45cb-9770-a3c78eef7c2d-run-openvswitch\") pod \"ovnkube-node-t6z2j\" (UID: \"863708bd-6c5a-45cb-9770-a3c78eef7c2d\") " pod="openshift-ovn-kubernetes/ovnkube-node-t6z2j" Jan 27 14:42:31 crc kubenswrapper[4698]: I0127 14:42:31.411538 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/863708bd-6c5a-45cb-9770-a3c78eef7c2d-run-ovn\") pod \"ovnkube-node-t6z2j\" (UID: \"863708bd-6c5a-45cb-9770-a3c78eef7c2d\") " pod="openshift-ovn-kubernetes/ovnkube-node-t6z2j" Jan 27 14:42:31 crc kubenswrapper[4698]: I0127 14:42:31.411556 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/863708bd-6c5a-45cb-9770-a3c78eef7c2d-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-t6z2j\" (UID: \"863708bd-6c5a-45cb-9770-a3c78eef7c2d\") " pod="openshift-ovn-kubernetes/ovnkube-node-t6z2j" Jan 27 14:42:31 crc kubenswrapper[4698]: I0127 14:42:31.411580 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/863708bd-6c5a-45cb-9770-a3c78eef7c2d-log-socket\") pod \"ovnkube-node-t6z2j\" (UID: \"863708bd-6c5a-45cb-9770-a3c78eef7c2d\") " pod="openshift-ovn-kubernetes/ovnkube-node-t6z2j" Jan 27 14:42:31 crc kubenswrapper[4698]: I0127 14:42:31.411599 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/863708bd-6c5a-45cb-9770-a3c78eef7c2d-ovn-node-metrics-cert\") pod \"ovnkube-node-t6z2j\" (UID: \"863708bd-6c5a-45cb-9770-a3c78eef7c2d\") " pod="openshift-ovn-kubernetes/ovnkube-node-t6z2j" Jan 27 14:42:31 crc kubenswrapper[4698]: I0127 14:42:31.411616 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-msltt\" (UniqueName: \"kubernetes.io/projected/863708bd-6c5a-45cb-9770-a3c78eef7c2d-kube-api-access-msltt\") pod \"ovnkube-node-t6z2j\" (UID: \"863708bd-6c5a-45cb-9770-a3c78eef7c2d\") " pod="openshift-ovn-kubernetes/ovnkube-node-t6z2j" Jan 27 14:42:31 crc kubenswrapper[4698]: I0127 14:42:31.411727 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/863708bd-6c5a-45cb-9770-a3c78eef7c2d-host-run-ovn-kubernetes\") pod \"ovnkube-node-t6z2j\" (UID: \"863708bd-6c5a-45cb-9770-a3c78eef7c2d\") " pod="openshift-ovn-kubernetes/ovnkube-node-t6z2j" Jan 27 14:42:31 crc kubenswrapper[4698]: I0127 14:42:31.411743 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/863708bd-6c5a-45cb-9770-a3c78eef7c2d-host-run-netns\") pod \"ovnkube-node-t6z2j\" (UID: \"863708bd-6c5a-45cb-9770-a3c78eef7c2d\") " pod="openshift-ovn-kubernetes/ovnkube-node-t6z2j" Jan 27 14:42:31 crc kubenswrapper[4698]: I0127 14:42:31.411779 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/863708bd-6c5a-45cb-9770-a3c78eef7c2d-node-log\") pod \"ovnkube-node-t6z2j\" (UID: \"863708bd-6c5a-45cb-9770-a3c78eef7c2d\") " pod="openshift-ovn-kubernetes/ovnkube-node-t6z2j" Jan 27 14:42:31 crc kubenswrapper[4698]: I0127 14:42:31.411792 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/863708bd-6c5a-45cb-9770-a3c78eef7c2d-host-cni-netd\") pod \"ovnkube-node-t6z2j\" (UID: \"863708bd-6c5a-45cb-9770-a3c78eef7c2d\") " pod="openshift-ovn-kubernetes/ovnkube-node-t6z2j" Jan 27 14:42:31 crc kubenswrapper[4698]: I0127 14:42:31.411811 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: 
\"kubernetes.io/host-path/863708bd-6c5a-45cb-9770-a3c78eef7c2d-etc-openvswitch\") pod \"ovnkube-node-t6z2j\" (UID: \"863708bd-6c5a-45cb-9770-a3c78eef7c2d\") " pod="openshift-ovn-kubernetes/ovnkube-node-t6z2j" Jan 27 14:42:31 crc kubenswrapper[4698]: I0127 14:42:31.411836 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/863708bd-6c5a-45cb-9770-a3c78eef7c2d-var-lib-openvswitch\") pod \"ovnkube-node-t6z2j\" (UID: \"863708bd-6c5a-45cb-9770-a3c78eef7c2d\") " pod="openshift-ovn-kubernetes/ovnkube-node-t6z2j" Jan 27 14:42:31 crc kubenswrapper[4698]: I0127 14:42:31.411857 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/863708bd-6c5a-45cb-9770-a3c78eef7c2d-env-overrides\") pod \"ovnkube-node-t6z2j\" (UID: \"863708bd-6c5a-45cb-9770-a3c78eef7c2d\") " pod="openshift-ovn-kubernetes/ovnkube-node-t6z2j" Jan 27 14:42:31 crc kubenswrapper[4698]: I0127 14:42:31.411885 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/863708bd-6c5a-45cb-9770-a3c78eef7c2d-host-cni-bin\") pod \"ovnkube-node-t6z2j\" (UID: \"863708bd-6c5a-45cb-9770-a3c78eef7c2d\") " pod="openshift-ovn-kubernetes/ovnkube-node-t6z2j" Jan 27 14:42:31 crc kubenswrapper[4698]: I0127 14:42:31.411911 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/863708bd-6c5a-45cb-9770-a3c78eef7c2d-ovnkube-config\") pod \"ovnkube-node-t6z2j\" (UID: \"863708bd-6c5a-45cb-9770-a3c78eef7c2d\") " pod="openshift-ovn-kubernetes/ovnkube-node-t6z2j" Jan 27 14:42:31 crc kubenswrapper[4698]: I0127 14:42:31.412376 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/863708bd-6c5a-45cb-9770-a3c78eef7c2d-ovnkube-script-lib\") pod \"ovnkube-node-t6z2j\" (UID: \"863708bd-6c5a-45cb-9770-a3c78eef7c2d\") " pod="openshift-ovn-kubernetes/ovnkube-node-t6z2j" Jan 27 14:42:31 crc kubenswrapper[4698]: I0127 14:42:31.411355 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/863708bd-6c5a-45cb-9770-a3c78eef7c2d-host-kubelet\") pod \"ovnkube-node-t6z2j\" (UID: \"863708bd-6c5a-45cb-9770-a3c78eef7c2d\") " pod="openshift-ovn-kubernetes/ovnkube-node-t6z2j" Jan 27 14:42:31 crc kubenswrapper[4698]: I0127 14:42:31.412489 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/863708bd-6c5a-45cb-9770-a3c78eef7c2d-host-run-ovn-kubernetes\") pod \"ovnkube-node-t6z2j\" (UID: \"863708bd-6c5a-45cb-9770-a3c78eef7c2d\") " pod="openshift-ovn-kubernetes/ovnkube-node-t6z2j" Jan 27 14:42:31 crc kubenswrapper[4698]: I0127 14:42:31.412522 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/863708bd-6c5a-45cb-9770-a3c78eef7c2d-run-systemd\") pod \"ovnkube-node-t6z2j\" (UID: \"863708bd-6c5a-45cb-9770-a3c78eef7c2d\") " pod="openshift-ovn-kubernetes/ovnkube-node-t6z2j" Jan 27 14:42:31 crc kubenswrapper[4698]: I0127 14:42:31.412554 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/863708bd-6c5a-45cb-9770-a3c78eef7c2d-run-openvswitch\") pod \"ovnkube-node-t6z2j\" (UID: 
\"863708bd-6c5a-45cb-9770-a3c78eef7c2d\") " pod="openshift-ovn-kubernetes/ovnkube-node-t6z2j" Jan 27 14:42:31 crc kubenswrapper[4698]: I0127 14:42:31.412571 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/863708bd-6c5a-45cb-9770-a3c78eef7c2d-ovnkube-config\") pod \"ovnkube-node-t6z2j\" (UID: \"863708bd-6c5a-45cb-9770-a3c78eef7c2d\") " pod="openshift-ovn-kubernetes/ovnkube-node-t6z2j" Jan 27 14:42:31 crc kubenswrapper[4698]: I0127 14:42:31.412585 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/863708bd-6c5a-45cb-9770-a3c78eef7c2d-run-ovn\") pod \"ovnkube-node-t6z2j\" (UID: \"863708bd-6c5a-45cb-9770-a3c78eef7c2d\") " pod="openshift-ovn-kubernetes/ovnkube-node-t6z2j" Jan 27 14:42:31 crc kubenswrapper[4698]: I0127 14:42:31.412612 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/863708bd-6c5a-45cb-9770-a3c78eef7c2d-host-run-netns\") pod \"ovnkube-node-t6z2j\" (UID: \"863708bd-6c5a-45cb-9770-a3c78eef7c2d\") " pod="openshift-ovn-kubernetes/ovnkube-node-t6z2j" Jan 27 14:42:31 crc kubenswrapper[4698]: I0127 14:42:31.412616 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/863708bd-6c5a-45cb-9770-a3c78eef7c2d-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-t6z2j\" (UID: \"863708bd-6c5a-45cb-9770-a3c78eef7c2d\") " pod="openshift-ovn-kubernetes/ovnkube-node-t6z2j" Jan 27 14:42:31 crc kubenswrapper[4698]: I0127 14:42:31.412666 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/863708bd-6c5a-45cb-9770-a3c78eef7c2d-host-cni-netd\") pod \"ovnkube-node-t6z2j\" (UID: \"863708bd-6c5a-45cb-9770-a3c78eef7c2d\") " pod="openshift-ovn-kubernetes/ovnkube-node-t6z2j" Jan 27 14:42:31 crc kubenswrapper[4698]: I0127 14:42:31.412687 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/863708bd-6c5a-45cb-9770-a3c78eef7c2d-etc-openvswitch\") pod \"ovnkube-node-t6z2j\" (UID: \"863708bd-6c5a-45cb-9770-a3c78eef7c2d\") " pod="openshift-ovn-kubernetes/ovnkube-node-t6z2j" Jan 27 14:42:31 crc kubenswrapper[4698]: I0127 14:42:31.412711 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/863708bd-6c5a-45cb-9770-a3c78eef7c2d-log-socket\") pod \"ovnkube-node-t6z2j\" (UID: \"863708bd-6c5a-45cb-9770-a3c78eef7c2d\") " pod="openshift-ovn-kubernetes/ovnkube-node-t6z2j" Jan 27 14:42:31 crc kubenswrapper[4698]: I0127 14:42:31.412746 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/863708bd-6c5a-45cb-9770-a3c78eef7c2d-node-log\") pod \"ovnkube-node-t6z2j\" (UID: \"863708bd-6c5a-45cb-9770-a3c78eef7c2d\") " pod="openshift-ovn-kubernetes/ovnkube-node-t6z2j" Jan 27 14:42:31 crc kubenswrapper[4698]: I0127 14:42:31.412769 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/863708bd-6c5a-45cb-9770-a3c78eef7c2d-var-lib-openvswitch\") pod \"ovnkube-node-t6z2j\" (UID: \"863708bd-6c5a-45cb-9770-a3c78eef7c2d\") " pod="openshift-ovn-kubernetes/ovnkube-node-t6z2j" Jan 27 14:42:31 crc kubenswrapper[4698]: I0127 
14:42:31.413112 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/863708bd-6c5a-45cb-9770-a3c78eef7c2d-host-cni-bin\") pod \"ovnkube-node-t6z2j\" (UID: \"863708bd-6c5a-45cb-9770-a3c78eef7c2d\") " pod="openshift-ovn-kubernetes/ovnkube-node-t6z2j" Jan 27 14:42:31 crc kubenswrapper[4698]: I0127 14:42:31.413812 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/863708bd-6c5a-45cb-9770-a3c78eef7c2d-env-overrides\") pod \"ovnkube-node-t6z2j\" (UID: \"863708bd-6c5a-45cb-9770-a3c78eef7c2d\") " pod="openshift-ovn-kubernetes/ovnkube-node-t6z2j" Jan 27 14:42:31 crc kubenswrapper[4698]: I0127 14:42:31.415846 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/863708bd-6c5a-45cb-9770-a3c78eef7c2d-ovn-node-metrics-cert\") pod \"ovnkube-node-t6z2j\" (UID: \"863708bd-6c5a-45cb-9770-a3c78eef7c2d\") " pod="openshift-ovn-kubernetes/ovnkube-node-t6z2j" Jan 27 14:42:31 crc kubenswrapper[4698]: I0127 14:42:31.433289 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-msltt\" (UniqueName: \"kubernetes.io/projected/863708bd-6c5a-45cb-9770-a3c78eef7c2d-kube-api-access-msltt\") pod \"ovnkube-node-t6z2j\" (UID: \"863708bd-6c5a-45cb-9770-a3c78eef7c2d\") " pod="openshift-ovn-kubernetes/ovnkube-node-t6z2j" Jan 27 14:42:31 crc kubenswrapper[4698]: I0127 14:42:31.460536 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-xmpm6_c59a9d01-79ce-42d9-a41d-39d7d73cb03e/ovnkube-controller/3.log" Jan 27 14:42:31 crc kubenswrapper[4698]: I0127 14:42:31.463179 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-xmpm6_c59a9d01-79ce-42d9-a41d-39d7d73cb03e/ovn-acl-logging/0.log" Jan 27 14:42:31 crc kubenswrapper[4698]: I0127 14:42:31.463723 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-xmpm6_c59a9d01-79ce-42d9-a41d-39d7d73cb03e/ovn-controller/0.log" Jan 27 14:42:31 crc kubenswrapper[4698]: I0127 14:42:31.464199 4698 generic.go:334] "Generic (PLEG): container finished" podID="c59a9d01-79ce-42d9-a41d-39d7d73cb03e" containerID="b43e70d356eae3b4cf67c3dae3a670888693ad5e68e6fc6027a2e42f9a1618d7" exitCode=0 Jan 27 14:42:31 crc kubenswrapper[4698]: I0127 14:42:31.464232 4698 generic.go:334] "Generic (PLEG): container finished" podID="c59a9d01-79ce-42d9-a41d-39d7d73cb03e" containerID="0356c9c90e9c8e65b9452361aa486ffe95757bd69803077b32fe7ba0223ae1da" exitCode=0 Jan 27 14:42:31 crc kubenswrapper[4698]: I0127 14:42:31.464242 4698 generic.go:334] "Generic (PLEG): container finished" podID="c59a9d01-79ce-42d9-a41d-39d7d73cb03e" containerID="c98fd6e3005c979f95419686ec2143aa282fa56d7306ecaff766759e34b68b2c" exitCode=0 Jan 27 14:42:31 crc kubenswrapper[4698]: I0127 14:42:31.464251 4698 generic.go:334] "Generic (PLEG): container finished" podID="c59a9d01-79ce-42d9-a41d-39d7d73cb03e" containerID="7e38585fe9db52e6e7cc5c3a41e23a0bb761977a3b25fd281d86458f422e3791" exitCode=0 Jan 27 14:42:31 crc kubenswrapper[4698]: I0127 14:42:31.464260 4698 generic.go:334] "Generic (PLEG): container finished" podID="c59a9d01-79ce-42d9-a41d-39d7d73cb03e" containerID="ea6d17f5861064198539f0b84b559f9d28cfa5decaa89a9059469af5a3e12e09" exitCode=0 Jan 27 14:42:31 crc kubenswrapper[4698]: I0127 14:42:31.464269 4698 generic.go:334] "Generic (PLEG): 
container finished" podID="c59a9d01-79ce-42d9-a41d-39d7d73cb03e" containerID="d81778ae40167de932debf7266f3c793d77edeace2116220c21cf89e15e4cc98" exitCode=0 Jan 27 14:42:31 crc kubenswrapper[4698]: I0127 14:42:31.464277 4698 generic.go:334] "Generic (PLEG): container finished" podID="c59a9d01-79ce-42d9-a41d-39d7d73cb03e" containerID="b80c023f92c9854f8c16b6b50c10dac58f968938615375019ba2875d9cd69e5b" exitCode=143 Jan 27 14:42:31 crc kubenswrapper[4698]: I0127 14:42:31.464287 4698 generic.go:334] "Generic (PLEG): container finished" podID="c59a9d01-79ce-42d9-a41d-39d7d73cb03e" containerID="87e7499ab247dff50eee2a5e113ed36ff00c467e8e1bcdfee45e8835815d4b81" exitCode=143 Jan 27 14:42:31 crc kubenswrapper[4698]: I0127 14:42:31.464333 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xmpm6" event={"ID":"c59a9d01-79ce-42d9-a41d-39d7d73cb03e","Type":"ContainerDied","Data":"b43e70d356eae3b4cf67c3dae3a670888693ad5e68e6fc6027a2e42f9a1618d7"} Jan 27 14:42:31 crc kubenswrapper[4698]: I0127 14:42:31.464365 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xmpm6" event={"ID":"c59a9d01-79ce-42d9-a41d-39d7d73cb03e","Type":"ContainerDied","Data":"0356c9c90e9c8e65b9452361aa486ffe95757bd69803077b32fe7ba0223ae1da"} Jan 27 14:42:31 crc kubenswrapper[4698]: I0127 14:42:31.464380 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xmpm6" event={"ID":"c59a9d01-79ce-42d9-a41d-39d7d73cb03e","Type":"ContainerDied","Data":"c98fd6e3005c979f95419686ec2143aa282fa56d7306ecaff766759e34b68b2c"} Jan 27 14:42:31 crc kubenswrapper[4698]: I0127 14:42:31.464393 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xmpm6" event={"ID":"c59a9d01-79ce-42d9-a41d-39d7d73cb03e","Type":"ContainerDied","Data":"7e38585fe9db52e6e7cc5c3a41e23a0bb761977a3b25fd281d86458f422e3791"} Jan 27 14:42:31 crc kubenswrapper[4698]: I0127 14:42:31.464405 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xmpm6" event={"ID":"c59a9d01-79ce-42d9-a41d-39d7d73cb03e","Type":"ContainerDied","Data":"ea6d17f5861064198539f0b84b559f9d28cfa5decaa89a9059469af5a3e12e09"} Jan 27 14:42:31 crc kubenswrapper[4698]: I0127 14:42:31.464417 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xmpm6" event={"ID":"c59a9d01-79ce-42d9-a41d-39d7d73cb03e","Type":"ContainerDied","Data":"d81778ae40167de932debf7266f3c793d77edeace2116220c21cf89e15e4cc98"} Jan 27 14:42:31 crc kubenswrapper[4698]: I0127 14:42:31.464430 4698 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"de8c9c2d3589a38d1cf6232c63b2ef2b7e60ba5c9efe357cfa5169dfe0448239"} Jan 27 14:42:31 crc kubenswrapper[4698]: I0127 14:42:31.464443 4698 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"0356c9c90e9c8e65b9452361aa486ffe95757bd69803077b32fe7ba0223ae1da"} Jan 27 14:42:31 crc kubenswrapper[4698]: I0127 14:42:31.464450 4698 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"c98fd6e3005c979f95419686ec2143aa282fa56d7306ecaff766759e34b68b2c"} Jan 27 14:42:31 crc kubenswrapper[4698]: I0127 14:42:31.464460 4698 pod_container_deletor.go:114] "Failed to issue the request to remove container" 
containerID={"Type":"cri-o","ID":"7e38585fe9db52e6e7cc5c3a41e23a0bb761977a3b25fd281d86458f422e3791"} Jan 27 14:42:31 crc kubenswrapper[4698]: I0127 14:42:31.464467 4698 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"ea6d17f5861064198539f0b84b559f9d28cfa5decaa89a9059469af5a3e12e09"} Jan 27 14:42:31 crc kubenswrapper[4698]: I0127 14:42:31.464475 4698 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"d81778ae40167de932debf7266f3c793d77edeace2116220c21cf89e15e4cc98"} Jan 27 14:42:31 crc kubenswrapper[4698]: I0127 14:42:31.464482 4698 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"b80c023f92c9854f8c16b6b50c10dac58f968938615375019ba2875d9cd69e5b"} Jan 27 14:42:31 crc kubenswrapper[4698]: I0127 14:42:31.464490 4698 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"87e7499ab247dff50eee2a5e113ed36ff00c467e8e1bcdfee45e8835815d4b81"} Jan 27 14:42:31 crc kubenswrapper[4698]: I0127 14:42:31.464497 4698 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"b3a80917737e61fdb510dabf332c3bd87fb858d27849c42b77703a8becb66b4c"} Jan 27 14:42:31 crc kubenswrapper[4698]: I0127 14:42:31.464507 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xmpm6" event={"ID":"c59a9d01-79ce-42d9-a41d-39d7d73cb03e","Type":"ContainerDied","Data":"b80c023f92c9854f8c16b6b50c10dac58f968938615375019ba2875d9cd69e5b"} Jan 27 14:42:31 crc kubenswrapper[4698]: I0127 14:42:31.464517 4698 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"b43e70d356eae3b4cf67c3dae3a670888693ad5e68e6fc6027a2e42f9a1618d7"} Jan 27 14:42:31 crc kubenswrapper[4698]: I0127 14:42:31.464524 4698 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"de8c9c2d3589a38d1cf6232c63b2ef2b7e60ba5c9efe357cfa5169dfe0448239"} Jan 27 14:42:31 crc kubenswrapper[4698]: I0127 14:42:31.464531 4698 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"0356c9c90e9c8e65b9452361aa486ffe95757bd69803077b32fe7ba0223ae1da"} Jan 27 14:42:31 crc kubenswrapper[4698]: I0127 14:42:31.464538 4698 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"c98fd6e3005c979f95419686ec2143aa282fa56d7306ecaff766759e34b68b2c"} Jan 27 14:42:31 crc kubenswrapper[4698]: I0127 14:42:31.464545 4698 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"7e38585fe9db52e6e7cc5c3a41e23a0bb761977a3b25fd281d86458f422e3791"} Jan 27 14:42:31 crc kubenswrapper[4698]: I0127 14:42:31.464552 4698 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"ea6d17f5861064198539f0b84b559f9d28cfa5decaa89a9059469af5a3e12e09"} Jan 27 14:42:31 crc kubenswrapper[4698]: I0127 14:42:31.464558 4698 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"d81778ae40167de932debf7266f3c793d77edeace2116220c21cf89e15e4cc98"} Jan 27 14:42:31 crc kubenswrapper[4698]: I0127 14:42:31.464565 4698 pod_container_deletor.go:114] "Failed to issue the request to remove container" 
containerID={"Type":"cri-o","ID":"b80c023f92c9854f8c16b6b50c10dac58f968938615375019ba2875d9cd69e5b"} Jan 27 14:42:31 crc kubenswrapper[4698]: I0127 14:42:31.464572 4698 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"87e7499ab247dff50eee2a5e113ed36ff00c467e8e1bcdfee45e8835815d4b81"} Jan 27 14:42:31 crc kubenswrapper[4698]: I0127 14:42:31.464579 4698 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"b3a80917737e61fdb510dabf332c3bd87fb858d27849c42b77703a8becb66b4c"} Jan 27 14:42:31 crc kubenswrapper[4698]: I0127 14:42:31.464588 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xmpm6" event={"ID":"c59a9d01-79ce-42d9-a41d-39d7d73cb03e","Type":"ContainerDied","Data":"87e7499ab247dff50eee2a5e113ed36ff00c467e8e1bcdfee45e8835815d4b81"} Jan 27 14:42:31 crc kubenswrapper[4698]: I0127 14:42:31.464599 4698 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"b43e70d356eae3b4cf67c3dae3a670888693ad5e68e6fc6027a2e42f9a1618d7"} Jan 27 14:42:31 crc kubenswrapper[4698]: I0127 14:42:31.464607 4698 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"de8c9c2d3589a38d1cf6232c63b2ef2b7e60ba5c9efe357cfa5169dfe0448239"} Jan 27 14:42:31 crc kubenswrapper[4698]: I0127 14:42:31.464614 4698 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"0356c9c90e9c8e65b9452361aa486ffe95757bd69803077b32fe7ba0223ae1da"} Jan 27 14:42:31 crc kubenswrapper[4698]: I0127 14:42:31.464621 4698 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"c98fd6e3005c979f95419686ec2143aa282fa56d7306ecaff766759e34b68b2c"} Jan 27 14:42:31 crc kubenswrapper[4698]: I0127 14:42:31.464628 4698 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"7e38585fe9db52e6e7cc5c3a41e23a0bb761977a3b25fd281d86458f422e3791"} Jan 27 14:42:31 crc kubenswrapper[4698]: I0127 14:42:31.464634 4698 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"ea6d17f5861064198539f0b84b559f9d28cfa5decaa89a9059469af5a3e12e09"} Jan 27 14:42:31 crc kubenswrapper[4698]: I0127 14:42:31.464671 4698 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"d81778ae40167de932debf7266f3c793d77edeace2116220c21cf89e15e4cc98"} Jan 27 14:42:31 crc kubenswrapper[4698]: I0127 14:42:31.464677 4698 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"b80c023f92c9854f8c16b6b50c10dac58f968938615375019ba2875d9cd69e5b"} Jan 27 14:42:31 crc kubenswrapper[4698]: I0127 14:42:31.464683 4698 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"87e7499ab247dff50eee2a5e113ed36ff00c467e8e1bcdfee45e8835815d4b81"} Jan 27 14:42:31 crc kubenswrapper[4698]: I0127 14:42:31.464689 4698 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"b3a80917737e61fdb510dabf332c3bd87fb858d27849c42b77703a8becb66b4c"} Jan 27 14:42:31 crc kubenswrapper[4698]: I0127 14:42:31.464699 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-ovn-kubernetes/ovnkube-node-xmpm6" event={"ID":"c59a9d01-79ce-42d9-a41d-39d7d73cb03e","Type":"ContainerDied","Data":"def4956a0c71ce17e5e156a028a92f25cde69642c1da1c485d5532854ba70206"} Jan 27 14:42:31 crc kubenswrapper[4698]: I0127 14:42:31.464709 4698 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"b43e70d356eae3b4cf67c3dae3a670888693ad5e68e6fc6027a2e42f9a1618d7"} Jan 27 14:42:31 crc kubenswrapper[4698]: I0127 14:42:31.464716 4698 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"de8c9c2d3589a38d1cf6232c63b2ef2b7e60ba5c9efe357cfa5169dfe0448239"} Jan 27 14:42:31 crc kubenswrapper[4698]: I0127 14:42:31.464722 4698 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"0356c9c90e9c8e65b9452361aa486ffe95757bd69803077b32fe7ba0223ae1da"} Jan 27 14:42:31 crc kubenswrapper[4698]: I0127 14:42:31.464729 4698 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"c98fd6e3005c979f95419686ec2143aa282fa56d7306ecaff766759e34b68b2c"} Jan 27 14:42:31 crc kubenswrapper[4698]: I0127 14:42:31.464735 4698 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"7e38585fe9db52e6e7cc5c3a41e23a0bb761977a3b25fd281d86458f422e3791"} Jan 27 14:42:31 crc kubenswrapper[4698]: I0127 14:42:31.464744 4698 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"ea6d17f5861064198539f0b84b559f9d28cfa5decaa89a9059469af5a3e12e09"} Jan 27 14:42:31 crc kubenswrapper[4698]: I0127 14:42:31.464751 4698 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"d81778ae40167de932debf7266f3c793d77edeace2116220c21cf89e15e4cc98"} Jan 27 14:42:31 crc kubenswrapper[4698]: I0127 14:42:31.464757 4698 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"b80c023f92c9854f8c16b6b50c10dac58f968938615375019ba2875d9cd69e5b"} Jan 27 14:42:31 crc kubenswrapper[4698]: I0127 14:42:31.464764 4698 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"87e7499ab247dff50eee2a5e113ed36ff00c467e8e1bcdfee45e8835815d4b81"} Jan 27 14:42:31 crc kubenswrapper[4698]: I0127 14:42:31.464771 4698 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"b3a80917737e61fdb510dabf332c3bd87fb858d27849c42b77703a8becb66b4c"} Jan 27 14:42:31 crc kubenswrapper[4698]: I0127 14:42:31.464789 4698 scope.go:117] "RemoveContainer" containerID="b43e70d356eae3b4cf67c3dae3a670888693ad5e68e6fc6027a2e42f9a1618d7" Jan 27 14:42:31 crc kubenswrapper[4698]: I0127 14:42:31.464934 4698 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-xmpm6" Jan 27 14:42:31 crc kubenswrapper[4698]: I0127 14:42:31.471173 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-g2kkn_4e135f0c-0c36-44f4-afeb-06994affb352/kube-multus/2.log" Jan 27 14:42:31 crc kubenswrapper[4698]: I0127 14:42:31.471666 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-g2kkn_4e135f0c-0c36-44f4-afeb-06994affb352/kube-multus/1.log" Jan 27 14:42:31 crc kubenswrapper[4698]: I0127 14:42:31.471712 4698 generic.go:334] "Generic (PLEG): container finished" podID="4e135f0c-0c36-44f4-afeb-06994affb352" containerID="91a0ad962cfd3e8dd9cfc25516b20509e0465ea2c094eaabc513521dcb809be2" exitCode=2 Jan 27 14:42:31 crc kubenswrapper[4698]: I0127 14:42:31.471743 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-g2kkn" event={"ID":"4e135f0c-0c36-44f4-afeb-06994affb352","Type":"ContainerDied","Data":"91a0ad962cfd3e8dd9cfc25516b20509e0465ea2c094eaabc513521dcb809be2"} Jan 27 14:42:31 crc kubenswrapper[4698]: I0127 14:42:31.471763 4698 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"89ea9fe8283890c94741924f7a0d219ad6a55833e836517077a72a10f87427d9"} Jan 27 14:42:31 crc kubenswrapper[4698]: I0127 14:42:31.472202 4698 scope.go:117] "RemoveContainer" containerID="91a0ad962cfd3e8dd9cfc25516b20509e0465ea2c094eaabc513521dcb809be2" Jan 27 14:42:31 crc kubenswrapper[4698]: I0127 14:42:31.495567 4698 scope.go:117] "RemoveContainer" containerID="de8c9c2d3589a38d1cf6232c63b2ef2b7e60ba5c9efe357cfa5169dfe0448239" Jan 27 14:42:31 crc kubenswrapper[4698]: I0127 14:42:31.510869 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-xmpm6"] Jan 27 14:42:31 crc kubenswrapper[4698]: I0127 14:42:31.515366 4698 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-xmpm6"] Jan 27 14:42:31 crc kubenswrapper[4698]: I0127 14:42:31.526265 4698 scope.go:117] "RemoveContainer" containerID="0356c9c90e9c8e65b9452361aa486ffe95757bd69803077b32fe7ba0223ae1da" Jan 27 14:42:31 crc kubenswrapper[4698]: I0127 14:42:31.539604 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-t6z2j" Jan 27 14:42:31 crc kubenswrapper[4698]: I0127 14:42:31.544461 4698 scope.go:117] "RemoveContainer" containerID="c98fd6e3005c979f95419686ec2143aa282fa56d7306ecaff766759e34b68b2c" Jan 27 14:42:31 crc kubenswrapper[4698]: I0127 14:42:31.558677 4698 scope.go:117] "RemoveContainer" containerID="7e38585fe9db52e6e7cc5c3a41e23a0bb761977a3b25fd281d86458f422e3791" Jan 27 14:42:31 crc kubenswrapper[4698]: I0127 14:42:31.574673 4698 scope.go:117] "RemoveContainer" containerID="ea6d17f5861064198539f0b84b559f9d28cfa5decaa89a9059469af5a3e12e09" Jan 27 14:42:31 crc kubenswrapper[4698]: I0127 14:42:31.595707 4698 scope.go:117] "RemoveContainer" containerID="d81778ae40167de932debf7266f3c793d77edeace2116220c21cf89e15e4cc98" Jan 27 14:42:31 crc kubenswrapper[4698]: I0127 14:42:31.614035 4698 scope.go:117] "RemoveContainer" containerID="b80c023f92c9854f8c16b6b50c10dac58f968938615375019ba2875d9cd69e5b" Jan 27 14:42:31 crc kubenswrapper[4698]: I0127 14:42:31.627856 4698 scope.go:117] "RemoveContainer" containerID="87e7499ab247dff50eee2a5e113ed36ff00c467e8e1bcdfee45e8835815d4b81" Jan 27 14:42:31 crc kubenswrapper[4698]: I0127 14:42:31.638617 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-687f57d79b-25xhd" Jan 27 14:42:31 crc kubenswrapper[4698]: I0127 14:42:31.647048 4698 scope.go:117] "RemoveContainer" containerID="b3a80917737e61fdb510dabf332c3bd87fb858d27849c42b77703a8becb66b4c" Jan 27 14:42:31 crc kubenswrapper[4698]: I0127 14:42:31.671397 4698 scope.go:117] "RemoveContainer" containerID="b43e70d356eae3b4cf67c3dae3a670888693ad5e68e6fc6027a2e42f9a1618d7" Jan 27 14:42:31 crc kubenswrapper[4698]: E0127 14:42:31.672060 4698 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b43e70d356eae3b4cf67c3dae3a670888693ad5e68e6fc6027a2e42f9a1618d7\": container with ID starting with b43e70d356eae3b4cf67c3dae3a670888693ad5e68e6fc6027a2e42f9a1618d7 not found: ID does not exist" containerID="b43e70d356eae3b4cf67c3dae3a670888693ad5e68e6fc6027a2e42f9a1618d7" Jan 27 14:42:31 crc kubenswrapper[4698]: I0127 14:42:31.672121 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b43e70d356eae3b4cf67c3dae3a670888693ad5e68e6fc6027a2e42f9a1618d7"} err="failed to get container status \"b43e70d356eae3b4cf67c3dae3a670888693ad5e68e6fc6027a2e42f9a1618d7\": rpc error: code = NotFound desc = could not find container \"b43e70d356eae3b4cf67c3dae3a670888693ad5e68e6fc6027a2e42f9a1618d7\": container with ID starting with b43e70d356eae3b4cf67c3dae3a670888693ad5e68e6fc6027a2e42f9a1618d7 not found: ID does not exist" Jan 27 14:42:31 crc kubenswrapper[4698]: I0127 14:42:31.672153 4698 scope.go:117] "RemoveContainer" containerID="de8c9c2d3589a38d1cf6232c63b2ef2b7e60ba5c9efe357cfa5169dfe0448239" Jan 27 14:42:31 crc kubenswrapper[4698]: E0127 14:42:31.672442 4698 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"de8c9c2d3589a38d1cf6232c63b2ef2b7e60ba5c9efe357cfa5169dfe0448239\": container with ID starting with de8c9c2d3589a38d1cf6232c63b2ef2b7e60ba5c9efe357cfa5169dfe0448239 not found: ID does not exist" containerID="de8c9c2d3589a38d1cf6232c63b2ef2b7e60ba5c9efe357cfa5169dfe0448239" Jan 27 14:42:31 crc kubenswrapper[4698]: I0127 14:42:31.672460 4698 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"de8c9c2d3589a38d1cf6232c63b2ef2b7e60ba5c9efe357cfa5169dfe0448239"} err="failed to get container status \"de8c9c2d3589a38d1cf6232c63b2ef2b7e60ba5c9efe357cfa5169dfe0448239\": rpc error: code = NotFound desc = could not find container \"de8c9c2d3589a38d1cf6232c63b2ef2b7e60ba5c9efe357cfa5169dfe0448239\": container with ID starting with de8c9c2d3589a38d1cf6232c63b2ef2b7e60ba5c9efe357cfa5169dfe0448239 not found: ID does not exist" Jan 27 14:42:31 crc kubenswrapper[4698]: I0127 14:42:31.672473 4698 scope.go:117] "RemoveContainer" containerID="0356c9c90e9c8e65b9452361aa486ffe95757bd69803077b32fe7ba0223ae1da" Jan 27 14:42:31 crc kubenswrapper[4698]: E0127 14:42:31.672679 4698 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0356c9c90e9c8e65b9452361aa486ffe95757bd69803077b32fe7ba0223ae1da\": container with ID starting with 0356c9c90e9c8e65b9452361aa486ffe95757bd69803077b32fe7ba0223ae1da not found: ID does not exist" containerID="0356c9c90e9c8e65b9452361aa486ffe95757bd69803077b32fe7ba0223ae1da" Jan 27 14:42:31 crc kubenswrapper[4698]: I0127 14:42:31.672695 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0356c9c90e9c8e65b9452361aa486ffe95757bd69803077b32fe7ba0223ae1da"} err="failed to get container status \"0356c9c90e9c8e65b9452361aa486ffe95757bd69803077b32fe7ba0223ae1da\": rpc error: code = NotFound desc = could not find container \"0356c9c90e9c8e65b9452361aa486ffe95757bd69803077b32fe7ba0223ae1da\": container with ID starting with 0356c9c90e9c8e65b9452361aa486ffe95757bd69803077b32fe7ba0223ae1da not found: ID does not exist" Jan 27 14:42:31 crc kubenswrapper[4698]: I0127 14:42:31.672706 4698 scope.go:117] "RemoveContainer" containerID="c98fd6e3005c979f95419686ec2143aa282fa56d7306ecaff766759e34b68b2c" Jan 27 14:42:31 crc kubenswrapper[4698]: E0127 14:42:31.672874 4698 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c98fd6e3005c979f95419686ec2143aa282fa56d7306ecaff766759e34b68b2c\": container with ID starting with c98fd6e3005c979f95419686ec2143aa282fa56d7306ecaff766759e34b68b2c not found: ID does not exist" containerID="c98fd6e3005c979f95419686ec2143aa282fa56d7306ecaff766759e34b68b2c" Jan 27 14:42:31 crc kubenswrapper[4698]: I0127 14:42:31.672897 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c98fd6e3005c979f95419686ec2143aa282fa56d7306ecaff766759e34b68b2c"} err="failed to get container status \"c98fd6e3005c979f95419686ec2143aa282fa56d7306ecaff766759e34b68b2c\": rpc error: code = NotFound desc = could not find container \"c98fd6e3005c979f95419686ec2143aa282fa56d7306ecaff766759e34b68b2c\": container with ID starting with c98fd6e3005c979f95419686ec2143aa282fa56d7306ecaff766759e34b68b2c not found: ID does not exist" Jan 27 14:42:31 crc kubenswrapper[4698]: I0127 14:42:31.672920 4698 scope.go:117] "RemoveContainer" containerID="7e38585fe9db52e6e7cc5c3a41e23a0bb761977a3b25fd281d86458f422e3791" Jan 27 14:42:31 crc kubenswrapper[4698]: E0127 14:42:31.673140 4698 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7e38585fe9db52e6e7cc5c3a41e23a0bb761977a3b25fd281d86458f422e3791\": container with ID starting with 7e38585fe9db52e6e7cc5c3a41e23a0bb761977a3b25fd281d86458f422e3791 not found: ID does not exist" 
containerID="7e38585fe9db52e6e7cc5c3a41e23a0bb761977a3b25fd281d86458f422e3791" Jan 27 14:42:31 crc kubenswrapper[4698]: I0127 14:42:31.673174 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7e38585fe9db52e6e7cc5c3a41e23a0bb761977a3b25fd281d86458f422e3791"} err="failed to get container status \"7e38585fe9db52e6e7cc5c3a41e23a0bb761977a3b25fd281d86458f422e3791\": rpc error: code = NotFound desc = could not find container \"7e38585fe9db52e6e7cc5c3a41e23a0bb761977a3b25fd281d86458f422e3791\": container with ID starting with 7e38585fe9db52e6e7cc5c3a41e23a0bb761977a3b25fd281d86458f422e3791 not found: ID does not exist" Jan 27 14:42:31 crc kubenswrapper[4698]: I0127 14:42:31.673200 4698 scope.go:117] "RemoveContainer" containerID="ea6d17f5861064198539f0b84b559f9d28cfa5decaa89a9059469af5a3e12e09" Jan 27 14:42:31 crc kubenswrapper[4698]: E0127 14:42:31.673452 4698 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ea6d17f5861064198539f0b84b559f9d28cfa5decaa89a9059469af5a3e12e09\": container with ID starting with ea6d17f5861064198539f0b84b559f9d28cfa5decaa89a9059469af5a3e12e09 not found: ID does not exist" containerID="ea6d17f5861064198539f0b84b559f9d28cfa5decaa89a9059469af5a3e12e09" Jan 27 14:42:31 crc kubenswrapper[4698]: I0127 14:42:31.673483 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ea6d17f5861064198539f0b84b559f9d28cfa5decaa89a9059469af5a3e12e09"} err="failed to get container status \"ea6d17f5861064198539f0b84b559f9d28cfa5decaa89a9059469af5a3e12e09\": rpc error: code = NotFound desc = could not find container \"ea6d17f5861064198539f0b84b559f9d28cfa5decaa89a9059469af5a3e12e09\": container with ID starting with ea6d17f5861064198539f0b84b559f9d28cfa5decaa89a9059469af5a3e12e09 not found: ID does not exist" Jan 27 14:42:31 crc kubenswrapper[4698]: I0127 14:42:31.673501 4698 scope.go:117] "RemoveContainer" containerID="d81778ae40167de932debf7266f3c793d77edeace2116220c21cf89e15e4cc98" Jan 27 14:42:31 crc kubenswrapper[4698]: E0127 14:42:31.673745 4698 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d81778ae40167de932debf7266f3c793d77edeace2116220c21cf89e15e4cc98\": container with ID starting with d81778ae40167de932debf7266f3c793d77edeace2116220c21cf89e15e4cc98 not found: ID does not exist" containerID="d81778ae40167de932debf7266f3c793d77edeace2116220c21cf89e15e4cc98" Jan 27 14:42:31 crc kubenswrapper[4698]: I0127 14:42:31.673774 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d81778ae40167de932debf7266f3c793d77edeace2116220c21cf89e15e4cc98"} err="failed to get container status \"d81778ae40167de932debf7266f3c793d77edeace2116220c21cf89e15e4cc98\": rpc error: code = NotFound desc = could not find container \"d81778ae40167de932debf7266f3c793d77edeace2116220c21cf89e15e4cc98\": container with ID starting with d81778ae40167de932debf7266f3c793d77edeace2116220c21cf89e15e4cc98 not found: ID does not exist" Jan 27 14:42:31 crc kubenswrapper[4698]: I0127 14:42:31.673792 4698 scope.go:117] "RemoveContainer" containerID="b80c023f92c9854f8c16b6b50c10dac58f968938615375019ba2875d9cd69e5b" Jan 27 14:42:31 crc kubenswrapper[4698]: E0127 14:42:31.674019 4698 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"b80c023f92c9854f8c16b6b50c10dac58f968938615375019ba2875d9cd69e5b\": container with ID starting with b80c023f92c9854f8c16b6b50c10dac58f968938615375019ba2875d9cd69e5b not found: ID does not exist" containerID="b80c023f92c9854f8c16b6b50c10dac58f968938615375019ba2875d9cd69e5b" Jan 27 14:42:31 crc kubenswrapper[4698]: I0127 14:42:31.674040 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b80c023f92c9854f8c16b6b50c10dac58f968938615375019ba2875d9cd69e5b"} err="failed to get container status \"b80c023f92c9854f8c16b6b50c10dac58f968938615375019ba2875d9cd69e5b\": rpc error: code = NotFound desc = could not find container \"b80c023f92c9854f8c16b6b50c10dac58f968938615375019ba2875d9cd69e5b\": container with ID starting with b80c023f92c9854f8c16b6b50c10dac58f968938615375019ba2875d9cd69e5b not found: ID does not exist" Jan 27 14:42:31 crc kubenswrapper[4698]: I0127 14:42:31.674054 4698 scope.go:117] "RemoveContainer" containerID="87e7499ab247dff50eee2a5e113ed36ff00c467e8e1bcdfee45e8835815d4b81" Jan 27 14:42:31 crc kubenswrapper[4698]: E0127 14:42:31.674256 4698 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"87e7499ab247dff50eee2a5e113ed36ff00c467e8e1bcdfee45e8835815d4b81\": container with ID starting with 87e7499ab247dff50eee2a5e113ed36ff00c467e8e1bcdfee45e8835815d4b81 not found: ID does not exist" containerID="87e7499ab247dff50eee2a5e113ed36ff00c467e8e1bcdfee45e8835815d4b81" Jan 27 14:42:31 crc kubenswrapper[4698]: I0127 14:42:31.674275 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"87e7499ab247dff50eee2a5e113ed36ff00c467e8e1bcdfee45e8835815d4b81"} err="failed to get container status \"87e7499ab247dff50eee2a5e113ed36ff00c467e8e1bcdfee45e8835815d4b81\": rpc error: code = NotFound desc = could not find container \"87e7499ab247dff50eee2a5e113ed36ff00c467e8e1bcdfee45e8835815d4b81\": container with ID starting with 87e7499ab247dff50eee2a5e113ed36ff00c467e8e1bcdfee45e8835815d4b81 not found: ID does not exist" Jan 27 14:42:31 crc kubenswrapper[4698]: I0127 14:42:31.674290 4698 scope.go:117] "RemoveContainer" containerID="b3a80917737e61fdb510dabf332c3bd87fb858d27849c42b77703a8becb66b4c" Jan 27 14:42:31 crc kubenswrapper[4698]: E0127 14:42:31.674611 4698 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b3a80917737e61fdb510dabf332c3bd87fb858d27849c42b77703a8becb66b4c\": container with ID starting with b3a80917737e61fdb510dabf332c3bd87fb858d27849c42b77703a8becb66b4c not found: ID does not exist" containerID="b3a80917737e61fdb510dabf332c3bd87fb858d27849c42b77703a8becb66b4c" Jan 27 14:42:31 crc kubenswrapper[4698]: I0127 14:42:31.674653 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b3a80917737e61fdb510dabf332c3bd87fb858d27849c42b77703a8becb66b4c"} err="failed to get container status \"b3a80917737e61fdb510dabf332c3bd87fb858d27849c42b77703a8becb66b4c\": rpc error: code = NotFound desc = could not find container \"b3a80917737e61fdb510dabf332c3bd87fb858d27849c42b77703a8becb66b4c\": container with ID starting with b3a80917737e61fdb510dabf332c3bd87fb858d27849c42b77703a8becb66b4c not found: ID does not exist" Jan 27 14:42:31 crc kubenswrapper[4698]: I0127 14:42:31.674671 4698 scope.go:117] "RemoveContainer" containerID="b43e70d356eae3b4cf67c3dae3a670888693ad5e68e6fc6027a2e42f9a1618d7" Jan 27 14:42:31 crc 
kubenswrapper[4698]: I0127 14:42:31.675080 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b43e70d356eae3b4cf67c3dae3a670888693ad5e68e6fc6027a2e42f9a1618d7"} err="failed to get container status \"b43e70d356eae3b4cf67c3dae3a670888693ad5e68e6fc6027a2e42f9a1618d7\": rpc error: code = NotFound desc = could not find container \"b43e70d356eae3b4cf67c3dae3a670888693ad5e68e6fc6027a2e42f9a1618d7\": container with ID starting with b43e70d356eae3b4cf67c3dae3a670888693ad5e68e6fc6027a2e42f9a1618d7 not found: ID does not exist" Jan 27 14:42:31 crc kubenswrapper[4698]: I0127 14:42:31.675101 4698 scope.go:117] "RemoveContainer" containerID="de8c9c2d3589a38d1cf6232c63b2ef2b7e60ba5c9efe357cfa5169dfe0448239" Jan 27 14:42:31 crc kubenswrapper[4698]: I0127 14:42:31.675434 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"de8c9c2d3589a38d1cf6232c63b2ef2b7e60ba5c9efe357cfa5169dfe0448239"} err="failed to get container status \"de8c9c2d3589a38d1cf6232c63b2ef2b7e60ba5c9efe357cfa5169dfe0448239\": rpc error: code = NotFound desc = could not find container \"de8c9c2d3589a38d1cf6232c63b2ef2b7e60ba5c9efe357cfa5169dfe0448239\": container with ID starting with de8c9c2d3589a38d1cf6232c63b2ef2b7e60ba5c9efe357cfa5169dfe0448239 not found: ID does not exist" Jan 27 14:42:31 crc kubenswrapper[4698]: I0127 14:42:31.675457 4698 scope.go:117] "RemoveContainer" containerID="0356c9c90e9c8e65b9452361aa486ffe95757bd69803077b32fe7ba0223ae1da" Jan 27 14:42:31 crc kubenswrapper[4698]: I0127 14:42:31.675727 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0356c9c90e9c8e65b9452361aa486ffe95757bd69803077b32fe7ba0223ae1da"} err="failed to get container status \"0356c9c90e9c8e65b9452361aa486ffe95757bd69803077b32fe7ba0223ae1da\": rpc error: code = NotFound desc = could not find container \"0356c9c90e9c8e65b9452361aa486ffe95757bd69803077b32fe7ba0223ae1da\": container with ID starting with 0356c9c90e9c8e65b9452361aa486ffe95757bd69803077b32fe7ba0223ae1da not found: ID does not exist" Jan 27 14:42:31 crc kubenswrapper[4698]: I0127 14:42:31.675756 4698 scope.go:117] "RemoveContainer" containerID="c98fd6e3005c979f95419686ec2143aa282fa56d7306ecaff766759e34b68b2c" Jan 27 14:42:31 crc kubenswrapper[4698]: I0127 14:42:31.676085 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c98fd6e3005c979f95419686ec2143aa282fa56d7306ecaff766759e34b68b2c"} err="failed to get container status \"c98fd6e3005c979f95419686ec2143aa282fa56d7306ecaff766759e34b68b2c\": rpc error: code = NotFound desc = could not find container \"c98fd6e3005c979f95419686ec2143aa282fa56d7306ecaff766759e34b68b2c\": container with ID starting with c98fd6e3005c979f95419686ec2143aa282fa56d7306ecaff766759e34b68b2c not found: ID does not exist" Jan 27 14:42:31 crc kubenswrapper[4698]: I0127 14:42:31.676111 4698 scope.go:117] "RemoveContainer" containerID="7e38585fe9db52e6e7cc5c3a41e23a0bb761977a3b25fd281d86458f422e3791" Jan 27 14:42:31 crc kubenswrapper[4698]: I0127 14:42:31.676394 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7e38585fe9db52e6e7cc5c3a41e23a0bb761977a3b25fd281d86458f422e3791"} err="failed to get container status \"7e38585fe9db52e6e7cc5c3a41e23a0bb761977a3b25fd281d86458f422e3791\": rpc error: code = NotFound desc = could not find container \"7e38585fe9db52e6e7cc5c3a41e23a0bb761977a3b25fd281d86458f422e3791\": container with ID 
starting with 7e38585fe9db52e6e7cc5c3a41e23a0bb761977a3b25fd281d86458f422e3791 not found: ID does not exist" Jan 27 14:42:31 crc kubenswrapper[4698]: I0127 14:42:31.676422 4698 scope.go:117] "RemoveContainer" containerID="ea6d17f5861064198539f0b84b559f9d28cfa5decaa89a9059469af5a3e12e09" Jan 27 14:42:31 crc kubenswrapper[4698]: I0127 14:42:31.676615 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ea6d17f5861064198539f0b84b559f9d28cfa5decaa89a9059469af5a3e12e09"} err="failed to get container status \"ea6d17f5861064198539f0b84b559f9d28cfa5decaa89a9059469af5a3e12e09\": rpc error: code = NotFound desc = could not find container \"ea6d17f5861064198539f0b84b559f9d28cfa5decaa89a9059469af5a3e12e09\": container with ID starting with ea6d17f5861064198539f0b84b559f9d28cfa5decaa89a9059469af5a3e12e09 not found: ID does not exist" Jan 27 14:42:31 crc kubenswrapper[4698]: I0127 14:42:31.676650 4698 scope.go:117] "RemoveContainer" containerID="d81778ae40167de932debf7266f3c793d77edeace2116220c21cf89e15e4cc98" Jan 27 14:42:31 crc kubenswrapper[4698]: I0127 14:42:31.676987 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d81778ae40167de932debf7266f3c793d77edeace2116220c21cf89e15e4cc98"} err="failed to get container status \"d81778ae40167de932debf7266f3c793d77edeace2116220c21cf89e15e4cc98\": rpc error: code = NotFound desc = could not find container \"d81778ae40167de932debf7266f3c793d77edeace2116220c21cf89e15e4cc98\": container with ID starting with d81778ae40167de932debf7266f3c793d77edeace2116220c21cf89e15e4cc98 not found: ID does not exist" Jan 27 14:42:31 crc kubenswrapper[4698]: I0127 14:42:31.677008 4698 scope.go:117] "RemoveContainer" containerID="b80c023f92c9854f8c16b6b50c10dac58f968938615375019ba2875d9cd69e5b" Jan 27 14:42:31 crc kubenswrapper[4698]: I0127 14:42:31.677402 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b80c023f92c9854f8c16b6b50c10dac58f968938615375019ba2875d9cd69e5b"} err="failed to get container status \"b80c023f92c9854f8c16b6b50c10dac58f968938615375019ba2875d9cd69e5b\": rpc error: code = NotFound desc = could not find container \"b80c023f92c9854f8c16b6b50c10dac58f968938615375019ba2875d9cd69e5b\": container with ID starting with b80c023f92c9854f8c16b6b50c10dac58f968938615375019ba2875d9cd69e5b not found: ID does not exist" Jan 27 14:42:31 crc kubenswrapper[4698]: I0127 14:42:31.677422 4698 scope.go:117] "RemoveContainer" containerID="87e7499ab247dff50eee2a5e113ed36ff00c467e8e1bcdfee45e8835815d4b81" Jan 27 14:42:31 crc kubenswrapper[4698]: I0127 14:42:31.677649 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"87e7499ab247dff50eee2a5e113ed36ff00c467e8e1bcdfee45e8835815d4b81"} err="failed to get container status \"87e7499ab247dff50eee2a5e113ed36ff00c467e8e1bcdfee45e8835815d4b81\": rpc error: code = NotFound desc = could not find container \"87e7499ab247dff50eee2a5e113ed36ff00c467e8e1bcdfee45e8835815d4b81\": container with ID starting with 87e7499ab247dff50eee2a5e113ed36ff00c467e8e1bcdfee45e8835815d4b81 not found: ID does not exist" Jan 27 14:42:31 crc kubenswrapper[4698]: I0127 14:42:31.677688 4698 scope.go:117] "RemoveContainer" containerID="b3a80917737e61fdb510dabf332c3bd87fb858d27849c42b77703a8becb66b4c" Jan 27 14:42:31 crc kubenswrapper[4698]: I0127 14:42:31.677957 4698 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"b3a80917737e61fdb510dabf332c3bd87fb858d27849c42b77703a8becb66b4c"} err="failed to get container status \"b3a80917737e61fdb510dabf332c3bd87fb858d27849c42b77703a8becb66b4c\": rpc error: code = NotFound desc = could not find container \"b3a80917737e61fdb510dabf332c3bd87fb858d27849c42b77703a8becb66b4c\": container with ID starting with b3a80917737e61fdb510dabf332c3bd87fb858d27849c42b77703a8becb66b4c not found: ID does not exist" Jan 27 14:42:31 crc kubenswrapper[4698]: I0127 14:42:31.677979 4698 scope.go:117] "RemoveContainer" containerID="b43e70d356eae3b4cf67c3dae3a670888693ad5e68e6fc6027a2e42f9a1618d7" Jan 27 14:42:31 crc kubenswrapper[4698]: I0127 14:42:31.678381 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b43e70d356eae3b4cf67c3dae3a670888693ad5e68e6fc6027a2e42f9a1618d7"} err="failed to get container status \"b43e70d356eae3b4cf67c3dae3a670888693ad5e68e6fc6027a2e42f9a1618d7\": rpc error: code = NotFound desc = could not find container \"b43e70d356eae3b4cf67c3dae3a670888693ad5e68e6fc6027a2e42f9a1618d7\": container with ID starting with b43e70d356eae3b4cf67c3dae3a670888693ad5e68e6fc6027a2e42f9a1618d7 not found: ID does not exist" Jan 27 14:42:31 crc kubenswrapper[4698]: I0127 14:42:31.678422 4698 scope.go:117] "RemoveContainer" containerID="de8c9c2d3589a38d1cf6232c63b2ef2b7e60ba5c9efe357cfa5169dfe0448239" Jan 27 14:42:31 crc kubenswrapper[4698]: I0127 14:42:31.678742 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"de8c9c2d3589a38d1cf6232c63b2ef2b7e60ba5c9efe357cfa5169dfe0448239"} err="failed to get container status \"de8c9c2d3589a38d1cf6232c63b2ef2b7e60ba5c9efe357cfa5169dfe0448239\": rpc error: code = NotFound desc = could not find container \"de8c9c2d3589a38d1cf6232c63b2ef2b7e60ba5c9efe357cfa5169dfe0448239\": container with ID starting with de8c9c2d3589a38d1cf6232c63b2ef2b7e60ba5c9efe357cfa5169dfe0448239 not found: ID does not exist" Jan 27 14:42:31 crc kubenswrapper[4698]: I0127 14:42:31.678771 4698 scope.go:117] "RemoveContainer" containerID="0356c9c90e9c8e65b9452361aa486ffe95757bd69803077b32fe7ba0223ae1da" Jan 27 14:42:31 crc kubenswrapper[4698]: I0127 14:42:31.679984 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0356c9c90e9c8e65b9452361aa486ffe95757bd69803077b32fe7ba0223ae1da"} err="failed to get container status \"0356c9c90e9c8e65b9452361aa486ffe95757bd69803077b32fe7ba0223ae1da\": rpc error: code = NotFound desc = could not find container \"0356c9c90e9c8e65b9452361aa486ffe95757bd69803077b32fe7ba0223ae1da\": container with ID starting with 0356c9c90e9c8e65b9452361aa486ffe95757bd69803077b32fe7ba0223ae1da not found: ID does not exist" Jan 27 14:42:31 crc kubenswrapper[4698]: I0127 14:42:31.680006 4698 scope.go:117] "RemoveContainer" containerID="c98fd6e3005c979f95419686ec2143aa282fa56d7306ecaff766759e34b68b2c" Jan 27 14:42:31 crc kubenswrapper[4698]: I0127 14:42:31.680249 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c98fd6e3005c979f95419686ec2143aa282fa56d7306ecaff766759e34b68b2c"} err="failed to get container status \"c98fd6e3005c979f95419686ec2143aa282fa56d7306ecaff766759e34b68b2c\": rpc error: code = NotFound desc = could not find container \"c98fd6e3005c979f95419686ec2143aa282fa56d7306ecaff766759e34b68b2c\": container with ID starting with c98fd6e3005c979f95419686ec2143aa282fa56d7306ecaff766759e34b68b2c not found: ID does not exist" Jan 
27 14:42:31 crc kubenswrapper[4698]: I0127 14:42:31.680274 4698 scope.go:117] "RemoveContainer" containerID="7e38585fe9db52e6e7cc5c3a41e23a0bb761977a3b25fd281d86458f422e3791" Jan 27 14:42:31 crc kubenswrapper[4698]: I0127 14:42:31.680593 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7e38585fe9db52e6e7cc5c3a41e23a0bb761977a3b25fd281d86458f422e3791"} err="failed to get container status \"7e38585fe9db52e6e7cc5c3a41e23a0bb761977a3b25fd281d86458f422e3791\": rpc error: code = NotFound desc = could not find container \"7e38585fe9db52e6e7cc5c3a41e23a0bb761977a3b25fd281d86458f422e3791\": container with ID starting with 7e38585fe9db52e6e7cc5c3a41e23a0bb761977a3b25fd281d86458f422e3791 not found: ID does not exist" Jan 27 14:42:31 crc kubenswrapper[4698]: I0127 14:42:31.680612 4698 scope.go:117] "RemoveContainer" containerID="ea6d17f5861064198539f0b84b559f9d28cfa5decaa89a9059469af5a3e12e09" Jan 27 14:42:31 crc kubenswrapper[4698]: I0127 14:42:31.680839 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ea6d17f5861064198539f0b84b559f9d28cfa5decaa89a9059469af5a3e12e09"} err="failed to get container status \"ea6d17f5861064198539f0b84b559f9d28cfa5decaa89a9059469af5a3e12e09\": rpc error: code = NotFound desc = could not find container \"ea6d17f5861064198539f0b84b559f9d28cfa5decaa89a9059469af5a3e12e09\": container with ID starting with ea6d17f5861064198539f0b84b559f9d28cfa5decaa89a9059469af5a3e12e09 not found: ID does not exist" Jan 27 14:42:31 crc kubenswrapper[4698]: I0127 14:42:31.680864 4698 scope.go:117] "RemoveContainer" containerID="d81778ae40167de932debf7266f3c793d77edeace2116220c21cf89e15e4cc98" Jan 27 14:42:31 crc kubenswrapper[4698]: I0127 14:42:31.681120 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d81778ae40167de932debf7266f3c793d77edeace2116220c21cf89e15e4cc98"} err="failed to get container status \"d81778ae40167de932debf7266f3c793d77edeace2116220c21cf89e15e4cc98\": rpc error: code = NotFound desc = could not find container \"d81778ae40167de932debf7266f3c793d77edeace2116220c21cf89e15e4cc98\": container with ID starting with d81778ae40167de932debf7266f3c793d77edeace2116220c21cf89e15e4cc98 not found: ID does not exist" Jan 27 14:42:31 crc kubenswrapper[4698]: I0127 14:42:31.681141 4698 scope.go:117] "RemoveContainer" containerID="b80c023f92c9854f8c16b6b50c10dac58f968938615375019ba2875d9cd69e5b" Jan 27 14:42:31 crc kubenswrapper[4698]: I0127 14:42:31.681423 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b80c023f92c9854f8c16b6b50c10dac58f968938615375019ba2875d9cd69e5b"} err="failed to get container status \"b80c023f92c9854f8c16b6b50c10dac58f968938615375019ba2875d9cd69e5b\": rpc error: code = NotFound desc = could not find container \"b80c023f92c9854f8c16b6b50c10dac58f968938615375019ba2875d9cd69e5b\": container with ID starting with b80c023f92c9854f8c16b6b50c10dac58f968938615375019ba2875d9cd69e5b not found: ID does not exist" Jan 27 14:42:31 crc kubenswrapper[4698]: I0127 14:42:31.681447 4698 scope.go:117] "RemoveContainer" containerID="87e7499ab247dff50eee2a5e113ed36ff00c467e8e1bcdfee45e8835815d4b81" Jan 27 14:42:31 crc kubenswrapper[4698]: I0127 14:42:31.681679 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"87e7499ab247dff50eee2a5e113ed36ff00c467e8e1bcdfee45e8835815d4b81"} err="failed to get container status 
\"87e7499ab247dff50eee2a5e113ed36ff00c467e8e1bcdfee45e8835815d4b81\": rpc error: code = NotFound desc = could not find container \"87e7499ab247dff50eee2a5e113ed36ff00c467e8e1bcdfee45e8835815d4b81\": container with ID starting with 87e7499ab247dff50eee2a5e113ed36ff00c467e8e1bcdfee45e8835815d4b81 not found: ID does not exist" Jan 27 14:42:31 crc kubenswrapper[4698]: I0127 14:42:31.681705 4698 scope.go:117] "RemoveContainer" containerID="b3a80917737e61fdb510dabf332c3bd87fb858d27849c42b77703a8becb66b4c" Jan 27 14:42:31 crc kubenswrapper[4698]: I0127 14:42:31.681979 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b3a80917737e61fdb510dabf332c3bd87fb858d27849c42b77703a8becb66b4c"} err="failed to get container status \"b3a80917737e61fdb510dabf332c3bd87fb858d27849c42b77703a8becb66b4c\": rpc error: code = NotFound desc = could not find container \"b3a80917737e61fdb510dabf332c3bd87fb858d27849c42b77703a8becb66b4c\": container with ID starting with b3a80917737e61fdb510dabf332c3bd87fb858d27849c42b77703a8becb66b4c not found: ID does not exist" Jan 27 14:42:31 crc kubenswrapper[4698]: I0127 14:42:31.682000 4698 scope.go:117] "RemoveContainer" containerID="b43e70d356eae3b4cf67c3dae3a670888693ad5e68e6fc6027a2e42f9a1618d7" Jan 27 14:42:31 crc kubenswrapper[4698]: I0127 14:42:31.682352 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b43e70d356eae3b4cf67c3dae3a670888693ad5e68e6fc6027a2e42f9a1618d7"} err="failed to get container status \"b43e70d356eae3b4cf67c3dae3a670888693ad5e68e6fc6027a2e42f9a1618d7\": rpc error: code = NotFound desc = could not find container \"b43e70d356eae3b4cf67c3dae3a670888693ad5e68e6fc6027a2e42f9a1618d7\": container with ID starting with b43e70d356eae3b4cf67c3dae3a670888693ad5e68e6fc6027a2e42f9a1618d7 not found: ID does not exist" Jan 27 14:42:31 crc kubenswrapper[4698]: I0127 14:42:31.682376 4698 scope.go:117] "RemoveContainer" containerID="de8c9c2d3589a38d1cf6232c63b2ef2b7e60ba5c9efe357cfa5169dfe0448239" Jan 27 14:42:31 crc kubenswrapper[4698]: I0127 14:42:31.682825 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"de8c9c2d3589a38d1cf6232c63b2ef2b7e60ba5c9efe357cfa5169dfe0448239"} err="failed to get container status \"de8c9c2d3589a38d1cf6232c63b2ef2b7e60ba5c9efe357cfa5169dfe0448239\": rpc error: code = NotFound desc = could not find container \"de8c9c2d3589a38d1cf6232c63b2ef2b7e60ba5c9efe357cfa5169dfe0448239\": container with ID starting with de8c9c2d3589a38d1cf6232c63b2ef2b7e60ba5c9efe357cfa5169dfe0448239 not found: ID does not exist" Jan 27 14:42:31 crc kubenswrapper[4698]: I0127 14:42:31.682847 4698 scope.go:117] "RemoveContainer" containerID="0356c9c90e9c8e65b9452361aa486ffe95757bd69803077b32fe7ba0223ae1da" Jan 27 14:42:31 crc kubenswrapper[4698]: I0127 14:42:31.683337 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0356c9c90e9c8e65b9452361aa486ffe95757bd69803077b32fe7ba0223ae1da"} err="failed to get container status \"0356c9c90e9c8e65b9452361aa486ffe95757bd69803077b32fe7ba0223ae1da\": rpc error: code = NotFound desc = could not find container \"0356c9c90e9c8e65b9452361aa486ffe95757bd69803077b32fe7ba0223ae1da\": container with ID starting with 0356c9c90e9c8e65b9452361aa486ffe95757bd69803077b32fe7ba0223ae1da not found: ID does not exist" Jan 27 14:42:31 crc kubenswrapper[4698]: I0127 14:42:31.683361 4698 scope.go:117] "RemoveContainer" 
containerID="c98fd6e3005c979f95419686ec2143aa282fa56d7306ecaff766759e34b68b2c" Jan 27 14:42:31 crc kubenswrapper[4698]: I0127 14:42:31.683602 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c98fd6e3005c979f95419686ec2143aa282fa56d7306ecaff766759e34b68b2c"} err="failed to get container status \"c98fd6e3005c979f95419686ec2143aa282fa56d7306ecaff766759e34b68b2c\": rpc error: code = NotFound desc = could not find container \"c98fd6e3005c979f95419686ec2143aa282fa56d7306ecaff766759e34b68b2c\": container with ID starting with c98fd6e3005c979f95419686ec2143aa282fa56d7306ecaff766759e34b68b2c not found: ID does not exist" Jan 27 14:42:31 crc kubenswrapper[4698]: I0127 14:42:31.683625 4698 scope.go:117] "RemoveContainer" containerID="7e38585fe9db52e6e7cc5c3a41e23a0bb761977a3b25fd281d86458f422e3791" Jan 27 14:42:31 crc kubenswrapper[4698]: I0127 14:42:31.684152 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7e38585fe9db52e6e7cc5c3a41e23a0bb761977a3b25fd281d86458f422e3791"} err="failed to get container status \"7e38585fe9db52e6e7cc5c3a41e23a0bb761977a3b25fd281d86458f422e3791\": rpc error: code = NotFound desc = could not find container \"7e38585fe9db52e6e7cc5c3a41e23a0bb761977a3b25fd281d86458f422e3791\": container with ID starting with 7e38585fe9db52e6e7cc5c3a41e23a0bb761977a3b25fd281d86458f422e3791 not found: ID does not exist" Jan 27 14:42:31 crc kubenswrapper[4698]: I0127 14:42:31.684174 4698 scope.go:117] "RemoveContainer" containerID="ea6d17f5861064198539f0b84b559f9d28cfa5decaa89a9059469af5a3e12e09" Jan 27 14:42:31 crc kubenswrapper[4698]: I0127 14:42:31.684476 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ea6d17f5861064198539f0b84b559f9d28cfa5decaa89a9059469af5a3e12e09"} err="failed to get container status \"ea6d17f5861064198539f0b84b559f9d28cfa5decaa89a9059469af5a3e12e09\": rpc error: code = NotFound desc = could not find container \"ea6d17f5861064198539f0b84b559f9d28cfa5decaa89a9059469af5a3e12e09\": container with ID starting with ea6d17f5861064198539f0b84b559f9d28cfa5decaa89a9059469af5a3e12e09 not found: ID does not exist" Jan 27 14:42:31 crc kubenswrapper[4698]: I0127 14:42:31.684507 4698 scope.go:117] "RemoveContainer" containerID="d81778ae40167de932debf7266f3c793d77edeace2116220c21cf89e15e4cc98" Jan 27 14:42:31 crc kubenswrapper[4698]: I0127 14:42:31.684817 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d81778ae40167de932debf7266f3c793d77edeace2116220c21cf89e15e4cc98"} err="failed to get container status \"d81778ae40167de932debf7266f3c793d77edeace2116220c21cf89e15e4cc98\": rpc error: code = NotFound desc = could not find container \"d81778ae40167de932debf7266f3c793d77edeace2116220c21cf89e15e4cc98\": container with ID starting with d81778ae40167de932debf7266f3c793d77edeace2116220c21cf89e15e4cc98 not found: ID does not exist" Jan 27 14:42:31 crc kubenswrapper[4698]: I0127 14:42:31.684838 4698 scope.go:117] "RemoveContainer" containerID="b80c023f92c9854f8c16b6b50c10dac58f968938615375019ba2875d9cd69e5b" Jan 27 14:42:31 crc kubenswrapper[4698]: I0127 14:42:31.685070 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b80c023f92c9854f8c16b6b50c10dac58f968938615375019ba2875d9cd69e5b"} err="failed to get container status \"b80c023f92c9854f8c16b6b50c10dac58f968938615375019ba2875d9cd69e5b\": rpc error: code = NotFound desc = could not find 
container \"b80c023f92c9854f8c16b6b50c10dac58f968938615375019ba2875d9cd69e5b\": container with ID starting with b80c023f92c9854f8c16b6b50c10dac58f968938615375019ba2875d9cd69e5b not found: ID does not exist" Jan 27 14:42:31 crc kubenswrapper[4698]: I0127 14:42:31.685096 4698 scope.go:117] "RemoveContainer" containerID="87e7499ab247dff50eee2a5e113ed36ff00c467e8e1bcdfee45e8835815d4b81" Jan 27 14:42:31 crc kubenswrapper[4698]: I0127 14:42:31.685346 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"87e7499ab247dff50eee2a5e113ed36ff00c467e8e1bcdfee45e8835815d4b81"} err="failed to get container status \"87e7499ab247dff50eee2a5e113ed36ff00c467e8e1bcdfee45e8835815d4b81\": rpc error: code = NotFound desc = could not find container \"87e7499ab247dff50eee2a5e113ed36ff00c467e8e1bcdfee45e8835815d4b81\": container with ID starting with 87e7499ab247dff50eee2a5e113ed36ff00c467e8e1bcdfee45e8835815d4b81 not found: ID does not exist" Jan 27 14:42:31 crc kubenswrapper[4698]: I0127 14:42:31.685367 4698 scope.go:117] "RemoveContainer" containerID="b3a80917737e61fdb510dabf332c3bd87fb858d27849c42b77703a8becb66b4c" Jan 27 14:42:31 crc kubenswrapper[4698]: I0127 14:42:31.685596 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b3a80917737e61fdb510dabf332c3bd87fb858d27849c42b77703a8becb66b4c"} err="failed to get container status \"b3a80917737e61fdb510dabf332c3bd87fb858d27849c42b77703a8becb66b4c\": rpc error: code = NotFound desc = could not find container \"b3a80917737e61fdb510dabf332c3bd87fb858d27849c42b77703a8becb66b4c\": container with ID starting with b3a80917737e61fdb510dabf332c3bd87fb858d27849c42b77703a8becb66b4c not found: ID does not exist" Jan 27 14:42:31 crc kubenswrapper[4698]: I0127 14:42:31.685617 4698 scope.go:117] "RemoveContainer" containerID="b43e70d356eae3b4cf67c3dae3a670888693ad5e68e6fc6027a2e42f9a1618d7" Jan 27 14:42:31 crc kubenswrapper[4698]: I0127 14:42:31.685832 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b43e70d356eae3b4cf67c3dae3a670888693ad5e68e6fc6027a2e42f9a1618d7"} err="failed to get container status \"b43e70d356eae3b4cf67c3dae3a670888693ad5e68e6fc6027a2e42f9a1618d7\": rpc error: code = NotFound desc = could not find container \"b43e70d356eae3b4cf67c3dae3a670888693ad5e68e6fc6027a2e42f9a1618d7\": container with ID starting with b43e70d356eae3b4cf67c3dae3a670888693ad5e68e6fc6027a2e42f9a1618d7 not found: ID does not exist" Jan 27 14:42:32 crc kubenswrapper[4698]: I0127 14:42:32.477356 4698 generic.go:334] "Generic (PLEG): container finished" podID="863708bd-6c5a-45cb-9770-a3c78eef7c2d" containerID="121fe8cd52dbb5b0309d06955695261bb35ba1bc10efec6f70e5242ab9a0ace6" exitCode=0 Jan 27 14:42:32 crc kubenswrapper[4698]: I0127 14:42:32.477405 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-t6z2j" event={"ID":"863708bd-6c5a-45cb-9770-a3c78eef7c2d","Type":"ContainerDied","Data":"121fe8cd52dbb5b0309d06955695261bb35ba1bc10efec6f70e5242ab9a0ace6"} Jan 27 14:42:32 crc kubenswrapper[4698]: I0127 14:42:32.477460 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-t6z2j" event={"ID":"863708bd-6c5a-45cb-9770-a3c78eef7c2d","Type":"ContainerStarted","Data":"b79454d59972a224f394600f5d86d76489df13ae04e1526d509b199dab2e4793"} Jan 27 14:42:32 crc kubenswrapper[4698]: I0127 14:42:32.481861 4698 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-multus_multus-g2kkn_4e135f0c-0c36-44f4-afeb-06994affb352/kube-multus/2.log" Jan 27 14:42:32 crc kubenswrapper[4698]: I0127 14:42:32.482249 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-g2kkn_4e135f0c-0c36-44f4-afeb-06994affb352/kube-multus/1.log" Jan 27 14:42:32 crc kubenswrapper[4698]: I0127 14:42:32.482304 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-g2kkn" event={"ID":"4e135f0c-0c36-44f4-afeb-06994affb352","Type":"ContainerStarted","Data":"88afe2592bfa4715bd3ca423c472b5bdc4af24590998b8b88064302ca42e6793"} Jan 27 14:42:33 crc kubenswrapper[4698]: I0127 14:42:33.001864 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c59a9d01-79ce-42d9-a41d-39d7d73cb03e" path="/var/lib/kubelet/pods/c59a9d01-79ce-42d9-a41d-39d7d73cb03e/volumes" Jan 27 14:42:33 crc kubenswrapper[4698]: I0127 14:42:33.491609 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-t6z2j" event={"ID":"863708bd-6c5a-45cb-9770-a3c78eef7c2d","Type":"ContainerStarted","Data":"6bc4eef4e7f98296dc04bc0c1a1e5409597c503d99d4d265d12b128db51fe0f4"} Jan 27 14:42:33 crc kubenswrapper[4698]: I0127 14:42:33.491924 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-t6z2j" event={"ID":"863708bd-6c5a-45cb-9770-a3c78eef7c2d","Type":"ContainerStarted","Data":"bed54f9095aef1d21e55b39e4f73af6f8c0d76b5a7edd24745592b97bfb575b8"} Jan 27 14:42:33 crc kubenswrapper[4698]: I0127 14:42:33.491946 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-t6z2j" event={"ID":"863708bd-6c5a-45cb-9770-a3c78eef7c2d","Type":"ContainerStarted","Data":"3bb5836e03cacd8895ddabf601fe528e5c80a82f2d5945868927b253c3a79c33"} Jan 27 14:42:33 crc kubenswrapper[4698]: I0127 14:42:33.491958 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-t6z2j" event={"ID":"863708bd-6c5a-45cb-9770-a3c78eef7c2d","Type":"ContainerStarted","Data":"3b9ad733cd3ca47411fb37845c73a102cdeaad22a0f6df75837a0d31cd8204ad"} Jan 27 14:42:33 crc kubenswrapper[4698]: I0127 14:42:33.491970 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-t6z2j" event={"ID":"863708bd-6c5a-45cb-9770-a3c78eef7c2d","Type":"ContainerStarted","Data":"978eb396dcc3119e88c14bbb5fcaf2b4640308855caddbe075a3db571e12cd09"} Jan 27 14:42:33 crc kubenswrapper[4698]: I0127 14:42:33.492001 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-t6z2j" event={"ID":"863708bd-6c5a-45cb-9770-a3c78eef7c2d","Type":"ContainerStarted","Data":"a36aeec9c6f3534b15c20e9fff27290d0fe3721a1a7e0af7f48f5ef01aa6bde6"} Jan 27 14:42:34 crc kubenswrapper[4698]: I0127 14:42:34.473840 4698 scope.go:117] "RemoveContainer" containerID="89ea9fe8283890c94741924f7a0d219ad6a55833e836517077a72a10f87427d9" Jan 27 14:42:35 crc kubenswrapper[4698]: I0127 14:42:35.502823 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-g2kkn_4e135f0c-0c36-44f4-afeb-06994affb352/kube-multus/2.log" Jan 27 14:42:35 crc kubenswrapper[4698]: I0127 14:42:35.506587 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-t6z2j" event={"ID":"863708bd-6c5a-45cb-9770-a3c78eef7c2d","Type":"ContainerStarted","Data":"de70ef2e5a2768116b4032896344b9b371d21f8541c96c106096b1aee0dbee31"} Jan 27 14:42:38 crc kubenswrapper[4698]: I0127 
14:42:38.525914 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-t6z2j" event={"ID":"863708bd-6c5a-45cb-9770-a3c78eef7c2d","Type":"ContainerStarted","Data":"1a4f35d2a5a3906bb1f6774980513398a35edded26ebead725a3c03dcdf9d7e5"} Jan 27 14:42:38 crc kubenswrapper[4698]: I0127 14:42:38.526426 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-t6z2j" Jan 27 14:42:38 crc kubenswrapper[4698]: I0127 14:42:38.526440 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-t6z2j" Jan 27 14:42:38 crc kubenswrapper[4698]: I0127 14:42:38.526449 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-t6z2j" Jan 27 14:42:38 crc kubenswrapper[4698]: I0127 14:42:38.551901 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-t6z2j" Jan 27 14:42:38 crc kubenswrapper[4698]: I0127 14:42:38.552124 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-t6z2j" Jan 27 14:42:38 crc kubenswrapper[4698]: I0127 14:42:38.560553 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-t6z2j" podStartSLOduration=7.560538342 podStartE2EDuration="7.560538342s" podCreationTimestamp="2026-01-27 14:42:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:42:38.555994483 +0000 UTC m=+814.232771978" watchObservedRunningTime="2026-01-27 14:42:38.560538342 +0000 UTC m=+814.237315807" Jan 27 14:43:01 crc kubenswrapper[4698]: I0127 14:43:01.560804 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-t6z2j" Jan 27 14:43:01 crc kubenswrapper[4698]: I0127 14:43:01.794322 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08rjdhc"] Jan 27 14:43:01 crc kubenswrapper[4698]: I0127 14:43:01.795344 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08rjdhc" Jan 27 14:43:01 crc kubenswrapper[4698]: I0127 14:43:01.797211 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Jan 27 14:43:01 crc kubenswrapper[4698]: I0127 14:43:01.808943 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08rjdhc"] Jan 27 14:43:01 crc kubenswrapper[4698]: I0127 14:43:01.920781 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q8xjd\" (UniqueName: \"kubernetes.io/projected/a902db54-8ee1-4cf9-a027-52e406f6c05b-kube-api-access-q8xjd\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08rjdhc\" (UID: \"a902db54-8ee1-4cf9-a027-52e406f6c05b\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08rjdhc" Jan 27 14:43:01 crc kubenswrapper[4698]: I0127 14:43:01.920937 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a902db54-8ee1-4cf9-a027-52e406f6c05b-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08rjdhc\" (UID: \"a902db54-8ee1-4cf9-a027-52e406f6c05b\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08rjdhc" Jan 27 14:43:01 crc kubenswrapper[4698]: I0127 14:43:01.920969 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a902db54-8ee1-4cf9-a027-52e406f6c05b-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08rjdhc\" (UID: \"a902db54-8ee1-4cf9-a027-52e406f6c05b\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08rjdhc" Jan 27 14:43:02 crc kubenswrapper[4698]: I0127 14:43:02.022583 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a902db54-8ee1-4cf9-a027-52e406f6c05b-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08rjdhc\" (UID: \"a902db54-8ee1-4cf9-a027-52e406f6c05b\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08rjdhc" Jan 27 14:43:02 crc kubenswrapper[4698]: I0127 14:43:02.022659 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a902db54-8ee1-4cf9-a027-52e406f6c05b-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08rjdhc\" (UID: \"a902db54-8ee1-4cf9-a027-52e406f6c05b\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08rjdhc" Jan 27 14:43:02 crc kubenswrapper[4698]: I0127 14:43:02.022733 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q8xjd\" (UniqueName: \"kubernetes.io/projected/a902db54-8ee1-4cf9-a027-52e406f6c05b-kube-api-access-q8xjd\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08rjdhc\" (UID: \"a902db54-8ee1-4cf9-a027-52e406f6c05b\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08rjdhc" Jan 27 14:43:02 crc kubenswrapper[4698]: I0127 14:43:02.023175 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: 
\"kubernetes.io/empty-dir/a902db54-8ee1-4cf9-a027-52e406f6c05b-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08rjdhc\" (UID: \"a902db54-8ee1-4cf9-a027-52e406f6c05b\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08rjdhc" Jan 27 14:43:02 crc kubenswrapper[4698]: I0127 14:43:02.023216 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a902db54-8ee1-4cf9-a027-52e406f6c05b-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08rjdhc\" (UID: \"a902db54-8ee1-4cf9-a027-52e406f6c05b\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08rjdhc" Jan 27 14:43:02 crc kubenswrapper[4698]: I0127 14:43:02.043392 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q8xjd\" (UniqueName: \"kubernetes.io/projected/a902db54-8ee1-4cf9-a027-52e406f6c05b-kube-api-access-q8xjd\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08rjdhc\" (UID: \"a902db54-8ee1-4cf9-a027-52e406f6c05b\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08rjdhc" Jan 27 14:43:02 crc kubenswrapper[4698]: I0127 14:43:02.113367 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08rjdhc" Jan 27 14:43:02 crc kubenswrapper[4698]: I0127 14:43:02.685063 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08rjdhc"] Jan 27 14:43:03 crc kubenswrapper[4698]: I0127 14:43:03.674478 4698 generic.go:334] "Generic (PLEG): container finished" podID="a902db54-8ee1-4cf9-a027-52e406f6c05b" containerID="04600aab272ed48b634adaba641e824cf0399020fcc608897a4bf44b83ef8eba" exitCode=0 Jan 27 14:43:03 crc kubenswrapper[4698]: I0127 14:43:03.674533 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08rjdhc" event={"ID":"a902db54-8ee1-4cf9-a027-52e406f6c05b","Type":"ContainerDied","Data":"04600aab272ed48b634adaba641e824cf0399020fcc608897a4bf44b83ef8eba"} Jan 27 14:43:03 crc kubenswrapper[4698]: I0127 14:43:03.674569 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08rjdhc" event={"ID":"a902db54-8ee1-4cf9-a027-52e406f6c05b","Type":"ContainerStarted","Data":"c10fe7997b4befaa536f1d84e3a7f3930d929e85e12377ffcfc9c227f60e7d71"} Jan 27 14:43:04 crc kubenswrapper[4698]: I0127 14:43:04.140134 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-k8d5g"] Jan 27 14:43:04 crc kubenswrapper[4698]: I0127 14:43:04.142015 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-k8d5g" Jan 27 14:43:04 crc kubenswrapper[4698]: I0127 14:43:04.154210 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-k8d5g"] Jan 27 14:43:04 crc kubenswrapper[4698]: I0127 14:43:04.250748 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e2d3befe-7722-4089-82c0-17f75d62a419-catalog-content\") pod \"redhat-operators-k8d5g\" (UID: \"e2d3befe-7722-4089-82c0-17f75d62a419\") " pod="openshift-marketplace/redhat-operators-k8d5g" Jan 27 14:43:04 crc kubenswrapper[4698]: I0127 14:43:04.250805 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e2d3befe-7722-4089-82c0-17f75d62a419-utilities\") pod \"redhat-operators-k8d5g\" (UID: \"e2d3befe-7722-4089-82c0-17f75d62a419\") " pod="openshift-marketplace/redhat-operators-k8d5g" Jan 27 14:43:04 crc kubenswrapper[4698]: I0127 14:43:04.250975 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6xwpx\" (UniqueName: \"kubernetes.io/projected/e2d3befe-7722-4089-82c0-17f75d62a419-kube-api-access-6xwpx\") pod \"redhat-operators-k8d5g\" (UID: \"e2d3befe-7722-4089-82c0-17f75d62a419\") " pod="openshift-marketplace/redhat-operators-k8d5g" Jan 27 14:43:04 crc kubenswrapper[4698]: I0127 14:43:04.352170 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e2d3befe-7722-4089-82c0-17f75d62a419-catalog-content\") pod \"redhat-operators-k8d5g\" (UID: \"e2d3befe-7722-4089-82c0-17f75d62a419\") " pod="openshift-marketplace/redhat-operators-k8d5g" Jan 27 14:43:04 crc kubenswrapper[4698]: I0127 14:43:04.352875 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e2d3befe-7722-4089-82c0-17f75d62a419-utilities\") pod \"redhat-operators-k8d5g\" (UID: \"e2d3befe-7722-4089-82c0-17f75d62a419\") " pod="openshift-marketplace/redhat-operators-k8d5g" Jan 27 14:43:04 crc kubenswrapper[4698]: I0127 14:43:04.352805 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e2d3befe-7722-4089-82c0-17f75d62a419-catalog-content\") pod \"redhat-operators-k8d5g\" (UID: \"e2d3befe-7722-4089-82c0-17f75d62a419\") " pod="openshift-marketplace/redhat-operators-k8d5g" Jan 27 14:43:04 crc kubenswrapper[4698]: I0127 14:43:04.352995 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6xwpx\" (UniqueName: \"kubernetes.io/projected/e2d3befe-7722-4089-82c0-17f75d62a419-kube-api-access-6xwpx\") pod \"redhat-operators-k8d5g\" (UID: \"e2d3befe-7722-4089-82c0-17f75d62a419\") " pod="openshift-marketplace/redhat-operators-k8d5g" Jan 27 14:43:04 crc kubenswrapper[4698]: I0127 14:43:04.353269 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e2d3befe-7722-4089-82c0-17f75d62a419-utilities\") pod \"redhat-operators-k8d5g\" (UID: \"e2d3befe-7722-4089-82c0-17f75d62a419\") " pod="openshift-marketplace/redhat-operators-k8d5g" Jan 27 14:43:04 crc kubenswrapper[4698]: I0127 14:43:04.374724 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-6xwpx\" (UniqueName: \"kubernetes.io/projected/e2d3befe-7722-4089-82c0-17f75d62a419-kube-api-access-6xwpx\") pod \"redhat-operators-k8d5g\" (UID: \"e2d3befe-7722-4089-82c0-17f75d62a419\") " pod="openshift-marketplace/redhat-operators-k8d5g" Jan 27 14:43:04 crc kubenswrapper[4698]: I0127 14:43:04.466563 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-k8d5g" Jan 27 14:43:04 crc kubenswrapper[4698]: I0127 14:43:04.663214 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-k8d5g"] Jan 27 14:43:04 crc kubenswrapper[4698]: W0127 14:43:04.674902 4698 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode2d3befe_7722_4089_82c0_17f75d62a419.slice/crio-93d4f7badd3a4ea764aeb1bb78763adfab15bfa05987b1a63047eee307f7c25e WatchSource:0}: Error finding container 93d4f7badd3a4ea764aeb1bb78763adfab15bfa05987b1a63047eee307f7c25e: Status 404 returned error can't find the container with id 93d4f7badd3a4ea764aeb1bb78763adfab15bfa05987b1a63047eee307f7c25e Jan 27 14:43:04 crc kubenswrapper[4698]: I0127 14:43:04.683152 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-k8d5g" event={"ID":"e2d3befe-7722-4089-82c0-17f75d62a419","Type":"ContainerStarted","Data":"93d4f7badd3a4ea764aeb1bb78763adfab15bfa05987b1a63047eee307f7c25e"} Jan 27 14:43:05 crc kubenswrapper[4698]: I0127 14:43:05.690278 4698 generic.go:334] "Generic (PLEG): container finished" podID="e2d3befe-7722-4089-82c0-17f75d62a419" containerID="a6ef77124cc9594a83908fb07d09232f764aca02799ad6143793b7b7fd1f144e" exitCode=0 Jan 27 14:43:05 crc kubenswrapper[4698]: I0127 14:43:05.690368 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-k8d5g" event={"ID":"e2d3befe-7722-4089-82c0-17f75d62a419","Type":"ContainerDied","Data":"a6ef77124cc9594a83908fb07d09232f764aca02799ad6143793b7b7fd1f144e"} Jan 27 14:43:05 crc kubenswrapper[4698]: I0127 14:43:05.694708 4698 generic.go:334] "Generic (PLEG): container finished" podID="a902db54-8ee1-4cf9-a027-52e406f6c05b" containerID="3a746f0a93019bd9b581ff524e373abfb2f16d545076261c656300b68ffd85f4" exitCode=0 Jan 27 14:43:05 crc kubenswrapper[4698]: I0127 14:43:05.694739 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08rjdhc" event={"ID":"a902db54-8ee1-4cf9-a027-52e406f6c05b","Type":"ContainerDied","Data":"3a746f0a93019bd9b581ff524e373abfb2f16d545076261c656300b68ffd85f4"} Jan 27 14:43:06 crc kubenswrapper[4698]: I0127 14:43:06.702072 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-k8d5g" event={"ID":"e2d3befe-7722-4089-82c0-17f75d62a419","Type":"ContainerStarted","Data":"8118501d7f2cefaedcba825fd1b5dd7f0a2343f82e4ec8a9db061c2d66bee690"} Jan 27 14:43:06 crc kubenswrapper[4698]: I0127 14:43:06.704565 4698 generic.go:334] "Generic (PLEG): container finished" podID="a902db54-8ee1-4cf9-a027-52e406f6c05b" containerID="5302a79ce894e4dcd59ac77d46c79cc47173fd1ce395329f902daba5408ad933" exitCode=0 Jan 27 14:43:06 crc kubenswrapper[4698]: I0127 14:43:06.704622 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08rjdhc" 
event={"ID":"a902db54-8ee1-4cf9-a027-52e406f6c05b","Type":"ContainerDied","Data":"5302a79ce894e4dcd59ac77d46c79cc47173fd1ce395329f902daba5408ad933"} Jan 27 14:43:07 crc kubenswrapper[4698]: I0127 14:43:07.712158 4698 generic.go:334] "Generic (PLEG): container finished" podID="e2d3befe-7722-4089-82c0-17f75d62a419" containerID="8118501d7f2cefaedcba825fd1b5dd7f0a2343f82e4ec8a9db061c2d66bee690" exitCode=0 Jan 27 14:43:07 crc kubenswrapper[4698]: I0127 14:43:07.712257 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-k8d5g" event={"ID":"e2d3befe-7722-4089-82c0-17f75d62a419","Type":"ContainerDied","Data":"8118501d7f2cefaedcba825fd1b5dd7f0a2343f82e4ec8a9db061c2d66bee690"} Jan 27 14:43:07 crc kubenswrapper[4698]: I0127 14:43:07.927698 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08rjdhc" Jan 27 14:43:08 crc kubenswrapper[4698]: I0127 14:43:08.000797 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q8xjd\" (UniqueName: \"kubernetes.io/projected/a902db54-8ee1-4cf9-a027-52e406f6c05b-kube-api-access-q8xjd\") pod \"a902db54-8ee1-4cf9-a027-52e406f6c05b\" (UID: \"a902db54-8ee1-4cf9-a027-52e406f6c05b\") " Jan 27 14:43:08 crc kubenswrapper[4698]: I0127 14:43:08.000863 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a902db54-8ee1-4cf9-a027-52e406f6c05b-bundle\") pod \"a902db54-8ee1-4cf9-a027-52e406f6c05b\" (UID: \"a902db54-8ee1-4cf9-a027-52e406f6c05b\") " Jan 27 14:43:08 crc kubenswrapper[4698]: I0127 14:43:08.000971 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a902db54-8ee1-4cf9-a027-52e406f6c05b-util\") pod \"a902db54-8ee1-4cf9-a027-52e406f6c05b\" (UID: \"a902db54-8ee1-4cf9-a027-52e406f6c05b\") " Jan 27 14:43:08 crc kubenswrapper[4698]: I0127 14:43:08.006911 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a902db54-8ee1-4cf9-a027-52e406f6c05b-bundle" (OuterVolumeSpecName: "bundle") pod "a902db54-8ee1-4cf9-a027-52e406f6c05b" (UID: "a902db54-8ee1-4cf9-a027-52e406f6c05b"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 14:43:08 crc kubenswrapper[4698]: I0127 14:43:08.011980 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a902db54-8ee1-4cf9-a027-52e406f6c05b-kube-api-access-q8xjd" (OuterVolumeSpecName: "kube-api-access-q8xjd") pod "a902db54-8ee1-4cf9-a027-52e406f6c05b" (UID: "a902db54-8ee1-4cf9-a027-52e406f6c05b"). InnerVolumeSpecName "kube-api-access-q8xjd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:43:08 crc kubenswrapper[4698]: I0127 14:43:08.104412 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q8xjd\" (UniqueName: \"kubernetes.io/projected/a902db54-8ee1-4cf9-a027-52e406f6c05b-kube-api-access-q8xjd\") on node \"crc\" DevicePath \"\"" Jan 27 14:43:08 crc kubenswrapper[4698]: I0127 14:43:08.104462 4698 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a902db54-8ee1-4cf9-a027-52e406f6c05b-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 14:43:08 crc kubenswrapper[4698]: I0127 14:43:08.432728 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a902db54-8ee1-4cf9-a027-52e406f6c05b-util" (OuterVolumeSpecName: "util") pod "a902db54-8ee1-4cf9-a027-52e406f6c05b" (UID: "a902db54-8ee1-4cf9-a027-52e406f6c05b"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 14:43:08 crc kubenswrapper[4698]: I0127 14:43:08.510181 4698 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a902db54-8ee1-4cf9-a027-52e406f6c05b-util\") on node \"crc\" DevicePath \"\"" Jan 27 14:43:08 crc kubenswrapper[4698]: I0127 14:43:08.719592 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08rjdhc" event={"ID":"a902db54-8ee1-4cf9-a027-52e406f6c05b","Type":"ContainerDied","Data":"c10fe7997b4befaa536f1d84e3a7f3930d929e85e12377ffcfc9c227f60e7d71"} Jan 27 14:43:08 crc kubenswrapper[4698]: I0127 14:43:08.719656 4698 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c10fe7997b4befaa536f1d84e3a7f3930d929e85e12377ffcfc9c227f60e7d71" Jan 27 14:43:08 crc kubenswrapper[4698]: I0127 14:43:08.719728 4698 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08rjdhc" Jan 27 14:43:08 crc kubenswrapper[4698]: I0127 14:43:08.722705 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-k8d5g" event={"ID":"e2d3befe-7722-4089-82c0-17f75d62a419","Type":"ContainerStarted","Data":"d12a32546e67e87b7d4d8ca457f8fbd397481e04cfaddd5edc43aa83d21aae62"} Jan 27 14:43:08 crc kubenswrapper[4698]: I0127 14:43:08.745182 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-k8d5g" podStartSLOduration=1.8529259059999998 podStartE2EDuration="4.745165655s" podCreationTimestamp="2026-01-27 14:43:04 +0000 UTC" firstStartedPulling="2026-01-27 14:43:05.692460379 +0000 UTC m=+841.369237844" lastFinishedPulling="2026-01-27 14:43:08.584700128 +0000 UTC m=+844.261477593" observedRunningTime="2026-01-27 14:43:08.74228838 +0000 UTC m=+844.419065865" watchObservedRunningTime="2026-01-27 14:43:08.745165655 +0000 UTC m=+844.421943110" Jan 27 14:43:14 crc kubenswrapper[4698]: I0127 14:43:14.467554 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-k8d5g" Jan 27 14:43:14 crc kubenswrapper[4698]: I0127 14:43:14.468190 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-k8d5g" Jan 27 14:43:15 crc kubenswrapper[4698]: I0127 14:43:15.552580 4698 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-k8d5g" podUID="e2d3befe-7722-4089-82c0-17f75d62a419" containerName="registry-server" probeResult="failure" output=< Jan 27 14:43:15 crc kubenswrapper[4698]: timeout: failed to connect service ":50051" within 1s Jan 27 14:43:15 crc kubenswrapper[4698]: > Jan 27 14:43:19 crc kubenswrapper[4698]: I0127 14:43:19.484987 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-7xz26"] Jan 27 14:43:19 crc kubenswrapper[4698]: E0127 14:43:19.485494 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a902db54-8ee1-4cf9-a027-52e406f6c05b" containerName="util" Jan 27 14:43:19 crc kubenswrapper[4698]: I0127 14:43:19.485510 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="a902db54-8ee1-4cf9-a027-52e406f6c05b" containerName="util" Jan 27 14:43:19 crc kubenswrapper[4698]: E0127 14:43:19.485527 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a902db54-8ee1-4cf9-a027-52e406f6c05b" containerName="pull" Jan 27 14:43:19 crc kubenswrapper[4698]: I0127 14:43:19.485535 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="a902db54-8ee1-4cf9-a027-52e406f6c05b" containerName="pull" Jan 27 14:43:19 crc kubenswrapper[4698]: E0127 14:43:19.485548 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a902db54-8ee1-4cf9-a027-52e406f6c05b" containerName="extract" Jan 27 14:43:19 crc kubenswrapper[4698]: I0127 14:43:19.485557 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="a902db54-8ee1-4cf9-a027-52e406f6c05b" containerName="extract" Jan 27 14:43:19 crc kubenswrapper[4698]: I0127 14:43:19.485689 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="a902db54-8ee1-4cf9-a027-52e406f6c05b" containerName="extract" Jan 27 14:43:19 crc kubenswrapper[4698]: I0127 14:43:19.486118 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-7xz26" Jan 27 14:43:19 crc kubenswrapper[4698]: I0127 14:43:19.491082 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-dockercfg-2qj4b" Jan 27 14:43:19 crc kubenswrapper[4698]: I0127 14:43:19.491267 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators"/"kube-root-ca.crt" Jan 27 14:43:19 crc kubenswrapper[4698]: I0127 14:43:19.492394 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators"/"openshift-service-ca.crt" Jan 27 14:43:19 crc kubenswrapper[4698]: I0127 14:43:19.500947 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-7xz26"] Jan 27 14:43:19 crc kubenswrapper[4698]: I0127 14:43:19.538614 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kgrgl\" (UniqueName: \"kubernetes.io/projected/49495d53-72e5-4381-bb5f-efb39b15a87a-kube-api-access-kgrgl\") pod \"obo-prometheus-operator-68bc856cb9-7xz26\" (UID: \"49495d53-72e5-4381-bb5f-efb39b15a87a\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-7xz26" Jan 27 14:43:19 crc kubenswrapper[4698]: I0127 14:43:19.613004 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-698f689c98-d5twk"] Jan 27 14:43:19 crc kubenswrapper[4698]: I0127 14:43:19.613701 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-698f689c98-d5twk" Jan 27 14:43:19 crc kubenswrapper[4698]: I0127 14:43:19.618728 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-698f689c98-qvbmr"] Jan 27 14:43:19 crc kubenswrapper[4698]: I0127 14:43:19.619985 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-698f689c98-qvbmr" Jan 27 14:43:19 crc kubenswrapper[4698]: I0127 14:43:19.622819 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-admission-webhook-dockercfg-kmprb" Jan 27 14:43:19 crc kubenswrapper[4698]: I0127 14:43:19.623869 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-admission-webhook-service-cert" Jan 27 14:43:19 crc kubenswrapper[4698]: I0127 14:43:19.633374 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-698f689c98-d5twk"] Jan 27 14:43:19 crc kubenswrapper[4698]: I0127 14:43:19.637029 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-698f689c98-qvbmr"] Jan 27 14:43:19 crc kubenswrapper[4698]: I0127 14:43:19.639526 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kgrgl\" (UniqueName: \"kubernetes.io/projected/49495d53-72e5-4381-bb5f-efb39b15a87a-kube-api-access-kgrgl\") pod \"obo-prometheus-operator-68bc856cb9-7xz26\" (UID: \"49495d53-72e5-4381-bb5f-efb39b15a87a\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-7xz26" Jan 27 14:43:19 crc kubenswrapper[4698]: I0127 14:43:19.674620 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kgrgl\" (UniqueName: \"kubernetes.io/projected/49495d53-72e5-4381-bb5f-efb39b15a87a-kube-api-access-kgrgl\") pod \"obo-prometheus-operator-68bc856cb9-7xz26\" (UID: \"49495d53-72e5-4381-bb5f-efb39b15a87a\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-7xz26" Jan 27 14:43:19 crc kubenswrapper[4698]: I0127 14:43:19.741102 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/44a8167e-8dc5-4360-a7b4-623198852230-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-698f689c98-d5twk\" (UID: \"44a8167e-8dc5-4360-a7b4-623198852230\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-698f689c98-d5twk" Jan 27 14:43:19 crc kubenswrapper[4698]: I0127 14:43:19.741241 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/44a8167e-8dc5-4360-a7b4-623198852230-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-698f689c98-d5twk\" (UID: \"44a8167e-8dc5-4360-a7b4-623198852230\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-698f689c98-d5twk" Jan 27 14:43:19 crc kubenswrapper[4698]: I0127 14:43:19.741289 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/4e0708a8-b876-48dd-8a58-35ba86739ddf-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-698f689c98-qvbmr\" (UID: \"4e0708a8-b876-48dd-8a58-35ba86739ddf\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-698f689c98-qvbmr" Jan 27 14:43:19 crc kubenswrapper[4698]: I0127 14:43:19.741321 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/4e0708a8-b876-48dd-8a58-35ba86739ddf-apiservice-cert\") pod 
\"obo-prometheus-operator-admission-webhook-698f689c98-qvbmr\" (UID: \"4e0708a8-b876-48dd-8a58-35ba86739ddf\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-698f689c98-qvbmr" Jan 27 14:43:19 crc kubenswrapper[4698]: I0127 14:43:19.803505 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-7xz26" Jan 27 14:43:19 crc kubenswrapper[4698]: I0127 14:43:19.825954 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-sxnvw"] Jan 27 14:43:19 crc kubenswrapper[4698]: I0127 14:43:19.826848 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-sxnvw" Jan 27 14:43:19 crc kubenswrapper[4698]: I0127 14:43:19.832746 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-operator-sa-dockercfg-976td" Jan 27 14:43:19 crc kubenswrapper[4698]: I0127 14:43:19.832997 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-operator-tls" Jan 27 14:43:19 crc kubenswrapper[4698]: I0127 14:43:19.837015 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-sxnvw"] Jan 27 14:43:19 crc kubenswrapper[4698]: I0127 14:43:19.843230 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/4e0708a8-b876-48dd-8a58-35ba86739ddf-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-698f689c98-qvbmr\" (UID: \"4e0708a8-b876-48dd-8a58-35ba86739ddf\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-698f689c98-qvbmr" Jan 27 14:43:19 crc kubenswrapper[4698]: I0127 14:43:19.843307 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/44a8167e-8dc5-4360-a7b4-623198852230-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-698f689c98-d5twk\" (UID: \"44a8167e-8dc5-4360-a7b4-623198852230\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-698f689c98-d5twk" Jan 27 14:43:19 crc kubenswrapper[4698]: I0127 14:43:19.843376 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/44a8167e-8dc5-4360-a7b4-623198852230-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-698f689c98-d5twk\" (UID: \"44a8167e-8dc5-4360-a7b4-623198852230\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-698f689c98-d5twk" Jan 27 14:43:19 crc kubenswrapper[4698]: I0127 14:43:19.843407 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/4e0708a8-b876-48dd-8a58-35ba86739ddf-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-698f689c98-qvbmr\" (UID: \"4e0708a8-b876-48dd-8a58-35ba86739ddf\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-698f689c98-qvbmr" Jan 27 14:43:19 crc kubenswrapper[4698]: I0127 14:43:19.850916 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/44a8167e-8dc5-4360-a7b4-623198852230-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-698f689c98-d5twk\" (UID: \"44a8167e-8dc5-4360-a7b4-623198852230\") " 
pod="openshift-operators/obo-prometheus-operator-admission-webhook-698f689c98-d5twk" Jan 27 14:43:19 crc kubenswrapper[4698]: I0127 14:43:19.853155 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/4e0708a8-b876-48dd-8a58-35ba86739ddf-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-698f689c98-qvbmr\" (UID: \"4e0708a8-b876-48dd-8a58-35ba86739ddf\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-698f689c98-qvbmr" Jan 27 14:43:19 crc kubenswrapper[4698]: I0127 14:43:19.853806 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/4e0708a8-b876-48dd-8a58-35ba86739ddf-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-698f689c98-qvbmr\" (UID: \"4e0708a8-b876-48dd-8a58-35ba86739ddf\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-698f689c98-qvbmr" Jan 27 14:43:19 crc kubenswrapper[4698]: I0127 14:43:19.855006 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/44a8167e-8dc5-4360-a7b4-623198852230-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-698f689c98-d5twk\" (UID: \"44a8167e-8dc5-4360-a7b4-623198852230\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-698f689c98-d5twk" Jan 27 14:43:19 crc kubenswrapper[4698]: I0127 14:43:19.929527 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-698f689c98-d5twk" Jan 27 14:43:19 crc kubenswrapper[4698]: I0127 14:43:19.944486 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/7b638465-7c66-495b-9dfd-1854fea80351-observability-operator-tls\") pod \"observability-operator-59bdc8b94-sxnvw\" (UID: \"7b638465-7c66-495b-9dfd-1854fea80351\") " pod="openshift-operators/observability-operator-59bdc8b94-sxnvw" Jan 27 14:43:19 crc kubenswrapper[4698]: I0127 14:43:19.944576 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sw54d\" (UniqueName: \"kubernetes.io/projected/7b638465-7c66-495b-9dfd-1854fea80351-kube-api-access-sw54d\") pod \"observability-operator-59bdc8b94-sxnvw\" (UID: \"7b638465-7c66-495b-9dfd-1854fea80351\") " pod="openshift-operators/observability-operator-59bdc8b94-sxnvw" Jan 27 14:43:19 crc kubenswrapper[4698]: I0127 14:43:19.946320 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-698f689c98-qvbmr" Jan 27 14:43:20 crc kubenswrapper[4698]: I0127 14:43:20.046744 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/7b638465-7c66-495b-9dfd-1854fea80351-observability-operator-tls\") pod \"observability-operator-59bdc8b94-sxnvw\" (UID: \"7b638465-7c66-495b-9dfd-1854fea80351\") " pod="openshift-operators/observability-operator-59bdc8b94-sxnvw" Jan 27 14:43:20 crc kubenswrapper[4698]: I0127 14:43:20.046843 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sw54d\" (UniqueName: \"kubernetes.io/projected/7b638465-7c66-495b-9dfd-1854fea80351-kube-api-access-sw54d\") pod \"observability-operator-59bdc8b94-sxnvw\" (UID: \"7b638465-7c66-495b-9dfd-1854fea80351\") " pod="openshift-operators/observability-operator-59bdc8b94-sxnvw" Jan 27 14:43:20 crc kubenswrapper[4698]: I0127 14:43:20.054320 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/7b638465-7c66-495b-9dfd-1854fea80351-observability-operator-tls\") pod \"observability-operator-59bdc8b94-sxnvw\" (UID: \"7b638465-7c66-495b-9dfd-1854fea80351\") " pod="openshift-operators/observability-operator-59bdc8b94-sxnvw" Jan 27 14:43:20 crc kubenswrapper[4698]: I0127 14:43:20.073235 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/perses-operator-5bf474d74f-cl5tl"] Jan 27 14:43:20 crc kubenswrapper[4698]: I0127 14:43:20.073935 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-cl5tl" Jan 27 14:43:20 crc kubenswrapper[4698]: I0127 14:43:20.078010 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"perses-operator-dockercfg-7gx7l" Jan 27 14:43:20 crc kubenswrapper[4698]: I0127 14:43:20.081498 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sw54d\" (UniqueName: \"kubernetes.io/projected/7b638465-7c66-495b-9dfd-1854fea80351-kube-api-access-sw54d\") pod \"observability-operator-59bdc8b94-sxnvw\" (UID: \"7b638465-7c66-495b-9dfd-1854fea80351\") " pod="openshift-operators/observability-operator-59bdc8b94-sxnvw" Jan 27 14:43:20 crc kubenswrapper[4698]: I0127 14:43:20.086113 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-5bf474d74f-cl5tl"] Jan 27 14:43:20 crc kubenswrapper[4698]: I0127 14:43:20.152345 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/40335850-9929-4547-8b67-232394389f88-openshift-service-ca\") pod \"perses-operator-5bf474d74f-cl5tl\" (UID: \"40335850-9929-4547-8b67-232394389f88\") " pod="openshift-operators/perses-operator-5bf474d74f-cl5tl" Jan 27 14:43:20 crc kubenswrapper[4698]: I0127 14:43:20.152681 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d6xpt\" (UniqueName: \"kubernetes.io/projected/40335850-9929-4547-8b67-232394389f88-kube-api-access-d6xpt\") pod \"perses-operator-5bf474d74f-cl5tl\" (UID: \"40335850-9929-4547-8b67-232394389f88\") " pod="openshift-operators/perses-operator-5bf474d74f-cl5tl" Jan 27 14:43:20 crc kubenswrapper[4698]: I0127 14:43:20.189157 4698 util.go:30] "No sandbox for pod 
can be found. Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-sxnvw" Jan 27 14:43:20 crc kubenswrapper[4698]: I0127 14:43:20.254171 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/40335850-9929-4547-8b67-232394389f88-openshift-service-ca\") pod \"perses-operator-5bf474d74f-cl5tl\" (UID: \"40335850-9929-4547-8b67-232394389f88\") " pod="openshift-operators/perses-operator-5bf474d74f-cl5tl" Jan 27 14:43:20 crc kubenswrapper[4698]: I0127 14:43:20.254216 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d6xpt\" (UniqueName: \"kubernetes.io/projected/40335850-9929-4547-8b67-232394389f88-kube-api-access-d6xpt\") pod \"perses-operator-5bf474d74f-cl5tl\" (UID: \"40335850-9929-4547-8b67-232394389f88\") " pod="openshift-operators/perses-operator-5bf474d74f-cl5tl" Jan 27 14:43:20 crc kubenswrapper[4698]: I0127 14:43:20.255457 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/40335850-9929-4547-8b67-232394389f88-openshift-service-ca\") pod \"perses-operator-5bf474d74f-cl5tl\" (UID: \"40335850-9929-4547-8b67-232394389f88\") " pod="openshift-operators/perses-operator-5bf474d74f-cl5tl" Jan 27 14:43:20 crc kubenswrapper[4698]: I0127 14:43:20.272654 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d6xpt\" (UniqueName: \"kubernetes.io/projected/40335850-9929-4547-8b67-232394389f88-kube-api-access-d6xpt\") pod \"perses-operator-5bf474d74f-cl5tl\" (UID: \"40335850-9929-4547-8b67-232394389f88\") " pod="openshift-operators/perses-operator-5bf474d74f-cl5tl" Jan 27 14:43:20 crc kubenswrapper[4698]: I0127 14:43:20.359760 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-698f689c98-qvbmr"] Jan 27 14:43:20 crc kubenswrapper[4698]: I0127 14:43:20.393797 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-7xz26"] Jan 27 14:43:20 crc kubenswrapper[4698]: W0127 14:43:20.400750 4698 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod49495d53_72e5_4381_bb5f_efb39b15a87a.slice/crio-99c01399033fc5c5502f77a7bf0d6e638837b4f222d9d532ad1277dad978d31f WatchSource:0}: Error finding container 99c01399033fc5c5502f77a7bf0d6e638837b4f222d9d532ad1277dad978d31f: Status 404 returned error can't find the container with id 99c01399033fc5c5502f77a7bf0d6e638837b4f222d9d532ad1277dad978d31f Jan 27 14:43:20 crc kubenswrapper[4698]: I0127 14:43:20.436500 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-cl5tl" Jan 27 14:43:20 crc kubenswrapper[4698]: I0127 14:43:20.450454 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-698f689c98-d5twk"] Jan 27 14:43:20 crc kubenswrapper[4698]: I0127 14:43:20.506419 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-sxnvw"] Jan 27 14:43:20 crc kubenswrapper[4698]: I0127 14:43:20.677846 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-5bf474d74f-cl5tl"] Jan 27 14:43:20 crc kubenswrapper[4698]: I0127 14:43:20.787087 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-698f689c98-qvbmr" event={"ID":"4e0708a8-b876-48dd-8a58-35ba86739ddf","Type":"ContainerStarted","Data":"e017623cf3010f42d238704ca6d6ed5025d87baccfbbc83697c3f2f4b819b099"} Jan 27 14:43:20 crc kubenswrapper[4698]: I0127 14:43:20.788262 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-5bf474d74f-cl5tl" event={"ID":"40335850-9929-4547-8b67-232394389f88","Type":"ContainerStarted","Data":"c9c4d4b7468e1861f3a6288acfcb69b94faa0174064985040313ce743801110b"} Jan 27 14:43:20 crc kubenswrapper[4698]: I0127 14:43:20.789467 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-59bdc8b94-sxnvw" event={"ID":"7b638465-7c66-495b-9dfd-1854fea80351","Type":"ContainerStarted","Data":"04f86985199296b478e4d8f9576b2114621a5b044920ba81225ebcdd6c79a24b"} Jan 27 14:43:20 crc kubenswrapper[4698]: I0127 14:43:20.790473 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-7xz26" event={"ID":"49495d53-72e5-4381-bb5f-efb39b15a87a","Type":"ContainerStarted","Data":"99c01399033fc5c5502f77a7bf0d6e638837b4f222d9d532ad1277dad978d31f"} Jan 27 14:43:20 crc kubenswrapper[4698]: I0127 14:43:20.791345 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-698f689c98-d5twk" event={"ID":"44a8167e-8dc5-4360-a7b4-623198852230","Type":"ContainerStarted","Data":"6b4f8efb204dd531aa2bd13ebb224d2db4d4ed5784a3d6b9bc1c5005e1299cf8"} Jan 27 14:43:24 crc kubenswrapper[4698]: I0127 14:43:24.523312 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-k8d5g" Jan 27 14:43:24 crc kubenswrapper[4698]: I0127 14:43:24.571398 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-k8d5g" Jan 27 14:43:24 crc kubenswrapper[4698]: I0127 14:43:24.757452 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-k8d5g"] Jan 27 14:43:25 crc kubenswrapper[4698]: I0127 14:43:25.836966 4698 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-k8d5g" podUID="e2d3befe-7722-4089-82c0-17f75d62a419" containerName="registry-server" containerID="cri-o://d12a32546e67e87b7d4d8ca457f8fbd397481e04cfaddd5edc43aa83d21aae62" gracePeriod=2 Jan 27 14:43:26 crc kubenswrapper[4698]: I0127 14:43:26.852759 4698 generic.go:334] "Generic (PLEG): container finished" podID="e2d3befe-7722-4089-82c0-17f75d62a419" containerID="d12a32546e67e87b7d4d8ca457f8fbd397481e04cfaddd5edc43aa83d21aae62" exitCode=0 Jan 27 14:43:26 crc 
kubenswrapper[4698]: I0127 14:43:26.853017 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-k8d5g" event={"ID":"e2d3befe-7722-4089-82c0-17f75d62a419","Type":"ContainerDied","Data":"d12a32546e67e87b7d4d8ca457f8fbd397481e04cfaddd5edc43aa83d21aae62"} Jan 27 14:43:33 crc kubenswrapper[4698]: I0127 14:43:33.798427 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-k8d5g" Jan 27 14:43:33 crc kubenswrapper[4698]: I0127 14:43:33.878046 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e2d3befe-7722-4089-82c0-17f75d62a419-utilities\") pod \"e2d3befe-7722-4089-82c0-17f75d62a419\" (UID: \"e2d3befe-7722-4089-82c0-17f75d62a419\") " Jan 27 14:43:33 crc kubenswrapper[4698]: I0127 14:43:33.878116 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6xwpx\" (UniqueName: \"kubernetes.io/projected/e2d3befe-7722-4089-82c0-17f75d62a419-kube-api-access-6xwpx\") pod \"e2d3befe-7722-4089-82c0-17f75d62a419\" (UID: \"e2d3befe-7722-4089-82c0-17f75d62a419\") " Jan 27 14:43:33 crc kubenswrapper[4698]: I0127 14:43:33.878188 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e2d3befe-7722-4089-82c0-17f75d62a419-catalog-content\") pod \"e2d3befe-7722-4089-82c0-17f75d62a419\" (UID: \"e2d3befe-7722-4089-82c0-17f75d62a419\") " Jan 27 14:43:33 crc kubenswrapper[4698]: I0127 14:43:33.879492 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e2d3befe-7722-4089-82c0-17f75d62a419-utilities" (OuterVolumeSpecName: "utilities") pod "e2d3befe-7722-4089-82c0-17f75d62a419" (UID: "e2d3befe-7722-4089-82c0-17f75d62a419"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 14:43:33 crc kubenswrapper[4698]: I0127 14:43:33.880296 4698 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e2d3befe-7722-4089-82c0-17f75d62a419-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 14:43:33 crc kubenswrapper[4698]: I0127 14:43:33.886414 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e2d3befe-7722-4089-82c0-17f75d62a419-kube-api-access-6xwpx" (OuterVolumeSpecName: "kube-api-access-6xwpx") pod "e2d3befe-7722-4089-82c0-17f75d62a419" (UID: "e2d3befe-7722-4089-82c0-17f75d62a419"). InnerVolumeSpecName "kube-api-access-6xwpx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:43:33 crc kubenswrapper[4698]: I0127 14:43:33.899419 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-k8d5g" event={"ID":"e2d3befe-7722-4089-82c0-17f75d62a419","Type":"ContainerDied","Data":"93d4f7badd3a4ea764aeb1bb78763adfab15bfa05987b1a63047eee307f7c25e"} Jan 27 14:43:33 crc kubenswrapper[4698]: I0127 14:43:33.899468 4698 scope.go:117] "RemoveContainer" containerID="d12a32546e67e87b7d4d8ca457f8fbd397481e04cfaddd5edc43aa83d21aae62" Jan 27 14:43:33 crc kubenswrapper[4698]: I0127 14:43:33.899576 4698 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-k8d5g" Jan 27 14:43:33 crc kubenswrapper[4698]: I0127 14:43:33.926441 4698 scope.go:117] "RemoveContainer" containerID="8118501d7f2cefaedcba825fd1b5dd7f0a2343f82e4ec8a9db061c2d66bee690" Jan 27 14:43:33 crc kubenswrapper[4698]: I0127 14:43:33.969316 4698 scope.go:117] "RemoveContainer" containerID="a6ef77124cc9594a83908fb07d09232f764aca02799ad6143793b7b7fd1f144e" Jan 27 14:43:33 crc kubenswrapper[4698]: I0127 14:43:33.984773 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6xwpx\" (UniqueName: \"kubernetes.io/projected/e2d3befe-7722-4089-82c0-17f75d62a419-kube-api-access-6xwpx\") on node \"crc\" DevicePath \"\"" Jan 27 14:43:34 crc kubenswrapper[4698]: I0127 14:43:34.041016 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e2d3befe-7722-4089-82c0-17f75d62a419-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e2d3befe-7722-4089-82c0-17f75d62a419" (UID: "e2d3befe-7722-4089-82c0-17f75d62a419"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 14:43:34 crc kubenswrapper[4698]: I0127 14:43:34.087313 4698 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e2d3befe-7722-4089-82c0-17f75d62a419-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 14:43:34 crc kubenswrapper[4698]: I0127 14:43:34.232158 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-k8d5g"] Jan 27 14:43:34 crc kubenswrapper[4698]: I0127 14:43:34.237364 4698 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-k8d5g"] Jan 27 14:43:34 crc kubenswrapper[4698]: I0127 14:43:34.917098 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-698f689c98-d5twk" event={"ID":"44a8167e-8dc5-4360-a7b4-623198852230","Type":"ContainerStarted","Data":"9c80346bdf8c8d078aacc6f7f195f674d8903c580f42d476ccf518167999e0a4"} Jan 27 14:43:34 crc kubenswrapper[4698]: I0127 14:43:34.923328 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-698f689c98-qvbmr" event={"ID":"4e0708a8-b876-48dd-8a58-35ba86739ddf","Type":"ContainerStarted","Data":"b1bdf856dc41b03ed0622ece1af6d756573e425c8d5f9ed9a99cf97246936b1b"} Jan 27 14:43:34 crc kubenswrapper[4698]: I0127 14:43:34.931912 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-5bf474d74f-cl5tl" event={"ID":"40335850-9929-4547-8b67-232394389f88","Type":"ContainerStarted","Data":"baff82b52ae3d74a04106a88f1dd55422295a1331ea3256b153fe87aec743ca7"} Jan 27 14:43:34 crc kubenswrapper[4698]: I0127 14:43:34.932063 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/perses-operator-5bf474d74f-cl5tl" Jan 27 14:43:34 crc kubenswrapper[4698]: I0127 14:43:34.936286 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-59bdc8b94-sxnvw" event={"ID":"7b638465-7c66-495b-9dfd-1854fea80351","Type":"ContainerStarted","Data":"d06e368a50f8c538e820bc2708862287257f933de89f2cc80441d2ad67e32bce"} Jan 27 14:43:34 crc kubenswrapper[4698]: I0127 14:43:34.936490 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/observability-operator-59bdc8b94-sxnvw" Jan 27 14:43:34 crc 
kubenswrapper[4698]: I0127 14:43:34.938324 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-7xz26" event={"ID":"49495d53-72e5-4381-bb5f-efb39b15a87a","Type":"ContainerStarted","Data":"bb2dbeef4a256836c27ffca077b678bb6b9625129dbfa3ab76bb7436820693f7"} Jan 27 14:43:34 crc kubenswrapper[4698]: I0127 14:43:34.939166 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/observability-operator-59bdc8b94-sxnvw" Jan 27 14:43:34 crc kubenswrapper[4698]: I0127 14:43:34.953024 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-698f689c98-d5twk" podStartSLOduration=2.630368368 podStartE2EDuration="15.953002565s" podCreationTimestamp="2026-01-27 14:43:19 +0000 UTC" firstStartedPulling="2026-01-27 14:43:20.472443567 +0000 UTC m=+856.149221032" lastFinishedPulling="2026-01-27 14:43:33.795077764 +0000 UTC m=+869.471855229" observedRunningTime="2026-01-27 14:43:34.949061551 +0000 UTC m=+870.625839026" watchObservedRunningTime="2026-01-27 14:43:34.953002565 +0000 UTC m=+870.629780030" Jan 27 14:43:34 crc kubenswrapper[4698]: I0127 14:43:34.998270 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-7xz26" podStartSLOduration=2.6416914460000003 podStartE2EDuration="15.998251374s" podCreationTimestamp="2026-01-27 14:43:19 +0000 UTC" firstStartedPulling="2026-01-27 14:43:20.403964717 +0000 UTC m=+856.080742182" lastFinishedPulling="2026-01-27 14:43:33.760524645 +0000 UTC m=+869.437302110" observedRunningTime="2026-01-27 14:43:34.966697825 +0000 UTC m=+870.643475300" watchObservedRunningTime="2026-01-27 14:43:34.998251374 +0000 UTC m=+870.675028839" Jan 27 14:43:35 crc kubenswrapper[4698]: I0127 14:43:34.999745 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e2d3befe-7722-4089-82c0-17f75d62a419" path="/var/lib/kubelet/pods/e2d3befe-7722-4089-82c0-17f75d62a419/volumes" Jan 27 14:43:35 crc kubenswrapper[4698]: I0127 14:43:35.035928 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/observability-operator-59bdc8b94-sxnvw" podStartSLOduration=2.748156283 podStartE2EDuration="16.035909903s" podCreationTimestamp="2026-01-27 14:43:19 +0000 UTC" firstStartedPulling="2026-01-27 14:43:20.519948115 +0000 UTC m=+856.196725580" lastFinishedPulling="2026-01-27 14:43:33.807701745 +0000 UTC m=+869.484479200" observedRunningTime="2026-01-27 14:43:35.035725459 +0000 UTC m=+870.712502924" watchObservedRunningTime="2026-01-27 14:43:35.035909903 +0000 UTC m=+870.712687368" Jan 27 14:43:35 crc kubenswrapper[4698]: I0127 14:43:35.037531 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/perses-operator-5bf474d74f-cl5tl" podStartSLOduration=1.9234155290000001 podStartE2EDuration="15.037525586s" podCreationTimestamp="2026-01-27 14:43:20 +0000 UTC" firstStartedPulling="2026-01-27 14:43:20.693732312 +0000 UTC m=+856.370509777" lastFinishedPulling="2026-01-27 14:43:33.807842359 +0000 UTC m=+869.484619834" observedRunningTime="2026-01-27 14:43:34.99849328 +0000 UTC m=+870.675270755" watchObservedRunningTime="2026-01-27 14:43:35.037525586 +0000 UTC m=+870.714303051" Jan 27 14:43:35 crc kubenswrapper[4698]: I0127 14:43:35.065241 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-operators/obo-prometheus-operator-admission-webhook-698f689c98-qvbmr" podStartSLOduration=2.6813408990000003 podStartE2EDuration="16.065217034s" podCreationTimestamp="2026-01-27 14:43:19 +0000 UTC" firstStartedPulling="2026-01-27 14:43:20.375477929 +0000 UTC m=+856.052255394" lastFinishedPulling="2026-01-27 14:43:33.759354064 +0000 UTC m=+869.436131529" observedRunningTime="2026-01-27 14:43:35.06012383 +0000 UTC m=+870.736901315" watchObservedRunningTime="2026-01-27 14:43:35.065217034 +0000 UTC m=+870.741994499" Jan 27 14:43:40 crc kubenswrapper[4698]: I0127 14:43:40.439518 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/perses-operator-5bf474d74f-cl5tl" Jan 27 14:43:58 crc kubenswrapper[4698]: I0127 14:43:58.790675 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713xmt98"] Jan 27 14:43:58 crc kubenswrapper[4698]: E0127 14:43:58.791442 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e2d3befe-7722-4089-82c0-17f75d62a419" containerName="extract-content" Jan 27 14:43:58 crc kubenswrapper[4698]: I0127 14:43:58.791456 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="e2d3befe-7722-4089-82c0-17f75d62a419" containerName="extract-content" Jan 27 14:43:58 crc kubenswrapper[4698]: E0127 14:43:58.791471 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e2d3befe-7722-4089-82c0-17f75d62a419" containerName="registry-server" Jan 27 14:43:58 crc kubenswrapper[4698]: I0127 14:43:58.791477 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="e2d3befe-7722-4089-82c0-17f75d62a419" containerName="registry-server" Jan 27 14:43:58 crc kubenswrapper[4698]: E0127 14:43:58.791489 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e2d3befe-7722-4089-82c0-17f75d62a419" containerName="extract-utilities" Jan 27 14:43:58 crc kubenswrapper[4698]: I0127 14:43:58.791496 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="e2d3befe-7722-4089-82c0-17f75d62a419" containerName="extract-utilities" Jan 27 14:43:58 crc kubenswrapper[4698]: I0127 14:43:58.791607 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="e2d3befe-7722-4089-82c0-17f75d62a419" containerName="registry-server" Jan 27 14:43:58 crc kubenswrapper[4698]: I0127 14:43:58.792325 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713xmt98" Jan 27 14:43:58 crc kubenswrapper[4698]: I0127 14:43:58.794458 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Jan 27 14:43:58 crc kubenswrapper[4698]: I0127 14:43:58.843619 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713xmt98"] Jan 27 14:43:58 crc kubenswrapper[4698]: I0127 14:43:58.907418 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/3cbf35ee-4485-4f1e-b68e-aaae5db51c59-bundle\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713xmt98\" (UID: \"3cbf35ee-4485-4f1e-b68e-aaae5db51c59\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713xmt98" Jan 27 14:43:58 crc kubenswrapper[4698]: I0127 14:43:58.907468 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-99p8s\" (UniqueName: \"kubernetes.io/projected/3cbf35ee-4485-4f1e-b68e-aaae5db51c59-kube-api-access-99p8s\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713xmt98\" (UID: \"3cbf35ee-4485-4f1e-b68e-aaae5db51c59\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713xmt98" Jan 27 14:43:58 crc kubenswrapper[4698]: I0127 14:43:58.907504 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/3cbf35ee-4485-4f1e-b68e-aaae5db51c59-util\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713xmt98\" (UID: \"3cbf35ee-4485-4f1e-b68e-aaae5db51c59\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713xmt98" Jan 27 14:43:59 crc kubenswrapper[4698]: I0127 14:43:59.009115 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/3cbf35ee-4485-4f1e-b68e-aaae5db51c59-bundle\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713xmt98\" (UID: \"3cbf35ee-4485-4f1e-b68e-aaae5db51c59\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713xmt98" Jan 27 14:43:59 crc kubenswrapper[4698]: I0127 14:43:59.009163 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-99p8s\" (UniqueName: \"kubernetes.io/projected/3cbf35ee-4485-4f1e-b68e-aaae5db51c59-kube-api-access-99p8s\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713xmt98\" (UID: \"3cbf35ee-4485-4f1e-b68e-aaae5db51c59\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713xmt98" Jan 27 14:43:59 crc kubenswrapper[4698]: I0127 14:43:59.009193 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/3cbf35ee-4485-4f1e-b68e-aaae5db51c59-util\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713xmt98\" (UID: \"3cbf35ee-4485-4f1e-b68e-aaae5db51c59\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713xmt98" Jan 27 14:43:59 crc kubenswrapper[4698]: I0127 14:43:59.009754 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: 
\"kubernetes.io/empty-dir/3cbf35ee-4485-4f1e-b68e-aaae5db51c59-util\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713xmt98\" (UID: \"3cbf35ee-4485-4f1e-b68e-aaae5db51c59\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713xmt98" Jan 27 14:43:59 crc kubenswrapper[4698]: I0127 14:43:59.010008 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/3cbf35ee-4485-4f1e-b68e-aaae5db51c59-bundle\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713xmt98\" (UID: \"3cbf35ee-4485-4f1e-b68e-aaae5db51c59\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713xmt98" Jan 27 14:43:59 crc kubenswrapper[4698]: I0127 14:43:59.028104 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-99p8s\" (UniqueName: \"kubernetes.io/projected/3cbf35ee-4485-4f1e-b68e-aaae5db51c59-kube-api-access-99p8s\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713xmt98\" (UID: \"3cbf35ee-4485-4f1e-b68e-aaae5db51c59\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713xmt98" Jan 27 14:43:59 crc kubenswrapper[4698]: I0127 14:43:59.108113 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713xmt98" Jan 27 14:43:59 crc kubenswrapper[4698]: I0127 14:43:59.316980 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713xmt98"] Jan 27 14:44:00 crc kubenswrapper[4698]: I0127 14:44:00.086471 4698 generic.go:334] "Generic (PLEG): container finished" podID="3cbf35ee-4485-4f1e-b68e-aaae5db51c59" containerID="be23c8343538d69bb91cdaad15fff56bf11179b18b04a5bde6228e7e0f51ee7a" exitCode=0 Jan 27 14:44:00 crc kubenswrapper[4698]: I0127 14:44:00.086543 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713xmt98" event={"ID":"3cbf35ee-4485-4f1e-b68e-aaae5db51c59","Type":"ContainerDied","Data":"be23c8343538d69bb91cdaad15fff56bf11179b18b04a5bde6228e7e0f51ee7a"} Jan 27 14:44:00 crc kubenswrapper[4698]: I0127 14:44:00.086775 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713xmt98" event={"ID":"3cbf35ee-4485-4f1e-b68e-aaae5db51c59","Type":"ContainerStarted","Data":"278c1928aaff6953b662a11937ccda2ef4fea5f6f01e7f94ed1808df7b5a3481"} Jan 27 14:44:10 crc kubenswrapper[4698]: I0127 14:44:10.985918 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-4k8p2"] Jan 27 14:44:10 crc kubenswrapper[4698]: I0127 14:44:10.987663 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-4k8p2" Jan 27 14:44:11 crc kubenswrapper[4698]: I0127 14:44:11.004324 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-4k8p2"] Jan 27 14:44:11 crc kubenswrapper[4698]: I0127 14:44:11.065481 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/585690e7-9038-411e-9d0f-7d74d57e72cd-catalog-content\") pod \"community-operators-4k8p2\" (UID: \"585690e7-9038-411e-9d0f-7d74d57e72cd\") " pod="openshift-marketplace/community-operators-4k8p2" Jan 27 14:44:11 crc kubenswrapper[4698]: I0127 14:44:11.066230 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vhgts\" (UniqueName: \"kubernetes.io/projected/585690e7-9038-411e-9d0f-7d74d57e72cd-kube-api-access-vhgts\") pod \"community-operators-4k8p2\" (UID: \"585690e7-9038-411e-9d0f-7d74d57e72cd\") " pod="openshift-marketplace/community-operators-4k8p2" Jan 27 14:44:11 crc kubenswrapper[4698]: I0127 14:44:11.066288 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/585690e7-9038-411e-9d0f-7d74d57e72cd-utilities\") pod \"community-operators-4k8p2\" (UID: \"585690e7-9038-411e-9d0f-7d74d57e72cd\") " pod="openshift-marketplace/community-operators-4k8p2" Jan 27 14:44:11 crc kubenswrapper[4698]: I0127 14:44:11.167913 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/585690e7-9038-411e-9d0f-7d74d57e72cd-catalog-content\") pod \"community-operators-4k8p2\" (UID: \"585690e7-9038-411e-9d0f-7d74d57e72cd\") " pod="openshift-marketplace/community-operators-4k8p2" Jan 27 14:44:11 crc kubenswrapper[4698]: I0127 14:44:11.167966 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vhgts\" (UniqueName: \"kubernetes.io/projected/585690e7-9038-411e-9d0f-7d74d57e72cd-kube-api-access-vhgts\") pod \"community-operators-4k8p2\" (UID: \"585690e7-9038-411e-9d0f-7d74d57e72cd\") " pod="openshift-marketplace/community-operators-4k8p2" Jan 27 14:44:11 crc kubenswrapper[4698]: I0127 14:44:11.167990 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/585690e7-9038-411e-9d0f-7d74d57e72cd-utilities\") pod \"community-operators-4k8p2\" (UID: \"585690e7-9038-411e-9d0f-7d74d57e72cd\") " pod="openshift-marketplace/community-operators-4k8p2" Jan 27 14:44:11 crc kubenswrapper[4698]: I0127 14:44:11.168494 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/585690e7-9038-411e-9d0f-7d74d57e72cd-catalog-content\") pod \"community-operators-4k8p2\" (UID: \"585690e7-9038-411e-9d0f-7d74d57e72cd\") " pod="openshift-marketplace/community-operators-4k8p2" Jan 27 14:44:11 crc kubenswrapper[4698]: I0127 14:44:11.168596 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/585690e7-9038-411e-9d0f-7d74d57e72cd-utilities\") pod \"community-operators-4k8p2\" (UID: \"585690e7-9038-411e-9d0f-7d74d57e72cd\") " pod="openshift-marketplace/community-operators-4k8p2" Jan 27 14:44:11 crc kubenswrapper[4698]: I0127 14:44:11.188015 4698 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-vhgts\" (UniqueName: \"kubernetes.io/projected/585690e7-9038-411e-9d0f-7d74d57e72cd-kube-api-access-vhgts\") pod \"community-operators-4k8p2\" (UID: \"585690e7-9038-411e-9d0f-7d74d57e72cd\") " pod="openshift-marketplace/community-operators-4k8p2" Jan 27 14:44:11 crc kubenswrapper[4698]: I0127 14:44:11.317659 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-4k8p2" Jan 27 14:44:11 crc kubenswrapper[4698]: I0127 14:44:11.556955 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-4k8p2"] Jan 27 14:44:12 crc kubenswrapper[4698]: I0127 14:44:12.149892 4698 generic.go:334] "Generic (PLEG): container finished" podID="585690e7-9038-411e-9d0f-7d74d57e72cd" containerID="9672d30100806349b33a66edf391d50ad901887bc5f9ae3ac29d5d961f9bdb38" exitCode=0 Jan 27 14:44:12 crc kubenswrapper[4698]: I0127 14:44:12.150233 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4k8p2" event={"ID":"585690e7-9038-411e-9d0f-7d74d57e72cd","Type":"ContainerDied","Data":"9672d30100806349b33a66edf391d50ad901887bc5f9ae3ac29d5d961f9bdb38"} Jan 27 14:44:12 crc kubenswrapper[4698]: I0127 14:44:12.150263 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4k8p2" event={"ID":"585690e7-9038-411e-9d0f-7d74d57e72cd","Type":"ContainerStarted","Data":"50869fe21c2b0a106a6313ad697ed5c203babca9a9c443d9611d2b555dcf573e"} Jan 27 14:44:13 crc kubenswrapper[4698]: I0127 14:44:13.160765 4698 generic.go:334] "Generic (PLEG): container finished" podID="3cbf35ee-4485-4f1e-b68e-aaae5db51c59" containerID="8be8024d994bce1952ca5d05017f86a4652854a0d5325d06f1b180c72b7ee41e" exitCode=0 Jan 27 14:44:13 crc kubenswrapper[4698]: I0127 14:44:13.160809 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713xmt98" event={"ID":"3cbf35ee-4485-4f1e-b68e-aaae5db51c59","Type":"ContainerDied","Data":"8be8024d994bce1952ca5d05017f86a4652854a0d5325d06f1b180c72b7ee41e"} Jan 27 14:44:14 crc kubenswrapper[4698]: I0127 14:44:14.168945 4698 generic.go:334] "Generic (PLEG): container finished" podID="3cbf35ee-4485-4f1e-b68e-aaae5db51c59" containerID="461e5419b240e154c61b5d8103878bcab23c9c031e9f33fc1b72be0fdb96ecb1" exitCode=0 Jan 27 14:44:14 crc kubenswrapper[4698]: I0127 14:44:14.169016 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713xmt98" event={"ID":"3cbf35ee-4485-4f1e-b68e-aaae5db51c59","Type":"ContainerDied","Data":"461e5419b240e154c61b5d8103878bcab23c9c031e9f33fc1b72be0fdb96ecb1"} Jan 27 14:44:16 crc kubenswrapper[4698]: I0127 14:44:16.071575 4698 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713xmt98" Jan 27 14:44:16 crc kubenswrapper[4698]: I0127 14:44:16.138879 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/3cbf35ee-4485-4f1e-b68e-aaae5db51c59-bundle\") pod \"3cbf35ee-4485-4f1e-b68e-aaae5db51c59\" (UID: \"3cbf35ee-4485-4f1e-b68e-aaae5db51c59\") " Jan 27 14:44:16 crc kubenswrapper[4698]: I0127 14:44:16.138968 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/3cbf35ee-4485-4f1e-b68e-aaae5db51c59-util\") pod \"3cbf35ee-4485-4f1e-b68e-aaae5db51c59\" (UID: \"3cbf35ee-4485-4f1e-b68e-aaae5db51c59\") " Jan 27 14:44:16 crc kubenswrapper[4698]: I0127 14:44:16.139007 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-99p8s\" (UniqueName: \"kubernetes.io/projected/3cbf35ee-4485-4f1e-b68e-aaae5db51c59-kube-api-access-99p8s\") pod \"3cbf35ee-4485-4f1e-b68e-aaae5db51c59\" (UID: \"3cbf35ee-4485-4f1e-b68e-aaae5db51c59\") " Jan 27 14:44:16 crc kubenswrapper[4698]: I0127 14:44:16.140943 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3cbf35ee-4485-4f1e-b68e-aaae5db51c59-bundle" (OuterVolumeSpecName: "bundle") pod "3cbf35ee-4485-4f1e-b68e-aaae5db51c59" (UID: "3cbf35ee-4485-4f1e-b68e-aaae5db51c59"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 14:44:16 crc kubenswrapper[4698]: I0127 14:44:16.144760 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cbf35ee-4485-4f1e-b68e-aaae5db51c59-kube-api-access-99p8s" (OuterVolumeSpecName: "kube-api-access-99p8s") pod "3cbf35ee-4485-4f1e-b68e-aaae5db51c59" (UID: "3cbf35ee-4485-4f1e-b68e-aaae5db51c59"). InnerVolumeSpecName "kube-api-access-99p8s". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:44:16 crc kubenswrapper[4698]: I0127 14:44:16.149596 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3cbf35ee-4485-4f1e-b68e-aaae5db51c59-util" (OuterVolumeSpecName: "util") pod "3cbf35ee-4485-4f1e-b68e-aaae5db51c59" (UID: "3cbf35ee-4485-4f1e-b68e-aaae5db51c59"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 14:44:16 crc kubenswrapper[4698]: I0127 14:44:16.185077 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713xmt98" event={"ID":"3cbf35ee-4485-4f1e-b68e-aaae5db51c59","Type":"ContainerDied","Data":"278c1928aaff6953b662a11937ccda2ef4fea5f6f01e7f94ed1808df7b5a3481"} Jan 27 14:44:16 crc kubenswrapper[4698]: I0127 14:44:16.185127 4698 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="278c1928aaff6953b662a11937ccda2ef4fea5f6f01e7f94ed1808df7b5a3481" Jan 27 14:44:16 crc kubenswrapper[4698]: I0127 14:44:16.185140 4698 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713xmt98" Jan 27 14:44:16 crc kubenswrapper[4698]: I0127 14:44:16.240449 4698 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/3cbf35ee-4485-4f1e-b68e-aaae5db51c59-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 14:44:16 crc kubenswrapper[4698]: I0127 14:44:16.240486 4698 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/3cbf35ee-4485-4f1e-b68e-aaae5db51c59-util\") on node \"crc\" DevicePath \"\"" Jan 27 14:44:16 crc kubenswrapper[4698]: I0127 14:44:16.240497 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-99p8s\" (UniqueName: \"kubernetes.io/projected/3cbf35ee-4485-4f1e-b68e-aaae5db51c59-kube-api-access-99p8s\") on node \"crc\" DevicePath \"\"" Jan 27 14:44:17 crc kubenswrapper[4698]: I0127 14:44:17.193421 4698 generic.go:334] "Generic (PLEG): container finished" podID="585690e7-9038-411e-9d0f-7d74d57e72cd" containerID="f3ba6bc62b0fdc7d718c665d98cdd89269e1266edfd7a3b703c6bcaa2b1c8aef" exitCode=0 Jan 27 14:44:17 crc kubenswrapper[4698]: I0127 14:44:17.193538 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4k8p2" event={"ID":"585690e7-9038-411e-9d0f-7d74d57e72cd","Type":"ContainerDied","Data":"f3ba6bc62b0fdc7d718c665d98cdd89269e1266edfd7a3b703c6bcaa2b1c8aef"} Jan 27 14:44:18 crc kubenswrapper[4698]: I0127 14:44:18.203035 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4k8p2" event={"ID":"585690e7-9038-411e-9d0f-7d74d57e72cd","Type":"ContainerStarted","Data":"1b5345205b4021def22b51471c61a40395bdab2ec02cb98c61577b348f2e4c5b"} Jan 27 14:44:18 crc kubenswrapper[4698]: I0127 14:44:18.226598 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-4k8p2" podStartSLOduration=2.776366739 podStartE2EDuration="8.226572684s" podCreationTimestamp="2026-01-27 14:44:10 +0000 UTC" firstStartedPulling="2026-01-27 14:44:12.165951437 +0000 UTC m=+907.842728902" lastFinishedPulling="2026-01-27 14:44:17.616157352 +0000 UTC m=+913.292934847" observedRunningTime="2026-01-27 14:44:18.221676665 +0000 UTC m=+913.898454130" watchObservedRunningTime="2026-01-27 14:44:18.226572684 +0000 UTC m=+913.903350169" Jan 27 14:44:20 crc kubenswrapper[4698]: I0127 14:44:20.418311 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-operator-646758c888-72zp5"] Jan 27 14:44:20 crc kubenswrapper[4698]: E0127 14:44:20.418829 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3cbf35ee-4485-4f1e-b68e-aaae5db51c59" containerName="extract" Jan 27 14:44:20 crc kubenswrapper[4698]: I0127 14:44:20.418841 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="3cbf35ee-4485-4f1e-b68e-aaae5db51c59" containerName="extract" Jan 27 14:44:20 crc kubenswrapper[4698]: E0127 14:44:20.418850 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3cbf35ee-4485-4f1e-b68e-aaae5db51c59" containerName="util" Jan 27 14:44:20 crc kubenswrapper[4698]: I0127 14:44:20.418856 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="3cbf35ee-4485-4f1e-b68e-aaae5db51c59" containerName="util" Jan 27 14:44:20 crc kubenswrapper[4698]: E0127 14:44:20.418867 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3cbf35ee-4485-4f1e-b68e-aaae5db51c59" containerName="pull" 
Jan 27 14:44:20 crc kubenswrapper[4698]: I0127 14:44:20.418873 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="3cbf35ee-4485-4f1e-b68e-aaae5db51c59" containerName="pull" Jan 27 14:44:20 crc kubenswrapper[4698]: I0127 14:44:20.418994 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="3cbf35ee-4485-4f1e-b68e-aaae5db51c59" containerName="extract" Jan 27 14:44:20 crc kubenswrapper[4698]: I0127 14:44:20.419390 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-operator-646758c888-72zp5" Jan 27 14:44:20 crc kubenswrapper[4698]: I0127 14:44:20.422358 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"kube-root-ca.crt" Jan 27 14:44:20 crc kubenswrapper[4698]: I0127 14:44:20.422395 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"openshift-service-ca.crt" Jan 27 14:44:20 crc kubenswrapper[4698]: I0127 14:44:20.422601 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-operator-dockercfg-6xb4s" Jan 27 14:44:20 crc kubenswrapper[4698]: I0127 14:44:20.430541 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-646758c888-72zp5"] Jan 27 14:44:20 crc kubenswrapper[4698]: I0127 14:44:20.496376 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xs9lr\" (UniqueName: \"kubernetes.io/projected/88f47a43-00be-44e2-81e2-519428d78390-kube-api-access-xs9lr\") pod \"nmstate-operator-646758c888-72zp5\" (UID: \"88f47a43-00be-44e2-81e2-519428d78390\") " pod="openshift-nmstate/nmstate-operator-646758c888-72zp5" Jan 27 14:44:20 crc kubenswrapper[4698]: I0127 14:44:20.598059 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xs9lr\" (UniqueName: \"kubernetes.io/projected/88f47a43-00be-44e2-81e2-519428d78390-kube-api-access-xs9lr\") pod \"nmstate-operator-646758c888-72zp5\" (UID: \"88f47a43-00be-44e2-81e2-519428d78390\") " pod="openshift-nmstate/nmstate-operator-646758c888-72zp5" Jan 27 14:44:20 crc kubenswrapper[4698]: I0127 14:44:20.623117 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xs9lr\" (UniqueName: \"kubernetes.io/projected/88f47a43-00be-44e2-81e2-519428d78390-kube-api-access-xs9lr\") pod \"nmstate-operator-646758c888-72zp5\" (UID: \"88f47a43-00be-44e2-81e2-519428d78390\") " pod="openshift-nmstate/nmstate-operator-646758c888-72zp5" Jan 27 14:44:22 crc kubenswrapper[4698]: I0127 14:44:20.736684 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-operator-646758c888-72zp5" Jan 27 14:44:22 crc kubenswrapper[4698]: I0127 14:44:21.318619 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-4k8p2" Jan 27 14:44:22 crc kubenswrapper[4698]: I0127 14:44:21.318947 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-4k8p2" Jan 27 14:44:22 crc kubenswrapper[4698]: I0127 14:44:21.399997 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-4k8p2" Jan 27 14:44:22 crc kubenswrapper[4698]: I0127 14:44:22.260973 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-4k8p2" Jan 27 14:44:22 crc kubenswrapper[4698]: I0127 14:44:22.928727 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-646758c888-72zp5"] Jan 27 14:44:23 crc kubenswrapper[4698]: I0127 14:44:23.229457 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-646758c888-72zp5" event={"ID":"88f47a43-00be-44e2-81e2-519428d78390","Type":"ContainerStarted","Data":"4a92cf30e3d075311aab0bb158f150f446d7772cc39aa9d770d26a4912d7760e"} Jan 27 14:44:23 crc kubenswrapper[4698]: I0127 14:44:23.274493 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-4k8p2"] Jan 27 14:44:23 crc kubenswrapper[4698]: I0127 14:44:23.636888 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-5kvvv"] Jan 27 14:44:23 crc kubenswrapper[4698]: I0127 14:44:23.637222 4698 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-5kvvv" podUID="c4c9ebc3-a042-4277-a3d4-f4b1f1c6a674" containerName="registry-server" containerID="cri-o://7c564ebf8b12c16bb058ad940661a8c549700671dd641d9507496784c48013bc" gracePeriod=2 Jan 27 14:44:23 crc kubenswrapper[4698]: I0127 14:44:23.966794 4698 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-5kvvv" Jan 27 14:44:24 crc kubenswrapper[4698]: I0127 14:44:24.046155 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nfsxl\" (UniqueName: \"kubernetes.io/projected/c4c9ebc3-a042-4277-a3d4-f4b1f1c6a674-kube-api-access-nfsxl\") pod \"c4c9ebc3-a042-4277-a3d4-f4b1f1c6a674\" (UID: \"c4c9ebc3-a042-4277-a3d4-f4b1f1c6a674\") " Jan 27 14:44:24 crc kubenswrapper[4698]: I0127 14:44:24.046518 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c4c9ebc3-a042-4277-a3d4-f4b1f1c6a674-utilities\") pod \"c4c9ebc3-a042-4277-a3d4-f4b1f1c6a674\" (UID: \"c4c9ebc3-a042-4277-a3d4-f4b1f1c6a674\") " Jan 27 14:44:24 crc kubenswrapper[4698]: I0127 14:44:24.046574 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c4c9ebc3-a042-4277-a3d4-f4b1f1c6a674-catalog-content\") pod \"c4c9ebc3-a042-4277-a3d4-f4b1f1c6a674\" (UID: \"c4c9ebc3-a042-4277-a3d4-f4b1f1c6a674\") " Jan 27 14:44:24 crc kubenswrapper[4698]: I0127 14:44:24.055720 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c4c9ebc3-a042-4277-a3d4-f4b1f1c6a674-utilities" (OuterVolumeSpecName: "utilities") pod "c4c9ebc3-a042-4277-a3d4-f4b1f1c6a674" (UID: "c4c9ebc3-a042-4277-a3d4-f4b1f1c6a674"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 14:44:24 crc kubenswrapper[4698]: I0127 14:44:24.061058 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c4c9ebc3-a042-4277-a3d4-f4b1f1c6a674-kube-api-access-nfsxl" (OuterVolumeSpecName: "kube-api-access-nfsxl") pod "c4c9ebc3-a042-4277-a3d4-f4b1f1c6a674" (UID: "c4c9ebc3-a042-4277-a3d4-f4b1f1c6a674"). InnerVolumeSpecName "kube-api-access-nfsxl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:44:24 crc kubenswrapper[4698]: I0127 14:44:24.107063 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c4c9ebc3-a042-4277-a3d4-f4b1f1c6a674-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c4c9ebc3-a042-4277-a3d4-f4b1f1c6a674" (UID: "c4c9ebc3-a042-4277-a3d4-f4b1f1c6a674"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 14:44:24 crc kubenswrapper[4698]: I0127 14:44:24.147797 4698 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c4c9ebc3-a042-4277-a3d4-f4b1f1c6a674-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 14:44:24 crc kubenswrapper[4698]: I0127 14:44:24.147827 4698 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c4c9ebc3-a042-4277-a3d4-f4b1f1c6a674-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 14:44:24 crc kubenswrapper[4698]: I0127 14:44:24.147873 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nfsxl\" (UniqueName: \"kubernetes.io/projected/c4c9ebc3-a042-4277-a3d4-f4b1f1c6a674-kube-api-access-nfsxl\") on node \"crc\" DevicePath \"\"" Jan 27 14:44:24 crc kubenswrapper[4698]: I0127 14:44:24.237178 4698 generic.go:334] "Generic (PLEG): container finished" podID="c4c9ebc3-a042-4277-a3d4-f4b1f1c6a674" containerID="7c564ebf8b12c16bb058ad940661a8c549700671dd641d9507496784c48013bc" exitCode=0 Jan 27 14:44:24 crc kubenswrapper[4698]: I0127 14:44:24.238159 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5kvvv" event={"ID":"c4c9ebc3-a042-4277-a3d4-f4b1f1c6a674","Type":"ContainerDied","Data":"7c564ebf8b12c16bb058ad940661a8c549700671dd641d9507496784c48013bc"} Jan 27 14:44:24 crc kubenswrapper[4698]: I0127 14:44:24.238180 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-5kvvv" Jan 27 14:44:24 crc kubenswrapper[4698]: I0127 14:44:24.238226 4698 scope.go:117] "RemoveContainer" containerID="7c564ebf8b12c16bb058ad940661a8c549700671dd641d9507496784c48013bc" Jan 27 14:44:24 crc kubenswrapper[4698]: I0127 14:44:24.238212 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5kvvv" event={"ID":"c4c9ebc3-a042-4277-a3d4-f4b1f1c6a674","Type":"ContainerDied","Data":"54e9d70336ebbb79259947b1686c17207818558b1b9aed7b7d9dd0e5ea1a48cb"} Jan 27 14:44:24 crc kubenswrapper[4698]: I0127 14:44:24.261121 4698 scope.go:117] "RemoveContainer" containerID="ca3853b3393d2b1e77c50bbfe6abf51c9dc9ecba4b2de46b9175ef723558f63d" Jan 27 14:44:24 crc kubenswrapper[4698]: I0127 14:44:24.266082 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-5kvvv"] Jan 27 14:44:24 crc kubenswrapper[4698]: I0127 14:44:24.273267 4698 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-5kvvv"] Jan 27 14:44:24 crc kubenswrapper[4698]: I0127 14:44:24.291781 4698 scope.go:117] "RemoveContainer" containerID="ce27bbd964142f1f80ccebb0e6943b9a284ce9b7b7e32190d1226ee6d65e576f" Jan 27 14:44:24 crc kubenswrapper[4698]: I0127 14:44:24.312926 4698 scope.go:117] "RemoveContainer" containerID="7c564ebf8b12c16bb058ad940661a8c549700671dd641d9507496784c48013bc" Jan 27 14:44:24 crc kubenswrapper[4698]: E0127 14:44:24.313671 4698 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7c564ebf8b12c16bb058ad940661a8c549700671dd641d9507496784c48013bc\": container with ID starting with 7c564ebf8b12c16bb058ad940661a8c549700671dd641d9507496784c48013bc not found: ID does not exist" containerID="7c564ebf8b12c16bb058ad940661a8c549700671dd641d9507496784c48013bc" Jan 27 14:44:24 crc kubenswrapper[4698]: I0127 14:44:24.313851 
4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7c564ebf8b12c16bb058ad940661a8c549700671dd641d9507496784c48013bc"} err="failed to get container status \"7c564ebf8b12c16bb058ad940661a8c549700671dd641d9507496784c48013bc\": rpc error: code = NotFound desc = could not find container \"7c564ebf8b12c16bb058ad940661a8c549700671dd641d9507496784c48013bc\": container with ID starting with 7c564ebf8b12c16bb058ad940661a8c549700671dd641d9507496784c48013bc not found: ID does not exist" Jan 27 14:44:24 crc kubenswrapper[4698]: I0127 14:44:24.313935 4698 scope.go:117] "RemoveContainer" containerID="ca3853b3393d2b1e77c50bbfe6abf51c9dc9ecba4b2de46b9175ef723558f63d" Jan 27 14:44:24 crc kubenswrapper[4698]: E0127 14:44:24.314274 4698 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ca3853b3393d2b1e77c50bbfe6abf51c9dc9ecba4b2de46b9175ef723558f63d\": container with ID starting with ca3853b3393d2b1e77c50bbfe6abf51c9dc9ecba4b2de46b9175ef723558f63d not found: ID does not exist" containerID="ca3853b3393d2b1e77c50bbfe6abf51c9dc9ecba4b2de46b9175ef723558f63d" Jan 27 14:44:24 crc kubenswrapper[4698]: I0127 14:44:24.314317 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ca3853b3393d2b1e77c50bbfe6abf51c9dc9ecba4b2de46b9175ef723558f63d"} err="failed to get container status \"ca3853b3393d2b1e77c50bbfe6abf51c9dc9ecba4b2de46b9175ef723558f63d\": rpc error: code = NotFound desc = could not find container \"ca3853b3393d2b1e77c50bbfe6abf51c9dc9ecba4b2de46b9175ef723558f63d\": container with ID starting with ca3853b3393d2b1e77c50bbfe6abf51c9dc9ecba4b2de46b9175ef723558f63d not found: ID does not exist" Jan 27 14:44:24 crc kubenswrapper[4698]: I0127 14:44:24.314349 4698 scope.go:117] "RemoveContainer" containerID="ce27bbd964142f1f80ccebb0e6943b9a284ce9b7b7e32190d1226ee6d65e576f" Jan 27 14:44:24 crc kubenswrapper[4698]: E0127 14:44:24.314675 4698 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ce27bbd964142f1f80ccebb0e6943b9a284ce9b7b7e32190d1226ee6d65e576f\": container with ID starting with ce27bbd964142f1f80ccebb0e6943b9a284ce9b7b7e32190d1226ee6d65e576f not found: ID does not exist" containerID="ce27bbd964142f1f80ccebb0e6943b9a284ce9b7b7e32190d1226ee6d65e576f" Jan 27 14:44:24 crc kubenswrapper[4698]: I0127 14:44:24.314764 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ce27bbd964142f1f80ccebb0e6943b9a284ce9b7b7e32190d1226ee6d65e576f"} err="failed to get container status \"ce27bbd964142f1f80ccebb0e6943b9a284ce9b7b7e32190d1226ee6d65e576f\": rpc error: code = NotFound desc = could not find container \"ce27bbd964142f1f80ccebb0e6943b9a284ce9b7b7e32190d1226ee6d65e576f\": container with ID starting with ce27bbd964142f1f80ccebb0e6943b9a284ce9b7b7e32190d1226ee6d65e576f not found: ID does not exist" Jan 27 14:44:24 crc kubenswrapper[4698]: I0127 14:44:24.998331 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c4c9ebc3-a042-4277-a3d4-f4b1f1c6a674" path="/var/lib/kubelet/pods/c4c9ebc3-a042-4277-a3d4-f4b1f1c6a674/volumes" Jan 27 14:44:26 crc kubenswrapper[4698]: I0127 14:44:26.249130 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-646758c888-72zp5" 
event={"ID":"88f47a43-00be-44e2-81e2-519428d78390","Type":"ContainerStarted","Data":"ffcbc167418c9921968b1d5e2a53827c11fa92679d1ca8e54fbab3dfa3a38b1d"} Jan 27 14:44:26 crc kubenswrapper[4698]: I0127 14:44:26.266874 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-operator-646758c888-72zp5" podStartSLOduration=3.274549488 podStartE2EDuration="6.266857108s" podCreationTimestamp="2026-01-27 14:44:20 +0000 UTC" firstStartedPulling="2026-01-27 14:44:22.937992013 +0000 UTC m=+918.614769478" lastFinishedPulling="2026-01-27 14:44:25.930299633 +0000 UTC m=+921.607077098" observedRunningTime="2026-01-27 14:44:26.265384809 +0000 UTC m=+921.942162314" watchObservedRunningTime="2026-01-27 14:44:26.266857108 +0000 UTC m=+921.943634573" Jan 27 14:44:30 crc kubenswrapper[4698]: I0127 14:44:30.271779 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-metrics-54757c584b-dwhm6"] Jan 27 14:44:30 crc kubenswrapper[4698]: E0127 14:44:30.272243 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c4c9ebc3-a042-4277-a3d4-f4b1f1c6a674" containerName="registry-server" Jan 27 14:44:30 crc kubenswrapper[4698]: I0127 14:44:30.272253 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="c4c9ebc3-a042-4277-a3d4-f4b1f1c6a674" containerName="registry-server" Jan 27 14:44:30 crc kubenswrapper[4698]: E0127 14:44:30.272266 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c4c9ebc3-a042-4277-a3d4-f4b1f1c6a674" containerName="extract-utilities" Jan 27 14:44:30 crc kubenswrapper[4698]: I0127 14:44:30.272272 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="c4c9ebc3-a042-4277-a3d4-f4b1f1c6a674" containerName="extract-utilities" Jan 27 14:44:30 crc kubenswrapper[4698]: E0127 14:44:30.272285 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c4c9ebc3-a042-4277-a3d4-f4b1f1c6a674" containerName="extract-content" Jan 27 14:44:30 crc kubenswrapper[4698]: I0127 14:44:30.272291 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="c4c9ebc3-a042-4277-a3d4-f4b1f1c6a674" containerName="extract-content" Jan 27 14:44:30 crc kubenswrapper[4698]: I0127 14:44:30.272383 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="c4c9ebc3-a042-4277-a3d4-f4b1f1c6a674" containerName="registry-server" Jan 27 14:44:30 crc kubenswrapper[4698]: I0127 14:44:30.272934 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-54757c584b-dwhm6" Jan 27 14:44:30 crc kubenswrapper[4698]: I0127 14:44:30.274584 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-handler-dockercfg-nrmrq" Jan 27 14:44:30 crc kubenswrapper[4698]: I0127 14:44:30.290782 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-54757c584b-dwhm6"] Jan 27 14:44:30 crc kubenswrapper[4698]: I0127 14:44:30.293539 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-webhook-8474b5b9d8-pbvrj"] Jan 27 14:44:30 crc kubenswrapper[4698]: I0127 14:44:30.294428 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-pbvrj" Jan 27 14:44:30 crc kubenswrapper[4698]: I0127 14:44:30.298870 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"openshift-nmstate-webhook" Jan 27 14:44:30 crc kubenswrapper[4698]: I0127 14:44:30.305444 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-handler-xcrf4"] Jan 27 14:44:30 crc kubenswrapper[4698]: I0127 14:44:30.306249 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-xcrf4" Jan 27 14:44:30 crc kubenswrapper[4698]: I0127 14:44:30.317570 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-8474b5b9d8-pbvrj"] Jan 27 14:44:30 crc kubenswrapper[4698]: I0127 14:44:30.318915 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q7nz4\" (UniqueName: \"kubernetes.io/projected/0a0f5be8-0f28-4f33-8a36-7b0712476000-kube-api-access-q7nz4\") pod \"nmstate-metrics-54757c584b-dwhm6\" (UID: \"0a0f5be8-0f28-4f33-8a36-7b0712476000\") " pod="openshift-nmstate/nmstate-metrics-54757c584b-dwhm6" Jan 27 14:44:30 crc kubenswrapper[4698]: I0127 14:44:30.420016 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q7nz4\" (UniqueName: \"kubernetes.io/projected/0a0f5be8-0f28-4f33-8a36-7b0712476000-kube-api-access-q7nz4\") pod \"nmstate-metrics-54757c584b-dwhm6\" (UID: \"0a0f5be8-0f28-4f33-8a36-7b0712476000\") " pod="openshift-nmstate/nmstate-metrics-54757c584b-dwhm6" Jan 27 14:44:30 crc kubenswrapper[4698]: I0127 14:44:30.420076 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/413d3c2f-6ec7-4518-acbd-f811d0d54675-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-pbvrj\" (UID: \"413d3c2f-6ec7-4518-acbd-f811d0d54675\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-pbvrj" Jan 27 14:44:30 crc kubenswrapper[4698]: I0127 14:44:30.420103 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9z4hk\" (UniqueName: \"kubernetes.io/projected/413d3c2f-6ec7-4518-acbd-f811d0d54675-kube-api-access-9z4hk\") pod \"nmstate-webhook-8474b5b9d8-pbvrj\" (UID: \"413d3c2f-6ec7-4518-acbd-f811d0d54675\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-pbvrj" Jan 27 14:44:30 crc kubenswrapper[4698]: I0127 14:44:30.420135 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/3964a93e-63fc-403a-875a-17ca1f14436e-nmstate-lock\") pod \"nmstate-handler-xcrf4\" (UID: \"3964a93e-63fc-403a-875a-17ca1f14436e\") " pod="openshift-nmstate/nmstate-handler-xcrf4" Jan 27 14:44:30 crc kubenswrapper[4698]: I0127 14:44:30.420159 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/3964a93e-63fc-403a-875a-17ca1f14436e-dbus-socket\") pod \"nmstate-handler-xcrf4\" (UID: \"3964a93e-63fc-403a-875a-17ca1f14436e\") " pod="openshift-nmstate/nmstate-handler-xcrf4" Jan 27 14:44:30 crc kubenswrapper[4698]: I0127 14:44:30.420183 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-socket\" (UniqueName: 
\"kubernetes.io/host-path/3964a93e-63fc-403a-875a-17ca1f14436e-ovs-socket\") pod \"nmstate-handler-xcrf4\" (UID: \"3964a93e-63fc-403a-875a-17ca1f14436e\") " pod="openshift-nmstate/nmstate-handler-xcrf4" Jan 27 14:44:30 crc kubenswrapper[4698]: I0127 14:44:30.420201 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x6ppn\" (UniqueName: \"kubernetes.io/projected/3964a93e-63fc-403a-875a-17ca1f14436e-kube-api-access-x6ppn\") pod \"nmstate-handler-xcrf4\" (UID: \"3964a93e-63fc-403a-875a-17ca1f14436e\") " pod="openshift-nmstate/nmstate-handler-xcrf4" Jan 27 14:44:30 crc kubenswrapper[4698]: I0127 14:44:30.450913 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q7nz4\" (UniqueName: \"kubernetes.io/projected/0a0f5be8-0f28-4f33-8a36-7b0712476000-kube-api-access-q7nz4\") pod \"nmstate-metrics-54757c584b-dwhm6\" (UID: \"0a0f5be8-0f28-4f33-8a36-7b0712476000\") " pod="openshift-nmstate/nmstate-metrics-54757c584b-dwhm6" Jan 27 14:44:30 crc kubenswrapper[4698]: I0127 14:44:30.455291 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7754f76f8b-wpr5c"] Jan 27 14:44:30 crc kubenswrapper[4698]: I0127 14:44:30.463588 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-wpr5c" Jan 27 14:44:30 crc kubenswrapper[4698]: I0127 14:44:30.467969 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"nginx-conf" Jan 27 14:44:30 crc kubenswrapper[4698]: I0127 14:44:30.468055 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"default-dockercfg-f944m" Jan 27 14:44:30 crc kubenswrapper[4698]: I0127 14:44:30.476009 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7754f76f8b-wpr5c"] Jan 27 14:44:30 crc kubenswrapper[4698]: I0127 14:44:30.486970 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"plugin-serving-cert" Jan 27 14:44:30 crc kubenswrapper[4698]: I0127 14:44:30.521317 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/3964a93e-63fc-403a-875a-17ca1f14436e-nmstate-lock\") pod \"nmstate-handler-xcrf4\" (UID: \"3964a93e-63fc-403a-875a-17ca1f14436e\") " pod="openshift-nmstate/nmstate-handler-xcrf4" Jan 27 14:44:30 crc kubenswrapper[4698]: I0127 14:44:30.521363 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kkc97\" (UniqueName: \"kubernetes.io/projected/1527e700-f26f-4281-a493-416b4e0ca5f9-kube-api-access-kkc97\") pod \"nmstate-console-plugin-7754f76f8b-wpr5c\" (UID: \"1527e700-f26f-4281-a493-416b4e0ca5f9\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-wpr5c" Jan 27 14:44:30 crc kubenswrapper[4698]: I0127 14:44:30.521389 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/3964a93e-63fc-403a-875a-17ca1f14436e-dbus-socket\") pod \"nmstate-handler-xcrf4\" (UID: \"3964a93e-63fc-403a-875a-17ca1f14436e\") " pod="openshift-nmstate/nmstate-handler-xcrf4" Jan 27 14:44:30 crc kubenswrapper[4698]: I0127 14:44:30.521412 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugin-serving-cert\" (UniqueName: 
\"kubernetes.io/secret/1527e700-f26f-4281-a493-416b4e0ca5f9-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-wpr5c\" (UID: \"1527e700-f26f-4281-a493-416b4e0ca5f9\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-wpr5c" Jan 27 14:44:30 crc kubenswrapper[4698]: I0127 14:44:30.521432 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/3964a93e-63fc-403a-875a-17ca1f14436e-ovs-socket\") pod \"nmstate-handler-xcrf4\" (UID: \"3964a93e-63fc-403a-875a-17ca1f14436e\") " pod="openshift-nmstate/nmstate-handler-xcrf4" Jan 27 14:44:30 crc kubenswrapper[4698]: I0127 14:44:30.521448 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x6ppn\" (UniqueName: \"kubernetes.io/projected/3964a93e-63fc-403a-875a-17ca1f14436e-kube-api-access-x6ppn\") pod \"nmstate-handler-xcrf4\" (UID: \"3964a93e-63fc-403a-875a-17ca1f14436e\") " pod="openshift-nmstate/nmstate-handler-xcrf4" Jan 27 14:44:30 crc kubenswrapper[4698]: I0127 14:44:30.521465 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/3964a93e-63fc-403a-875a-17ca1f14436e-nmstate-lock\") pod \"nmstate-handler-xcrf4\" (UID: \"3964a93e-63fc-403a-875a-17ca1f14436e\") " pod="openshift-nmstate/nmstate-handler-xcrf4" Jan 27 14:44:30 crc kubenswrapper[4698]: I0127 14:44:30.521482 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/1527e700-f26f-4281-a493-416b4e0ca5f9-nginx-conf\") pod \"nmstate-console-plugin-7754f76f8b-wpr5c\" (UID: \"1527e700-f26f-4281-a493-416b4e0ca5f9\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-wpr5c" Jan 27 14:44:30 crc kubenswrapper[4698]: I0127 14:44:30.521613 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/3964a93e-63fc-403a-875a-17ca1f14436e-ovs-socket\") pod \"nmstate-handler-xcrf4\" (UID: \"3964a93e-63fc-403a-875a-17ca1f14436e\") " pod="openshift-nmstate/nmstate-handler-xcrf4" Jan 27 14:44:30 crc kubenswrapper[4698]: I0127 14:44:30.521731 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/3964a93e-63fc-403a-875a-17ca1f14436e-dbus-socket\") pod \"nmstate-handler-xcrf4\" (UID: \"3964a93e-63fc-403a-875a-17ca1f14436e\") " pod="openshift-nmstate/nmstate-handler-xcrf4" Jan 27 14:44:30 crc kubenswrapper[4698]: I0127 14:44:30.521805 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/413d3c2f-6ec7-4518-acbd-f811d0d54675-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-pbvrj\" (UID: \"413d3c2f-6ec7-4518-acbd-f811d0d54675\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-pbvrj" Jan 27 14:44:30 crc kubenswrapper[4698]: I0127 14:44:30.521854 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9z4hk\" (UniqueName: \"kubernetes.io/projected/413d3c2f-6ec7-4518-acbd-f811d0d54675-kube-api-access-9z4hk\") pod \"nmstate-webhook-8474b5b9d8-pbvrj\" (UID: \"413d3c2f-6ec7-4518-acbd-f811d0d54675\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-pbvrj" Jan 27 14:44:30 crc kubenswrapper[4698]: E0127 14:44:30.521911 4698 secret.go:188] Couldn't get secret openshift-nmstate/openshift-nmstate-webhook: secret "openshift-nmstate-webhook" 
not found Jan 27 14:44:30 crc kubenswrapper[4698]: E0127 14:44:30.521968 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/413d3c2f-6ec7-4518-acbd-f811d0d54675-tls-key-pair podName:413d3c2f-6ec7-4518-acbd-f811d0d54675 nodeName:}" failed. No retries permitted until 2026-01-27 14:44:31.021947135 +0000 UTC m=+926.698724600 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "tls-key-pair" (UniqueName: "kubernetes.io/secret/413d3c2f-6ec7-4518-acbd-f811d0d54675-tls-key-pair") pod "nmstate-webhook-8474b5b9d8-pbvrj" (UID: "413d3c2f-6ec7-4518-acbd-f811d0d54675") : secret "openshift-nmstate-webhook" not found Jan 27 14:44:30 crc kubenswrapper[4698]: I0127 14:44:30.541604 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9z4hk\" (UniqueName: \"kubernetes.io/projected/413d3c2f-6ec7-4518-acbd-f811d0d54675-kube-api-access-9z4hk\") pod \"nmstate-webhook-8474b5b9d8-pbvrj\" (UID: \"413d3c2f-6ec7-4518-acbd-f811d0d54675\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-pbvrj" Jan 27 14:44:30 crc kubenswrapper[4698]: I0127 14:44:30.541765 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x6ppn\" (UniqueName: \"kubernetes.io/projected/3964a93e-63fc-403a-875a-17ca1f14436e-kube-api-access-x6ppn\") pod \"nmstate-handler-xcrf4\" (UID: \"3964a93e-63fc-403a-875a-17ca1f14436e\") " pod="openshift-nmstate/nmstate-handler-xcrf4" Jan 27 14:44:30 crc kubenswrapper[4698]: I0127 14:44:30.616787 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-f576cdc9b-chrq8"] Jan 27 14:44:30 crc kubenswrapper[4698]: I0127 14:44:30.617529 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f576cdc9b-chrq8" Jan 27 14:44:30 crc kubenswrapper[4698]: I0127 14:44:30.623876 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kkc97\" (UniqueName: \"kubernetes.io/projected/1527e700-f26f-4281-a493-416b4e0ca5f9-kube-api-access-kkc97\") pod \"nmstate-console-plugin-7754f76f8b-wpr5c\" (UID: \"1527e700-f26f-4281-a493-416b4e0ca5f9\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-wpr5c" Jan 27 14:44:30 crc kubenswrapper[4698]: I0127 14:44:30.623923 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/1527e700-f26f-4281-a493-416b4e0ca5f9-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-wpr5c\" (UID: \"1527e700-f26f-4281-a493-416b4e0ca5f9\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-wpr5c" Jan 27 14:44:30 crc kubenswrapper[4698]: I0127 14:44:30.623975 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/1527e700-f26f-4281-a493-416b4e0ca5f9-nginx-conf\") pod \"nmstate-console-plugin-7754f76f8b-wpr5c\" (UID: \"1527e700-f26f-4281-a493-416b4e0ca5f9\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-wpr5c" Jan 27 14:44:30 crc kubenswrapper[4698]: I0127 14:44:30.625112 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/1527e700-f26f-4281-a493-416b4e0ca5f9-nginx-conf\") pod \"nmstate-console-plugin-7754f76f8b-wpr5c\" (UID: \"1527e700-f26f-4281-a493-416b4e0ca5f9\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-wpr5c" Jan 27 14:44:30 crc kubenswrapper[4698]: I0127 14:44:30.629494 4698 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-54757c584b-dwhm6" Jan 27 14:44:30 crc kubenswrapper[4698]: I0127 14:44:30.631220 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/1527e700-f26f-4281-a493-416b4e0ca5f9-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-wpr5c\" (UID: \"1527e700-f26f-4281-a493-416b4e0ca5f9\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-wpr5c" Jan 27 14:44:30 crc kubenswrapper[4698]: I0127 14:44:30.637200 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f576cdc9b-chrq8"] Jan 27 14:44:30 crc kubenswrapper[4698]: I0127 14:44:30.652533 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kkc97\" (UniqueName: \"kubernetes.io/projected/1527e700-f26f-4281-a493-416b4e0ca5f9-kube-api-access-kkc97\") pod \"nmstate-console-plugin-7754f76f8b-wpr5c\" (UID: \"1527e700-f26f-4281-a493-416b4e0ca5f9\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-wpr5c" Jan 27 14:44:30 crc kubenswrapper[4698]: I0127 14:44:30.658996 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-xcrf4" Jan 27 14:44:30 crc kubenswrapper[4698]: I0127 14:44:30.725964 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/f1f1a88d-58b2-4ac1-98d9-1b9409dc28d0-console-serving-cert\") pod \"console-f576cdc9b-chrq8\" (UID: \"f1f1a88d-58b2-4ac1-98d9-1b9409dc28d0\") " pod="openshift-console/console-f576cdc9b-chrq8" Jan 27 14:44:30 crc kubenswrapper[4698]: I0127 14:44:30.726218 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r7jz5\" (UniqueName: \"kubernetes.io/projected/f1f1a88d-58b2-4ac1-98d9-1b9409dc28d0-kube-api-access-r7jz5\") pod \"console-f576cdc9b-chrq8\" (UID: \"f1f1a88d-58b2-4ac1-98d9-1b9409dc28d0\") " pod="openshift-console/console-f576cdc9b-chrq8" Jan 27 14:44:30 crc kubenswrapper[4698]: I0127 14:44:30.726252 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/f1f1a88d-58b2-4ac1-98d9-1b9409dc28d0-console-oauth-config\") pod \"console-f576cdc9b-chrq8\" (UID: \"f1f1a88d-58b2-4ac1-98d9-1b9409dc28d0\") " pod="openshift-console/console-f576cdc9b-chrq8" Jan 27 14:44:30 crc kubenswrapper[4698]: I0127 14:44:30.726268 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/f1f1a88d-58b2-4ac1-98d9-1b9409dc28d0-console-config\") pod \"console-f576cdc9b-chrq8\" (UID: \"f1f1a88d-58b2-4ac1-98d9-1b9409dc28d0\") " pod="openshift-console/console-f576cdc9b-chrq8" Jan 27 14:44:30 crc kubenswrapper[4698]: I0127 14:44:30.726285 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/f1f1a88d-58b2-4ac1-98d9-1b9409dc28d0-service-ca\") pod \"console-f576cdc9b-chrq8\" (UID: \"f1f1a88d-58b2-4ac1-98d9-1b9409dc28d0\") " pod="openshift-console/console-f576cdc9b-chrq8" Jan 27 14:44:30 crc kubenswrapper[4698]: I0127 14:44:30.726506 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f1f1a88d-58b2-4ac1-98d9-1b9409dc28d0-trusted-ca-bundle\") pod \"console-f576cdc9b-chrq8\" (UID: \"f1f1a88d-58b2-4ac1-98d9-1b9409dc28d0\") " pod="openshift-console/console-f576cdc9b-chrq8" Jan 27 14:44:30 crc kubenswrapper[4698]: I0127 14:44:30.726610 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/f1f1a88d-58b2-4ac1-98d9-1b9409dc28d0-oauth-serving-cert\") pod \"console-f576cdc9b-chrq8\" (UID: \"f1f1a88d-58b2-4ac1-98d9-1b9409dc28d0\") " pod="openshift-console/console-f576cdc9b-chrq8" Jan 27 14:44:30 crc kubenswrapper[4698]: I0127 14:44:30.798837 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-wpr5c" Jan 27 14:44:30 crc kubenswrapper[4698]: I0127 14:44:30.827700 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/f1f1a88d-58b2-4ac1-98d9-1b9409dc28d0-console-serving-cert\") pod \"console-f576cdc9b-chrq8\" (UID: \"f1f1a88d-58b2-4ac1-98d9-1b9409dc28d0\") " pod="openshift-console/console-f576cdc9b-chrq8" Jan 27 14:44:30 crc kubenswrapper[4698]: I0127 14:44:30.827786 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r7jz5\" (UniqueName: \"kubernetes.io/projected/f1f1a88d-58b2-4ac1-98d9-1b9409dc28d0-kube-api-access-r7jz5\") pod \"console-f576cdc9b-chrq8\" (UID: \"f1f1a88d-58b2-4ac1-98d9-1b9409dc28d0\") " pod="openshift-console/console-f576cdc9b-chrq8" Jan 27 14:44:30 crc kubenswrapper[4698]: I0127 14:44:30.827872 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/f1f1a88d-58b2-4ac1-98d9-1b9409dc28d0-console-oauth-config\") pod \"console-f576cdc9b-chrq8\" (UID: \"f1f1a88d-58b2-4ac1-98d9-1b9409dc28d0\") " pod="openshift-console/console-f576cdc9b-chrq8" Jan 27 14:44:30 crc kubenswrapper[4698]: I0127 14:44:30.828817 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/f1f1a88d-58b2-4ac1-98d9-1b9409dc28d0-console-config\") pod \"console-f576cdc9b-chrq8\" (UID: \"f1f1a88d-58b2-4ac1-98d9-1b9409dc28d0\") " pod="openshift-console/console-f576cdc9b-chrq8" Jan 27 14:44:30 crc kubenswrapper[4698]: I0127 14:44:30.829795 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/f1f1a88d-58b2-4ac1-98d9-1b9409dc28d0-service-ca\") pod \"console-f576cdc9b-chrq8\" (UID: \"f1f1a88d-58b2-4ac1-98d9-1b9409dc28d0\") " pod="openshift-console/console-f576cdc9b-chrq8" Jan 27 14:44:30 crc kubenswrapper[4698]: I0127 14:44:30.830006 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/f1f1a88d-58b2-4ac1-98d9-1b9409dc28d0-console-config\") pod \"console-f576cdc9b-chrq8\" (UID: \"f1f1a88d-58b2-4ac1-98d9-1b9409dc28d0\") " pod="openshift-console/console-f576cdc9b-chrq8" Jan 27 14:44:30 crc kubenswrapper[4698]: I0127 14:44:30.828855 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/f1f1a88d-58b2-4ac1-98d9-1b9409dc28d0-service-ca\") pod \"console-f576cdc9b-chrq8\" (UID: \"f1f1a88d-58b2-4ac1-98d9-1b9409dc28d0\") " 
pod="openshift-console/console-f576cdc9b-chrq8" Jan 27 14:44:30 crc kubenswrapper[4698]: I0127 14:44:30.830126 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f1f1a88d-58b2-4ac1-98d9-1b9409dc28d0-trusted-ca-bundle\") pod \"console-f576cdc9b-chrq8\" (UID: \"f1f1a88d-58b2-4ac1-98d9-1b9409dc28d0\") " pod="openshift-console/console-f576cdc9b-chrq8" Jan 27 14:44:30 crc kubenswrapper[4698]: I0127 14:44:30.830973 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/f1f1a88d-58b2-4ac1-98d9-1b9409dc28d0-oauth-serving-cert\") pod \"console-f576cdc9b-chrq8\" (UID: \"f1f1a88d-58b2-4ac1-98d9-1b9409dc28d0\") " pod="openshift-console/console-f576cdc9b-chrq8" Jan 27 14:44:30 crc kubenswrapper[4698]: I0127 14:44:30.831708 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f1f1a88d-58b2-4ac1-98d9-1b9409dc28d0-trusted-ca-bundle\") pod \"console-f576cdc9b-chrq8\" (UID: \"f1f1a88d-58b2-4ac1-98d9-1b9409dc28d0\") " pod="openshift-console/console-f576cdc9b-chrq8" Jan 27 14:44:30 crc kubenswrapper[4698]: I0127 14:44:30.832099 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/f1f1a88d-58b2-4ac1-98d9-1b9409dc28d0-oauth-serving-cert\") pod \"console-f576cdc9b-chrq8\" (UID: \"f1f1a88d-58b2-4ac1-98d9-1b9409dc28d0\") " pod="openshift-console/console-f576cdc9b-chrq8" Jan 27 14:44:30 crc kubenswrapper[4698]: I0127 14:44:30.836447 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/f1f1a88d-58b2-4ac1-98d9-1b9409dc28d0-console-oauth-config\") pod \"console-f576cdc9b-chrq8\" (UID: \"f1f1a88d-58b2-4ac1-98d9-1b9409dc28d0\") " pod="openshift-console/console-f576cdc9b-chrq8" Jan 27 14:44:30 crc kubenswrapper[4698]: I0127 14:44:30.836925 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/f1f1a88d-58b2-4ac1-98d9-1b9409dc28d0-console-serving-cert\") pod \"console-f576cdc9b-chrq8\" (UID: \"f1f1a88d-58b2-4ac1-98d9-1b9409dc28d0\") " pod="openshift-console/console-f576cdc9b-chrq8" Jan 27 14:44:30 crc kubenswrapper[4698]: I0127 14:44:30.839016 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-54757c584b-dwhm6"] Jan 27 14:44:30 crc kubenswrapper[4698]: I0127 14:44:30.847010 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r7jz5\" (UniqueName: \"kubernetes.io/projected/f1f1a88d-58b2-4ac1-98d9-1b9409dc28d0-kube-api-access-r7jz5\") pod \"console-f576cdc9b-chrq8\" (UID: \"f1f1a88d-58b2-4ac1-98d9-1b9409dc28d0\") " pod="openshift-console/console-f576cdc9b-chrq8" Jan 27 14:44:30 crc kubenswrapper[4698]: I0127 14:44:30.967982 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7754f76f8b-wpr5c"] Jan 27 14:44:30 crc kubenswrapper[4698]: W0127 14:44:30.971709 4698 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1527e700_f26f_4281_a493_416b4e0ca5f9.slice/crio-17424cabaf2ebe6aaf32e68a09645f51f5c33bf8356e7fd62a3354e6b52a9978 WatchSource:0}: Error finding container 17424cabaf2ebe6aaf32e68a09645f51f5c33bf8356e7fd62a3354e6b52a9978: 
Status 404 returned error can't find the container with id 17424cabaf2ebe6aaf32e68a09645f51f5c33bf8356e7fd62a3354e6b52a9978 Jan 27 14:44:30 crc kubenswrapper[4698]: I0127 14:44:30.977427 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f576cdc9b-chrq8" Jan 27 14:44:31 crc kubenswrapper[4698]: I0127 14:44:31.038391 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/413d3c2f-6ec7-4518-acbd-f811d0d54675-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-pbvrj\" (UID: \"413d3c2f-6ec7-4518-acbd-f811d0d54675\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-pbvrj" Jan 27 14:44:31 crc kubenswrapper[4698]: I0127 14:44:31.041762 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/413d3c2f-6ec7-4518-acbd-f811d0d54675-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-pbvrj\" (UID: \"413d3c2f-6ec7-4518-acbd-f811d0d54675\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-pbvrj" Jan 27 14:44:31 crc kubenswrapper[4698]: I0127 14:44:31.152980 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f576cdc9b-chrq8"] Jan 27 14:44:31 crc kubenswrapper[4698]: W0127 14:44:31.157189 4698 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf1f1a88d_58b2_4ac1_98d9_1b9409dc28d0.slice/crio-ac0d4d0fe83249875fb15c663c8d8d2803622af3b3362b871d58828a4fbc2024 WatchSource:0}: Error finding container ac0d4d0fe83249875fb15c663c8d8d2803622af3b3362b871d58828a4fbc2024: Status 404 returned error can't find the container with id ac0d4d0fe83249875fb15c663c8d8d2803622af3b3362b871d58828a4fbc2024 Jan 27 14:44:31 crc kubenswrapper[4698]: I0127 14:44:31.268063 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-pbvrj" Jan 27 14:44:31 crc kubenswrapper[4698]: I0127 14:44:31.278009 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f576cdc9b-chrq8" event={"ID":"f1f1a88d-58b2-4ac1-98d9-1b9409dc28d0","Type":"ContainerStarted","Data":"e6e9e009e7a0db63a4b4c424e473a84bce654a81188192345b7fb6a9078b896a"} Jan 27 14:44:31 crc kubenswrapper[4698]: I0127 14:44:31.278054 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f576cdc9b-chrq8" event={"ID":"f1f1a88d-58b2-4ac1-98d9-1b9409dc28d0","Type":"ContainerStarted","Data":"ac0d4d0fe83249875fb15c663c8d8d2803622af3b3362b871d58828a4fbc2024"} Jan 27 14:44:31 crc kubenswrapper[4698]: I0127 14:44:31.280904 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-54757c584b-dwhm6" event={"ID":"0a0f5be8-0f28-4f33-8a36-7b0712476000","Type":"ContainerStarted","Data":"00dec96a93658534e87d9a68c32e7c5b02804f2e062ee0fdce7457b0b2c5cf14"} Jan 27 14:44:31 crc kubenswrapper[4698]: I0127 14:44:31.285801 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-wpr5c" event={"ID":"1527e700-f26f-4281-a493-416b4e0ca5f9","Type":"ContainerStarted","Data":"17424cabaf2ebe6aaf32e68a09645f51f5c33bf8356e7fd62a3354e6b52a9978"} Jan 27 14:44:31 crc kubenswrapper[4698]: I0127 14:44:31.285832 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-xcrf4" event={"ID":"3964a93e-63fc-403a-875a-17ca1f14436e","Type":"ContainerStarted","Data":"df77b113dc4e40b247a0d0d9b66b0a3a097581f9f932d0ffd961f8ba28459ac9"} Jan 27 14:44:31 crc kubenswrapper[4698]: I0127 14:44:31.294301 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-f576cdc9b-chrq8" podStartSLOduration=1.294284043 podStartE2EDuration="1.294284043s" podCreationTimestamp="2026-01-27 14:44:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:44:31.291951012 +0000 UTC m=+926.968728477" watchObservedRunningTime="2026-01-27 14:44:31.294284043 +0000 UTC m=+926.971061508" Jan 27 14:44:31 crc kubenswrapper[4698]: I0127 14:44:31.502797 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-8474b5b9d8-pbvrj"] Jan 27 14:44:32 crc kubenswrapper[4698]: I0127 14:44:32.288968 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-pbvrj" event={"ID":"413d3c2f-6ec7-4518-acbd-f811d0d54675","Type":"ContainerStarted","Data":"b0edc242b0a988d6870908e3c1d181d6dce7fb8c2af13bad3ee9316d26947090"} Jan 27 14:44:35 crc kubenswrapper[4698]: I0127 14:44:35.310760 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-wpr5c" event={"ID":"1527e700-f26f-4281-a493-416b4e0ca5f9","Type":"ContainerStarted","Data":"b1992872a5272f753fdf630315bfe38b5c55b26d683f2c13a4d95107abc7027d"} Jan 27 14:44:35 crc kubenswrapper[4698]: I0127 14:44:35.313157 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-xcrf4" event={"ID":"3964a93e-63fc-403a-875a-17ca1f14436e","Type":"ContainerStarted","Data":"d448b9a3b15a462582da010b31ac6c84d9e8bb4e35f1340767471d7e66b457b8"} Jan 27 14:44:35 crc kubenswrapper[4698]: I0127 14:44:35.313267 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-nmstate/nmstate-handler-xcrf4" Jan 27 14:44:35 crc kubenswrapper[4698]: I0127 14:44:35.314570 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-54757c584b-dwhm6" event={"ID":"0a0f5be8-0f28-4f33-8a36-7b0712476000","Type":"ContainerStarted","Data":"02ec3c43895da20e192159453c6e3642a11b858c36d90230b8c8a9bf38d60ce1"} Jan 27 14:44:35 crc kubenswrapper[4698]: I0127 14:44:35.316858 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-pbvrj" event={"ID":"413d3c2f-6ec7-4518-acbd-f811d0d54675","Type":"ContainerStarted","Data":"12e7c3f6454c42b8c0f62f1a083511e55b9ad568dc8b52fd660f82043642ebbe"} Jan 27 14:44:35 crc kubenswrapper[4698]: I0127 14:44:35.317233 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-pbvrj" Jan 27 14:44:35 crc kubenswrapper[4698]: I0127 14:44:35.333491 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-wpr5c" podStartSLOduration=1.996136118 podStartE2EDuration="5.333470355s" podCreationTimestamp="2026-01-27 14:44:30 +0000 UTC" firstStartedPulling="2026-01-27 14:44:30.973575745 +0000 UTC m=+926.650353210" lastFinishedPulling="2026-01-27 14:44:34.310909982 +0000 UTC m=+929.987687447" observedRunningTime="2026-01-27 14:44:35.329480111 +0000 UTC m=+931.006257586" watchObservedRunningTime="2026-01-27 14:44:35.333470355 +0000 UTC m=+931.010247820" Jan 27 14:44:35 crc kubenswrapper[4698]: I0127 14:44:35.351302 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-pbvrj" podStartSLOduration=2.532766681 podStartE2EDuration="5.351281223s" podCreationTimestamp="2026-01-27 14:44:30 +0000 UTC" firstStartedPulling="2026-01-27 14:44:31.51028011 +0000 UTC m=+927.187057575" lastFinishedPulling="2026-01-27 14:44:34.328794652 +0000 UTC m=+930.005572117" observedRunningTime="2026-01-27 14:44:35.344996308 +0000 UTC m=+931.021773793" watchObservedRunningTime="2026-01-27 14:44:35.351281223 +0000 UTC m=+931.028058698" Jan 27 14:44:35 crc kubenswrapper[4698]: I0127 14:44:35.365279 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-handler-xcrf4" podStartSLOduration=1.751458647 podStartE2EDuration="5.36525732s" podCreationTimestamp="2026-01-27 14:44:30 +0000 UTC" firstStartedPulling="2026-01-27 14:44:30.703514057 +0000 UTC m=+926.380291522" lastFinishedPulling="2026-01-27 14:44:34.31731273 +0000 UTC m=+929.994090195" observedRunningTime="2026-01-27 14:44:35.361771309 +0000 UTC m=+931.038548774" watchObservedRunningTime="2026-01-27 14:44:35.36525732 +0000 UTC m=+931.042034785" Jan 27 14:44:38 crc kubenswrapper[4698]: I0127 14:44:38.338200 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-54757c584b-dwhm6" event={"ID":"0a0f5be8-0f28-4f33-8a36-7b0712476000","Type":"ContainerStarted","Data":"3003e9f4ad73e5f8d7b8dd7772d89036f7903d65453a66c65db969aaccbe8822"} Jan 27 14:44:38 crc kubenswrapper[4698]: I0127 14:44:38.355063 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-metrics-54757c584b-dwhm6" podStartSLOduration=1.747657388 podStartE2EDuration="8.355040374s" podCreationTimestamp="2026-01-27 14:44:30 +0000 UTC" firstStartedPulling="2026-01-27 14:44:30.846887095 +0000 UTC m=+926.523664560" lastFinishedPulling="2026-01-27 14:44:37.454270081 
+0000 UTC m=+933.131047546" observedRunningTime="2026-01-27 14:44:38.352380744 +0000 UTC m=+934.029158219" watchObservedRunningTime="2026-01-27 14:44:38.355040374 +0000 UTC m=+934.031817839" Jan 27 14:44:40 crc kubenswrapper[4698]: I0127 14:44:40.681157 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-handler-xcrf4" Jan 27 14:44:40 crc kubenswrapper[4698]: I0127 14:44:40.978068 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-f576cdc9b-chrq8" Jan 27 14:44:40 crc kubenswrapper[4698]: I0127 14:44:40.978129 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-f576cdc9b-chrq8" Jan 27 14:44:40 crc kubenswrapper[4698]: I0127 14:44:40.984039 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-f576cdc9b-chrq8" Jan 27 14:44:41 crc kubenswrapper[4698]: I0127 14:44:41.357471 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-f576cdc9b-chrq8" Jan 27 14:44:41 crc kubenswrapper[4698]: I0127 14:44:41.420451 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-cvnrn"] Jan 27 14:44:41 crc kubenswrapper[4698]: I0127 14:44:41.848965 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-6mq7h"] Jan 27 14:44:41 crc kubenswrapper[4698]: I0127 14:44:41.850159 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6mq7h" Jan 27 14:44:41 crc kubenswrapper[4698]: I0127 14:44:41.861376 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-6mq7h"] Jan 27 14:44:41 crc kubenswrapper[4698]: I0127 14:44:41.998334 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/52dd287f-004a-4d6e-b2ec-af25462d967c-catalog-content\") pod \"redhat-marketplace-6mq7h\" (UID: \"52dd287f-004a-4d6e-b2ec-af25462d967c\") " pod="openshift-marketplace/redhat-marketplace-6mq7h" Jan 27 14:44:41 crc kubenswrapper[4698]: I0127 14:44:41.998568 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/52dd287f-004a-4d6e-b2ec-af25462d967c-utilities\") pod \"redhat-marketplace-6mq7h\" (UID: \"52dd287f-004a-4d6e-b2ec-af25462d967c\") " pod="openshift-marketplace/redhat-marketplace-6mq7h" Jan 27 14:44:41 crc kubenswrapper[4698]: I0127 14:44:41.998728 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rhxls\" (UniqueName: \"kubernetes.io/projected/52dd287f-004a-4d6e-b2ec-af25462d967c-kube-api-access-rhxls\") pod \"redhat-marketplace-6mq7h\" (UID: \"52dd287f-004a-4d6e-b2ec-af25462d967c\") " pod="openshift-marketplace/redhat-marketplace-6mq7h" Jan 27 14:44:42 crc kubenswrapper[4698]: I0127 14:44:42.100465 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rhxls\" (UniqueName: \"kubernetes.io/projected/52dd287f-004a-4d6e-b2ec-af25462d967c-kube-api-access-rhxls\") pod \"redhat-marketplace-6mq7h\" (UID: \"52dd287f-004a-4d6e-b2ec-af25462d967c\") " pod="openshift-marketplace/redhat-marketplace-6mq7h" Jan 27 14:44:42 crc kubenswrapper[4698]: I0127 14:44:42.100535 4698 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/52dd287f-004a-4d6e-b2ec-af25462d967c-catalog-content\") pod \"redhat-marketplace-6mq7h\" (UID: \"52dd287f-004a-4d6e-b2ec-af25462d967c\") " pod="openshift-marketplace/redhat-marketplace-6mq7h" Jan 27 14:44:42 crc kubenswrapper[4698]: I0127 14:44:42.100694 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/52dd287f-004a-4d6e-b2ec-af25462d967c-utilities\") pod \"redhat-marketplace-6mq7h\" (UID: \"52dd287f-004a-4d6e-b2ec-af25462d967c\") " pod="openshift-marketplace/redhat-marketplace-6mq7h" Jan 27 14:44:42 crc kubenswrapper[4698]: I0127 14:44:42.101207 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/52dd287f-004a-4d6e-b2ec-af25462d967c-utilities\") pod \"redhat-marketplace-6mq7h\" (UID: \"52dd287f-004a-4d6e-b2ec-af25462d967c\") " pod="openshift-marketplace/redhat-marketplace-6mq7h" Jan 27 14:44:42 crc kubenswrapper[4698]: I0127 14:44:42.101590 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/52dd287f-004a-4d6e-b2ec-af25462d967c-catalog-content\") pod \"redhat-marketplace-6mq7h\" (UID: \"52dd287f-004a-4d6e-b2ec-af25462d967c\") " pod="openshift-marketplace/redhat-marketplace-6mq7h" Jan 27 14:44:42 crc kubenswrapper[4698]: I0127 14:44:42.119339 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rhxls\" (UniqueName: \"kubernetes.io/projected/52dd287f-004a-4d6e-b2ec-af25462d967c-kube-api-access-rhxls\") pod \"redhat-marketplace-6mq7h\" (UID: \"52dd287f-004a-4d6e-b2ec-af25462d967c\") " pod="openshift-marketplace/redhat-marketplace-6mq7h" Jan 27 14:44:42 crc kubenswrapper[4698]: I0127 14:44:42.177248 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6mq7h" Jan 27 14:44:42 crc kubenswrapper[4698]: I0127 14:44:42.405375 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-6mq7h"] Jan 27 14:44:42 crc kubenswrapper[4698]: W0127 14:44:42.411001 4698 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod52dd287f_004a_4d6e_b2ec_af25462d967c.slice/crio-13d5f9df97b12220ad7845378b10fb722e94cf6c811e0813c5e68b5b53bfa1a0 WatchSource:0}: Error finding container 13d5f9df97b12220ad7845378b10fb722e94cf6c811e0813c5e68b5b53bfa1a0: Status 404 returned error can't find the container with id 13d5f9df97b12220ad7845378b10fb722e94cf6c811e0813c5e68b5b53bfa1a0 Jan 27 14:44:43 crc kubenswrapper[4698]: I0127 14:44:43.368438 4698 generic.go:334] "Generic (PLEG): container finished" podID="52dd287f-004a-4d6e-b2ec-af25462d967c" containerID="0d8693a8f2a46242740742663b4d2be7f759be0d39f3162407dc25b90c8c57f4" exitCode=0 Jan 27 14:44:43 crc kubenswrapper[4698]: I0127 14:44:43.368500 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6mq7h" event={"ID":"52dd287f-004a-4d6e-b2ec-af25462d967c","Type":"ContainerDied","Data":"0d8693a8f2a46242740742663b4d2be7f759be0d39f3162407dc25b90c8c57f4"} Jan 27 14:44:43 crc kubenswrapper[4698]: I0127 14:44:43.368775 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6mq7h" event={"ID":"52dd287f-004a-4d6e-b2ec-af25462d967c","Type":"ContainerStarted","Data":"13d5f9df97b12220ad7845378b10fb722e94cf6c811e0813c5e68b5b53bfa1a0"} Jan 27 14:44:44 crc kubenswrapper[4698]: I0127 14:44:44.377511 4698 generic.go:334] "Generic (PLEG): container finished" podID="52dd287f-004a-4d6e-b2ec-af25462d967c" containerID="aaeffa0a08707af3bf212db0eb1aaccc2bb0570fde52d08a788448f0e9309383" exitCode=0 Jan 27 14:44:44 crc kubenswrapper[4698]: I0127 14:44:44.377731 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6mq7h" event={"ID":"52dd287f-004a-4d6e-b2ec-af25462d967c","Type":"ContainerDied","Data":"aaeffa0a08707af3bf212db0eb1aaccc2bb0570fde52d08a788448f0e9309383"} Jan 27 14:44:45 crc kubenswrapper[4698]: I0127 14:44:45.384035 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6mq7h" event={"ID":"52dd287f-004a-4d6e-b2ec-af25462d967c","Type":"ContainerStarted","Data":"2bc230d21b88c92b6f7ec35255f93f868034c923045fc95af4d4ec3d2f030e91"} Jan 27 14:44:45 crc kubenswrapper[4698]: I0127 14:44:45.403792 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-6mq7h" podStartSLOduration=2.758935243 podStartE2EDuration="4.40377366s" podCreationTimestamp="2026-01-27 14:44:41 +0000 UTC" firstStartedPulling="2026-01-27 14:44:43.370201146 +0000 UTC m=+939.046978611" lastFinishedPulling="2026-01-27 14:44:45.015039563 +0000 UTC m=+940.691817028" observedRunningTime="2026-01-27 14:44:45.399069776 +0000 UTC m=+941.075847241" watchObservedRunningTime="2026-01-27 14:44:45.40377366 +0000 UTC m=+941.080551125" Jan 27 14:44:51 crc kubenswrapper[4698]: I0127 14:44:51.272933 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-pbvrj" Jan 27 14:44:52 crc kubenswrapper[4698]: I0127 14:44:52.178285 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openshift-marketplace/redhat-marketplace-6mq7h" Jan 27 14:44:52 crc kubenswrapper[4698]: I0127 14:44:52.178900 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-6mq7h" Jan 27 14:44:52 crc kubenswrapper[4698]: I0127 14:44:52.219321 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-6mq7h" Jan 27 14:44:52 crc kubenswrapper[4698]: I0127 14:44:52.466626 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-6mq7h" Jan 27 14:44:54 crc kubenswrapper[4698]: I0127 14:44:54.638125 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-6mq7h"] Jan 27 14:44:54 crc kubenswrapper[4698]: I0127 14:44:54.638407 4698 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-6mq7h" podUID="52dd287f-004a-4d6e-b2ec-af25462d967c" containerName="registry-server" containerID="cri-o://2bc230d21b88c92b6f7ec35255f93f868034c923045fc95af4d4ec3d2f030e91" gracePeriod=2 Jan 27 14:44:55 crc kubenswrapper[4698]: I0127 14:44:55.442335 4698 generic.go:334] "Generic (PLEG): container finished" podID="52dd287f-004a-4d6e-b2ec-af25462d967c" containerID="2bc230d21b88c92b6f7ec35255f93f868034c923045fc95af4d4ec3d2f030e91" exitCode=0 Jan 27 14:44:55 crc kubenswrapper[4698]: I0127 14:44:55.442417 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6mq7h" event={"ID":"52dd287f-004a-4d6e-b2ec-af25462d967c","Type":"ContainerDied","Data":"2bc230d21b88c92b6f7ec35255f93f868034c923045fc95af4d4ec3d2f030e91"} Jan 27 14:44:55 crc kubenswrapper[4698]: I0127 14:44:55.604466 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6mq7h" Jan 27 14:44:55 crc kubenswrapper[4698]: I0127 14:44:55.688624 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/52dd287f-004a-4d6e-b2ec-af25462d967c-utilities\") pod \"52dd287f-004a-4d6e-b2ec-af25462d967c\" (UID: \"52dd287f-004a-4d6e-b2ec-af25462d967c\") " Jan 27 14:44:55 crc kubenswrapper[4698]: I0127 14:44:55.688714 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/52dd287f-004a-4d6e-b2ec-af25462d967c-catalog-content\") pod \"52dd287f-004a-4d6e-b2ec-af25462d967c\" (UID: \"52dd287f-004a-4d6e-b2ec-af25462d967c\") " Jan 27 14:44:55 crc kubenswrapper[4698]: I0127 14:44:55.688765 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rhxls\" (UniqueName: \"kubernetes.io/projected/52dd287f-004a-4d6e-b2ec-af25462d967c-kube-api-access-rhxls\") pod \"52dd287f-004a-4d6e-b2ec-af25462d967c\" (UID: \"52dd287f-004a-4d6e-b2ec-af25462d967c\") " Jan 27 14:44:55 crc kubenswrapper[4698]: I0127 14:44:55.690374 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/52dd287f-004a-4d6e-b2ec-af25462d967c-utilities" (OuterVolumeSpecName: "utilities") pod "52dd287f-004a-4d6e-b2ec-af25462d967c" (UID: "52dd287f-004a-4d6e-b2ec-af25462d967c"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 14:44:55 crc kubenswrapper[4698]: I0127 14:44:55.696254 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/52dd287f-004a-4d6e-b2ec-af25462d967c-kube-api-access-rhxls" (OuterVolumeSpecName: "kube-api-access-rhxls") pod "52dd287f-004a-4d6e-b2ec-af25462d967c" (UID: "52dd287f-004a-4d6e-b2ec-af25462d967c"). InnerVolumeSpecName "kube-api-access-rhxls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:44:55 crc kubenswrapper[4698]: I0127 14:44:55.710839 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/52dd287f-004a-4d6e-b2ec-af25462d967c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "52dd287f-004a-4d6e-b2ec-af25462d967c" (UID: "52dd287f-004a-4d6e-b2ec-af25462d967c"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 14:44:55 crc kubenswrapper[4698]: I0127 14:44:55.789921 4698 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/52dd287f-004a-4d6e-b2ec-af25462d967c-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 14:44:55 crc kubenswrapper[4698]: I0127 14:44:55.789977 4698 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/52dd287f-004a-4d6e-b2ec-af25462d967c-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 14:44:55 crc kubenswrapper[4698]: I0127 14:44:55.789994 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rhxls\" (UniqueName: \"kubernetes.io/projected/52dd287f-004a-4d6e-b2ec-af25462d967c-kube-api-access-rhxls\") on node \"crc\" DevicePath \"\"" Jan 27 14:44:56 crc kubenswrapper[4698]: I0127 14:44:56.450267 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6mq7h" event={"ID":"52dd287f-004a-4d6e-b2ec-af25462d967c","Type":"ContainerDied","Data":"13d5f9df97b12220ad7845378b10fb722e94cf6c811e0813c5e68b5b53bfa1a0"} Jan 27 14:44:56 crc kubenswrapper[4698]: I0127 14:44:56.450323 4698 scope.go:117] "RemoveContainer" containerID="2bc230d21b88c92b6f7ec35255f93f868034c923045fc95af4d4ec3d2f030e91" Jan 27 14:44:56 crc kubenswrapper[4698]: I0127 14:44:56.450332 4698 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6mq7h" Jan 27 14:44:56 crc kubenswrapper[4698]: I0127 14:44:56.474605 4698 scope.go:117] "RemoveContainer" containerID="aaeffa0a08707af3bf212db0eb1aaccc2bb0570fde52d08a788448f0e9309383" Jan 27 14:44:56 crc kubenswrapper[4698]: I0127 14:44:56.486732 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-6mq7h"] Jan 27 14:44:56 crc kubenswrapper[4698]: I0127 14:44:56.494327 4698 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-6mq7h"] Jan 27 14:44:56 crc kubenswrapper[4698]: I0127 14:44:56.504296 4698 scope.go:117] "RemoveContainer" containerID="0d8693a8f2a46242740742663b4d2be7f759be0d39f3162407dc25b90c8c57f4" Jan 27 14:44:56 crc kubenswrapper[4698]: I0127 14:44:56.998980 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="52dd287f-004a-4d6e-b2ec-af25462d967c" path="/var/lib/kubelet/pods/52dd287f-004a-4d6e-b2ec-af25462d967c/volumes" Jan 27 14:44:57 crc kubenswrapper[4698]: I0127 14:44:57.451617 4698 patch_prober.go:28] interesting pod/machine-config-daemon-ndrd6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 14:44:57 crc kubenswrapper[4698]: I0127 14:44:57.452067 4698 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ndrd6" podUID="3e403fc5-7005-474c-8c75-b7906b481677" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 14:45:00 crc kubenswrapper[4698]: I0127 14:45:00.160602 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29492085-hstxp"] Jan 27 14:45:00 crc kubenswrapper[4698]: E0127 14:45:00.161577 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="52dd287f-004a-4d6e-b2ec-af25462d967c" containerName="registry-server" Jan 27 14:45:00 crc kubenswrapper[4698]: I0127 14:45:00.161594 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="52dd287f-004a-4d6e-b2ec-af25462d967c" containerName="registry-server" Jan 27 14:45:00 crc kubenswrapper[4698]: E0127 14:45:00.161615 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="52dd287f-004a-4d6e-b2ec-af25462d967c" containerName="extract-utilities" Jan 27 14:45:00 crc kubenswrapper[4698]: I0127 14:45:00.161624 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="52dd287f-004a-4d6e-b2ec-af25462d967c" containerName="extract-utilities" Jan 27 14:45:00 crc kubenswrapper[4698]: E0127 14:45:00.161653 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="52dd287f-004a-4d6e-b2ec-af25462d967c" containerName="extract-content" Jan 27 14:45:00 crc kubenswrapper[4698]: I0127 14:45:00.161664 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="52dd287f-004a-4d6e-b2ec-af25462d967c" containerName="extract-content" Jan 27 14:45:00 crc kubenswrapper[4698]: I0127 14:45:00.161845 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="52dd287f-004a-4d6e-b2ec-af25462d967c" containerName="registry-server" Jan 27 14:45:00 crc kubenswrapper[4698]: I0127 14:45:00.162385 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29492085-hstxp" Jan 27 14:45:00 crc kubenswrapper[4698]: I0127 14:45:00.165663 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 27 14:45:00 crc kubenswrapper[4698]: I0127 14:45:00.165663 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 27 14:45:00 crc kubenswrapper[4698]: I0127 14:45:00.165989 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29492085-hstxp"] Jan 27 14:45:00 crc kubenswrapper[4698]: I0127 14:45:00.263541 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kvj5f\" (UniqueName: \"kubernetes.io/projected/d605254d-214f-423e-a9d6-504e1c8ccf43-kube-api-access-kvj5f\") pod \"collect-profiles-29492085-hstxp\" (UID: \"d605254d-214f-423e-a9d6-504e1c8ccf43\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492085-hstxp" Jan 27 14:45:00 crc kubenswrapper[4698]: I0127 14:45:00.263592 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d605254d-214f-423e-a9d6-504e1c8ccf43-config-volume\") pod \"collect-profiles-29492085-hstxp\" (UID: \"d605254d-214f-423e-a9d6-504e1c8ccf43\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492085-hstxp" Jan 27 14:45:00 crc kubenswrapper[4698]: I0127 14:45:00.263737 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d605254d-214f-423e-a9d6-504e1c8ccf43-secret-volume\") pod \"collect-profiles-29492085-hstxp\" (UID: \"d605254d-214f-423e-a9d6-504e1c8ccf43\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492085-hstxp" Jan 27 14:45:00 crc kubenswrapper[4698]: I0127 14:45:00.364487 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kvj5f\" (UniqueName: \"kubernetes.io/projected/d605254d-214f-423e-a9d6-504e1c8ccf43-kube-api-access-kvj5f\") pod \"collect-profiles-29492085-hstxp\" (UID: \"d605254d-214f-423e-a9d6-504e1c8ccf43\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492085-hstxp" Jan 27 14:45:00 crc kubenswrapper[4698]: I0127 14:45:00.364540 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d605254d-214f-423e-a9d6-504e1c8ccf43-config-volume\") pod \"collect-profiles-29492085-hstxp\" (UID: \"d605254d-214f-423e-a9d6-504e1c8ccf43\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492085-hstxp" Jan 27 14:45:00 crc kubenswrapper[4698]: I0127 14:45:00.364591 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d605254d-214f-423e-a9d6-504e1c8ccf43-secret-volume\") pod \"collect-profiles-29492085-hstxp\" (UID: \"d605254d-214f-423e-a9d6-504e1c8ccf43\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492085-hstxp" Jan 27 14:45:00 crc kubenswrapper[4698]: I0127 14:45:00.366100 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d605254d-214f-423e-a9d6-504e1c8ccf43-config-volume\") pod 
\"collect-profiles-29492085-hstxp\" (UID: \"d605254d-214f-423e-a9d6-504e1c8ccf43\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492085-hstxp" Jan 27 14:45:00 crc kubenswrapper[4698]: I0127 14:45:00.370582 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d605254d-214f-423e-a9d6-504e1c8ccf43-secret-volume\") pod \"collect-profiles-29492085-hstxp\" (UID: \"d605254d-214f-423e-a9d6-504e1c8ccf43\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492085-hstxp" Jan 27 14:45:00 crc kubenswrapper[4698]: I0127 14:45:00.387812 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kvj5f\" (UniqueName: \"kubernetes.io/projected/d605254d-214f-423e-a9d6-504e1c8ccf43-kube-api-access-kvj5f\") pod \"collect-profiles-29492085-hstxp\" (UID: \"d605254d-214f-423e-a9d6-504e1c8ccf43\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492085-hstxp" Jan 27 14:45:00 crc kubenswrapper[4698]: I0127 14:45:00.485839 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29492085-hstxp" Jan 27 14:45:00 crc kubenswrapper[4698]: I0127 14:45:00.973724 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29492085-hstxp"] Jan 27 14:45:00 crc kubenswrapper[4698]: W0127 14:45:00.977838 4698 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd605254d_214f_423e_a9d6_504e1c8ccf43.slice/crio-42c60ccfc58cfe84759ff28e0c1f695e0654c3130ceabc8ada9e0ad6c794d394 WatchSource:0}: Error finding container 42c60ccfc58cfe84759ff28e0c1f695e0654c3130ceabc8ada9e0ad6c794d394: Status 404 returned error can't find the container with id 42c60ccfc58cfe84759ff28e0c1f695e0654c3130ceabc8ada9e0ad6c794d394 Jan 27 14:45:01 crc kubenswrapper[4698]: I0127 14:45:01.482958 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29492085-hstxp" event={"ID":"d605254d-214f-423e-a9d6-504e1c8ccf43","Type":"ContainerStarted","Data":"c53d8b7433a4b58bb9c5554f564e96ed81a8af00131944699e1dd827b5d5a81c"} Jan 27 14:45:01 crc kubenswrapper[4698]: I0127 14:45:01.483298 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29492085-hstxp" event={"ID":"d605254d-214f-423e-a9d6-504e1c8ccf43","Type":"ContainerStarted","Data":"42c60ccfc58cfe84759ff28e0c1f695e0654c3130ceabc8ada9e0ad6c794d394"} Jan 27 14:45:01 crc kubenswrapper[4698]: I0127 14:45:01.501889 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29492085-hstxp" podStartSLOduration=1.501872578 podStartE2EDuration="1.501872578s" podCreationTimestamp="2026-01-27 14:45:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:45:01.497482173 +0000 UTC m=+957.174259638" watchObservedRunningTime="2026-01-27 14:45:01.501872578 +0000 UTC m=+957.178650043" Jan 27 14:45:02 crc kubenswrapper[4698]: I0127 14:45:02.495334 4698 generic.go:334] "Generic (PLEG): container finished" podID="d605254d-214f-423e-a9d6-504e1c8ccf43" containerID="c53d8b7433a4b58bb9c5554f564e96ed81a8af00131944699e1dd827b5d5a81c" exitCode=0 Jan 27 14:45:02 crc kubenswrapper[4698]: I0127 14:45:02.495412 
4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29492085-hstxp" event={"ID":"d605254d-214f-423e-a9d6-504e1c8ccf43","Type":"ContainerDied","Data":"c53d8b7433a4b58bb9c5554f564e96ed81a8af00131944699e1dd827b5d5a81c"} Jan 27 14:45:03 crc kubenswrapper[4698]: I0127 14:45:03.804332 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29492085-hstxp" Jan 27 14:45:03 crc kubenswrapper[4698]: I0127 14:45:03.917839 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d605254d-214f-423e-a9d6-504e1c8ccf43-secret-volume\") pod \"d605254d-214f-423e-a9d6-504e1c8ccf43\" (UID: \"d605254d-214f-423e-a9d6-504e1c8ccf43\") " Jan 27 14:45:03 crc kubenswrapper[4698]: I0127 14:45:03.918224 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d605254d-214f-423e-a9d6-504e1c8ccf43-config-volume\") pod \"d605254d-214f-423e-a9d6-504e1c8ccf43\" (UID: \"d605254d-214f-423e-a9d6-504e1c8ccf43\") " Jan 27 14:45:03 crc kubenswrapper[4698]: I0127 14:45:03.918513 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kvj5f\" (UniqueName: \"kubernetes.io/projected/d605254d-214f-423e-a9d6-504e1c8ccf43-kube-api-access-kvj5f\") pod \"d605254d-214f-423e-a9d6-504e1c8ccf43\" (UID: \"d605254d-214f-423e-a9d6-504e1c8ccf43\") " Jan 27 14:45:03 crc kubenswrapper[4698]: I0127 14:45:03.919026 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d605254d-214f-423e-a9d6-504e1c8ccf43-config-volume" (OuterVolumeSpecName: "config-volume") pod "d605254d-214f-423e-a9d6-504e1c8ccf43" (UID: "d605254d-214f-423e-a9d6-504e1c8ccf43"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:45:03 crc kubenswrapper[4698]: I0127 14:45:03.929813 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d605254d-214f-423e-a9d6-504e1c8ccf43-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "d605254d-214f-423e-a9d6-504e1c8ccf43" (UID: "d605254d-214f-423e-a9d6-504e1c8ccf43"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:45:03 crc kubenswrapper[4698]: I0127 14:45:03.929885 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d605254d-214f-423e-a9d6-504e1c8ccf43-kube-api-access-kvj5f" (OuterVolumeSpecName: "kube-api-access-kvj5f") pod "d605254d-214f-423e-a9d6-504e1c8ccf43" (UID: "d605254d-214f-423e-a9d6-504e1c8ccf43"). InnerVolumeSpecName "kube-api-access-kvj5f". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:45:04 crc kubenswrapper[4698]: I0127 14:45:04.021847 4698 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d605254d-214f-423e-a9d6-504e1c8ccf43-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 27 14:45:04 crc kubenswrapper[4698]: I0127 14:45:04.022147 4698 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d605254d-214f-423e-a9d6-504e1c8ccf43-config-volume\") on node \"crc\" DevicePath \"\"" Jan 27 14:45:04 crc kubenswrapper[4698]: I0127 14:45:04.022162 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kvj5f\" (UniqueName: \"kubernetes.io/projected/d605254d-214f-423e-a9d6-504e1c8ccf43-kube-api-access-kvj5f\") on node \"crc\" DevicePath \"\"" Jan 27 14:45:04 crc kubenswrapper[4698]: I0127 14:45:04.507834 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29492085-hstxp" event={"ID":"d605254d-214f-423e-a9d6-504e1c8ccf43","Type":"ContainerDied","Data":"42c60ccfc58cfe84759ff28e0c1f695e0654c3130ceabc8ada9e0ad6c794d394"} Jan 27 14:45:04 crc kubenswrapper[4698]: I0127 14:45:04.508144 4698 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="42c60ccfc58cfe84759ff28e0c1f695e0654c3130ceabc8ada9e0ad6c794d394" Jan 27 14:45:04 crc kubenswrapper[4698]: I0127 14:45:04.507891 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29492085-hstxp" Jan 27 14:45:06 crc kubenswrapper[4698]: I0127 14:45:06.464517 4698 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-f9d7485db-cvnrn" podUID="12b42d9a-df65-4a89-8961-1fa7f9b8a14b" containerName="console" containerID="cri-o://f590f042a61cb4bb1e5cbc1bf459326c05797a4ff6bf4c98b941727c8f6d7caa" gracePeriod=15 Jan 27 14:45:06 crc kubenswrapper[4698]: I0127 14:45:06.807652 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dccjtc6"] Jan 27 14:45:06 crc kubenswrapper[4698]: E0127 14:45:06.807959 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d605254d-214f-423e-a9d6-504e1c8ccf43" containerName="collect-profiles" Jan 27 14:45:06 crc kubenswrapper[4698]: I0127 14:45:06.807974 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="d605254d-214f-423e-a9d6-504e1c8ccf43" containerName="collect-profiles" Jan 27 14:45:06 crc kubenswrapper[4698]: I0127 14:45:06.808111 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="d605254d-214f-423e-a9d6-504e1c8ccf43" containerName="collect-profiles" Jan 27 14:45:06 crc kubenswrapper[4698]: I0127 14:45:06.809024 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dccjtc6" Jan 27 14:45:06 crc kubenswrapper[4698]: I0127 14:45:06.814711 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Jan 27 14:45:06 crc kubenswrapper[4698]: I0127 14:45:06.828931 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dccjtc6"] Jan 27 14:45:06 crc kubenswrapper[4698]: I0127 14:45:06.966708 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/53da7be9-df28-4c12-ba5f-1c7db24893d3-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dccjtc6\" (UID: \"53da7be9-df28-4c12-ba5f-1c7db24893d3\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dccjtc6" Jan 27 14:45:06 crc kubenswrapper[4698]: I0127 14:45:06.967091 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hqzdp\" (UniqueName: \"kubernetes.io/projected/53da7be9-df28-4c12-ba5f-1c7db24893d3-kube-api-access-hqzdp\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dccjtc6\" (UID: \"53da7be9-df28-4c12-ba5f-1c7db24893d3\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dccjtc6" Jan 27 14:45:06 crc kubenswrapper[4698]: I0127 14:45:06.967153 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/53da7be9-df28-4c12-ba5f-1c7db24893d3-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dccjtc6\" (UID: \"53da7be9-df28-4c12-ba5f-1c7db24893d3\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dccjtc6" Jan 27 14:45:06 crc kubenswrapper[4698]: I0127 14:45:06.981947 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-cvnrn_12b42d9a-df65-4a89-8961-1fa7f9b8a14b/console/0.log" Jan 27 14:45:06 crc kubenswrapper[4698]: I0127 14:45:06.982035 4698 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-cvnrn" Jan 27 14:45:07 crc kubenswrapper[4698]: I0127 14:45:07.068921 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/53da7be9-df28-4c12-ba5f-1c7db24893d3-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dccjtc6\" (UID: \"53da7be9-df28-4c12-ba5f-1c7db24893d3\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dccjtc6" Jan 27 14:45:07 crc kubenswrapper[4698]: I0127 14:45:07.069004 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/53da7be9-df28-4c12-ba5f-1c7db24893d3-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dccjtc6\" (UID: \"53da7be9-df28-4c12-ba5f-1c7db24893d3\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dccjtc6" Jan 27 14:45:07 crc kubenswrapper[4698]: I0127 14:45:07.069045 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hqzdp\" (UniqueName: \"kubernetes.io/projected/53da7be9-df28-4c12-ba5f-1c7db24893d3-kube-api-access-hqzdp\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dccjtc6\" (UID: \"53da7be9-df28-4c12-ba5f-1c7db24893d3\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dccjtc6" Jan 27 14:45:07 crc kubenswrapper[4698]: I0127 14:45:07.069775 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/53da7be9-df28-4c12-ba5f-1c7db24893d3-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dccjtc6\" (UID: \"53da7be9-df28-4c12-ba5f-1c7db24893d3\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dccjtc6" Jan 27 14:45:07 crc kubenswrapper[4698]: I0127 14:45:07.069980 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/53da7be9-df28-4c12-ba5f-1c7db24893d3-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dccjtc6\" (UID: \"53da7be9-df28-4c12-ba5f-1c7db24893d3\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dccjtc6" Jan 27 14:45:07 crc kubenswrapper[4698]: I0127 14:45:07.098013 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hqzdp\" (UniqueName: \"kubernetes.io/projected/53da7be9-df28-4c12-ba5f-1c7db24893d3-kube-api-access-hqzdp\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dccjtc6\" (UID: \"53da7be9-df28-4c12-ba5f-1c7db24893d3\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dccjtc6" Jan 27 14:45:07 crc kubenswrapper[4698]: I0127 14:45:07.150258 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dccjtc6" Jan 27 14:45:07 crc kubenswrapper[4698]: I0127 14:45:07.170364 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/12b42d9a-df65-4a89-8961-1fa7f9b8a14b-console-config\") pod \"12b42d9a-df65-4a89-8961-1fa7f9b8a14b\" (UID: \"12b42d9a-df65-4a89-8961-1fa7f9b8a14b\") " Jan 27 14:45:07 crc kubenswrapper[4698]: I0127 14:45:07.170605 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/12b42d9a-df65-4a89-8961-1fa7f9b8a14b-console-serving-cert\") pod \"12b42d9a-df65-4a89-8961-1fa7f9b8a14b\" (UID: \"12b42d9a-df65-4a89-8961-1fa7f9b8a14b\") " Jan 27 14:45:07 crc kubenswrapper[4698]: I0127 14:45:07.171089 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/12b42d9a-df65-4a89-8961-1fa7f9b8a14b-console-oauth-config\") pod \"12b42d9a-df65-4a89-8961-1fa7f9b8a14b\" (UID: \"12b42d9a-df65-4a89-8961-1fa7f9b8a14b\") " Jan 27 14:45:07 crc kubenswrapper[4698]: I0127 14:45:07.171184 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/12b42d9a-df65-4a89-8961-1fa7f9b8a14b-trusted-ca-bundle\") pod \"12b42d9a-df65-4a89-8961-1fa7f9b8a14b\" (UID: \"12b42d9a-df65-4a89-8961-1fa7f9b8a14b\") " Jan 27 14:45:07 crc kubenswrapper[4698]: I0127 14:45:07.171281 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/12b42d9a-df65-4a89-8961-1fa7f9b8a14b-service-ca\") pod \"12b42d9a-df65-4a89-8961-1fa7f9b8a14b\" (UID: \"12b42d9a-df65-4a89-8961-1fa7f9b8a14b\") " Jan 27 14:45:07 crc kubenswrapper[4698]: I0127 14:45:07.171423 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/12b42d9a-df65-4a89-8961-1fa7f9b8a14b-oauth-serving-cert\") pod \"12b42d9a-df65-4a89-8961-1fa7f9b8a14b\" (UID: \"12b42d9a-df65-4a89-8961-1fa7f9b8a14b\") " Jan 27 14:45:07 crc kubenswrapper[4698]: I0127 14:45:07.171525 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2qhq7\" (UniqueName: \"kubernetes.io/projected/12b42d9a-df65-4a89-8961-1fa7f9b8a14b-kube-api-access-2qhq7\") pod \"12b42d9a-df65-4a89-8961-1fa7f9b8a14b\" (UID: \"12b42d9a-df65-4a89-8961-1fa7f9b8a14b\") " Jan 27 14:45:07 crc kubenswrapper[4698]: I0127 14:45:07.171301 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/12b42d9a-df65-4a89-8961-1fa7f9b8a14b-console-config" (OuterVolumeSpecName: "console-config") pod "12b42d9a-df65-4a89-8961-1fa7f9b8a14b" (UID: "12b42d9a-df65-4a89-8961-1fa7f9b8a14b"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:45:07 crc kubenswrapper[4698]: I0127 14:45:07.171569 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/12b42d9a-df65-4a89-8961-1fa7f9b8a14b-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "12b42d9a-df65-4a89-8961-1fa7f9b8a14b" (UID: "12b42d9a-df65-4a89-8961-1fa7f9b8a14b"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:45:07 crc kubenswrapper[4698]: I0127 14:45:07.171802 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/12b42d9a-df65-4a89-8961-1fa7f9b8a14b-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "12b42d9a-df65-4a89-8961-1fa7f9b8a14b" (UID: "12b42d9a-df65-4a89-8961-1fa7f9b8a14b"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:45:07 crc kubenswrapper[4698]: I0127 14:45:07.171955 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/12b42d9a-df65-4a89-8961-1fa7f9b8a14b-service-ca" (OuterVolumeSpecName: "service-ca") pod "12b42d9a-df65-4a89-8961-1fa7f9b8a14b" (UID: "12b42d9a-df65-4a89-8961-1fa7f9b8a14b"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:45:07 crc kubenswrapper[4698]: I0127 14:45:07.175338 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/12b42d9a-df65-4a89-8961-1fa7f9b8a14b-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "12b42d9a-df65-4a89-8961-1fa7f9b8a14b" (UID: "12b42d9a-df65-4a89-8961-1fa7f9b8a14b"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:45:07 crc kubenswrapper[4698]: I0127 14:45:07.176122 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/12b42d9a-df65-4a89-8961-1fa7f9b8a14b-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "12b42d9a-df65-4a89-8961-1fa7f9b8a14b" (UID: "12b42d9a-df65-4a89-8961-1fa7f9b8a14b"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:45:07 crc kubenswrapper[4698]: I0127 14:45:07.176145 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/12b42d9a-df65-4a89-8961-1fa7f9b8a14b-kube-api-access-2qhq7" (OuterVolumeSpecName: "kube-api-access-2qhq7") pod "12b42d9a-df65-4a89-8961-1fa7f9b8a14b" (UID: "12b42d9a-df65-4a89-8961-1fa7f9b8a14b"). InnerVolumeSpecName "kube-api-access-2qhq7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:45:07 crc kubenswrapper[4698]: I0127 14:45:07.273592 4698 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/12b42d9a-df65-4a89-8961-1fa7f9b8a14b-console-oauth-config\") on node \"crc\" DevicePath \"\"" Jan 27 14:45:07 crc kubenswrapper[4698]: I0127 14:45:07.273667 4698 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/12b42d9a-df65-4a89-8961-1fa7f9b8a14b-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 14:45:07 crc kubenswrapper[4698]: I0127 14:45:07.273682 4698 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/12b42d9a-df65-4a89-8961-1fa7f9b8a14b-service-ca\") on node \"crc\" DevicePath \"\"" Jan 27 14:45:07 crc kubenswrapper[4698]: I0127 14:45:07.273694 4698 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/12b42d9a-df65-4a89-8961-1fa7f9b8a14b-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 14:45:07 crc kubenswrapper[4698]: I0127 14:45:07.273709 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2qhq7\" (UniqueName: \"kubernetes.io/projected/12b42d9a-df65-4a89-8961-1fa7f9b8a14b-kube-api-access-2qhq7\") on node \"crc\" DevicePath \"\"" Jan 27 14:45:07 crc kubenswrapper[4698]: I0127 14:45:07.273723 4698 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/12b42d9a-df65-4a89-8961-1fa7f9b8a14b-console-config\") on node \"crc\" DevicePath \"\"" Jan 27 14:45:07 crc kubenswrapper[4698]: I0127 14:45:07.273735 4698 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/12b42d9a-df65-4a89-8961-1fa7f9b8a14b-console-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 14:45:07 crc kubenswrapper[4698]: I0127 14:45:07.395980 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dccjtc6"] Jan 27 14:45:07 crc kubenswrapper[4698]: I0127 14:45:07.526713 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-cvnrn_12b42d9a-df65-4a89-8961-1fa7f9b8a14b/console/0.log" Jan 27 14:45:07 crc kubenswrapper[4698]: I0127 14:45:07.527150 4698 generic.go:334] "Generic (PLEG): container finished" podID="12b42d9a-df65-4a89-8961-1fa7f9b8a14b" containerID="f590f042a61cb4bb1e5cbc1bf459326c05797a4ff6bf4c98b941727c8f6d7caa" exitCode=2 Jan 27 14:45:07 crc kubenswrapper[4698]: I0127 14:45:07.527216 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-cvnrn" event={"ID":"12b42d9a-df65-4a89-8961-1fa7f9b8a14b","Type":"ContainerDied","Data":"f590f042a61cb4bb1e5cbc1bf459326c05797a4ff6bf4c98b941727c8f6d7caa"} Jan 27 14:45:07 crc kubenswrapper[4698]: I0127 14:45:07.527240 4698 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-cvnrn" Jan 27 14:45:07 crc kubenswrapper[4698]: I0127 14:45:07.527261 4698 scope.go:117] "RemoveContainer" containerID="f590f042a61cb4bb1e5cbc1bf459326c05797a4ff6bf4c98b941727c8f6d7caa" Jan 27 14:45:07 crc kubenswrapper[4698]: I0127 14:45:07.527248 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-cvnrn" event={"ID":"12b42d9a-df65-4a89-8961-1fa7f9b8a14b","Type":"ContainerDied","Data":"00f26de89736c3a679b65fcf1748b241a924eb949e5a0c11bdda3aee6a834f9f"} Jan 27 14:45:07 crc kubenswrapper[4698]: I0127 14:45:07.529497 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dccjtc6" event={"ID":"53da7be9-df28-4c12-ba5f-1c7db24893d3","Type":"ContainerStarted","Data":"ef20d5d3798996110e38e15a27b78d9a330ad2c727c4b53799e8d5e4e0c6342c"} Jan 27 14:45:07 crc kubenswrapper[4698]: I0127 14:45:07.548165 4698 scope.go:117] "RemoveContainer" containerID="f590f042a61cb4bb1e5cbc1bf459326c05797a4ff6bf4c98b941727c8f6d7caa" Jan 27 14:45:07 crc kubenswrapper[4698]: E0127 14:45:07.549472 4698 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f590f042a61cb4bb1e5cbc1bf459326c05797a4ff6bf4c98b941727c8f6d7caa\": container with ID starting with f590f042a61cb4bb1e5cbc1bf459326c05797a4ff6bf4c98b941727c8f6d7caa not found: ID does not exist" containerID="f590f042a61cb4bb1e5cbc1bf459326c05797a4ff6bf4c98b941727c8f6d7caa" Jan 27 14:45:07 crc kubenswrapper[4698]: I0127 14:45:07.549544 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f590f042a61cb4bb1e5cbc1bf459326c05797a4ff6bf4c98b941727c8f6d7caa"} err="failed to get container status \"f590f042a61cb4bb1e5cbc1bf459326c05797a4ff6bf4c98b941727c8f6d7caa\": rpc error: code = NotFound desc = could not find container \"f590f042a61cb4bb1e5cbc1bf459326c05797a4ff6bf4c98b941727c8f6d7caa\": container with ID starting with f590f042a61cb4bb1e5cbc1bf459326c05797a4ff6bf4c98b941727c8f6d7caa not found: ID does not exist" Jan 27 14:45:07 crc kubenswrapper[4698]: I0127 14:45:07.556055 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-cvnrn"] Jan 27 14:45:07 crc kubenswrapper[4698]: I0127 14:45:07.560992 4698 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-f9d7485db-cvnrn"] Jan 27 14:45:08 crc kubenswrapper[4698]: I0127 14:45:08.541621 4698 generic.go:334] "Generic (PLEG): container finished" podID="53da7be9-df28-4c12-ba5f-1c7db24893d3" containerID="8620163b089eefa2bc73acd33c4100eb9e99b72051322d254c99ea465f6ca2e4" exitCode=0 Jan 27 14:45:08 crc kubenswrapper[4698]: I0127 14:45:08.541694 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dccjtc6" event={"ID":"53da7be9-df28-4c12-ba5f-1c7db24893d3","Type":"ContainerDied","Data":"8620163b089eefa2bc73acd33c4100eb9e99b72051322d254c99ea465f6ca2e4"} Jan 27 14:45:08 crc kubenswrapper[4698]: I0127 14:45:08.998830 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="12b42d9a-df65-4a89-8961-1fa7f9b8a14b" path="/var/lib/kubelet/pods/12b42d9a-df65-4a89-8961-1fa7f9b8a14b/volumes" Jan 27 14:45:10 crc kubenswrapper[4698]: I0127 14:45:10.555917 4698 generic.go:334] "Generic (PLEG): container finished" podID="53da7be9-df28-4c12-ba5f-1c7db24893d3" 
containerID="f91676b5c53bacbb79e58ae8904d47c9bf83cd36a33f0c014d7e785fe424cd1e" exitCode=0 Jan 27 14:45:10 crc kubenswrapper[4698]: I0127 14:45:10.556261 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dccjtc6" event={"ID":"53da7be9-df28-4c12-ba5f-1c7db24893d3","Type":"ContainerDied","Data":"f91676b5c53bacbb79e58ae8904d47c9bf83cd36a33f0c014d7e785fe424cd1e"} Jan 27 14:45:11 crc kubenswrapper[4698]: I0127 14:45:11.564369 4698 generic.go:334] "Generic (PLEG): container finished" podID="53da7be9-df28-4c12-ba5f-1c7db24893d3" containerID="1ef8f0052a77a714ad863ae593148600890898cd113b6f4da5a14104720083f7" exitCode=0 Jan 27 14:45:11 crc kubenswrapper[4698]: I0127 14:45:11.564421 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dccjtc6" event={"ID":"53da7be9-df28-4c12-ba5f-1c7db24893d3","Type":"ContainerDied","Data":"1ef8f0052a77a714ad863ae593148600890898cd113b6f4da5a14104720083f7"} Jan 27 14:45:12 crc kubenswrapper[4698]: I0127 14:45:12.803865 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dccjtc6" Jan 27 14:45:12 crc kubenswrapper[4698]: I0127 14:45:12.949719 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/53da7be9-df28-4c12-ba5f-1c7db24893d3-bundle\") pod \"53da7be9-df28-4c12-ba5f-1c7db24893d3\" (UID: \"53da7be9-df28-4c12-ba5f-1c7db24893d3\") " Jan 27 14:45:12 crc kubenswrapper[4698]: I0127 14:45:12.949817 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hqzdp\" (UniqueName: \"kubernetes.io/projected/53da7be9-df28-4c12-ba5f-1c7db24893d3-kube-api-access-hqzdp\") pod \"53da7be9-df28-4c12-ba5f-1c7db24893d3\" (UID: \"53da7be9-df28-4c12-ba5f-1c7db24893d3\") " Jan 27 14:45:12 crc kubenswrapper[4698]: I0127 14:45:12.949885 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/53da7be9-df28-4c12-ba5f-1c7db24893d3-util\") pod \"53da7be9-df28-4c12-ba5f-1c7db24893d3\" (UID: \"53da7be9-df28-4c12-ba5f-1c7db24893d3\") " Jan 27 14:45:12 crc kubenswrapper[4698]: I0127 14:45:12.950977 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/53da7be9-df28-4c12-ba5f-1c7db24893d3-bundle" (OuterVolumeSpecName: "bundle") pod "53da7be9-df28-4c12-ba5f-1c7db24893d3" (UID: "53da7be9-df28-4c12-ba5f-1c7db24893d3"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 14:45:12 crc kubenswrapper[4698]: I0127 14:45:12.956045 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/53da7be9-df28-4c12-ba5f-1c7db24893d3-kube-api-access-hqzdp" (OuterVolumeSpecName: "kube-api-access-hqzdp") pod "53da7be9-df28-4c12-ba5f-1c7db24893d3" (UID: "53da7be9-df28-4c12-ba5f-1c7db24893d3"). InnerVolumeSpecName "kube-api-access-hqzdp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:45:13 crc kubenswrapper[4698]: I0127 14:45:13.052196 4698 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/53da7be9-df28-4c12-ba5f-1c7db24893d3-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 14:45:13 crc kubenswrapper[4698]: I0127 14:45:13.052233 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hqzdp\" (UniqueName: \"kubernetes.io/projected/53da7be9-df28-4c12-ba5f-1c7db24893d3-kube-api-access-hqzdp\") on node \"crc\" DevicePath \"\"" Jan 27 14:45:13 crc kubenswrapper[4698]: I0127 14:45:13.293826 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/53da7be9-df28-4c12-ba5f-1c7db24893d3-util" (OuterVolumeSpecName: "util") pod "53da7be9-df28-4c12-ba5f-1c7db24893d3" (UID: "53da7be9-df28-4c12-ba5f-1c7db24893d3"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 14:45:13 crc kubenswrapper[4698]: I0127 14:45:13.356201 4698 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/53da7be9-df28-4c12-ba5f-1c7db24893d3-util\") on node \"crc\" DevicePath \"\"" Jan 27 14:45:13 crc kubenswrapper[4698]: I0127 14:45:13.577196 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dccjtc6" event={"ID":"53da7be9-df28-4c12-ba5f-1c7db24893d3","Type":"ContainerDied","Data":"ef20d5d3798996110e38e15a27b78d9a330ad2c727c4b53799e8d5e4e0c6342c"} Jan 27 14:45:13 crc kubenswrapper[4698]: I0127 14:45:13.577235 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dccjtc6" Jan 27 14:45:13 crc kubenswrapper[4698]: I0127 14:45:13.577242 4698 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ef20d5d3798996110e38e15a27b78d9a330ad2c727c4b53799e8d5e4e0c6342c" Jan 27 14:45:23 crc kubenswrapper[4698]: I0127 14:45:23.215739 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-controller-manager-54b596d688-wqmn2"] Jan 27 14:45:23 crc kubenswrapper[4698]: E0127 14:45:23.216583 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="12b42d9a-df65-4a89-8961-1fa7f9b8a14b" containerName="console" Jan 27 14:45:23 crc kubenswrapper[4698]: I0127 14:45:23.216598 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="12b42d9a-df65-4a89-8961-1fa7f9b8a14b" containerName="console" Jan 27 14:45:23 crc kubenswrapper[4698]: E0127 14:45:23.216612 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="53da7be9-df28-4c12-ba5f-1c7db24893d3" containerName="pull" Jan 27 14:45:23 crc kubenswrapper[4698]: I0127 14:45:23.216619 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="53da7be9-df28-4c12-ba5f-1c7db24893d3" containerName="pull" Jan 27 14:45:23 crc kubenswrapper[4698]: E0127 14:45:23.216648 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="53da7be9-df28-4c12-ba5f-1c7db24893d3" containerName="extract" Jan 27 14:45:23 crc kubenswrapper[4698]: I0127 14:45:23.216657 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="53da7be9-df28-4c12-ba5f-1c7db24893d3" containerName="extract" Jan 27 14:45:23 crc kubenswrapper[4698]: E0127 14:45:23.216674 4698 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="53da7be9-df28-4c12-ba5f-1c7db24893d3" containerName="util" Jan 27 14:45:23 crc kubenswrapper[4698]: I0127 14:45:23.216681 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="53da7be9-df28-4c12-ba5f-1c7db24893d3" containerName="util" Jan 27 14:45:23 crc kubenswrapper[4698]: I0127 14:45:23.216814 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="12b42d9a-df65-4a89-8961-1fa7f9b8a14b" containerName="console" Jan 27 14:45:23 crc kubenswrapper[4698]: I0127 14:45:23.216837 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="53da7be9-df28-4c12-ba5f-1c7db24893d3" containerName="extract" Jan 27 14:45:23 crc kubenswrapper[4698]: I0127 14:45:23.217311 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-54b596d688-wqmn2" Jan 27 14:45:23 crc kubenswrapper[4698]: I0127 14:45:23.219390 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"openshift-service-ca.crt" Jan 27 14:45:23 crc kubenswrapper[4698]: I0127 14:45:23.219515 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"kube-root-ca.crt" Jan 27 14:45:23 crc kubenswrapper[4698]: I0127 14:45:23.219552 4698 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-cert" Jan 27 14:45:23 crc kubenswrapper[4698]: I0127 14:45:23.219899 4698 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"manager-account-dockercfg-xwl8t" Jan 27 14:45:23 crc kubenswrapper[4698]: I0127 14:45:23.220109 4698 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-controller-manager-service-cert" Jan 27 14:45:23 crc kubenswrapper[4698]: I0127 14:45:23.226431 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-54b596d688-wqmn2"] Jan 27 14:45:23 crc kubenswrapper[4698]: I0127 14:45:23.378506 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/ce6de35a-b78b-4e2e-87e0-30608c3ee8a6-apiservice-cert\") pod \"metallb-operator-controller-manager-54b596d688-wqmn2\" (UID: \"ce6de35a-b78b-4e2e-87e0-30608c3ee8a6\") " pod="metallb-system/metallb-operator-controller-manager-54b596d688-wqmn2" Jan 27 14:45:23 crc kubenswrapper[4698]: I0127 14:45:23.378705 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2wslz\" (UniqueName: \"kubernetes.io/projected/ce6de35a-b78b-4e2e-87e0-30608c3ee8a6-kube-api-access-2wslz\") pod \"metallb-operator-controller-manager-54b596d688-wqmn2\" (UID: \"ce6de35a-b78b-4e2e-87e0-30608c3ee8a6\") " pod="metallb-system/metallb-operator-controller-manager-54b596d688-wqmn2" Jan 27 14:45:23 crc kubenswrapper[4698]: I0127 14:45:23.378742 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ce6de35a-b78b-4e2e-87e0-30608c3ee8a6-webhook-cert\") pod \"metallb-operator-controller-manager-54b596d688-wqmn2\" (UID: \"ce6de35a-b78b-4e2e-87e0-30608c3ee8a6\") " pod="metallb-system/metallb-operator-controller-manager-54b596d688-wqmn2" Jan 27 14:45:23 crc kubenswrapper[4698]: I0127 14:45:23.480295 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2wslz\" (UniqueName: 
\"kubernetes.io/projected/ce6de35a-b78b-4e2e-87e0-30608c3ee8a6-kube-api-access-2wslz\") pod \"metallb-operator-controller-manager-54b596d688-wqmn2\" (UID: \"ce6de35a-b78b-4e2e-87e0-30608c3ee8a6\") " pod="metallb-system/metallb-operator-controller-manager-54b596d688-wqmn2" Jan 27 14:45:23 crc kubenswrapper[4698]: I0127 14:45:23.480357 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ce6de35a-b78b-4e2e-87e0-30608c3ee8a6-webhook-cert\") pod \"metallb-operator-controller-manager-54b596d688-wqmn2\" (UID: \"ce6de35a-b78b-4e2e-87e0-30608c3ee8a6\") " pod="metallb-system/metallb-operator-controller-manager-54b596d688-wqmn2" Jan 27 14:45:23 crc kubenswrapper[4698]: I0127 14:45:23.480381 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/ce6de35a-b78b-4e2e-87e0-30608c3ee8a6-apiservice-cert\") pod \"metallb-operator-controller-manager-54b596d688-wqmn2\" (UID: \"ce6de35a-b78b-4e2e-87e0-30608c3ee8a6\") " pod="metallb-system/metallb-operator-controller-manager-54b596d688-wqmn2" Jan 27 14:45:23 crc kubenswrapper[4698]: I0127 14:45:23.489584 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ce6de35a-b78b-4e2e-87e0-30608c3ee8a6-webhook-cert\") pod \"metallb-operator-controller-manager-54b596d688-wqmn2\" (UID: \"ce6de35a-b78b-4e2e-87e0-30608c3ee8a6\") " pod="metallb-system/metallb-operator-controller-manager-54b596d688-wqmn2" Jan 27 14:45:23 crc kubenswrapper[4698]: I0127 14:45:23.503703 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/ce6de35a-b78b-4e2e-87e0-30608c3ee8a6-apiservice-cert\") pod \"metallb-operator-controller-manager-54b596d688-wqmn2\" (UID: \"ce6de35a-b78b-4e2e-87e0-30608c3ee8a6\") " pod="metallb-system/metallb-operator-controller-manager-54b596d688-wqmn2" Jan 27 14:45:23 crc kubenswrapper[4698]: I0127 14:45:23.512131 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-webhook-server-54f6556b57-vxgv8"] Jan 27 14:45:23 crc kubenswrapper[4698]: I0127 14:45:23.513052 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-54f6556b57-vxgv8" Jan 27 14:45:23 crc kubenswrapper[4698]: I0127 14:45:23.516895 4698 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Jan 27 14:45:23 crc kubenswrapper[4698]: I0127 14:45:23.517151 4698 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-dockercfg-xtgwd" Jan 27 14:45:23 crc kubenswrapper[4698]: I0127 14:45:23.517281 4698 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-service-cert" Jan 27 14:45:23 crc kubenswrapper[4698]: I0127 14:45:23.517333 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2wslz\" (UniqueName: \"kubernetes.io/projected/ce6de35a-b78b-4e2e-87e0-30608c3ee8a6-kube-api-access-2wslz\") pod \"metallb-operator-controller-manager-54b596d688-wqmn2\" (UID: \"ce6de35a-b78b-4e2e-87e0-30608c3ee8a6\") " pod="metallb-system/metallb-operator-controller-manager-54b596d688-wqmn2" Jan 27 14:45:23 crc kubenswrapper[4698]: I0127 14:45:23.539748 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-54b596d688-wqmn2" Jan 27 14:45:23 crc kubenswrapper[4698]: I0127 14:45:23.562941 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-54f6556b57-vxgv8"] Jan 27 14:45:23 crc kubenswrapper[4698]: I0127 14:45:23.683739 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/7ae0807b-48fe-425b-b7c7-72692c491175-webhook-cert\") pod \"metallb-operator-webhook-server-54f6556b57-vxgv8\" (UID: \"7ae0807b-48fe-425b-b7c7-72692c491175\") " pod="metallb-system/metallb-operator-webhook-server-54f6556b57-vxgv8" Jan 27 14:45:23 crc kubenswrapper[4698]: I0127 14:45:23.683813 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p4bd9\" (UniqueName: \"kubernetes.io/projected/7ae0807b-48fe-425b-b7c7-72692c491175-kube-api-access-p4bd9\") pod \"metallb-operator-webhook-server-54f6556b57-vxgv8\" (UID: \"7ae0807b-48fe-425b-b7c7-72692c491175\") " pod="metallb-system/metallb-operator-webhook-server-54f6556b57-vxgv8" Jan 27 14:45:23 crc kubenswrapper[4698]: I0127 14:45:23.683868 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/7ae0807b-48fe-425b-b7c7-72692c491175-apiservice-cert\") pod \"metallb-operator-webhook-server-54f6556b57-vxgv8\" (UID: \"7ae0807b-48fe-425b-b7c7-72692c491175\") " pod="metallb-system/metallb-operator-webhook-server-54f6556b57-vxgv8" Jan 27 14:45:23 crc kubenswrapper[4698]: I0127 14:45:23.784831 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/7ae0807b-48fe-425b-b7c7-72692c491175-apiservice-cert\") pod \"metallb-operator-webhook-server-54f6556b57-vxgv8\" (UID: \"7ae0807b-48fe-425b-b7c7-72692c491175\") " pod="metallb-system/metallb-operator-webhook-server-54f6556b57-vxgv8" Jan 27 14:45:23 crc kubenswrapper[4698]: I0127 14:45:23.785012 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/7ae0807b-48fe-425b-b7c7-72692c491175-webhook-cert\") pod \"metallb-operator-webhook-server-54f6556b57-vxgv8\" (UID: \"7ae0807b-48fe-425b-b7c7-72692c491175\") " pod="metallb-system/metallb-operator-webhook-server-54f6556b57-vxgv8" Jan 27 14:45:23 crc kubenswrapper[4698]: I0127 14:45:23.786094 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p4bd9\" (UniqueName: \"kubernetes.io/projected/7ae0807b-48fe-425b-b7c7-72692c491175-kube-api-access-p4bd9\") pod \"metallb-operator-webhook-server-54f6556b57-vxgv8\" (UID: \"7ae0807b-48fe-425b-b7c7-72692c491175\") " pod="metallb-system/metallb-operator-webhook-server-54f6556b57-vxgv8" Jan 27 14:45:23 crc kubenswrapper[4698]: I0127 14:45:23.790726 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/7ae0807b-48fe-425b-b7c7-72692c491175-apiservice-cert\") pod \"metallb-operator-webhook-server-54f6556b57-vxgv8\" (UID: \"7ae0807b-48fe-425b-b7c7-72692c491175\") " pod="metallb-system/metallb-operator-webhook-server-54f6556b57-vxgv8" Jan 27 14:45:23 crc kubenswrapper[4698]: I0127 14:45:23.791239 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: 
\"kubernetes.io/secret/7ae0807b-48fe-425b-b7c7-72692c491175-webhook-cert\") pod \"metallb-operator-webhook-server-54f6556b57-vxgv8\" (UID: \"7ae0807b-48fe-425b-b7c7-72692c491175\") " pod="metallb-system/metallb-operator-webhook-server-54f6556b57-vxgv8" Jan 27 14:45:23 crc kubenswrapper[4698]: I0127 14:45:23.803150 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p4bd9\" (UniqueName: \"kubernetes.io/projected/7ae0807b-48fe-425b-b7c7-72692c491175-kube-api-access-p4bd9\") pod \"metallb-operator-webhook-server-54f6556b57-vxgv8\" (UID: \"7ae0807b-48fe-425b-b7c7-72692c491175\") " pod="metallb-system/metallb-operator-webhook-server-54f6556b57-vxgv8" Jan 27 14:45:23 crc kubenswrapper[4698]: I0127 14:45:23.950083 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-54f6556b57-vxgv8" Jan 27 14:45:24 crc kubenswrapper[4698]: I0127 14:45:24.092274 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-54b596d688-wqmn2"] Jan 27 14:45:24 crc kubenswrapper[4698]: I0127 14:45:24.358354 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-54f6556b57-vxgv8"] Jan 27 14:45:24 crc kubenswrapper[4698]: W0127 14:45:24.363945 4698 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7ae0807b_48fe_425b_b7c7_72692c491175.slice/crio-e3dd0d0d03de41bf2849806aa59395980ab6698c037137b1231b48ac102bc873 WatchSource:0}: Error finding container e3dd0d0d03de41bf2849806aa59395980ab6698c037137b1231b48ac102bc873: Status 404 returned error can't find the container with id e3dd0d0d03de41bf2849806aa59395980ab6698c037137b1231b48ac102bc873 Jan 27 14:45:24 crc kubenswrapper[4698]: I0127 14:45:24.639526 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-54b596d688-wqmn2" event={"ID":"ce6de35a-b78b-4e2e-87e0-30608c3ee8a6","Type":"ContainerStarted","Data":"5bc7a4cddc0ee4cdf965f565fe56a5df28e745140e7d67cc03a3f8b968621ae6"} Jan 27 14:45:24 crc kubenswrapper[4698]: I0127 14:45:24.640737 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-54f6556b57-vxgv8" event={"ID":"7ae0807b-48fe-425b-b7c7-72692c491175","Type":"ContainerStarted","Data":"e3dd0d0d03de41bf2849806aa59395980ab6698c037137b1231b48ac102bc873"} Jan 27 14:45:27 crc kubenswrapper[4698]: I0127 14:45:27.452506 4698 patch_prober.go:28] interesting pod/machine-config-daemon-ndrd6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 14:45:27 crc kubenswrapper[4698]: I0127 14:45:27.453123 4698 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ndrd6" podUID="3e403fc5-7005-474c-8c75-b7906b481677" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 14:45:30 crc kubenswrapper[4698]: I0127 14:45:30.677751 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-54b596d688-wqmn2" 
event={"ID":"ce6de35a-b78b-4e2e-87e0-30608c3ee8a6","Type":"ContainerStarted","Data":"7da927ac3e28d42fc65f752c351ddfaf271ca3101caad46b30fd36c542125295"} Jan 27 14:45:30 crc kubenswrapper[4698]: I0127 14:45:30.678365 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-54b596d688-wqmn2" Jan 27 14:45:30 crc kubenswrapper[4698]: I0127 14:45:30.680429 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-54f6556b57-vxgv8" event={"ID":"7ae0807b-48fe-425b-b7c7-72692c491175","Type":"ContainerStarted","Data":"eacd60e32df223e1979b3e36240dfc7ca3d912a3c8a2d3222f0b5371a27db7a4"} Jan 27 14:45:30 crc kubenswrapper[4698]: I0127 14:45:30.680622 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-webhook-server-54f6556b57-vxgv8" Jan 27 14:45:30 crc kubenswrapper[4698]: I0127 14:45:30.699467 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-controller-manager-54b596d688-wqmn2" podStartSLOduration=1.517009055 podStartE2EDuration="7.699449202s" podCreationTimestamp="2026-01-27 14:45:23 +0000 UTC" firstStartedPulling="2026-01-27 14:45:24.125974387 +0000 UTC m=+979.802751852" lastFinishedPulling="2026-01-27 14:45:30.308414534 +0000 UTC m=+985.985191999" observedRunningTime="2026-01-27 14:45:30.697411548 +0000 UTC m=+986.374189013" watchObservedRunningTime="2026-01-27 14:45:30.699449202 +0000 UTC m=+986.376226667" Jan 27 14:45:30 crc kubenswrapper[4698]: I0127 14:45:30.720221 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-webhook-server-54f6556b57-vxgv8" podStartSLOduration=1.792991975 podStartE2EDuration="7.720199396s" podCreationTimestamp="2026-01-27 14:45:23 +0000 UTC" firstStartedPulling="2026-01-27 14:45:24.366339113 +0000 UTC m=+980.043116588" lastFinishedPulling="2026-01-27 14:45:30.293546544 +0000 UTC m=+985.970324009" observedRunningTime="2026-01-27 14:45:30.71844232 +0000 UTC m=+986.395219785" watchObservedRunningTime="2026-01-27 14:45:30.720199396 +0000 UTC m=+986.396976861" Jan 27 14:45:38 crc kubenswrapper[4698]: I0127 14:45:38.388649 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-zq6hp"] Jan 27 14:45:38 crc kubenswrapper[4698]: I0127 14:45:38.390280 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-zq6hp" Jan 27 14:45:38 crc kubenswrapper[4698]: I0127 14:45:38.402468 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-zq6hp"] Jan 27 14:45:38 crc kubenswrapper[4698]: I0127 14:45:38.483672 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dnn2k\" (UniqueName: \"kubernetes.io/projected/53e2f75f-da71-4f0b-a5a6-4e1ab9c37e21-kube-api-access-dnn2k\") pod \"certified-operators-zq6hp\" (UID: \"53e2f75f-da71-4f0b-a5a6-4e1ab9c37e21\") " pod="openshift-marketplace/certified-operators-zq6hp" Jan 27 14:45:38 crc kubenswrapper[4698]: I0127 14:45:38.483749 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/53e2f75f-da71-4f0b-a5a6-4e1ab9c37e21-catalog-content\") pod \"certified-operators-zq6hp\" (UID: \"53e2f75f-da71-4f0b-a5a6-4e1ab9c37e21\") " pod="openshift-marketplace/certified-operators-zq6hp" Jan 27 14:45:38 crc kubenswrapper[4698]: I0127 14:45:38.483792 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/53e2f75f-da71-4f0b-a5a6-4e1ab9c37e21-utilities\") pod \"certified-operators-zq6hp\" (UID: \"53e2f75f-da71-4f0b-a5a6-4e1ab9c37e21\") " pod="openshift-marketplace/certified-operators-zq6hp" Jan 27 14:45:38 crc kubenswrapper[4698]: I0127 14:45:38.585309 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/53e2f75f-da71-4f0b-a5a6-4e1ab9c37e21-catalog-content\") pod \"certified-operators-zq6hp\" (UID: \"53e2f75f-da71-4f0b-a5a6-4e1ab9c37e21\") " pod="openshift-marketplace/certified-operators-zq6hp" Jan 27 14:45:38 crc kubenswrapper[4698]: I0127 14:45:38.585662 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/53e2f75f-da71-4f0b-a5a6-4e1ab9c37e21-utilities\") pod \"certified-operators-zq6hp\" (UID: \"53e2f75f-da71-4f0b-a5a6-4e1ab9c37e21\") " pod="openshift-marketplace/certified-operators-zq6hp" Jan 27 14:45:38 crc kubenswrapper[4698]: I0127 14:45:38.585839 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dnn2k\" (UniqueName: \"kubernetes.io/projected/53e2f75f-da71-4f0b-a5a6-4e1ab9c37e21-kube-api-access-dnn2k\") pod \"certified-operators-zq6hp\" (UID: \"53e2f75f-da71-4f0b-a5a6-4e1ab9c37e21\") " pod="openshift-marketplace/certified-operators-zq6hp" Jan 27 14:45:38 crc kubenswrapper[4698]: I0127 14:45:38.585888 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/53e2f75f-da71-4f0b-a5a6-4e1ab9c37e21-catalog-content\") pod \"certified-operators-zq6hp\" (UID: \"53e2f75f-da71-4f0b-a5a6-4e1ab9c37e21\") " pod="openshift-marketplace/certified-operators-zq6hp" Jan 27 14:45:38 crc kubenswrapper[4698]: I0127 14:45:38.586311 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/53e2f75f-da71-4f0b-a5a6-4e1ab9c37e21-utilities\") pod \"certified-operators-zq6hp\" (UID: \"53e2f75f-da71-4f0b-a5a6-4e1ab9c37e21\") " pod="openshift-marketplace/certified-operators-zq6hp" Jan 27 14:45:38 crc kubenswrapper[4698]: I0127 14:45:38.613562 4698 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-dnn2k\" (UniqueName: \"kubernetes.io/projected/53e2f75f-da71-4f0b-a5a6-4e1ab9c37e21-kube-api-access-dnn2k\") pod \"certified-operators-zq6hp\" (UID: \"53e2f75f-da71-4f0b-a5a6-4e1ab9c37e21\") " pod="openshift-marketplace/certified-operators-zq6hp" Jan 27 14:45:38 crc kubenswrapper[4698]: I0127 14:45:38.709891 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-zq6hp" Jan 27 14:45:39 crc kubenswrapper[4698]: I0127 14:45:39.206354 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-zq6hp"] Jan 27 14:45:39 crc kubenswrapper[4698]: W0127 14:45:39.215271 4698 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod53e2f75f_da71_4f0b_a5a6_4e1ab9c37e21.slice/crio-96f260c736ed55748a9401fe9e27cfc42b9604dd059e8176f884546dbb7d192d WatchSource:0}: Error finding container 96f260c736ed55748a9401fe9e27cfc42b9604dd059e8176f884546dbb7d192d: Status 404 returned error can't find the container with id 96f260c736ed55748a9401fe9e27cfc42b9604dd059e8176f884546dbb7d192d Jan 27 14:45:39 crc kubenswrapper[4698]: I0127 14:45:39.730868 4698 generic.go:334] "Generic (PLEG): container finished" podID="53e2f75f-da71-4f0b-a5a6-4e1ab9c37e21" containerID="917e153fcdbbb13589a5007ff5042c1654d57ffcd87ef1b6a28704ee4f0a8e0b" exitCode=0 Jan 27 14:45:39 crc kubenswrapper[4698]: I0127 14:45:39.731220 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zq6hp" event={"ID":"53e2f75f-da71-4f0b-a5a6-4e1ab9c37e21","Type":"ContainerDied","Data":"917e153fcdbbb13589a5007ff5042c1654d57ffcd87ef1b6a28704ee4f0a8e0b"} Jan 27 14:45:39 crc kubenswrapper[4698]: I0127 14:45:39.731257 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zq6hp" event={"ID":"53e2f75f-da71-4f0b-a5a6-4e1ab9c37e21","Type":"ContainerStarted","Data":"96f260c736ed55748a9401fe9e27cfc42b9604dd059e8176f884546dbb7d192d"} Jan 27 14:45:40 crc kubenswrapper[4698]: I0127 14:45:40.739080 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zq6hp" event={"ID":"53e2f75f-da71-4f0b-a5a6-4e1ab9c37e21","Type":"ContainerStarted","Data":"c5333e5eb733d8f05d3339c948e95d2257a9b488eb5d0164f51044198b71dfd1"} Jan 27 14:45:41 crc kubenswrapper[4698]: I0127 14:45:41.747811 4698 generic.go:334] "Generic (PLEG): container finished" podID="53e2f75f-da71-4f0b-a5a6-4e1ab9c37e21" containerID="c5333e5eb733d8f05d3339c948e95d2257a9b488eb5d0164f51044198b71dfd1" exitCode=0 Jan 27 14:45:41 crc kubenswrapper[4698]: I0127 14:45:41.747855 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zq6hp" event={"ID":"53e2f75f-da71-4f0b-a5a6-4e1ab9c37e21","Type":"ContainerDied","Data":"c5333e5eb733d8f05d3339c948e95d2257a9b488eb5d0164f51044198b71dfd1"} Jan 27 14:45:43 crc kubenswrapper[4698]: I0127 14:45:43.762814 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zq6hp" event={"ID":"53e2f75f-da71-4f0b-a5a6-4e1ab9c37e21","Type":"ContainerStarted","Data":"0cc52da75cd790d45cbbadbb1d7e17f35a8ca26bd4ac13c1149642de84555095"} Jan 27 14:45:43 crc kubenswrapper[4698]: I0127 14:45:43.782233 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-zq6hp" 
podStartSLOduration=2.428922287 podStartE2EDuration="5.782215274s" podCreationTimestamp="2026-01-27 14:45:38 +0000 UTC" firstStartedPulling="2026-01-27 14:45:39.732057014 +0000 UTC m=+995.408834479" lastFinishedPulling="2026-01-27 14:45:43.085349991 +0000 UTC m=+998.762127466" observedRunningTime="2026-01-27 14:45:43.778432576 +0000 UTC m=+999.455210051" watchObservedRunningTime="2026-01-27 14:45:43.782215274 +0000 UTC m=+999.458992749" Jan 27 14:45:43 crc kubenswrapper[4698]: I0127 14:45:43.986098 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-webhook-server-54f6556b57-vxgv8" Jan 27 14:45:48 crc kubenswrapper[4698]: I0127 14:45:48.710472 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-zq6hp" Jan 27 14:45:48 crc kubenswrapper[4698]: I0127 14:45:48.711110 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-zq6hp" Jan 27 14:45:48 crc kubenswrapper[4698]: I0127 14:45:48.766239 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-zq6hp" Jan 27 14:45:48 crc kubenswrapper[4698]: I0127 14:45:48.848784 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-zq6hp" Jan 27 14:45:49 crc kubenswrapper[4698]: I0127 14:45:49.970224 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-zq6hp"] Jan 27 14:45:50 crc kubenswrapper[4698]: I0127 14:45:50.816815 4698 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-zq6hp" podUID="53e2f75f-da71-4f0b-a5a6-4e1ab9c37e21" containerName="registry-server" containerID="cri-o://0cc52da75cd790d45cbbadbb1d7e17f35a8ca26bd4ac13c1149642de84555095" gracePeriod=2 Jan 27 14:45:51 crc kubenswrapper[4698]: I0127 14:45:51.825614 4698 generic.go:334] "Generic (PLEG): container finished" podID="53e2f75f-da71-4f0b-a5a6-4e1ab9c37e21" containerID="0cc52da75cd790d45cbbadbb1d7e17f35a8ca26bd4ac13c1149642de84555095" exitCode=0 Jan 27 14:45:51 crc kubenswrapper[4698]: I0127 14:45:51.825686 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zq6hp" event={"ID":"53e2f75f-da71-4f0b-a5a6-4e1ab9c37e21","Type":"ContainerDied","Data":"0cc52da75cd790d45cbbadbb1d7e17f35a8ca26bd4ac13c1149642de84555095"} Jan 27 14:45:52 crc kubenswrapper[4698]: I0127 14:45:52.414391 4698 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-zq6hp" Jan 27 14:45:52 crc kubenswrapper[4698]: I0127 14:45:52.562111 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dnn2k\" (UniqueName: \"kubernetes.io/projected/53e2f75f-da71-4f0b-a5a6-4e1ab9c37e21-kube-api-access-dnn2k\") pod \"53e2f75f-da71-4f0b-a5a6-4e1ab9c37e21\" (UID: \"53e2f75f-da71-4f0b-a5a6-4e1ab9c37e21\") " Jan 27 14:45:52 crc kubenswrapper[4698]: I0127 14:45:52.562230 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/53e2f75f-da71-4f0b-a5a6-4e1ab9c37e21-catalog-content\") pod \"53e2f75f-da71-4f0b-a5a6-4e1ab9c37e21\" (UID: \"53e2f75f-da71-4f0b-a5a6-4e1ab9c37e21\") " Jan 27 14:45:52 crc kubenswrapper[4698]: I0127 14:45:52.562288 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/53e2f75f-da71-4f0b-a5a6-4e1ab9c37e21-utilities\") pod \"53e2f75f-da71-4f0b-a5a6-4e1ab9c37e21\" (UID: \"53e2f75f-da71-4f0b-a5a6-4e1ab9c37e21\") " Jan 27 14:45:52 crc kubenswrapper[4698]: I0127 14:45:52.563334 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/53e2f75f-da71-4f0b-a5a6-4e1ab9c37e21-utilities" (OuterVolumeSpecName: "utilities") pod "53e2f75f-da71-4f0b-a5a6-4e1ab9c37e21" (UID: "53e2f75f-da71-4f0b-a5a6-4e1ab9c37e21"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 14:45:52 crc kubenswrapper[4698]: I0127 14:45:52.574971 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/53e2f75f-da71-4f0b-a5a6-4e1ab9c37e21-kube-api-access-dnn2k" (OuterVolumeSpecName: "kube-api-access-dnn2k") pod "53e2f75f-da71-4f0b-a5a6-4e1ab9c37e21" (UID: "53e2f75f-da71-4f0b-a5a6-4e1ab9c37e21"). InnerVolumeSpecName "kube-api-access-dnn2k". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:45:52 crc kubenswrapper[4698]: I0127 14:45:52.615278 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/53e2f75f-da71-4f0b-a5a6-4e1ab9c37e21-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "53e2f75f-da71-4f0b-a5a6-4e1ab9c37e21" (UID: "53e2f75f-da71-4f0b-a5a6-4e1ab9c37e21"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 14:45:52 crc kubenswrapper[4698]: I0127 14:45:52.664349 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dnn2k\" (UniqueName: \"kubernetes.io/projected/53e2f75f-da71-4f0b-a5a6-4e1ab9c37e21-kube-api-access-dnn2k\") on node \"crc\" DevicePath \"\"" Jan 27 14:45:52 crc kubenswrapper[4698]: I0127 14:45:52.664401 4698 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/53e2f75f-da71-4f0b-a5a6-4e1ab9c37e21-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 14:45:52 crc kubenswrapper[4698]: I0127 14:45:52.664413 4698 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/53e2f75f-da71-4f0b-a5a6-4e1ab9c37e21-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 14:45:52 crc kubenswrapper[4698]: I0127 14:45:52.835557 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zq6hp" event={"ID":"53e2f75f-da71-4f0b-a5a6-4e1ab9c37e21","Type":"ContainerDied","Data":"96f260c736ed55748a9401fe9e27cfc42b9604dd059e8176f884546dbb7d192d"} Jan 27 14:45:52 crc kubenswrapper[4698]: I0127 14:45:52.835630 4698 scope.go:117] "RemoveContainer" containerID="0cc52da75cd790d45cbbadbb1d7e17f35a8ca26bd4ac13c1149642de84555095" Jan 27 14:45:52 crc kubenswrapper[4698]: I0127 14:45:52.835655 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-zq6hp" Jan 27 14:45:52 crc kubenswrapper[4698]: I0127 14:45:52.852541 4698 scope.go:117] "RemoveContainer" containerID="c5333e5eb733d8f05d3339c948e95d2257a9b488eb5d0164f51044198b71dfd1" Jan 27 14:45:52 crc kubenswrapper[4698]: I0127 14:45:52.869957 4698 scope.go:117] "RemoveContainer" containerID="917e153fcdbbb13589a5007ff5042c1654d57ffcd87ef1b6a28704ee4f0a8e0b" Jan 27 14:45:52 crc kubenswrapper[4698]: I0127 14:45:52.876788 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-zq6hp"] Jan 27 14:45:52 crc kubenswrapper[4698]: I0127 14:45:52.882543 4698 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-zq6hp"] Jan 27 14:45:52 crc kubenswrapper[4698]: I0127 14:45:52.999966 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="53e2f75f-da71-4f0b-a5a6-4e1ab9c37e21" path="/var/lib/kubelet/pods/53e2f75f-da71-4f0b-a5a6-4e1ab9c37e21/volumes" Jan 27 14:45:57 crc kubenswrapper[4698]: I0127 14:45:57.452588 4698 patch_prober.go:28] interesting pod/machine-config-daemon-ndrd6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 14:45:57 crc kubenswrapper[4698]: I0127 14:45:57.452996 4698 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ndrd6" podUID="3e403fc5-7005-474c-8c75-b7906b481677" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 14:45:57 crc kubenswrapper[4698]: I0127 14:45:57.453065 4698 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-ndrd6" Jan 27 14:45:57 crc kubenswrapper[4698]: I0127 14:45:57.453732 4698 
kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"ebfd43abe434a69d79a515882ba43f2e73b9ebc9b44891f2eec4f138ba47c9b0"} pod="openshift-machine-config-operator/machine-config-daemon-ndrd6" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 27 14:45:57 crc kubenswrapper[4698]: I0127 14:45:57.453792 4698 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-ndrd6" podUID="3e403fc5-7005-474c-8c75-b7906b481677" containerName="machine-config-daemon" containerID="cri-o://ebfd43abe434a69d79a515882ba43f2e73b9ebc9b44891f2eec4f138ba47c9b0" gracePeriod=600 Jan 27 14:45:57 crc kubenswrapper[4698]: I0127 14:45:57.871184 4698 generic.go:334] "Generic (PLEG): container finished" podID="3e403fc5-7005-474c-8c75-b7906b481677" containerID="ebfd43abe434a69d79a515882ba43f2e73b9ebc9b44891f2eec4f138ba47c9b0" exitCode=0 Jan 27 14:45:57 crc kubenswrapper[4698]: I0127 14:45:57.871231 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ndrd6" event={"ID":"3e403fc5-7005-474c-8c75-b7906b481677","Type":"ContainerDied","Data":"ebfd43abe434a69d79a515882ba43f2e73b9ebc9b44891f2eec4f138ba47c9b0"} Jan 27 14:45:57 crc kubenswrapper[4698]: I0127 14:45:57.871263 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ndrd6" event={"ID":"3e403fc5-7005-474c-8c75-b7906b481677","Type":"ContainerStarted","Data":"12b905ab61ac76551a3e2b33bba7698de71a27292af8be5d463cd0b69aa90d97"} Jan 27 14:45:57 crc kubenswrapper[4698]: I0127 14:45:57.871283 4698 scope.go:117] "RemoveContainer" containerID="6582ea9fc85bfcf7cda9ed10da113c6bdd3405f16aaba0460f6cb69b57c13ba5" Jan 27 14:46:03 crc kubenswrapper[4698]: I0127 14:46:03.542661 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-controller-manager-54b596d688-wqmn2" Jan 27 14:46:04 crc kubenswrapper[4698]: I0127 14:46:04.314729 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-n5rs2"] Jan 27 14:46:04 crc kubenswrapper[4698]: E0127 14:46:04.314993 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="53e2f75f-da71-4f0b-a5a6-4e1ab9c37e21" containerName="extract-content" Jan 27 14:46:04 crc kubenswrapper[4698]: I0127 14:46:04.315008 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="53e2f75f-da71-4f0b-a5a6-4e1ab9c37e21" containerName="extract-content" Jan 27 14:46:04 crc kubenswrapper[4698]: E0127 14:46:04.315026 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="53e2f75f-da71-4f0b-a5a6-4e1ab9c37e21" containerName="extract-utilities" Jan 27 14:46:04 crc kubenswrapper[4698]: I0127 14:46:04.315033 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="53e2f75f-da71-4f0b-a5a6-4e1ab9c37e21" containerName="extract-utilities" Jan 27 14:46:04 crc kubenswrapper[4698]: E0127 14:46:04.315045 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="53e2f75f-da71-4f0b-a5a6-4e1ab9c37e21" containerName="registry-server" Jan 27 14:46:04 crc kubenswrapper[4698]: I0127 14:46:04.315053 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="53e2f75f-da71-4f0b-a5a6-4e1ab9c37e21" containerName="registry-server" Jan 27 14:46:04 crc kubenswrapper[4698]: I0127 14:46:04.315173 4698 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="53e2f75f-da71-4f0b-a5a6-4e1ab9c37e21" containerName="registry-server" Jan 27 14:46:04 crc kubenswrapper[4698]: I0127 14:46:04.317117 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-n5rs2" Jan 27 14:46:04 crc kubenswrapper[4698]: I0127 14:46:04.319410 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"frr-startup" Jan 27 14:46:04 crc kubenswrapper[4698]: I0127 14:46:04.319654 4698 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-certs-secret" Jan 27 14:46:04 crc kubenswrapper[4698]: I0127 14:46:04.321042 4698 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-daemon-dockercfg-wzxhn" Jan 27 14:46:04 crc kubenswrapper[4698]: I0127 14:46:04.340406 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-jtlnl"] Jan 27 14:46:04 crc kubenswrapper[4698]: I0127 14:46:04.341685 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-jtlnl" Jan 27 14:46:04 crc kubenswrapper[4698]: I0127 14:46:04.344455 4698 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-webhook-server-cert" Jan 27 14:46:04 crc kubenswrapper[4698]: I0127 14:46:04.367526 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-jtlnl"] Jan 27 14:46:04 crc kubenswrapper[4698]: I0127 14:46:04.423132 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/d9a1c6fd-85f4-40b3-aab1-7ac14d1f2f02-reloader\") pod \"frr-k8s-n5rs2\" (UID: \"d9a1c6fd-85f4-40b3-aab1-7ac14d1f2f02\") " pod="metallb-system/frr-k8s-n5rs2" Jan 27 14:46:04 crc kubenswrapper[4698]: I0127 14:46:04.423185 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d9a1c6fd-85f4-40b3-aab1-7ac14d1f2f02-metrics-certs\") pod \"frr-k8s-n5rs2\" (UID: \"d9a1c6fd-85f4-40b3-aab1-7ac14d1f2f02\") " pod="metallb-system/frr-k8s-n5rs2" Jan 27 14:46:04 crc kubenswrapper[4698]: I0127 14:46:04.423224 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/d9a1c6fd-85f4-40b3-aab1-7ac14d1f2f02-frr-startup\") pod \"frr-k8s-n5rs2\" (UID: \"d9a1c6fd-85f4-40b3-aab1-7ac14d1f2f02\") " pod="metallb-system/frr-k8s-n5rs2" Jan 27 14:46:04 crc kubenswrapper[4698]: I0127 14:46:04.423282 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/d9a1c6fd-85f4-40b3-aab1-7ac14d1f2f02-metrics\") pod \"frr-k8s-n5rs2\" (UID: \"d9a1c6fd-85f4-40b3-aab1-7ac14d1f2f02\") " pod="metallb-system/frr-k8s-n5rs2" Jan 27 14:46:04 crc kubenswrapper[4698]: I0127 14:46:04.423340 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/d9a1c6fd-85f4-40b3-aab1-7ac14d1f2f02-frr-sockets\") pod \"frr-k8s-n5rs2\" (UID: \"d9a1c6fd-85f4-40b3-aab1-7ac14d1f2f02\") " pod="metallb-system/frr-k8s-n5rs2" Jan 27 14:46:04 crc kubenswrapper[4698]: I0127 14:46:04.423370 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-6v7f5\" (UniqueName: \"kubernetes.io/projected/d9a1c6fd-85f4-40b3-aab1-7ac14d1f2f02-kube-api-access-6v7f5\") pod \"frr-k8s-n5rs2\" (UID: \"d9a1c6fd-85f4-40b3-aab1-7ac14d1f2f02\") " pod="metallb-system/frr-k8s-n5rs2" Jan 27 14:46:04 crc kubenswrapper[4698]: I0127 14:46:04.423407 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/d9a1c6fd-85f4-40b3-aab1-7ac14d1f2f02-frr-conf\") pod \"frr-k8s-n5rs2\" (UID: \"d9a1c6fd-85f4-40b3-aab1-7ac14d1f2f02\") " pod="metallb-system/frr-k8s-n5rs2" Jan 27 14:46:04 crc kubenswrapper[4698]: I0127 14:46:04.427587 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/speaker-wzjsq"] Jan 27 14:46:04 crc kubenswrapper[4698]: I0127 14:46:04.428510 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-wzjsq" Jan 27 14:46:04 crc kubenswrapper[4698]: I0127 14:46:04.430924 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"metallb-excludel2" Jan 27 14:46:04 crc kubenswrapper[4698]: I0127 14:46:04.430972 4698 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-memberlist" Jan 27 14:46:04 crc kubenswrapper[4698]: I0127 14:46:04.431019 4698 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-certs-secret" Jan 27 14:46:04 crc kubenswrapper[4698]: I0127 14:46:04.432292 4698 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-dockercfg-9ttvp" Jan 27 14:46:04 crc kubenswrapper[4698]: I0127 14:46:04.468138 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/controller-6968d8fdc4-p9fxw"] Jan 27 14:46:04 crc kubenswrapper[4698]: I0127 14:46:04.469351 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/controller-6968d8fdc4-p9fxw" Jan 27 14:46:04 crc kubenswrapper[4698]: I0127 14:46:04.472994 4698 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-certs-secret" Jan 27 14:46:04 crc kubenswrapper[4698]: I0127 14:46:04.500843 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-6968d8fdc4-p9fxw"] Jan 27 14:46:04 crc kubenswrapper[4698]: I0127 14:46:04.524473 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/09f854d2-d03b-492d-9e84-b6494a6f956a-metallb-excludel2\") pod \"speaker-wzjsq\" (UID: \"09f854d2-d03b-492d-9e84-b6494a6f956a\") " pod="metallb-system/speaker-wzjsq" Jan 27 14:46:04 crc kubenswrapper[4698]: I0127 14:46:04.524529 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8jnkg\" (UniqueName: \"kubernetes.io/projected/c32a50d1-66c4-4627-924a-21d72a27b3d0-kube-api-access-8jnkg\") pod \"frr-k8s-webhook-server-7df86c4f6c-jtlnl\" (UID: \"c32a50d1-66c4-4627-924a-21d72a27b3d0\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-jtlnl" Jan 27 14:46:04 crc kubenswrapper[4698]: I0127 14:46:04.524560 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/d9a1c6fd-85f4-40b3-aab1-7ac14d1f2f02-reloader\") pod \"frr-k8s-n5rs2\" (UID: \"d9a1c6fd-85f4-40b3-aab1-7ac14d1f2f02\") " pod="metallb-system/frr-k8s-n5rs2" Jan 27 14:46:04 crc kubenswrapper[4698]: I0127 14:46:04.524588 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d9a1c6fd-85f4-40b3-aab1-7ac14d1f2f02-metrics-certs\") pod \"frr-k8s-n5rs2\" (UID: \"d9a1c6fd-85f4-40b3-aab1-7ac14d1f2f02\") " pod="metallb-system/frr-k8s-n5rs2" Jan 27 14:46:04 crc kubenswrapper[4698]: I0127 14:46:04.524611 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/09f854d2-d03b-492d-9e84-b6494a6f956a-metrics-certs\") pod \"speaker-wzjsq\" (UID: \"09f854d2-d03b-492d-9e84-b6494a6f956a\") " pod="metallb-system/speaker-wzjsq" Jan 27 14:46:04 crc kubenswrapper[4698]: I0127 14:46:04.524656 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/d9a1c6fd-85f4-40b3-aab1-7ac14d1f2f02-frr-startup\") pod \"frr-k8s-n5rs2\" (UID: \"d9a1c6fd-85f4-40b3-aab1-7ac14d1f2f02\") " pod="metallb-system/frr-k8s-n5rs2" Jan 27 14:46:04 crc kubenswrapper[4698]: I0127 14:46:04.524681 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/d9a1c6fd-85f4-40b3-aab1-7ac14d1f2f02-metrics\") pod \"frr-k8s-n5rs2\" (UID: \"d9a1c6fd-85f4-40b3-aab1-7ac14d1f2f02\") " pod="metallb-system/frr-k8s-n5rs2" Jan 27 14:46:04 crc kubenswrapper[4698]: I0127 14:46:04.524846 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/09f854d2-d03b-492d-9e84-b6494a6f956a-memberlist\") pod \"speaker-wzjsq\" (UID: \"09f854d2-d03b-492d-9e84-b6494a6f956a\") " pod="metallb-system/speaker-wzjsq" Jan 27 14:46:04 crc kubenswrapper[4698]: E0127 14:46:04.524887 4698 secret.go:188] Couldn't get secret 
metallb-system/frr-k8s-certs-secret: secret "frr-k8s-certs-secret" not found
Jan 27 14:46:04 crc kubenswrapper[4698]: I0127 14:46:04.524909 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/d9a1c6fd-85f4-40b3-aab1-7ac14d1f2f02-frr-sockets\") pod \"frr-k8s-n5rs2\" (UID: \"d9a1c6fd-85f4-40b3-aab1-7ac14d1f2f02\") " pod="metallb-system/frr-k8s-n5rs2"
Jan 27 14:46:04 crc kubenswrapper[4698]: E0127 14:46:04.524945 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d9a1c6fd-85f4-40b3-aab1-7ac14d1f2f02-metrics-certs podName:d9a1c6fd-85f4-40b3-aab1-7ac14d1f2f02 nodeName:}" failed. No retries permitted until 2026-01-27 14:46:05.024925689 +0000 UTC m=+1020.701703204 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/d9a1c6fd-85f4-40b3-aab1-7ac14d1f2f02-metrics-certs") pod "frr-k8s-n5rs2" (UID: "d9a1c6fd-85f4-40b3-aab1-7ac14d1f2f02") : secret "frr-k8s-certs-secret" not found
Jan 27 14:46:04 crc kubenswrapper[4698]: I0127 14:46:04.524962 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6v7f5\" (UniqueName: \"kubernetes.io/projected/d9a1c6fd-85f4-40b3-aab1-7ac14d1f2f02-kube-api-access-6v7f5\") pod \"frr-k8s-n5rs2\" (UID: \"d9a1c6fd-85f4-40b3-aab1-7ac14d1f2f02\") " pod="metallb-system/frr-k8s-n5rs2"
Jan 27 14:46:04 crc kubenswrapper[4698]: I0127 14:46:04.524986 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/c32a50d1-66c4-4627-924a-21d72a27b3d0-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-jtlnl\" (UID: \"c32a50d1-66c4-4627-924a-21d72a27b3d0\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-jtlnl"
Jan 27 14:46:04 crc kubenswrapper[4698]: I0127 14:46:04.525026 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sshlt\" (UniqueName: \"kubernetes.io/projected/09f854d2-d03b-492d-9e84-b6494a6f956a-kube-api-access-sshlt\") pod \"speaker-wzjsq\" (UID: \"09f854d2-d03b-492d-9e84-b6494a6f956a\") " pod="metallb-system/speaker-wzjsq"
Jan 27 14:46:04 crc kubenswrapper[4698]: I0127 14:46:04.525063 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/d9a1c6fd-85f4-40b3-aab1-7ac14d1f2f02-frr-conf\") pod \"frr-k8s-n5rs2\" (UID: \"d9a1c6fd-85f4-40b3-aab1-7ac14d1f2f02\") " pod="metallb-system/frr-k8s-n5rs2"
Jan 27 14:46:04 crc kubenswrapper[4698]: I0127 14:46:04.525205 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/d9a1c6fd-85f4-40b3-aab1-7ac14d1f2f02-reloader\") pod \"frr-k8s-n5rs2\" (UID: \"d9a1c6fd-85f4-40b3-aab1-7ac14d1f2f02\") " pod="metallb-system/frr-k8s-n5rs2"
Jan 27 14:46:04 crc kubenswrapper[4698]: I0127 14:46:04.525223 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/d9a1c6fd-85f4-40b3-aab1-7ac14d1f2f02-frr-sockets\") pod \"frr-k8s-n5rs2\" (UID: \"d9a1c6fd-85f4-40b3-aab1-7ac14d1f2f02\") " pod="metallb-system/frr-k8s-n5rs2"
Jan 27 14:46:04 crc kubenswrapper[4698]: I0127 14:46:04.525329 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/d9a1c6fd-85f4-40b3-aab1-7ac14d1f2f02-frr-conf\") pod \"frr-k8s-n5rs2\" (UID: \"d9a1c6fd-85f4-40b3-aab1-7ac14d1f2f02\") " pod="metallb-system/frr-k8s-n5rs2"
Jan 27 14:46:04 crc kubenswrapper[4698]: I0127 14:46:04.525883 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/d9a1c6fd-85f4-40b3-aab1-7ac14d1f2f02-frr-startup\") pod \"frr-k8s-n5rs2\" (UID: \"d9a1c6fd-85f4-40b3-aab1-7ac14d1f2f02\") " pod="metallb-system/frr-k8s-n5rs2"
Jan 27 14:46:04 crc kubenswrapper[4698]: I0127 14:46:04.526441 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/d9a1c6fd-85f4-40b3-aab1-7ac14d1f2f02-metrics\") pod \"frr-k8s-n5rs2\" (UID: \"d9a1c6fd-85f4-40b3-aab1-7ac14d1f2f02\") " pod="metallb-system/frr-k8s-n5rs2"
Jan 27 14:46:04 crc kubenswrapper[4698]: I0127 14:46:04.567184 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6v7f5\" (UniqueName: \"kubernetes.io/projected/d9a1c6fd-85f4-40b3-aab1-7ac14d1f2f02-kube-api-access-6v7f5\") pod \"frr-k8s-n5rs2\" (UID: \"d9a1c6fd-85f4-40b3-aab1-7ac14d1f2f02\") " pod="metallb-system/frr-k8s-n5rs2"
Jan 27 14:46:04 crc kubenswrapper[4698]: I0127 14:46:04.626537 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cjppp\" (UniqueName: \"kubernetes.io/projected/ce91ea9b-8af1-41dc-9104-7fb695211734-kube-api-access-cjppp\") pod \"controller-6968d8fdc4-p9fxw\" (UID: \"ce91ea9b-8af1-41dc-9104-7fb695211734\") " pod="metallb-system/controller-6968d8fdc4-p9fxw"
Jan 27 14:46:04 crc kubenswrapper[4698]: I0127 14:46:04.627038 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/09f854d2-d03b-492d-9e84-b6494a6f956a-metallb-excludel2\") pod \"speaker-wzjsq\" (UID: \"09f854d2-d03b-492d-9e84-b6494a6f956a\") " pod="metallb-system/speaker-wzjsq"
Jan 27 14:46:04 crc kubenswrapper[4698]: I0127 14:46:04.627069 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ce91ea9b-8af1-41dc-9104-7fb695211734-cert\") pod \"controller-6968d8fdc4-p9fxw\" (UID: \"ce91ea9b-8af1-41dc-9104-7fb695211734\") " pod="metallb-system/controller-6968d8fdc4-p9fxw"
Jan 27 14:46:04 crc kubenswrapper[4698]: I0127 14:46:04.627093 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8jnkg\" (UniqueName: \"kubernetes.io/projected/c32a50d1-66c4-4627-924a-21d72a27b3d0-kube-api-access-8jnkg\") pod \"frr-k8s-webhook-server-7df86c4f6c-jtlnl\" (UID: \"c32a50d1-66c4-4627-924a-21d72a27b3d0\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-jtlnl"
Jan 27 14:46:04 crc kubenswrapper[4698]: I0127 14:46:04.627143 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/09f854d2-d03b-492d-9e84-b6494a6f956a-metrics-certs\") pod \"speaker-wzjsq\" (UID: \"09f854d2-d03b-492d-9e84-b6494a6f956a\") " pod="metallb-system/speaker-wzjsq"
Jan 27 14:46:04 crc kubenswrapper[4698]: I0127 14:46:04.627175 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ce91ea9b-8af1-41dc-9104-7fb695211734-metrics-certs\") pod \"controller-6968d8fdc4-p9fxw\" (UID: \"ce91ea9b-8af1-41dc-9104-7fb695211734\") " pod="metallb-system/controller-6968d8fdc4-p9fxw"
Jan 27 14:46:04 crc kubenswrapper[4698]: I0127 14:46:04.627433 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/09f854d2-d03b-492d-9e84-b6494a6f956a-memberlist\") pod \"speaker-wzjsq\" (UID: \"09f854d2-d03b-492d-9e84-b6494a6f956a\") " pod="metallb-system/speaker-wzjsq"
Jan 27 14:46:04 crc kubenswrapper[4698]: I0127 14:46:04.627522 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/c32a50d1-66c4-4627-924a-21d72a27b3d0-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-jtlnl\" (UID: \"c32a50d1-66c4-4627-924a-21d72a27b3d0\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-jtlnl"
Jan 27 14:46:04 crc kubenswrapper[4698]: I0127 14:46:04.627566 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sshlt\" (UniqueName: \"kubernetes.io/projected/09f854d2-d03b-492d-9e84-b6494a6f956a-kube-api-access-sshlt\") pod \"speaker-wzjsq\" (UID: \"09f854d2-d03b-492d-9e84-b6494a6f956a\") " pod="metallb-system/speaker-wzjsq"
Jan 27 14:46:04 crc kubenswrapper[4698]: E0127 14:46:04.627562 4698 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found
Jan 27 14:46:04 crc kubenswrapper[4698]: E0127 14:46:04.627562 4698 secret.go:188] Couldn't get secret metallb-system/speaker-certs-secret: secret "speaker-certs-secret" not found
Jan 27 14:46:04 crc kubenswrapper[4698]: E0127 14:46:04.627713 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/09f854d2-d03b-492d-9e84-b6494a6f956a-memberlist podName:09f854d2-d03b-492d-9e84-b6494a6f956a nodeName:}" failed. No retries permitted until 2026-01-27 14:46:05.12769208 +0000 UTC m=+1020.804469545 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/09f854d2-d03b-492d-9e84-b6494a6f956a-memberlist") pod "speaker-wzjsq" (UID: "09f854d2-d03b-492d-9e84-b6494a6f956a") : secret "metallb-memberlist" not found
Jan 27 14:46:04 crc kubenswrapper[4698]: E0127 14:46:04.627778 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/09f854d2-d03b-492d-9e84-b6494a6f956a-metrics-certs podName:09f854d2-d03b-492d-9e84-b6494a6f956a nodeName:}" failed. No retries permitted until 2026-01-27 14:46:05.127761562 +0000 UTC m=+1020.804539027 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/09f854d2-d03b-492d-9e84-b6494a6f956a-metrics-certs") pod "speaker-wzjsq" (UID: "09f854d2-d03b-492d-9e84-b6494a6f956a") : secret "speaker-certs-secret" not found
Jan 27 14:46:04 crc kubenswrapper[4698]: I0127 14:46:04.628226 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/09f854d2-d03b-492d-9e84-b6494a6f956a-metallb-excludel2\") pod \"speaker-wzjsq\" (UID: \"09f854d2-d03b-492d-9e84-b6494a6f956a\") " pod="metallb-system/speaker-wzjsq"
Jan 27 14:46:04 crc kubenswrapper[4698]: I0127 14:46:04.632265 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/c32a50d1-66c4-4627-924a-21d72a27b3d0-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-jtlnl\" (UID: \"c32a50d1-66c4-4627-924a-21d72a27b3d0\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-jtlnl"
Jan 27 14:46:04 crc kubenswrapper[4698]: I0127 14:46:04.650511 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8jnkg\" (UniqueName: \"kubernetes.io/projected/c32a50d1-66c4-4627-924a-21d72a27b3d0-kube-api-access-8jnkg\") pod \"frr-k8s-webhook-server-7df86c4f6c-jtlnl\" (UID: \"c32a50d1-66c4-4627-924a-21d72a27b3d0\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-jtlnl"
Jan 27 14:46:04 crc kubenswrapper[4698]: I0127 14:46:04.652581 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sshlt\" (UniqueName: \"kubernetes.io/projected/09f854d2-d03b-492d-9e84-b6494a6f956a-kube-api-access-sshlt\") pod \"speaker-wzjsq\" (UID: \"09f854d2-d03b-492d-9e84-b6494a6f956a\") " pod="metallb-system/speaker-wzjsq"
Jan 27 14:46:04 crc kubenswrapper[4698]: I0127 14:46:04.665389 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-jtlnl"
Jan 27 14:46:04 crc kubenswrapper[4698]: I0127 14:46:04.728718 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cjppp\" (UniqueName: \"kubernetes.io/projected/ce91ea9b-8af1-41dc-9104-7fb695211734-kube-api-access-cjppp\") pod \"controller-6968d8fdc4-p9fxw\" (UID: \"ce91ea9b-8af1-41dc-9104-7fb695211734\") " pod="metallb-system/controller-6968d8fdc4-p9fxw"
Jan 27 14:46:04 crc kubenswrapper[4698]: I0127 14:46:04.728774 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ce91ea9b-8af1-41dc-9104-7fb695211734-cert\") pod \"controller-6968d8fdc4-p9fxw\" (UID: \"ce91ea9b-8af1-41dc-9104-7fb695211734\") " pod="metallb-system/controller-6968d8fdc4-p9fxw"
Jan 27 14:46:04 crc kubenswrapper[4698]: I0127 14:46:04.728838 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ce91ea9b-8af1-41dc-9104-7fb695211734-metrics-certs\") pod \"controller-6968d8fdc4-p9fxw\" (UID: \"ce91ea9b-8af1-41dc-9104-7fb695211734\") " pod="metallb-system/controller-6968d8fdc4-p9fxw"
Jan 27 14:46:04 crc kubenswrapper[4698]: I0127 14:46:04.734869 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ce91ea9b-8af1-41dc-9104-7fb695211734-metrics-certs\") pod \"controller-6968d8fdc4-p9fxw\" (UID: \"ce91ea9b-8af1-41dc-9104-7fb695211734\") " pod="metallb-system/controller-6968d8fdc4-p9fxw"
Jan 27 14:46:04 crc kubenswrapper[4698]: I0127 14:46:04.735415 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ce91ea9b-8af1-41dc-9104-7fb695211734-cert\") pod \"controller-6968d8fdc4-p9fxw\" (UID: \"ce91ea9b-8af1-41dc-9104-7fb695211734\") " pod="metallb-system/controller-6968d8fdc4-p9fxw"
Jan 27 14:46:04 crc kubenswrapper[4698]: I0127 14:46:04.757556 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cjppp\" (UniqueName: \"kubernetes.io/projected/ce91ea9b-8af1-41dc-9104-7fb695211734-kube-api-access-cjppp\") pod \"controller-6968d8fdc4-p9fxw\" (UID: \"ce91ea9b-8af1-41dc-9104-7fb695211734\") " pod="metallb-system/controller-6968d8fdc4-p9fxw"
Jan 27 14:46:04 crc kubenswrapper[4698]: I0127 14:46:04.786255 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/controller-6968d8fdc4-p9fxw"
Jan 27 14:46:05 crc kubenswrapper[4698]: I0127 14:46:05.027228 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-6968d8fdc4-p9fxw"]
Jan 27 14:46:05 crc kubenswrapper[4698]: W0127 14:46:05.035812 4698 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podce91ea9b_8af1_41dc_9104_7fb695211734.slice/crio-196538cf5456fab11321d0173e512ad18845cdaa08b11b43b960f33738a86691 WatchSource:0}: Error finding container 196538cf5456fab11321d0173e512ad18845cdaa08b11b43b960f33738a86691: Status 404 returned error can't find the container with id 196538cf5456fab11321d0173e512ad18845cdaa08b11b43b960f33738a86691
Jan 27 14:46:05 crc kubenswrapper[4698]: I0127 14:46:05.040327 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d9a1c6fd-85f4-40b3-aab1-7ac14d1f2f02-metrics-certs\") pod \"frr-k8s-n5rs2\" (UID: \"d9a1c6fd-85f4-40b3-aab1-7ac14d1f2f02\") " pod="metallb-system/frr-k8s-n5rs2"
Jan 27 14:46:05 crc kubenswrapper[4698]: I0127 14:46:05.051915 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d9a1c6fd-85f4-40b3-aab1-7ac14d1f2f02-metrics-certs\") pod \"frr-k8s-n5rs2\" (UID: \"d9a1c6fd-85f4-40b3-aab1-7ac14d1f2f02\") " pod="metallb-system/frr-k8s-n5rs2"
Jan 27 14:46:05 crc kubenswrapper[4698]: I0127 14:46:05.134749 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-jtlnl"]
Jan 27 14:46:05 crc kubenswrapper[4698]: I0127 14:46:05.141773 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/09f854d2-d03b-492d-9e84-b6494a6f956a-metrics-certs\") pod \"speaker-wzjsq\" (UID: \"09f854d2-d03b-492d-9e84-b6494a6f956a\") " pod="metallb-system/speaker-wzjsq"
Jan 27 14:46:05 crc kubenswrapper[4698]: I0127 14:46:05.141873 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/09f854d2-d03b-492d-9e84-b6494a6f956a-memberlist\") pod \"speaker-wzjsq\" (UID: \"09f854d2-d03b-492d-9e84-b6494a6f956a\") " pod="metallb-system/speaker-wzjsq"
Jan 27 14:46:05 crc kubenswrapper[4698]: E0127 14:46:05.142013 4698 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found
Jan 27 14:46:05 crc kubenswrapper[4698]: E0127 14:46:05.142070 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/09f854d2-d03b-492d-9e84-b6494a6f956a-memberlist podName:09f854d2-d03b-492d-9e84-b6494a6f956a nodeName:}" failed. No retries permitted until 2026-01-27 14:46:06.142054298 +0000 UTC m=+1021.818831763 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/09f854d2-d03b-492d-9e84-b6494a6f956a-memberlist") pod "speaker-wzjsq" (UID: "09f854d2-d03b-492d-9e84-b6494a6f956a") : secret "metallb-memberlist" not found
Jan 27 14:46:05 crc kubenswrapper[4698]: W0127 14:46:05.143375 4698 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc32a50d1_66c4_4627_924a_21d72a27b3d0.slice/crio-93c8e9b24daea7e59e61489e05a455f7888d7c3366bb09bf280782fc030dfd21 WatchSource:0}: Error finding container 93c8e9b24daea7e59e61489e05a455f7888d7c3366bb09bf280782fc030dfd21: Status 404 returned error can't find the container with id 93c8e9b24daea7e59e61489e05a455f7888d7c3366bb09bf280782fc030dfd21
Jan 27 14:46:05 crc kubenswrapper[4698]: I0127 14:46:05.147493 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/09f854d2-d03b-492d-9e84-b6494a6f956a-metrics-certs\") pod \"speaker-wzjsq\" (UID: \"09f854d2-d03b-492d-9e84-b6494a6f956a\") " pod="metallb-system/speaker-wzjsq"
Jan 27 14:46:05 crc kubenswrapper[4698]: I0127 14:46:05.240619 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-n5rs2"
Jan 27 14:46:05 crc kubenswrapper[4698]: I0127 14:46:05.924914 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-p9fxw" event={"ID":"ce91ea9b-8af1-41dc-9104-7fb695211734","Type":"ContainerStarted","Data":"6148983584e3c336fed22260a14c30c4a6b9d29992e13b271f4500a4a453ff6c"}
Jan 27 14:46:05 crc kubenswrapper[4698]: I0127 14:46:05.924963 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-p9fxw" event={"ID":"ce91ea9b-8af1-41dc-9104-7fb695211734","Type":"ContainerStarted","Data":"2e10fc6095a315e63168ef2ec9abd5fae4a81472e9247c17dd6cfe64d599f008"}
Jan 27 14:46:05 crc kubenswrapper[4698]: I0127 14:46:05.924976 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-p9fxw" event={"ID":"ce91ea9b-8af1-41dc-9104-7fb695211734","Type":"ContainerStarted","Data":"196538cf5456fab11321d0173e512ad18845cdaa08b11b43b960f33738a86691"}
Jan 27 14:46:05 crc kubenswrapper[4698]: I0127 14:46:05.926046 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/controller-6968d8fdc4-p9fxw"
Jan 27 14:46:05 crc kubenswrapper[4698]: I0127 14:46:05.929191 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-n5rs2" event={"ID":"d9a1c6fd-85f4-40b3-aab1-7ac14d1f2f02","Type":"ContainerStarted","Data":"26910e3aecdc1f831b7b4f6ef32aa2c236719483f92ff812a9645bd8ddc09a74"}
Jan 27 14:46:05 crc kubenswrapper[4698]: I0127 14:46:05.930356 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-jtlnl" event={"ID":"c32a50d1-66c4-4627-924a-21d72a27b3d0","Type":"ContainerStarted","Data":"93c8e9b24daea7e59e61489e05a455f7888d7c3366bb09bf280782fc030dfd21"}
Jan 27 14:46:05 crc kubenswrapper[4698]: I0127 14:46:05.948477 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/controller-6968d8fdc4-p9fxw" podStartSLOduration=1.948455721 podStartE2EDuration="1.948455721s" podCreationTimestamp="2026-01-27 14:46:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:46:05.943452799 +0000 UTC m=+1021.620230274" watchObservedRunningTime="2026-01-27 14:46:05.948455721 +0000 UTC m=+1021.625233186"
Jan 27 14:46:06 crc kubenswrapper[4698]: I0127 14:46:06.153577 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/09f854d2-d03b-492d-9e84-b6494a6f956a-memberlist\") pod \"speaker-wzjsq\" (UID: \"09f854d2-d03b-492d-9e84-b6494a6f956a\") " pod="metallb-system/speaker-wzjsq"
Jan 27 14:46:06 crc kubenswrapper[4698]: I0127 14:46:06.163747 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/09f854d2-d03b-492d-9e84-b6494a6f956a-memberlist\") pod \"speaker-wzjsq\" (UID: \"09f854d2-d03b-492d-9e84-b6494a6f956a\") " pod="metallb-system/speaker-wzjsq"
Jan 27 14:46:06 crc kubenswrapper[4698]: I0127 14:46:06.244538 4698 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-dockercfg-9ttvp"
Jan 27 14:46:06 crc kubenswrapper[4698]: I0127 14:46:06.253847 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-wzjsq"
Jan 27 14:46:06 crc kubenswrapper[4698]: I0127 14:46:06.941549 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-wzjsq" event={"ID":"09f854d2-d03b-492d-9e84-b6494a6f956a","Type":"ContainerStarted","Data":"2a299aac63ab63f9ba8a226a61fabf5043f454d61c1a156edb82f006d230caa3"}
Jan 27 14:46:06 crc kubenswrapper[4698]: I0127 14:46:06.941850 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-wzjsq" event={"ID":"09f854d2-d03b-492d-9e84-b6494a6f956a","Type":"ContainerStarted","Data":"14ce8b5d1adac8166ed7ee57a7bdafb6aa96201c686c682b5a297ff3b03485a3"}
Jan 27 14:46:06 crc kubenswrapper[4698]: I0127 14:46:06.941860 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-wzjsq" event={"ID":"09f854d2-d03b-492d-9e84-b6494a6f956a","Type":"ContainerStarted","Data":"1781b49031e9333e25a38829055f5ad7c2dc7dfb06d738ab8727badfc39c9170"}
Jan 27 14:46:06 crc kubenswrapper[4698]: I0127 14:46:06.942362 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-wzjsq"
Jan 27 14:46:06 crc kubenswrapper[4698]: I0127 14:46:06.970900 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/speaker-wzjsq" podStartSLOduration=2.97087797 podStartE2EDuration="2.97087797s" podCreationTimestamp="2026-01-27 14:46:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:46:06.964670538 +0000 UTC m=+1022.641448023" watchObservedRunningTime="2026-01-27 14:46:06.97087797 +0000 UTC m=+1022.647655435"
Jan 27 14:46:12 crc kubenswrapper[4698]: I0127 14:46:12.987220 4698 generic.go:334] "Generic (PLEG): container finished" podID="d9a1c6fd-85f4-40b3-aab1-7ac14d1f2f02" containerID="fd4c4f56d747a6b4729fb05ed5349c41224a0cb8e41915ce2ca6907696c94fa7" exitCode=0
Jan 27 14:46:12 crc kubenswrapper[4698]: I0127 14:46:12.987288 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-n5rs2" event={"ID":"d9a1c6fd-85f4-40b3-aab1-7ac14d1f2f02","Type":"ContainerDied","Data":"fd4c4f56d747a6b4729fb05ed5349c41224a0cb8e41915ce2ca6907696c94fa7"}
Jan 27 14:46:12 crc kubenswrapper[4698]: I0127 14:46:12.989680 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-jtlnl" event={"ID":"c32a50d1-66c4-4627-924a-21d72a27b3d0","Type":"ContainerStarted","Data":"5becd6f28edce36c2a90b7ffc98f17693ec8d1cce5a2ff6a68461a0f008a02f5"}
Jan 27 14:46:12 crc kubenswrapper[4698]: I0127 14:46:12.989924 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-jtlnl"
Jan 27 14:46:13 crc kubenswrapper[4698]: I0127 14:46:13.028687 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-jtlnl" podStartSLOduration=1.8184926350000001 podStartE2EDuration="9.028668194s" podCreationTimestamp="2026-01-27 14:46:04 +0000 UTC" firstStartedPulling="2026-01-27 14:46:05.146302809 +0000 UTC m=+1020.823080274" lastFinishedPulling="2026-01-27 14:46:12.356478368 +0000 UTC m=+1028.033255833" observedRunningTime="2026-01-27 14:46:13.027944934 +0000 UTC m=+1028.704722389" watchObservedRunningTime="2026-01-27 14:46:13.028668194 +0000 UTC m=+1028.705445659"
Jan 27 14:46:13 crc kubenswrapper[4698]: I0127 14:46:13.998191 4698 generic.go:334] "Generic (PLEG): container finished" podID="d9a1c6fd-85f4-40b3-aab1-7ac14d1f2f02" containerID="75896b0a6341c282848039f27b9e49050039e56ab6d112134cf13747a3a1e665" exitCode=0
Jan 27 14:46:13 crc kubenswrapper[4698]: I0127 14:46:13.998249 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-n5rs2" event={"ID":"d9a1c6fd-85f4-40b3-aab1-7ac14d1f2f02","Type":"ContainerDied","Data":"75896b0a6341c282848039f27b9e49050039e56ab6d112134cf13747a3a1e665"}
Jan 27 14:46:15 crc kubenswrapper[4698]: I0127 14:46:15.008905 4698 generic.go:334] "Generic (PLEG): container finished" podID="d9a1c6fd-85f4-40b3-aab1-7ac14d1f2f02" containerID="329f0924d1f0a02a153167f4244f860d4c96279b3c24661753241abbd4ab1cd6" exitCode=0
Jan 27 14:46:15 crc kubenswrapper[4698]: I0127 14:46:15.008987 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-n5rs2" event={"ID":"d9a1c6fd-85f4-40b3-aab1-7ac14d1f2f02","Type":"ContainerDied","Data":"329f0924d1f0a02a153167f4244f860d4c96279b3c24661753241abbd4ab1cd6"}
Jan 27 14:46:16 crc kubenswrapper[4698]: I0127 14:46:16.020590 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-n5rs2" event={"ID":"d9a1c6fd-85f4-40b3-aab1-7ac14d1f2f02","Type":"ContainerStarted","Data":"7e215548a7365e52abeeefce1123639e95a8746ebdee5e4aaa72237aa4eda1c1"}
Jan 27 14:46:16 crc kubenswrapper[4698]: I0127 14:46:16.020876 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-n5rs2" event={"ID":"d9a1c6fd-85f4-40b3-aab1-7ac14d1f2f02","Type":"ContainerStarted","Data":"d16b011d66e54cfb6d682a4adb4e42136881e1053c2fce36e6740babebb9bbfb"}
Jan 27 14:46:16 crc kubenswrapper[4698]: I0127 14:46:16.020890 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-n5rs2"
Jan 27 14:46:16 crc kubenswrapper[4698]: I0127 14:46:16.020901 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-n5rs2" event={"ID":"d9a1c6fd-85f4-40b3-aab1-7ac14d1f2f02","Type":"ContainerStarted","Data":"6f8e3ca124e45bd3bc36802c1fee79cf85b4889efd40d5cb202e3d64276af729"}
Jan 27 14:46:16 crc kubenswrapper[4698]: I0127 14:46:16.020911 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-n5rs2" event={"ID":"d9a1c6fd-85f4-40b3-aab1-7ac14d1f2f02","Type":"ContainerStarted","Data":"d59b8704b6e15e9c38c38cc310e458d41f29822dee72bf4fad9c0801f60cb612"}
Jan 27 14:46:16 crc kubenswrapper[4698]: I0127 14:46:16.020924 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-n5rs2" event={"ID":"d9a1c6fd-85f4-40b3-aab1-7ac14d1f2f02","Type":"ContainerStarted","Data":"619880438c0636053f17f1f515cdb0ebd2adcfaa252b9509a32b271613845a99"}
Jan 27 14:46:16 crc kubenswrapper[4698]: I0127 14:46:16.020932 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-n5rs2" event={"ID":"d9a1c6fd-85f4-40b3-aab1-7ac14d1f2f02","Type":"ContainerStarted","Data":"24314aa0a2b8605ef7ef441825c5715e4ee8e6e2bc6129e80b9ebd0b13a992f9"}
Jan 27 14:46:16 crc kubenswrapper[4698]: I0127 14:46:16.043165 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-n5rs2" podStartSLOduration=4.996421753 podStartE2EDuration="12.043148776s" podCreationTimestamp="2026-01-27 14:46:04 +0000 UTC" firstStartedPulling="2026-01-27 14:46:05.330744487 +0000 UTC m=+1021.007521952" lastFinishedPulling="2026-01-27 14:46:12.37747151 +0000 UTC m=+1028.054248975" observedRunningTime="2026-01-27 14:46:16.042391796 +0000 UTC m=+1031.719169301" watchObservedRunningTime="2026-01-27 14:46:16.043148776 +0000 UTC m=+1031.719926251"
Jan 27 14:46:16 crc kubenswrapper[4698]: I0127 14:46:16.258347 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/speaker-wzjsq"
Jan 27 14:46:19 crc kubenswrapper[4698]: I0127 14:46:19.416134 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-qtfpc"]
Jan 27 14:46:19 crc kubenswrapper[4698]: I0127 14:46:19.418509 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-qtfpc"
Jan 27 14:46:19 crc kubenswrapper[4698]: I0127 14:46:19.421318 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-index-dockercfg-kxjvf"
Jan 27 14:46:19 crc kubenswrapper[4698]: I0127 14:46:19.424863 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"kube-root-ca.crt"
Jan 27 14:46:19 crc kubenswrapper[4698]: I0127 14:46:19.425513 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"openshift-service-ca.crt"
Jan 27 14:46:19 crc kubenswrapper[4698]: I0127 14:46:19.438287 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-qtfpc"]
Jan 27 14:46:19 crc kubenswrapper[4698]: I0127 14:46:19.452472 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-96hcx\" (UniqueName: \"kubernetes.io/projected/5ca6a245-5237-4b2d-92df-24e66956bb62-kube-api-access-96hcx\") pod \"openstack-operator-index-qtfpc\" (UID: \"5ca6a245-5237-4b2d-92df-24e66956bb62\") " pod="openstack-operators/openstack-operator-index-qtfpc"
Jan 27 14:46:19 crc kubenswrapper[4698]: I0127 14:46:19.556592 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-96hcx\" (UniqueName: \"kubernetes.io/projected/5ca6a245-5237-4b2d-92df-24e66956bb62-kube-api-access-96hcx\") pod \"openstack-operator-index-qtfpc\" (UID: \"5ca6a245-5237-4b2d-92df-24e66956bb62\") " pod="openstack-operators/openstack-operator-index-qtfpc"
Jan 27 14:46:19 crc kubenswrapper[4698]: I0127 14:46:19.587198 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-96hcx\" (UniqueName: \"kubernetes.io/projected/5ca6a245-5237-4b2d-92df-24e66956bb62-kube-api-access-96hcx\") pod \"openstack-operator-index-qtfpc\" (UID: \"5ca6a245-5237-4b2d-92df-24e66956bb62\") " pod="openstack-operators/openstack-operator-index-qtfpc"
Jan 27 14:46:19 crc kubenswrapper[4698]: I0127 14:46:19.741591 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-qtfpc"
Jan 27 14:46:19 crc kubenswrapper[4698]: I0127 14:46:19.958301 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-qtfpc"]
Jan 27 14:46:20 crc kubenswrapper[4698]: I0127 14:46:20.052146 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-qtfpc" event={"ID":"5ca6a245-5237-4b2d-92df-24e66956bb62","Type":"ContainerStarted","Data":"3b7838c6a3b1a331309fc216bc67cc598f8f85688f0c6883ff16e402e2f7a7da"}
Jan 27 14:46:20 crc kubenswrapper[4698]: I0127 14:46:20.241982 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="metallb-system/frr-k8s-n5rs2"
Jan 27 14:46:20 crc kubenswrapper[4698]: I0127 14:46:20.279902 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="metallb-system/frr-k8s-n5rs2"
Jan 27 14:46:22 crc kubenswrapper[4698]: I0127 14:46:22.800173 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-qtfpc"]
Jan 27 14:46:23 crc kubenswrapper[4698]: I0127 14:46:23.404674 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-cj9s9"]
Jan 27 14:46:23 crc kubenswrapper[4698]: I0127 14:46:23.405861 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-cj9s9"
Jan 27 14:46:23 crc kubenswrapper[4698]: I0127 14:46:23.417036 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-cj9s9"]
Jan 27 14:46:23 crc kubenswrapper[4698]: I0127 14:46:23.517070 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z5scw\" (UniqueName: \"kubernetes.io/projected/83931357-c0eb-4337-91da-cf623496c4ef-kube-api-access-z5scw\") pod \"openstack-operator-index-cj9s9\" (UID: \"83931357-c0eb-4337-91da-cf623496c4ef\") " pod="openstack-operators/openstack-operator-index-cj9s9"
Jan 27 14:46:23 crc kubenswrapper[4698]: I0127 14:46:23.618494 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z5scw\" (UniqueName: \"kubernetes.io/projected/83931357-c0eb-4337-91da-cf623496c4ef-kube-api-access-z5scw\") pod \"openstack-operator-index-cj9s9\" (UID: \"83931357-c0eb-4337-91da-cf623496c4ef\") " pod="openstack-operators/openstack-operator-index-cj9s9"
Jan 27 14:46:23 crc kubenswrapper[4698]: I0127 14:46:23.638098 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z5scw\" (UniqueName: \"kubernetes.io/projected/83931357-c0eb-4337-91da-cf623496c4ef-kube-api-access-z5scw\") pod \"openstack-operator-index-cj9s9\" (UID: \"83931357-c0eb-4337-91da-cf623496c4ef\") " pod="openstack-operators/openstack-operator-index-cj9s9"
Jan 27 14:46:23 crc kubenswrapper[4698]: I0127 14:46:23.735789 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-cj9s9"
Jan 27 14:46:24 crc kubenswrapper[4698]: I0127 14:46:24.131811 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-cj9s9"]
Jan 27 14:46:24 crc kubenswrapper[4698]: W0127 14:46:24.134418 4698 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod83931357_c0eb_4337_91da_cf623496c4ef.slice/crio-78ab30c55c8d78e53587974106a90bafca0668f5b56ca9f7eec96536f2d49920 WatchSource:0}: Error finding container 78ab30c55c8d78e53587974106a90bafca0668f5b56ca9f7eec96536f2d49920: Status 404 returned error can't find the container with id 78ab30c55c8d78e53587974106a90bafca0668f5b56ca9f7eec96536f2d49920
Jan 27 14:46:24 crc kubenswrapper[4698]: I0127 14:46:24.670350 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-jtlnl"
Jan 27 14:46:24 crc kubenswrapper[4698]: I0127 14:46:24.792493 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/controller-6968d8fdc4-p9fxw"
Jan 27 14:46:25 crc kubenswrapper[4698]: I0127 14:46:25.082989 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-cj9s9" event={"ID":"83931357-c0eb-4337-91da-cf623496c4ef","Type":"ContainerStarted","Data":"4adfdd15ea72cd39374bd5fbd513c97af0046cb44d6e1ef42728c59a49513283"}
Jan 27 14:46:25 crc kubenswrapper[4698]: I0127 14:46:25.083037 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-cj9s9" event={"ID":"83931357-c0eb-4337-91da-cf623496c4ef","Type":"ContainerStarted","Data":"78ab30c55c8d78e53587974106a90bafca0668f5b56ca9f7eec96536f2d49920"}
Jan 27 14:46:25 crc kubenswrapper[4698]: I0127 14:46:25.084352 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-qtfpc" event={"ID":"5ca6a245-5237-4b2d-92df-24e66956bb62","Type":"ContainerStarted","Data":"7579e24d70da7778df859e667b1bd0cc1ca8c9282d0e65c5c9a0aada30aaa91d"}
Jan 27 14:46:25 crc kubenswrapper[4698]: I0127 14:46:25.084446 4698 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/openstack-operator-index-qtfpc" podUID="5ca6a245-5237-4b2d-92df-24e66956bb62" containerName="registry-server" containerID="cri-o://7579e24d70da7778df859e667b1bd0cc1ca8c9282d0e65c5c9a0aada30aaa91d" gracePeriod=2
Jan 27 14:46:25 crc kubenswrapper[4698]: I0127 14:46:25.100742 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-cj9s9" podStartSLOduration=1.91219788 podStartE2EDuration="2.100723425s" podCreationTimestamp="2026-01-27 14:46:23 +0000 UTC" firstStartedPulling="2026-01-27 14:46:24.137912001 +0000 UTC m=+1039.814689466" lastFinishedPulling="2026-01-27 14:46:24.326437546 +0000 UTC m=+1040.003215011" observedRunningTime="2026-01-27 14:46:25.097421977 +0000 UTC m=+1040.774199452" watchObservedRunningTime="2026-01-27 14:46:25.100723425 +0000 UTC m=+1040.777500890"
Jan 27 14:46:25 crc kubenswrapper[4698]: I0127 14:46:25.114615 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-qtfpc" podStartSLOduration=2.077326297 podStartE2EDuration="6.11459564s" podCreationTimestamp="2026-01-27 14:46:19 +0000 UTC" firstStartedPulling="2026-01-27 14:46:19.980007218 +0000 UTC m=+1035.656784683" lastFinishedPulling="2026-01-27 14:46:24.017276561 +0000 UTC m=+1039.694054026" observedRunningTime="2026-01-27 14:46:25.109755552 +0000 UTC m=+1040.786533027" watchObservedRunningTime="2026-01-27 14:46:25.11459564 +0000 UTC m=+1040.791373115"
Jan 27 14:46:25 crc kubenswrapper[4698]: I0127 14:46:25.244530 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-n5rs2"
Jan 27 14:46:25 crc kubenswrapper[4698]: I0127 14:46:25.459577 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-qtfpc"
Jan 27 14:46:25 crc kubenswrapper[4698]: I0127 14:46:25.546319 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-96hcx\" (UniqueName: \"kubernetes.io/projected/5ca6a245-5237-4b2d-92df-24e66956bb62-kube-api-access-96hcx\") pod \"5ca6a245-5237-4b2d-92df-24e66956bb62\" (UID: \"5ca6a245-5237-4b2d-92df-24e66956bb62\") "
Jan 27 14:46:25 crc kubenswrapper[4698]: I0127 14:46:25.553128 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5ca6a245-5237-4b2d-92df-24e66956bb62-kube-api-access-96hcx" (OuterVolumeSpecName: "kube-api-access-96hcx") pod "5ca6a245-5237-4b2d-92df-24e66956bb62" (UID: "5ca6a245-5237-4b2d-92df-24e66956bb62"). InnerVolumeSpecName "kube-api-access-96hcx". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 14:46:25 crc kubenswrapper[4698]: I0127 14:46:25.647334 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-96hcx\" (UniqueName: \"kubernetes.io/projected/5ca6a245-5237-4b2d-92df-24e66956bb62-kube-api-access-96hcx\") on node \"crc\" DevicePath \"\""
Jan 27 14:46:26 crc kubenswrapper[4698]: I0127 14:46:26.094579 4698 generic.go:334] "Generic (PLEG): container finished" podID="5ca6a245-5237-4b2d-92df-24e66956bb62" containerID="7579e24d70da7778df859e667b1bd0cc1ca8c9282d0e65c5c9a0aada30aaa91d" exitCode=0
Jan 27 14:46:26 crc kubenswrapper[4698]: I0127 14:46:26.094623 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-qtfpc"
Jan 27 14:46:26 crc kubenswrapper[4698]: I0127 14:46:26.094682 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-qtfpc" event={"ID":"5ca6a245-5237-4b2d-92df-24e66956bb62","Type":"ContainerDied","Data":"7579e24d70da7778df859e667b1bd0cc1ca8c9282d0e65c5c9a0aada30aaa91d"}
Jan 27 14:46:26 crc kubenswrapper[4698]: I0127 14:46:26.094749 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-qtfpc" event={"ID":"5ca6a245-5237-4b2d-92df-24e66956bb62","Type":"ContainerDied","Data":"3b7838c6a3b1a331309fc216bc67cc598f8f85688f0c6883ff16e402e2f7a7da"}
Jan 27 14:46:26 crc kubenswrapper[4698]: I0127 14:46:26.094774 4698 scope.go:117] "RemoveContainer" containerID="7579e24d70da7778df859e667b1bd0cc1ca8c9282d0e65c5c9a0aada30aaa91d"
Jan 27 14:46:26 crc kubenswrapper[4698]: I0127 14:46:26.128869 4698 scope.go:117] "RemoveContainer" containerID="7579e24d70da7778df859e667b1bd0cc1ca8c9282d0e65c5c9a0aada30aaa91d"
Jan 27 14:46:26 crc kubenswrapper[4698]: E0127 14:46:26.130432 4698 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7579e24d70da7778df859e667b1bd0cc1ca8c9282d0e65c5c9a0aada30aaa91d\": container with ID starting with 7579e24d70da7778df859e667b1bd0cc1ca8c9282d0e65c5c9a0aada30aaa91d not found: ID does not exist" containerID="7579e24d70da7778df859e667b1bd0cc1ca8c9282d0e65c5c9a0aada30aaa91d"
Jan 27 14:46:26 crc kubenswrapper[4698]: I0127 14:46:26.130486 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7579e24d70da7778df859e667b1bd0cc1ca8c9282d0e65c5c9a0aada30aaa91d"} err="failed to get container status \"7579e24d70da7778df859e667b1bd0cc1ca8c9282d0e65c5c9a0aada30aaa91d\": rpc error: code = NotFound desc = could not find container \"7579e24d70da7778df859e667b1bd0cc1ca8c9282d0e65c5c9a0aada30aaa91d\": container with ID starting with 7579e24d70da7778df859e667b1bd0cc1ca8c9282d0e65c5c9a0aada30aaa91d not found: ID does not exist"
Jan 27 14:46:26 crc kubenswrapper[4698]: I0127 14:46:26.134765 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-qtfpc"]
Jan 27 14:46:26 crc kubenswrapper[4698]: I0127 14:46:26.138602 4698 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/openstack-operator-index-qtfpc"]
Jan 27 14:46:27 crc kubenswrapper[4698]: I0127 14:46:27.000515 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5ca6a245-5237-4b2d-92df-24e66956bb62" path="/var/lib/kubelet/pods/5ca6a245-5237-4b2d-92df-24e66956bb62/volumes"
Jan 27 14:46:33 crc kubenswrapper[4698]: I0127 14:46:33.736900 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-index-cj9s9"
Jan 27 14:46:33 crc kubenswrapper[4698]: I0127 14:46:33.737486 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/openstack-operator-index-cj9s9"
Jan 27 14:46:33 crc kubenswrapper[4698]: I0127 14:46:33.769128 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/openstack-operator-index-cj9s9"
Jan 27 14:46:34 crc kubenswrapper[4698]: I0127 14:46:34.173686 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-index-cj9s9"
Jan 27 14:46:36 crc kubenswrapper[4698]: I0127 14:46:36.443833 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/cdb0962048ad66ba27a643a7f3792a291ef406947954a495c9d877e856xlnxt"]
Jan 27 14:46:36 crc kubenswrapper[4698]: E0127 14:46:36.444254 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5ca6a245-5237-4b2d-92df-24e66956bb62" containerName="registry-server"
Jan 27 14:46:36 crc kubenswrapper[4698]: I0127 14:46:36.444266 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="5ca6a245-5237-4b2d-92df-24e66956bb62" containerName="registry-server"
Jan 27 14:46:36 crc kubenswrapper[4698]: I0127 14:46:36.444364 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="5ca6a245-5237-4b2d-92df-24e66956bb62" containerName="registry-server"
Jan 27 14:46:36 crc kubenswrapper[4698]: I0127 14:46:36.445150 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cdb0962048ad66ba27a643a7f3792a291ef406947954a495c9d877e856xlnxt"
Jan 27 14:46:36 crc kubenswrapper[4698]: I0127 14:46:36.447207 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-b8qkg"
Jan 27 14:46:36 crc kubenswrapper[4698]: I0127 14:46:36.455128 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cdb0962048ad66ba27a643a7f3792a291ef406947954a495c9d877e856xlnxt"]
Jan 27 14:46:36 crc kubenswrapper[4698]: I0127 14:46:36.511080 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kr9vh\" (UniqueName: \"kubernetes.io/projected/8b62a63b-7862-462d-a67e-864848915728-kube-api-access-kr9vh\") pod \"cdb0962048ad66ba27a643a7f3792a291ef406947954a495c9d877e856xlnxt\" (UID: \"8b62a63b-7862-462d-a67e-864848915728\") " pod="openstack-operators/cdb0962048ad66ba27a643a7f3792a291ef406947954a495c9d877e856xlnxt"
Jan 27 14:46:36 crc kubenswrapper[4698]: I0127 14:46:36.511187 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/8b62a63b-7862-462d-a67e-864848915728-util\") pod \"cdb0962048ad66ba27a643a7f3792a291ef406947954a495c9d877e856xlnxt\" (UID: \"8b62a63b-7862-462d-a67e-864848915728\") " pod="openstack-operators/cdb0962048ad66ba27a643a7f3792a291ef406947954a495c9d877e856xlnxt"
Jan 27 14:46:36 crc kubenswrapper[4698]: I0127 14:46:36.511322 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/8b62a63b-7862-462d-a67e-864848915728-bundle\") pod \"cdb0962048ad66ba27a643a7f3792a291ef406947954a495c9d877e856xlnxt\" (UID: \"8b62a63b-7862-462d-a67e-864848915728\") " pod="openstack-operators/cdb0962048ad66ba27a643a7f3792a291ef406947954a495c9d877e856xlnxt"
Jan 27 14:46:36 crc kubenswrapper[4698]: I0127 14:46:36.612849 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/8b62a63b-7862-462d-a67e-864848915728-bundle\") pod \"cdb0962048ad66ba27a643a7f3792a291ef406947954a495c9d877e856xlnxt\" (UID: \"8b62a63b-7862-462d-a67e-864848915728\") " pod="openstack-operators/cdb0962048ad66ba27a643a7f3792a291ef406947954a495c9d877e856xlnxt"
Jan 27 14:46:36 crc kubenswrapper[4698]: I0127 14:46:36.612962 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kr9vh\" (UniqueName: \"kubernetes.io/projected/8b62a63b-7862-462d-a67e-864848915728-kube-api-access-kr9vh\") pod \"cdb0962048ad66ba27a643a7f3792a291ef406947954a495c9d877e856xlnxt\" (UID: \"8b62a63b-7862-462d-a67e-864848915728\") " pod="openstack-operators/cdb0962048ad66ba27a643a7f3792a291ef406947954a495c9d877e856xlnxt"
Jan 27 14:46:36 crc kubenswrapper[4698]: I0127 14:46:36.613012 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/8b62a63b-7862-462d-a67e-864848915728-util\") pod \"cdb0962048ad66ba27a643a7f3792a291ef406947954a495c9d877e856xlnxt\" (UID: \"8b62a63b-7862-462d-a67e-864848915728\") " pod="openstack-operators/cdb0962048ad66ba27a643a7f3792a291ef406947954a495c9d877e856xlnxt"
Jan 27 14:46:36 crc kubenswrapper[4698]: I0127 14:46:36.613594 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/8b62a63b-7862-462d-a67e-864848915728-util\") pod \"cdb0962048ad66ba27a643a7f3792a291ef406947954a495c9d877e856xlnxt\" (UID: \"8b62a63b-7862-462d-a67e-864848915728\") " pod="openstack-operators/cdb0962048ad66ba27a643a7f3792a291ef406947954a495c9d877e856xlnxt"
Jan 27 14:46:36 crc kubenswrapper[4698]: I0127 14:46:36.613625 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/8b62a63b-7862-462d-a67e-864848915728-bundle\") pod \"cdb0962048ad66ba27a643a7f3792a291ef406947954a495c9d877e856xlnxt\" (UID: \"8b62a63b-7862-462d-a67e-864848915728\") " pod="openstack-operators/cdb0962048ad66ba27a643a7f3792a291ef406947954a495c9d877e856xlnxt"
Jan 27 14:46:36 crc kubenswrapper[4698]: I0127 14:46:36.632662 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kr9vh\" (UniqueName: \"kubernetes.io/projected/8b62a63b-7862-462d-a67e-864848915728-kube-api-access-kr9vh\") pod \"cdb0962048ad66ba27a643a7f3792a291ef406947954a495c9d877e856xlnxt\" (UID: \"8b62a63b-7862-462d-a67e-864848915728\") " pod="openstack-operators/cdb0962048ad66ba27a643a7f3792a291ef406947954a495c9d877e856xlnxt"
Jan 27 14:46:36 crc kubenswrapper[4698]: I0127 14:46:36.767405 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cdb0962048ad66ba27a643a7f3792a291ef406947954a495c9d877e856xlnxt"
Jan 27 14:46:37 crc kubenswrapper[4698]: I0127 14:46:37.166817 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cdb0962048ad66ba27a643a7f3792a291ef406947954a495c9d877e856xlnxt"]
Jan 27 14:46:38 crc kubenswrapper[4698]: I0127 14:46:38.173841 4698 generic.go:334] "Generic (PLEG): container finished" podID="8b62a63b-7862-462d-a67e-864848915728" containerID="96945f1160912c98fe30fb30061e2957ac93ebc339e3c438f50e046e0d36483d" exitCode=0
Jan 27 14:46:38 crc kubenswrapper[4698]: I0127 14:46:38.173898 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cdb0962048ad66ba27a643a7f3792a291ef406947954a495c9d877e856xlnxt" event={"ID":"8b62a63b-7862-462d-a67e-864848915728","Type":"ContainerDied","Data":"96945f1160912c98fe30fb30061e2957ac93ebc339e3c438f50e046e0d36483d"}
Jan 27 14:46:38 crc kubenswrapper[4698]: I0127 14:46:38.173932 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cdb0962048ad66ba27a643a7f3792a291ef406947954a495c9d877e856xlnxt" event={"ID":"8b62a63b-7862-462d-a67e-864848915728","Type":"ContainerStarted","Data":"01d68cafe25be71853d9c12fe2a558df2f41d9dc2fc85198c8fbfa0cbed72917"}
Jan 27 14:46:39 crc kubenswrapper[4698]: I0127 14:46:39.182727 4698 generic.go:334] "Generic (PLEG): container finished" podID="8b62a63b-7862-462d-a67e-864848915728" containerID="764decf19bd25d2f7c3fddd3d1a9d2cc67e468ebd93be838ffcaf1778e4db23c" exitCode=0
Jan 27 14:46:39 crc kubenswrapper[4698]: I0127 14:46:39.182806 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cdb0962048ad66ba27a643a7f3792a291ef406947954a495c9d877e856xlnxt" event={"ID":"8b62a63b-7862-462d-a67e-864848915728","Type":"ContainerDied","Data":"764decf19bd25d2f7c3fddd3d1a9d2cc67e468ebd93be838ffcaf1778e4db23c"}
Jan 27 14:46:40 crc kubenswrapper[4698]: I0127 14:46:40.196327 4698 generic.go:334] "Generic (PLEG): container finished" podID="8b62a63b-7862-462d-a67e-864848915728" containerID="0675e9454d2676b36faa9805e4cb74e8c1844f19b98f10f657b18ea4d8a3cdd7" exitCode=0
Jan 27 14:46:40 crc kubenswrapper[4698]: I0127 14:46:40.196424 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cdb0962048ad66ba27a643a7f3792a291ef406947954a495c9d877e856xlnxt" event={"ID":"8b62a63b-7862-462d-a67e-864848915728","Type":"ContainerDied","Data":"0675e9454d2676b36faa9805e4cb74e8c1844f19b98f10f657b18ea4d8a3cdd7"}
Jan 27 14:46:41 crc kubenswrapper[4698]: I0127 14:46:41.481375 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cdb0962048ad66ba27a643a7f3792a291ef406947954a495c9d877e856xlnxt"
Jan 27 14:46:41 crc kubenswrapper[4698]: I0127 14:46:41.579792 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kr9vh\" (UniqueName: \"kubernetes.io/projected/8b62a63b-7862-462d-a67e-864848915728-kube-api-access-kr9vh\") pod \"8b62a63b-7862-462d-a67e-864848915728\" (UID: \"8b62a63b-7862-462d-a67e-864848915728\") "
Jan 27 14:46:41 crc kubenswrapper[4698]: I0127 14:46:41.579960 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/8b62a63b-7862-462d-a67e-864848915728-util\") pod \"8b62a63b-7862-462d-a67e-864848915728\" (UID: \"8b62a63b-7862-462d-a67e-864848915728\") "
Jan 27 14:46:41 crc kubenswrapper[4698]: I0127 14:46:41.580058 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/8b62a63b-7862-462d-a67e-864848915728-bundle\") pod \"8b62a63b-7862-462d-a67e-864848915728\" (UID: \"8b62a63b-7862-462d-a67e-864848915728\") "
Jan 27 14:46:41 crc kubenswrapper[4698]: I0127 14:46:41.580845 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8b62a63b-7862-462d-a67e-864848915728-bundle" (OuterVolumeSpecName: "bundle") pod "8b62a63b-7862-462d-a67e-864848915728" (UID: "8b62a63b-7862-462d-a67e-864848915728"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 27 14:46:41 crc kubenswrapper[4698]: I0127 14:46:41.586284 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8b62a63b-7862-462d-a67e-864848915728-kube-api-access-kr9vh" (OuterVolumeSpecName: "kube-api-access-kr9vh") pod "8b62a63b-7862-462d-a67e-864848915728" (UID: "8b62a63b-7862-462d-a67e-864848915728"). InnerVolumeSpecName "kube-api-access-kr9vh". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 14:46:41 crc kubenswrapper[4698]: I0127 14:46:41.595622 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8b62a63b-7862-462d-a67e-864848915728-util" (OuterVolumeSpecName: "util") pod "8b62a63b-7862-462d-a67e-864848915728" (UID: "8b62a63b-7862-462d-a67e-864848915728"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 14:46:41 crc kubenswrapper[4698]: I0127 14:46:41.682069 4698 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/8b62a63b-7862-462d-a67e-864848915728-util\") on node \"crc\" DevicePath \"\"" Jan 27 14:46:41 crc kubenswrapper[4698]: I0127 14:46:41.682127 4698 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/8b62a63b-7862-462d-a67e-864848915728-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 14:46:41 crc kubenswrapper[4698]: I0127 14:46:41.682143 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kr9vh\" (UniqueName: \"kubernetes.io/projected/8b62a63b-7862-462d-a67e-864848915728-kube-api-access-kr9vh\") on node \"crc\" DevicePath \"\"" Jan 27 14:46:42 crc kubenswrapper[4698]: I0127 14:46:42.211347 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cdb0962048ad66ba27a643a7f3792a291ef406947954a495c9d877e856xlnxt" event={"ID":"8b62a63b-7862-462d-a67e-864848915728","Type":"ContainerDied","Data":"01d68cafe25be71853d9c12fe2a558df2f41d9dc2fc85198c8fbfa0cbed72917"} Jan 27 14:46:42 crc kubenswrapper[4698]: I0127 14:46:42.211604 4698 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="01d68cafe25be71853d9c12fe2a558df2f41d9dc2fc85198c8fbfa0cbed72917" Jan 27 14:46:42 crc kubenswrapper[4698]: I0127 14:46:42.211409 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cdb0962048ad66ba27a643a7f3792a291ef406947954a495c9d877e856xlnxt" Jan 27 14:46:48 crc kubenswrapper[4698]: I0127 14:46:48.469745 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-init-7cd9855986-ns8gf"] Jan 27 14:46:48 crc kubenswrapper[4698]: E0127 14:46:48.470342 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8b62a63b-7862-462d-a67e-864848915728" containerName="util" Jan 27 14:46:48 crc kubenswrapper[4698]: I0127 14:46:48.470356 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="8b62a63b-7862-462d-a67e-864848915728" containerName="util" Jan 27 14:46:48 crc kubenswrapper[4698]: E0127 14:46:48.470379 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8b62a63b-7862-462d-a67e-864848915728" containerName="pull" Jan 27 14:46:48 crc kubenswrapper[4698]: I0127 14:46:48.470386 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="8b62a63b-7862-462d-a67e-864848915728" containerName="pull" Jan 27 14:46:48 crc kubenswrapper[4698]: E0127 14:46:48.470397 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8b62a63b-7862-462d-a67e-864848915728" containerName="extract" Jan 27 14:46:48 crc kubenswrapper[4698]: I0127 14:46:48.470403 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="8b62a63b-7862-462d-a67e-864848915728" containerName="extract" Jan 27 14:46:48 crc kubenswrapper[4698]: I0127 14:46:48.470524 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="8b62a63b-7862-462d-a67e-864848915728" containerName="extract" Jan 27 14:46:48 crc kubenswrapper[4698]: I0127 14:46:48.471030 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-7cd9855986-ns8gf" Jan 27 14:46:48 crc kubenswrapper[4698]: I0127 14:46:48.473211 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-init-dockercfg-lwkl7" Jan 27 14:46:48 crc kubenswrapper[4698]: I0127 14:46:48.492522 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-7cd9855986-ns8gf"] Jan 27 14:46:48 crc kubenswrapper[4698]: I0127 14:46:48.591842 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zsc5g\" (UniqueName: \"kubernetes.io/projected/fb90ab87-ea48-4a22-a991-2380fff4d554-kube-api-access-zsc5g\") pod \"openstack-operator-controller-init-7cd9855986-ns8gf\" (UID: \"fb90ab87-ea48-4a22-a991-2380fff4d554\") " pod="openstack-operators/openstack-operator-controller-init-7cd9855986-ns8gf" Jan 27 14:46:48 crc kubenswrapper[4698]: I0127 14:46:48.692885 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zsc5g\" (UniqueName: \"kubernetes.io/projected/fb90ab87-ea48-4a22-a991-2380fff4d554-kube-api-access-zsc5g\") pod \"openstack-operator-controller-init-7cd9855986-ns8gf\" (UID: \"fb90ab87-ea48-4a22-a991-2380fff4d554\") " pod="openstack-operators/openstack-operator-controller-init-7cd9855986-ns8gf" Jan 27 14:46:48 crc kubenswrapper[4698]: I0127 14:46:48.713145 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zsc5g\" (UniqueName: \"kubernetes.io/projected/fb90ab87-ea48-4a22-a991-2380fff4d554-kube-api-access-zsc5g\") pod \"openstack-operator-controller-init-7cd9855986-ns8gf\" (UID: \"fb90ab87-ea48-4a22-a991-2380fff4d554\") " pod="openstack-operators/openstack-operator-controller-init-7cd9855986-ns8gf" Jan 27 14:46:48 crc kubenswrapper[4698]: I0127 14:46:48.791030 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-7cd9855986-ns8gf" Jan 27 14:46:49 crc kubenswrapper[4698]: I0127 14:46:49.016243 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-7cd9855986-ns8gf"] Jan 27 14:46:49 crc kubenswrapper[4698]: I0127 14:46:49.263547 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-7cd9855986-ns8gf" event={"ID":"fb90ab87-ea48-4a22-a991-2380fff4d554","Type":"ContainerStarted","Data":"3e862bb7df4260c47cd3f1fb0124954dc9a43bb6da541a53c4c40ea848911fdc"} Jan 27 14:46:54 crc kubenswrapper[4698]: I0127 14:46:54.304924 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-7cd9855986-ns8gf" event={"ID":"fb90ab87-ea48-4a22-a991-2380fff4d554","Type":"ContainerStarted","Data":"076b4d018af68b3af4b40b91d67e59f1a447fae1fa412b7966c495bf0c086195"} Jan 27 14:46:54 crc kubenswrapper[4698]: I0127 14:46:54.305778 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-init-7cd9855986-ns8gf" Jan 27 14:46:54 crc kubenswrapper[4698]: I0127 14:46:54.333795 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-init-7cd9855986-ns8gf" podStartSLOduration=1.668574673 podStartE2EDuration="6.333780208s" podCreationTimestamp="2026-01-27 14:46:48 +0000 UTC" firstStartedPulling="2026-01-27 14:46:49.029296763 +0000 UTC m=+1064.706074228" lastFinishedPulling="2026-01-27 14:46:53.694502298 +0000 UTC m=+1069.371279763" observedRunningTime="2026-01-27 14:46:54.332963867 +0000 UTC m=+1070.009741342" watchObservedRunningTime="2026-01-27 14:46:54.333780208 +0000 UTC m=+1070.010557673" Jan 27 14:46:58 crc kubenswrapper[4698]: I0127 14:46:58.793326 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-init-7cd9855986-ns8gf" Jan 27 14:47:19 crc kubenswrapper[4698]: I0127 14:47:19.845259 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7f86f8796f-zjw77"] Jan 27 14:47:19 crc kubenswrapper[4698]: I0127 14:47:19.846876 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-7f86f8796f-zjw77" Jan 27 14:47:19 crc kubenswrapper[4698]: I0127 14:47:19.850853 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/cinder-operator-controller-manager-7478f7dbf9-zz49w"] Jan 27 14:47:19 crc kubenswrapper[4698]: I0127 14:47:19.851730 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-7478f7dbf9-zz49w" Jan 27 14:47:19 crc kubenswrapper[4698]: I0127 14:47:19.855102 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"cinder-operator-controller-manager-dockercfg-b86h4" Jan 27 14:47:19 crc kubenswrapper[4698]: I0127 14:47:19.861412 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-7478f7dbf9-zz49w"] Jan 27 14:47:19 crc kubenswrapper[4698]: I0127 14:47:19.864610 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"barbican-operator-controller-manager-dockercfg-sz4m4" Jan 27 14:47:19 crc kubenswrapper[4698]: I0127 14:47:19.865217 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7f86f8796f-zjw77"] Jan 27 14:47:19 crc kubenswrapper[4698]: I0127 14:47:19.877315 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/designate-operator-controller-manager-b45d7bf98-rwj7x"] Jan 27 14:47:19 crc kubenswrapper[4698]: I0127 14:47:19.878514 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-rwj7x" Jan 27 14:47:19 crc kubenswrapper[4698]: I0127 14:47:19.882958 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"designate-operator-controller-manager-dockercfg-zbhdm" Jan 27 14:47:19 crc kubenswrapper[4698]: I0127 14:47:19.897207 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-b45d7bf98-rwj7x"] Jan 27 14:47:19 crc kubenswrapper[4698]: I0127 14:47:19.905020 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/glance-operator-controller-manager-78fdd796fd-l7sxf"] Jan 27 14:47:19 crc kubenswrapper[4698]: I0127 14:47:19.905861 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-l7sxf" Jan 27 14:47:19 crc kubenswrapper[4698]: I0127 14:47:19.908747 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"glance-operator-controller-manager-dockercfg-62mns" Jan 27 14:47:19 crc kubenswrapper[4698]: I0127 14:47:19.934765 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/heat-operator-controller-manager-594c8c9d5d-bfxhx"] Jan 27 14:47:19 crc kubenswrapper[4698]: I0127 14:47:19.935820 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-bfxhx" Jan 27 14:47:19 crc kubenswrapper[4698]: I0127 14:47:19.943052 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-78fdd796fd-l7sxf"] Jan 27 14:47:19 crc kubenswrapper[4698]: I0127 14:47:19.951266 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"heat-operator-controller-manager-dockercfg-bxv7q" Jan 27 14:47:19 crc kubenswrapper[4698]: I0127 14:47:19.998996 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/horizon-operator-controller-manager-77d5c5b54f-7b68z"] Jan 27 14:47:20 crc kubenswrapper[4698]: I0127 14:47:20.006896 4698 util.go:30] "No sandbox for pod can be found. 
Jan 27 14:47:20 crc kubenswrapper[4698]: I0127 14:47:20.009762 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"horizon-operator-controller-manager-dockercfg-t6twh"
Jan 27 14:47:20 crc kubenswrapper[4698]: I0127 14:47:20.044284 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-44md2\" (UniqueName: \"kubernetes.io/projected/d77b4eac-bd81-41be-8a8c-6cb9c61bd242-kube-api-access-44md2\") pod \"heat-operator-controller-manager-594c8c9d5d-bfxhx\" (UID: \"d77b4eac-bd81-41be-8a8c-6cb9c61bd242\") " pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-bfxhx"
Jan 27 14:47:20 crc kubenswrapper[4698]: I0127 14:47:20.044516 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xwpbx\" (UniqueName: \"kubernetes.io/projected/cd843e79-28e5-483b-8368-b344b5fc42ed-kube-api-access-xwpbx\") pod \"glance-operator-controller-manager-78fdd796fd-l7sxf\" (UID: \"cd843e79-28e5-483b-8368-b344b5fc42ed\") " pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-l7sxf"
Jan 27 14:47:20 crc kubenswrapper[4698]: I0127 14:47:20.044546 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kdbxt\" (UniqueName: \"kubernetes.io/projected/a037e7f8-75bb-4a3a-a60e-e378b79e7a2c-kube-api-access-kdbxt\") pod \"horizon-operator-controller-manager-77d5c5b54f-7b68z\" (UID: \"a037e7f8-75bb-4a3a-a60e-e378b79e7a2c\") " pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-7b68z"
Jan 27 14:47:20 crc kubenswrapper[4698]: I0127 14:47:20.044612 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fczqt\" (UniqueName: \"kubernetes.io/projected/84e6b7df-451a-421d-9128-a73ee95124ca-kube-api-access-fczqt\") pod \"designate-operator-controller-manager-b45d7bf98-rwj7x\" (UID: \"84e6b7df-451a-421d-9128-a73ee95124ca\") " pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-rwj7x"
Jan 27 14:47:20 crc kubenswrapper[4698]: I0127 14:47:20.044679 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d4qtx\" (UniqueName: \"kubernetes.io/projected/00d7ded4-a39f-4261-8f42-5762a7d28314-kube-api-access-d4qtx\") pod \"barbican-operator-controller-manager-7f86f8796f-zjw77\" (UID: \"00d7ded4-a39f-4261-8f42-5762a7d28314\") " pod="openstack-operators/barbican-operator-controller-manager-7f86f8796f-zjw77"
Jan 27 14:47:20 crc kubenswrapper[4698]: I0127 14:47:20.044762 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bvhwg\" (UniqueName: \"kubernetes.io/projected/fdf128da-b514-46a4-ba2a-488ed77088c0-kube-api-access-bvhwg\") pod \"cinder-operator-controller-manager-7478f7dbf9-zz49w\" (UID: \"fdf128da-b514-46a4-ba2a-488ed77088c0\") " pod="openstack-operators/cinder-operator-controller-manager-7478f7dbf9-zz49w"
Jan 27 14:47:20 crc kubenswrapper[4698]: I0127 14:47:20.051510 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-594c8c9d5d-bfxhx"]
Jan 27 14:47:20 crc kubenswrapper[4698]: I0127 14:47:20.076524 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-77d5c5b54f-7b68z"]
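[Editor's note] Every kube-api-access-<suffix> volume in the reconciler lines above is the standard projected service-account volume that replaced long-lived token Secrets: a bound token, the cluster CA bundle, and the pod's namespace. A sketch of its shape using the upstream core/v1 types; the 3607-second expiry and the path names follow common upstream defaults and are assumptions for illustration, not values read from this log:

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

// kubeAPIAccessVolume sketches the projected volume the kubelet mounts as
// kube-api-access-<suffix>: a bound service-account token, the cluster CA
// bundle, and the pod's namespace via the downward API. The 3607s expiry and
// path names follow common upstream defaults (assumed, not read from the log).
func kubeAPIAccessVolume(suffix string) corev1.Volume {
	expiry := int64(3607)
	return corev1.Volume{
		Name: "kube-api-access-" + suffix,
		VolumeSource: corev1.VolumeSource{
			Projected: &corev1.ProjectedVolumeSource{
				Sources: []corev1.VolumeProjection{
					{ServiceAccountToken: &corev1.ServiceAccountTokenProjection{
						Path:              "token",
						ExpirationSeconds: &expiry,
					}},
					{ConfigMap: &corev1.ConfigMapProjection{
						LocalObjectReference: corev1.LocalObjectReference{Name: "kube-root-ca.crt"},
						Items:                []corev1.KeyToPath{{Key: "ca.crt", Path: "ca.crt"}},
					}},
					{DownwardAPI: &corev1.DownwardAPIProjection{
						Items: []corev1.DownwardAPIVolumeFile{{
							Path:     "namespace",
							FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.namespace"},
						}},
					}},
				},
			},
		},
	}
}

func main() { fmt.Println(kubeAPIAccessVolume("44md2").Name) }
```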
pods=["openstack-operators/horizon-operator-controller-manager-77d5c5b54f-7b68z"] Jan 27 14:47:20 crc kubenswrapper[4698]: I0127 14:47:20.116705 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/infra-operator-controller-manager-694cf4f878-t9jb8"] Jan 27 14:47:20 crc kubenswrapper[4698]: I0127 14:47:20.117696 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-694cf4f878-t9jb8" Jan 27 14:47:20 crc kubenswrapper[4698]: I0127 14:47:20.123590 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-webhook-server-cert" Jan 27 14:47:20 crc kubenswrapper[4698]: I0127 14:47:20.123818 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-xz25l" Jan 27 14:47:20 crc kubenswrapper[4698]: I0127 14:47:20.128779 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ironic-operator-controller-manager-598f7747c9-ppdfb"] Jan 27 14:47:20 crc kubenswrapper[4698]: I0127 14:47:20.130129 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-ppdfb" Jan 27 14:47:20 crc kubenswrapper[4698]: I0127 14:47:20.132733 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ironic-operator-controller-manager-dockercfg-p9gkt" Jan 27 14:47:20 crc kubenswrapper[4698]: I0127 14:47:20.138610 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/keystone-operator-controller-manager-b8b6d4659-tv9vl"] Jan 27 14:47:20 crc kubenswrapper[4698]: I0127 14:47:20.139628 4698 util.go:30] "No sandbox for pod can be found. 
Jan 27 14:47:20 crc kubenswrapper[4698]: I0127 14:47:20.143966 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"keystone-operator-controller-manager-dockercfg-ch4rp"
Jan 27 14:47:20 crc kubenswrapper[4698]: I0127 14:47:20.146094 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fczqt\" (UniqueName: \"kubernetes.io/projected/84e6b7df-451a-421d-9128-a73ee95124ca-kube-api-access-fczqt\") pod \"designate-operator-controller-manager-b45d7bf98-rwj7x\" (UID: \"84e6b7df-451a-421d-9128-a73ee95124ca\") " pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-rwj7x"
Jan 27 14:47:20 crc kubenswrapper[4698]: I0127 14:47:20.146161 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mg469\" (UniqueName: \"kubernetes.io/projected/3f9d43a0-d759-4627-9ac2-d48d281e6daf-kube-api-access-mg469\") pod \"ironic-operator-controller-manager-598f7747c9-ppdfb\" (UID: \"3f9d43a0-d759-4627-9ac2-d48d281e6daf\") " pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-ppdfb"
Jan 27 14:47:20 crc kubenswrapper[4698]: I0127 14:47:20.146209 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d4qtx\" (UniqueName: \"kubernetes.io/projected/00d7ded4-a39f-4261-8f42-5762a7d28314-kube-api-access-d4qtx\") pod \"barbican-operator-controller-manager-7f86f8796f-zjw77\" (UID: \"00d7ded4-a39f-4261-8f42-5762a7d28314\") " pod="openstack-operators/barbican-operator-controller-manager-7f86f8796f-zjw77"
Jan 27 14:47:20 crc kubenswrapper[4698]: I0127 14:47:20.146250 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z99hw\" (UniqueName: \"kubernetes.io/projected/095ba028-5504-4533-b759-edaa313a8e80-kube-api-access-z99hw\") pod \"infra-operator-controller-manager-694cf4f878-t9jb8\" (UID: \"095ba028-5504-4533-b759-edaa313a8e80\") " pod="openstack-operators/infra-operator-controller-manager-694cf4f878-t9jb8"
Jan 27 14:47:20 crc kubenswrapper[4698]: I0127 14:47:20.146305 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bvhwg\" (UniqueName: \"kubernetes.io/projected/fdf128da-b514-46a4-ba2a-488ed77088c0-kube-api-access-bvhwg\") pod \"cinder-operator-controller-manager-7478f7dbf9-zz49w\" (UID: \"fdf128da-b514-46a4-ba2a-488ed77088c0\") " pod="openstack-operators/cinder-operator-controller-manager-7478f7dbf9-zz49w"
Jan 27 14:47:20 crc kubenswrapper[4698]: I0127 14:47:20.146338 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f9dxt\" (UniqueName: \"kubernetes.io/projected/55c2e67b-e60f-4e4f-8322-35cc46986b8c-kube-api-access-f9dxt\") pod \"keystone-operator-controller-manager-b8b6d4659-tv9vl\" (UID: \"55c2e67b-e60f-4e4f-8322-35cc46986b8c\") " pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-tv9vl"
Jan 27 14:47:20 crc kubenswrapper[4698]: I0127 14:47:20.146374 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-44md2\" (UniqueName: \"kubernetes.io/projected/d77b4eac-bd81-41be-8a8c-6cb9c61bd242-kube-api-access-44md2\") pod \"heat-operator-controller-manager-594c8c9d5d-bfxhx\" (UID: \"d77b4eac-bd81-41be-8a8c-6cb9c61bd242\") " pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-bfxhx"
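[Editor's note] The interleaving above — "VerifyControllerAttachedVolume started" (reconciler_common.go:245), then "MountVolume started" (reconciler_common.go:218), then "MountVolume.SetUp succeeded" (operation_generator.go:637) — is kubelet's volume manager diffing a desired state of world (volumes the pods assigned to this node require) against an actual state of world (volumes already mounted) and issuing operations for the gaps. The following is a deliberately simplified sketch of that reconcile shape, with invented names; it is not the actual kubelet source:

```go
package main

import "fmt"

// Simplified illustration of the desired-vs-actual reconcile pattern behind
// the reconciler_common.go lines above. Real kubelet code tracks far more
// state (devices, SELinux contexts, uncertain mounts) and runs operations
// asynchronously through an operation executor with per-volume backoff.
type volumeKey struct{ pod, volume string }

type reconciler struct {
	desired map[volumeKey]bool // volumes the pods assigned to this node need
	actual  map[volumeKey]bool // volumes already mounted
}

func (r *reconciler) reconcile() {
	for k := range r.desired {
		if !r.actual[k] {
			fmt.Printf("MountVolume started for volume %q pod %q\n", k.volume, k.pod)
			r.actual[k] = true // real code marks mounted only after SetUp succeeds
		}
	}
	for k := range r.actual {
		if !r.desired[k] {
			fmt.Printf("UnmountVolume started for volume %q pod %q\n", k.volume, k.pod)
			delete(r.actual, k)
		}
	}
}

func main() {
	r := &reconciler{
		desired: map[volumeKey]bool{{"heat-operator", "kube-api-access-44md2"}: true},
		actual:  map[volumeKey]bool{},
	}
	r.reconcile()
}
```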
pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-bfxhx" Jan 27 14:47:20 crc kubenswrapper[4698]: I0127 14:47:20.146400 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/095ba028-5504-4533-b759-edaa313a8e80-cert\") pod \"infra-operator-controller-manager-694cf4f878-t9jb8\" (UID: \"095ba028-5504-4533-b759-edaa313a8e80\") " pod="openstack-operators/infra-operator-controller-manager-694cf4f878-t9jb8" Jan 27 14:47:20 crc kubenswrapper[4698]: I0127 14:47:20.146438 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xwpbx\" (UniqueName: \"kubernetes.io/projected/cd843e79-28e5-483b-8368-b344b5fc42ed-kube-api-access-xwpbx\") pod \"glance-operator-controller-manager-78fdd796fd-l7sxf\" (UID: \"cd843e79-28e5-483b-8368-b344b5fc42ed\") " pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-l7sxf" Jan 27 14:47:20 crc kubenswrapper[4698]: I0127 14:47:20.146496 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kdbxt\" (UniqueName: \"kubernetes.io/projected/a037e7f8-75bb-4a3a-a60e-e378b79e7a2c-kube-api-access-kdbxt\") pod \"horizon-operator-controller-manager-77d5c5b54f-7b68z\" (UID: \"a037e7f8-75bb-4a3a-a60e-e378b79e7a2c\") " pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-7b68z" Jan 27 14:47:20 crc kubenswrapper[4698]: I0127 14:47:20.157964 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-598f7747c9-ppdfb"] Jan 27 14:47:20 crc kubenswrapper[4698]: I0127 14:47:20.170064 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-694cf4f878-t9jb8"] Jan 27 14:47:20 crc kubenswrapper[4698]: I0127 14:47:20.179409 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xwpbx\" (UniqueName: \"kubernetes.io/projected/cd843e79-28e5-483b-8368-b344b5fc42ed-kube-api-access-xwpbx\") pod \"glance-operator-controller-manager-78fdd796fd-l7sxf\" (UID: \"cd843e79-28e5-483b-8368-b344b5fc42ed\") " pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-l7sxf" Jan 27 14:47:20 crc kubenswrapper[4698]: I0127 14:47:20.189066 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kdbxt\" (UniqueName: \"kubernetes.io/projected/a037e7f8-75bb-4a3a-a60e-e378b79e7a2c-kube-api-access-kdbxt\") pod \"horizon-operator-controller-manager-77d5c5b54f-7b68z\" (UID: \"a037e7f8-75bb-4a3a-a60e-e378b79e7a2c\") " pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-7b68z" Jan 27 14:47:20 crc kubenswrapper[4698]: I0127 14:47:20.192607 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d4qtx\" (UniqueName: \"kubernetes.io/projected/00d7ded4-a39f-4261-8f42-5762a7d28314-kube-api-access-d4qtx\") pod \"barbican-operator-controller-manager-7f86f8796f-zjw77\" (UID: \"00d7ded4-a39f-4261-8f42-5762a7d28314\") " pod="openstack-operators/barbican-operator-controller-manager-7f86f8796f-zjw77" Jan 27 14:47:20 crc kubenswrapper[4698]: I0127 14:47:20.193235 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-b8b6d4659-tv9vl"] Jan 27 14:47:20 crc kubenswrapper[4698]: I0127 14:47:20.193546 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bvhwg\" 
(UniqueName: \"kubernetes.io/projected/fdf128da-b514-46a4-ba2a-488ed77088c0-kube-api-access-bvhwg\") pod \"cinder-operator-controller-manager-7478f7dbf9-zz49w\" (UID: \"fdf128da-b514-46a4-ba2a-488ed77088c0\") " pod="openstack-operators/cinder-operator-controller-manager-7478f7dbf9-zz49w" Jan 27 14:47:20 crc kubenswrapper[4698]: I0127 14:47:20.204353 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fczqt\" (UniqueName: \"kubernetes.io/projected/84e6b7df-451a-421d-9128-a73ee95124ca-kube-api-access-fczqt\") pod \"designate-operator-controller-manager-b45d7bf98-rwj7x\" (UID: \"84e6b7df-451a-421d-9128-a73ee95124ca\") " pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-rwj7x" Jan 27 14:47:20 crc kubenswrapper[4698]: I0127 14:47:20.204509 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-44md2\" (UniqueName: \"kubernetes.io/projected/d77b4eac-bd81-41be-8a8c-6cb9c61bd242-kube-api-access-44md2\") pod \"heat-operator-controller-manager-594c8c9d5d-bfxhx\" (UID: \"d77b4eac-bd81-41be-8a8c-6cb9c61bd242\") " pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-bfxhx" Jan 27 14:47:20 crc kubenswrapper[4698]: I0127 14:47:20.208644 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/manila-operator-controller-manager-78c6999f6f-plttm"] Jan 27 14:47:20 crc kubenswrapper[4698]: I0127 14:47:20.209497 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-plttm" Jan 27 14:47:20 crc kubenswrapper[4698]: I0127 14:47:20.220104 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-fq777"] Jan 27 14:47:20 crc kubenswrapper[4698]: I0127 14:47:20.220913 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-fq777" Jan 27 14:47:20 crc kubenswrapper[4698]: I0127 14:47:20.229228 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-l7sxf" Jan 27 14:47:20 crc kubenswrapper[4698]: I0127 14:47:20.231215 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/neutron-operator-controller-manager-78d58447c5-g9n9r"] Jan 27 14:47:20 crc kubenswrapper[4698]: I0127 14:47:20.232134 4698 util.go:30] "No sandbox for pod can be found. 
Jan 27 14:47:20 crc kubenswrapper[4698]: I0127 14:47:20.232803 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"manila-operator-controller-manager-dockercfg-hr2pf"
Jan 27 14:47:20 crc kubenswrapper[4698]: I0127 14:47:20.243119 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"mariadb-operator-controller-manager-dockercfg-w6hb4"
Jan 27 14:47:20 crc kubenswrapper[4698]: I0127 14:47:20.243292 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"neutron-operator-controller-manager-dockercfg-xlgrw"
Jan 27 14:47:20 crc kubenswrapper[4698]: I0127 14:47:20.247993 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mg469\" (UniqueName: \"kubernetes.io/projected/3f9d43a0-d759-4627-9ac2-d48d281e6daf-kube-api-access-mg469\") pod \"ironic-operator-controller-manager-598f7747c9-ppdfb\" (UID: \"3f9d43a0-d759-4627-9ac2-d48d281e6daf\") " pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-ppdfb"
Jan 27 14:47:20 crc kubenswrapper[4698]: I0127 14:47:20.248058 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z99hw\" (UniqueName: \"kubernetes.io/projected/095ba028-5504-4533-b759-edaa313a8e80-kube-api-access-z99hw\") pod \"infra-operator-controller-manager-694cf4f878-t9jb8\" (UID: \"095ba028-5504-4533-b759-edaa313a8e80\") " pod="openstack-operators/infra-operator-controller-manager-694cf4f878-t9jb8"
Jan 27 14:47:20 crc kubenswrapper[4698]: I0127 14:47:20.248109 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f9dxt\" (UniqueName: \"kubernetes.io/projected/55c2e67b-e60f-4e4f-8322-35cc46986b8c-kube-api-access-f9dxt\") pod \"keystone-operator-controller-manager-b8b6d4659-tv9vl\" (UID: \"55c2e67b-e60f-4e4f-8322-35cc46986b8c\") " pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-tv9vl"
Jan 27 14:47:20 crc kubenswrapper[4698]: I0127 14:47:20.248142 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/095ba028-5504-4533-b759-edaa313a8e80-cert\") pod \"infra-operator-controller-manager-694cf4f878-t9jb8\" (UID: \"095ba028-5504-4533-b759-edaa313a8e80\") " pod="openstack-operators/infra-operator-controller-manager-694cf4f878-t9jb8"
Jan 27 14:47:20 crc kubenswrapper[4698]: E0127 14:47:20.248290 4698 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found
Jan 27 14:47:20 crc kubenswrapper[4698]: E0127 14:47:20.248340 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/095ba028-5504-4533-b759-edaa313a8e80-cert podName:095ba028-5504-4533-b759-edaa313a8e80 nodeName:}" failed. No retries permitted until 2026-01-27 14:47:20.748319788 +0000 UTC m=+1096.425097253 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/095ba028-5504-4533-b759-edaa313a8e80-cert") pod "infra-operator-controller-manager-694cf4f878-t9jb8" (UID: "095ba028-5504-4533-b759-edaa313a8e80") : secret "infra-operator-webhook-server-cert" not found
Jan 27 14:47:20 crc kubenswrapper[4698]: I0127 14:47:20.251773 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-78c6999f6f-plttm"]
Jan 27 14:47:20 crc kubenswrapper[4698]: I0127 14:47:20.261030 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-fq777"]
Jan 27 14:47:20 crc kubenswrapper[4698]: I0127 14:47:20.272790 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-bfxhx"
Jan 27 14:47:20 crc kubenswrapper[4698]: I0127 14:47:20.273040 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z99hw\" (UniqueName: \"kubernetes.io/projected/095ba028-5504-4533-b759-edaa313a8e80-kube-api-access-z99hw\") pod \"infra-operator-controller-manager-694cf4f878-t9jb8\" (UID: \"095ba028-5504-4533-b759-edaa313a8e80\") " pod="openstack-operators/infra-operator-controller-manager-694cf4f878-t9jb8"
Jan 27 14:47:20 crc kubenswrapper[4698]: I0127 14:47:20.290485 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mg469\" (UniqueName: \"kubernetes.io/projected/3f9d43a0-d759-4627-9ac2-d48d281e6daf-kube-api-access-mg469\") pod \"ironic-operator-controller-manager-598f7747c9-ppdfb\" (UID: \"3f9d43a0-d759-4627-9ac2-d48d281e6daf\") " pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-ppdfb"
Jan 27 14:47:20 crc kubenswrapper[4698]: I0127 14:47:20.291727 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f9dxt\" (UniqueName: \"kubernetes.io/projected/55c2e67b-e60f-4e4f-8322-35cc46986b8c-kube-api-access-f9dxt\") pod \"keystone-operator-controller-manager-b8b6d4659-tv9vl\" (UID: \"55c2e67b-e60f-4e4f-8322-35cc46986b8c\") " pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-tv9vl"
Jan 27 14:47:20 crc kubenswrapper[4698]: I0127 14:47:20.310543 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-78d58447c5-g9n9r"]
Jan 27 14:47:20 crc kubenswrapper[4698]: I0127 14:47:20.319375 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/nova-operator-controller-manager-7bdb645866-6mfh4"]
Jan 27 14:47:20 crc kubenswrapper[4698]: I0127 14:47:20.320217 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-7bdb645866-6mfh4"
Jan 27 14:47:20 crc kubenswrapper[4698]: I0127 14:47:20.323042 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"nova-operator-controller-manager-dockercfg-6bx9s"
Jan 27 14:47:20 crc kubenswrapper[4698]: I0127 14:47:20.323459 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-tv9vl"
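[Editor's note] The E-line pair at 14:47:20.248 above is the signature to look for when a pod sits in ContainerCreating: secret.go reports the missing object and nestedpendingoperations.go schedules the retry. A small, purely illustrative Go filter that tallies those signatures when fed journalctl -u kubelet output on stdin (the regex matches the exact message format seen above; nothing about the cluster is assumed):

```go
package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
)

// Fed `journalctl -u kubelet` output on stdin, tally which secret references
// are blocking volume setup, based on the secret.go:188 lines above.
var missing = regexp.MustCompile(`Couldn't get secret ([^:]+): secret "[^"]+" not found`)

func main() {
	counts := map[string]int{}
	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // journal lines can be long
	for sc.Scan() {
		if m := missing.FindStringSubmatch(sc.Text()); m != nil {
			counts[m[1]]++ // namespace/name of the missing secret
		}
	}
	for ref, n := range counts {
		fmt.Printf("%-70s %d occurrences\n", ref, n)
	}
}
```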
Jan 27 14:47:20 crc kubenswrapper[4698]: I0127 14:47:20.326576 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-7bdb645866-6mfh4"]
Jan 27 14:47:20 crc kubenswrapper[4698]: I0127 14:47:20.337451 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/octavia-operator-controller-manager-5f4cd88d46-b5zrs"]
Jan 27 14:47:20 crc kubenswrapper[4698]: I0127 14:47:20.338828 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-5f4cd88d46-b5zrs"
Jan 27 14:47:20 crc kubenswrapper[4698]: I0127 14:47:20.342070 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"octavia-operator-controller-manager-dockercfg-l5sj9"
Jan 27 14:47:20 crc kubenswrapper[4698]: I0127 14:47:20.349397 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vql78\" (UniqueName: \"kubernetes.io/projected/68bcfa84-c19a-4686-b103-3164e0733af1-kube-api-access-vql78\") pod \"manila-operator-controller-manager-78c6999f6f-plttm\" (UID: \"68bcfa84-c19a-4686-b103-3164e0733af1\") " pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-plttm"
Jan 27 14:47:20 crc kubenswrapper[4698]: I0127 14:47:20.349518 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x8ksj\" (UniqueName: \"kubernetes.io/projected/0b7db176-d2e8-4e0d-b769-e4cc9f1ef32b-kube-api-access-x8ksj\") pod \"mariadb-operator-controller-manager-6b9fb5fdcb-fq777\" (UID: \"0b7db176-d2e8-4e0d-b769-e4cc9f1ef32b\") " pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-fq777"
Jan 27 14:47:20 crc kubenswrapper[4698]: I0127 14:47:20.349560 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pnqrt\" (UniqueName: \"kubernetes.io/projected/f9f70b91-3596-4b3a-92b7-38db144afae1-kube-api-access-pnqrt\") pod \"neutron-operator-controller-manager-78d58447c5-g9n9r\" (UID: \"f9f70b91-3596-4b3a-92b7-38db144afae1\") " pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-g9n9r"
Jan 27 14:47:20 crc kubenswrapper[4698]: I0127 14:47:20.358206 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-7b68z"
Jan 27 14:47:20 crc kubenswrapper[4698]: I0127 14:47:20.362202 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-5f4cd88d46-b5zrs"]
Jan 27 14:47:20 crc kubenswrapper[4698]: I0127 14:47:20.376106 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ovn-operator-controller-manager-6f75f45d54-786cc"]
Jan 27 14:47:20 crc kubenswrapper[4698]: I0127 14:47:20.380498 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-786cc"
Jan 27 14:47:20 crc kubenswrapper[4698]: I0127 14:47:20.382386 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ovn-operator-controller-manager-dockercfg-5dhgb"
Jan 27 14:47:20 crc kubenswrapper[4698]: I0127 14:47:20.383157 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854498hc"]
Jan 27 14:47:20 crc kubenswrapper[4698]: I0127 14:47:20.384615 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854498hc"
Jan 27 14:47:20 crc kubenswrapper[4698]: I0127 14:47:20.390319 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-dn6pr"
Jan 27 14:47:20 crc kubenswrapper[4698]: I0127 14:47:20.390334 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/placement-operator-controller-manager-79d5ccc684-992bb"]
Jan 27 14:47:20 crc kubenswrapper[4698]: I0127 14:47:20.390723 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-webhook-server-cert"
Jan 27 14:47:20 crc kubenswrapper[4698]: I0127 14:47:20.394756 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-992bb"
Jan 27 14:47:20 crc kubenswrapper[4698]: I0127 14:47:20.397083 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/swift-operator-controller-manager-547cbdb99f-bm8c6"]
Jan 27 14:47:20 crc kubenswrapper[4698]: I0127 14:47:20.398169 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"placement-operator-controller-manager-dockercfg-fn9bk"
Jan 27 14:47:20 crc kubenswrapper[4698]: I0127 14:47:20.399286 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-bm8c6"
Jan 27 14:47:20 crc kubenswrapper[4698]: I0127 14:47:20.401056 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"swift-operator-controller-manager-dockercfg-jm2pw"
Jan 27 14:47:20 crc kubenswrapper[4698]: I0127 14:47:20.424511 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-6f75f45d54-786cc"]
Jan 27 14:47:20 crc kubenswrapper[4698]: I0127 14:47:20.434080 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-547cbdb99f-bm8c6"]
Jan 27 14:47:20 crc kubenswrapper[4698]: I0127 14:47:20.443949 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854498hc"]
Jan 27 14:47:20 crc kubenswrapper[4698]: I0127 14:47:20.455415 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-94kc9\" (UniqueName: \"kubernetes.io/projected/7bcb0020-f358-4d29-8fb1-78c62d473485-kube-api-access-94kc9\") pod \"nova-operator-controller-manager-7bdb645866-6mfh4\" (UID: \"7bcb0020-f358-4d29-8fb1-78c62d473485\") " pod="openstack-operators/nova-operator-controller-manager-7bdb645866-6mfh4"
Jan 27 14:47:20 crc kubenswrapper[4698]: I0127 14:47:20.455512 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x8ksj\" (UniqueName: \"kubernetes.io/projected/0b7db176-d2e8-4e0d-b769-e4cc9f1ef32b-kube-api-access-x8ksj\") pod \"mariadb-operator-controller-manager-6b9fb5fdcb-fq777\" (UID: \"0b7db176-d2e8-4e0d-b769-e4cc9f1ef32b\") " pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-fq777"
Jan 27 14:47:20 crc kubenswrapper[4698]: I0127 14:47:20.455540 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4dsmq\" (UniqueName: \"kubernetes.io/projected/de5834ee-7dcd-4642-a6f6-4c5d04f1f1c3-kube-api-access-4dsmq\") pod \"octavia-operator-controller-manager-5f4cd88d46-b5zrs\" (UID: \"de5834ee-7dcd-4642-a6f6-4c5d04f1f1c3\") " pod="openstack-operators/octavia-operator-controller-manager-5f4cd88d46-b5zrs"
Jan 27 14:47:20 crc kubenswrapper[4698]: I0127 14:47:20.455585 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pnqrt\" (UniqueName: \"kubernetes.io/projected/f9f70b91-3596-4b3a-92b7-38db144afae1-kube-api-access-pnqrt\") pod \"neutron-operator-controller-manager-78d58447c5-g9n9r\" (UID: \"f9f70b91-3596-4b3a-92b7-38db144afae1\") " pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-g9n9r"
Jan 27 14:47:20 crc kubenswrapper[4698]: I0127 14:47:20.455690 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vql78\" (UniqueName: \"kubernetes.io/projected/68bcfa84-c19a-4686-b103-3164e0733af1-kube-api-access-vql78\") pod \"manila-operator-controller-manager-78c6999f6f-plttm\" (UID: \"68bcfa84-c19a-4686-b103-3164e0733af1\") " pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-plttm"
Jan 27 14:47:20 crc kubenswrapper[4698]: I0127 14:47:20.466350 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-7f86f8796f-zjw77"
Jan 27 14:47:20 crc kubenswrapper[4698]: I0127 14:47:20.478989 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-79d5ccc684-992bb"]
Jan 27 14:47:20 crc kubenswrapper[4698]: I0127 14:47:20.483934 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-7478f7dbf9-zz49w"
Jan 27 14:47:20 crc kubenswrapper[4698]: I0127 14:47:20.484444 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pnqrt\" (UniqueName: \"kubernetes.io/projected/f9f70b91-3596-4b3a-92b7-38db144afae1-kube-api-access-pnqrt\") pod \"neutron-operator-controller-manager-78d58447c5-g9n9r\" (UID: \"f9f70b91-3596-4b3a-92b7-38db144afae1\") " pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-g9n9r"
Jan 27 14:47:20 crc kubenswrapper[4698]: I0127 14:47:20.485932 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vql78\" (UniqueName: \"kubernetes.io/projected/68bcfa84-c19a-4686-b103-3164e0733af1-kube-api-access-vql78\") pod \"manila-operator-controller-manager-78c6999f6f-plttm\" (UID: \"68bcfa84-c19a-4686-b103-3164e0733af1\") " pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-plttm"
Jan 27 14:47:20 crc kubenswrapper[4698]: I0127 14:47:20.493714 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x8ksj\" (UniqueName: \"kubernetes.io/projected/0b7db176-d2e8-4e0d-b769-e4cc9f1ef32b-kube-api-access-x8ksj\") pod \"mariadb-operator-controller-manager-6b9fb5fdcb-fq777\" (UID: \"0b7db176-d2e8-4e0d-b769-e4cc9f1ef32b\") " pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-fq777"
Jan 27 14:47:20 crc kubenswrapper[4698]: I0127 14:47:20.501111 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-rwj7x"
Jan 27 14:47:20 crc kubenswrapper[4698]: I0127 14:47:20.523122 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-85cd9769bb-4chkq"]
Jan 27 14:47:20 crc kubenswrapper[4698]: I0127 14:47:20.550758 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-ppdfb"
Jan 27 14:47:20 crc kubenswrapper[4698]: I0127 14:47:20.557893 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m4cxp\" (UniqueName: \"kubernetes.io/projected/21f4e075-c740-4e05-a70c-d5e8a14acd45-kube-api-access-m4cxp\") pod \"ovn-operator-controller-manager-6f75f45d54-786cc\" (UID: \"21f4e075-c740-4e05-a70c-d5e8a14acd45\") " pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-786cc"
Jan 27 14:47:20 crc kubenswrapper[4698]: I0127 14:47:20.557972 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v4tpv\" (UniqueName: \"kubernetes.io/projected/6bc80c1e-debd-4c6a-b45d-595c733af1ac-kube-api-access-v4tpv\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854498hc\" (UID: \"6bc80c1e-debd-4c6a-b45d-595c733af1ac\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854498hc"
Jan 27 14:47:20 crc kubenswrapper[4698]: I0127 14:47:20.558063 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-94kc9\" (UniqueName: \"kubernetes.io/projected/7bcb0020-f358-4d29-8fb1-78c62d473485-kube-api-access-94kc9\") pod \"nova-operator-controller-manager-7bdb645866-6mfh4\" (UID: \"7bcb0020-f358-4d29-8fb1-78c62d473485\") " pod="openstack-operators/nova-operator-controller-manager-7bdb645866-6mfh4"
Jan 27 14:47:20 crc kubenswrapper[4698]: I0127 14:47:20.558141 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/6bc80c1e-debd-4c6a-b45d-595c733af1ac-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854498hc\" (UID: \"6bc80c1e-debd-4c6a-b45d-595c733af1ac\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854498hc"
Jan 27 14:47:20 crc kubenswrapper[4698]: I0127 14:47:20.558171 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4dsmq\" (UniqueName: \"kubernetes.io/projected/de5834ee-7dcd-4642-a6f6-4c5d04f1f1c3-kube-api-access-4dsmq\") pod \"octavia-operator-controller-manager-5f4cd88d46-b5zrs\" (UID: \"de5834ee-7dcd-4642-a6f6-4c5d04f1f1c3\") " pod="openstack-operators/octavia-operator-controller-manager-5f4cd88d46-b5zrs"
Jan 27 14:47:20 crc kubenswrapper[4698]: I0127 14:47:20.558198 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pdllk\" (UniqueName: \"kubernetes.io/projected/2b7b5c45-dace-452f-bb89-08c886ecfe35-kube-api-access-pdllk\") pod \"placement-operator-controller-manager-79d5ccc684-992bb\" (UID: \"2b7b5c45-dace-452f-bb89-08c886ecfe35\") " pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-992bb"
Jan 27 14:47:20 crc kubenswrapper[4698]: I0127 14:47:20.558223 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pnrtn\" (UniqueName: \"kubernetes.io/projected/ee3e8394-b329-49c2-bee1-eb0ba9d4f023-kube-api-access-pnrtn\") pod \"swift-operator-controller-manager-547cbdb99f-bm8c6\" (UID: \"ee3e8394-b329-49c2-bee1-eb0ba9d4f023\") " pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-bm8c6"
Jan 27 14:47:20 crc kubenswrapper[4698]: I0127 14:47:20.561514 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-4chkq"
Jan 27 14:47:20 crc kubenswrapper[4698]: I0127 14:47:20.564199 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"telemetry-operator-controller-manager-dockercfg-mbc2n"
Jan 27 14:47:20 crc kubenswrapper[4698]: I0127 14:47:20.592958 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-85cd9769bb-4chkq"]
Jan 27 14:47:20 crc kubenswrapper[4698]: I0127 14:47:20.616502 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4dsmq\" (UniqueName: \"kubernetes.io/projected/de5834ee-7dcd-4642-a6f6-4c5d04f1f1c3-kube-api-access-4dsmq\") pod \"octavia-operator-controller-manager-5f4cd88d46-b5zrs\" (UID: \"de5834ee-7dcd-4642-a6f6-4c5d04f1f1c3\") " pod="openstack-operators/octavia-operator-controller-manager-5f4cd88d46-b5zrs"
Jan 27 14:47:20 crc kubenswrapper[4698]: I0127 14:47:20.624302 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-94kc9\" (UniqueName: \"kubernetes.io/projected/7bcb0020-f358-4d29-8fb1-78c62d473485-kube-api-access-94kc9\") pod \"nova-operator-controller-manager-7bdb645866-6mfh4\" (UID: \"7bcb0020-f358-4d29-8fb1-78c62d473485\") " pod="openstack-operators/nova-operator-controller-manager-7bdb645866-6mfh4"
Jan 27 14:47:20 crc kubenswrapper[4698]: I0127 14:47:20.636567 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-plttm"
Jan 27 14:47:20 crc kubenswrapper[4698]: I0127 14:47:20.659085 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/6bc80c1e-debd-4c6a-b45d-595c733af1ac-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854498hc\" (UID: \"6bc80c1e-debd-4c6a-b45d-595c733af1ac\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854498hc"
Jan 27 14:47:20 crc kubenswrapper[4698]: I0127 14:47:20.659134 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pdllk\" (UniqueName: \"kubernetes.io/projected/2b7b5c45-dace-452f-bb89-08c886ecfe35-kube-api-access-pdllk\") pod \"placement-operator-controller-manager-79d5ccc684-992bb\" (UID: \"2b7b5c45-dace-452f-bb89-08c886ecfe35\") " pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-992bb"
Jan 27 14:47:20 crc kubenswrapper[4698]: I0127 14:47:20.659157 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pnrtn\" (UniqueName: \"kubernetes.io/projected/ee3e8394-b329-49c2-bee1-eb0ba9d4f023-kube-api-access-pnrtn\") pod \"swift-operator-controller-manager-547cbdb99f-bm8c6\" (UID: \"ee3e8394-b329-49c2-bee1-eb0ba9d4f023\") " pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-bm8c6"
Jan 27 14:47:20 crc kubenswrapper[4698]: I0127 14:47:20.659764 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/test-operator-controller-manager-69797bbcbd-5v2tj"]
Jan 27 14:47:20 crc kubenswrapper[4698]: E0127 14:47:20.659905 4698 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found
Jan 27 14:47:20 crc kubenswrapper[4698]: E0127 14:47:20.659955 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6bc80c1e-debd-4c6a-b45d-595c733af1ac-cert podName:6bc80c1e-debd-4c6a-b45d-595c733af1ac nodeName:}" failed. No retries permitted until 2026-01-27 14:47:21.159933326 +0000 UTC m=+1096.836710791 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/6bc80c1e-debd-4c6a-b45d-595c733af1ac-cert") pod "openstack-baremetal-operator-controller-manager-6b68b8b854498hc" (UID: "6bc80c1e-debd-4c6a-b45d-595c733af1ac") : secret "openstack-baremetal-operator-webhook-server-cert" not found
"{volumeName:kubernetes.io/secret/6bc80c1e-debd-4c6a-b45d-595c733af1ac-cert podName:6bc80c1e-debd-4c6a-b45d-595c733af1ac nodeName:}" failed. No retries permitted until 2026-01-27 14:47:21.159933326 +0000 UTC m=+1096.836710791 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/6bc80c1e-debd-4c6a-b45d-595c733af1ac-cert") pod "openstack-baremetal-operator-controller-manager-6b68b8b854498hc" (UID: "6bc80c1e-debd-4c6a-b45d-595c733af1ac") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 27 14:47:20 crc kubenswrapper[4698]: I0127 14:47:20.660546 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m4cxp\" (UniqueName: \"kubernetes.io/projected/21f4e075-c740-4e05-a70c-d5e8a14acd45-kube-api-access-m4cxp\") pod \"ovn-operator-controller-manager-6f75f45d54-786cc\" (UID: \"21f4e075-c740-4e05-a70c-d5e8a14acd45\") " pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-786cc" Jan 27 14:47:20 crc kubenswrapper[4698]: I0127 14:47:20.660670 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-5v2tj" Jan 27 14:47:20 crc kubenswrapper[4698]: I0127 14:47:20.660660 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4sr78\" (UniqueName: \"kubernetes.io/projected/47971c2b-520a-4088-a172-cc689e975fb9-kube-api-access-4sr78\") pod \"telemetry-operator-controller-manager-85cd9769bb-4chkq\" (UID: \"47971c2b-520a-4088-a172-cc689e975fb9\") " pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-4chkq" Jan 27 14:47:20 crc kubenswrapper[4698]: I0127 14:47:20.661535 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v4tpv\" (UniqueName: \"kubernetes.io/projected/6bc80c1e-debd-4c6a-b45d-595c733af1ac-kube-api-access-v4tpv\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854498hc\" (UID: \"6bc80c1e-debd-4c6a-b45d-595c733af1ac\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854498hc" Jan 27 14:47:20 crc kubenswrapper[4698]: I0127 14:47:20.664907 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"test-operator-controller-manager-dockercfg-tnt8k" Jan 27 14:47:20 crc kubenswrapper[4698]: I0127 14:47:20.666088 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-fq777" Jan 27 14:47:20 crc kubenswrapper[4698]: I0127 14:47:20.678263 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-g9n9r" Jan 27 14:47:20 crc kubenswrapper[4698]: I0127 14:47:20.688446 4698 util.go:30] "No sandbox for pod can be found. 
Jan 27 14:47:20 crc kubenswrapper[4698]: I0127 14:47:20.696000 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pnrtn\" (UniqueName: \"kubernetes.io/projected/ee3e8394-b329-49c2-bee1-eb0ba9d4f023-kube-api-access-pnrtn\") pod \"swift-operator-controller-manager-547cbdb99f-bm8c6\" (UID: \"ee3e8394-b329-49c2-bee1-eb0ba9d4f023\") " pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-bm8c6"
Jan 27 14:47:20 crc kubenswrapper[4698]: I0127 14:47:20.696407 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pdllk\" (UniqueName: \"kubernetes.io/projected/2b7b5c45-dace-452f-bb89-08c886ecfe35-kube-api-access-pdllk\") pod \"placement-operator-controller-manager-79d5ccc684-992bb\" (UID: \"2b7b5c45-dace-452f-bb89-08c886ecfe35\") " pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-992bb"
Jan 27 14:47:20 crc kubenswrapper[4698]: I0127 14:47:20.700434 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m4cxp\" (UniqueName: \"kubernetes.io/projected/21f4e075-c740-4e05-a70c-d5e8a14acd45-kube-api-access-m4cxp\") pod \"ovn-operator-controller-manager-6f75f45d54-786cc\" (UID: \"21f4e075-c740-4e05-a70c-d5e8a14acd45\") " pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-786cc"
Jan 27 14:47:20 crc kubenswrapper[4698]: I0127 14:47:20.709205 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-5f4cd88d46-b5zrs"
Jan 27 14:47:20 crc kubenswrapper[4698]: I0127 14:47:20.710820 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-69797bbcbd-5v2tj"]
Jan 27 14:47:20 crc kubenswrapper[4698]: I0127 14:47:20.714891 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-786cc"
Jan 27 14:47:20 crc kubenswrapper[4698]: I0127 14:47:20.716780 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v4tpv\" (UniqueName: \"kubernetes.io/projected/6bc80c1e-debd-4c6a-b45d-595c733af1ac-kube-api-access-v4tpv\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854498hc\" (UID: \"6bc80c1e-debd-4c6a-b45d-595c733af1ac\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854498hc"
Jan 27 14:47:20 crc kubenswrapper[4698]: I0127 14:47:20.763253 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bmp6x\" (UniqueName: \"kubernetes.io/projected/fdc4f026-5fd3-4519-8d47-aeede547de6d-kube-api-access-bmp6x\") pod \"test-operator-controller-manager-69797bbcbd-5v2tj\" (UID: \"fdc4f026-5fd3-4519-8d47-aeede547de6d\") " pod="openstack-operators/test-operator-controller-manager-69797bbcbd-5v2tj"
Jan 27 14:47:20 crc kubenswrapper[4698]: I0127 14:47:20.804915 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4sr78\" (UniqueName: \"kubernetes.io/projected/47971c2b-520a-4088-a172-cc689e975fb9-kube-api-access-4sr78\") pod \"telemetry-operator-controller-manager-85cd9769bb-4chkq\" (UID: \"47971c2b-520a-4088-a172-cc689e975fb9\") " pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-4chkq"
Jan 27 14:47:20 crc kubenswrapper[4698]: I0127 14:47:20.805165 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/095ba028-5504-4533-b759-edaa313a8e80-cert\") pod \"infra-operator-controller-manager-694cf4f878-t9jb8\" (UID: \"095ba028-5504-4533-b759-edaa313a8e80\") " pod="openstack-operators/infra-operator-controller-manager-694cf4f878-t9jb8"
Jan 27 14:47:20 crc kubenswrapper[4698]: I0127 14:47:20.765848 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-992bb"
Jan 27 14:47:20 crc kubenswrapper[4698]: I0127 14:47:20.788692 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-bm8c6"
Jan 27 14:47:20 crc kubenswrapper[4698]: E0127 14:47:20.806196 4698 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found
Jan 27 14:47:20 crc kubenswrapper[4698]: E0127 14:47:20.806267 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/095ba028-5504-4533-b759-edaa313a8e80-cert podName:095ba028-5504-4533-b759-edaa313a8e80 nodeName:}" failed. No retries permitted until 2026-01-27 14:47:21.80624089 +0000 UTC m=+1097.483018365 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/095ba028-5504-4533-b759-edaa313a8e80-cert") pod "infra-operator-controller-manager-694cf4f878-t9jb8" (UID: "095ba028-5504-4533-b759-edaa313a8e80") : secret "infra-operator-webhook-server-cert" not found
Jan 27 14:47:20 crc kubenswrapper[4698]: I0127 14:47:20.891504 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/watcher-operator-controller-manager-65d56bd854-4kv98"]
Jan 27 14:47:20 crc kubenswrapper[4698]: I0127 14:47:20.893091 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-65d56bd854-4kv98"
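[Editor's note] Note that the same infra-operator "cert" volume failed at 14:47:20.248340 with durationBeforeRetry 500ms and again at 14:47:20.806267 with durationBeforeRetry 1s: the kubelet doubles the wait after each consecutive failure of the same volume operation. A sketch of that policy; the 500ms start matches these lines, while the 2m2s ceiling is the value commonly seen in kubelet logs and is an assumption here rather than something visible in this excerpt:

```go
package main

import (
	"fmt"
	"time"
)

// Doubling backoff as observed in the nestedpendingoperations.go lines:
// 500ms after the first failure, doubled per consecutive failure. The 2m2s
// ceiling is the cap commonly seen in kubelet logs and is an assumption here.
const (
	initialDelay = 500 * time.Millisecond
	maxDelay     = 2*time.Minute + 2*time.Second
)

func durationBeforeRetry(consecutiveFailures int) time.Duration {
	d := initialDelay
	for i := 1; i < consecutiveFailures; i++ {
		d *= 2
		if d >= maxDelay {
			return maxDelay
		}
	}
	return d
}

func main() {
	for n := 1; n <= 9; n++ {
		fmt.Printf("failure %d -> retry after %v\n", n, durationBeforeRetry(n))
	}
}
```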
Jan 27 14:47:20 crc kubenswrapper[4698]: I0127 14:47:20.899150 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"watcher-operator-controller-manager-dockercfg-rgznl"
Jan 27 14:47:20 crc kubenswrapper[4698]: I0127 14:47:20.910971 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4sr78\" (UniqueName: \"kubernetes.io/projected/47971c2b-520a-4088-a172-cc689e975fb9-kube-api-access-4sr78\") pod \"telemetry-operator-controller-manager-85cd9769bb-4chkq\" (UID: \"47971c2b-520a-4088-a172-cc689e975fb9\") " pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-4chkq"
Jan 27 14:47:20 crc kubenswrapper[4698]: I0127 14:47:20.916696 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bmp6x\" (UniqueName: \"kubernetes.io/projected/fdc4f026-5fd3-4519-8d47-aeede547de6d-kube-api-access-bmp6x\") pod \"test-operator-controller-manager-69797bbcbd-5v2tj\" (UID: \"fdc4f026-5fd3-4519-8d47-aeede547de6d\") " pod="openstack-operators/test-operator-controller-manager-69797bbcbd-5v2tj"
Jan 27 14:47:20 crc kubenswrapper[4698]: I0127 14:47:20.955682 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-65d56bd854-4kv98"]
Jan 27 14:47:20 crc kubenswrapper[4698]: I0127 14:47:20.957287 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bmp6x\" (UniqueName: \"kubernetes.io/projected/fdc4f026-5fd3-4519-8d47-aeede547de6d-kube-api-access-bmp6x\") pod \"test-operator-controller-manager-69797bbcbd-5v2tj\" (UID: \"fdc4f026-5fd3-4519-8d47-aeede547de6d\") " pod="openstack-operators/test-operator-controller-manager-69797bbcbd-5v2tj"
Jan 27 14:47:20 crc kubenswrapper[4698]: I0127 14:47:20.973484 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-manager-7bfbd85685-ckqkx"]
Jan 27 14:47:20 crc kubenswrapper[4698]: I0127 14:47:20.981758 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-7bfbd85685-ckqkx"
Jan 27 14:47:20 crc kubenswrapper[4698]: I0127 14:47:20.985135 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-s2nwv"
Jan 27 14:47:20 crc kubenswrapper[4698]: I0127 14:47:20.985291 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"metrics-server-cert"
Jan 27 14:47:20 crc kubenswrapper[4698]: I0127 14:47:20.985425 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"webhook-server-cert"
Jan 27 14:47:21 crc kubenswrapper[4698]: I0127 14:47:21.017987 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vsfgl\" (UniqueName: \"kubernetes.io/projected/5dbd886b-472c-41c0-b779-652e4f3121fd-kube-api-access-vsfgl\") pod \"watcher-operator-controller-manager-65d56bd854-4kv98\" (UID: \"5dbd886b-472c-41c0-b779-652e4f3121fd\") " pod="openstack-operators/watcher-operator-controller-manager-65d56bd854-4kv98"
Jan 27 14:47:21 crc kubenswrapper[4698]: I0127 14:47:21.034082 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-7bfbd85685-ckqkx"]
Jan 27 14:47:21 crc kubenswrapper[4698]: I0127 14:47:21.048318 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-zxcbl"]
Jan 27 14:47:21 crc kubenswrapper[4698]: I0127 14:47:21.049601 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-zxcbl"
Jan 27 14:47:21 crc kubenswrapper[4698]: I0127 14:47:21.055129 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"rabbitmq-cluster-operator-controller-manager-dockercfg-bf9r2"
Jan 27 14:47:21 crc kubenswrapper[4698]: I0127 14:47:21.060198 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-zxcbl"]
Jan 27 14:47:21 crc kubenswrapper[4698]: I0127 14:47:21.070407 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-5v2tj"
Jan 27 14:47:21 crc kubenswrapper[4698]: I0127 14:47:21.089875 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-78fdd796fd-l7sxf"]
Jan 27 14:47:21 crc kubenswrapper[4698]: I0127 14:47:21.121689 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vsfgl\" (UniqueName: \"kubernetes.io/projected/5dbd886b-472c-41c0-b779-652e4f3121fd-kube-api-access-vsfgl\") pod \"watcher-operator-controller-manager-65d56bd854-4kv98\" (UID: \"5dbd886b-472c-41c0-b779-652e4f3121fd\") " pod="openstack-operators/watcher-operator-controller-manager-65d56bd854-4kv98"
Jan 27 14:47:21 crc kubenswrapper[4698]: I0127 14:47:21.121783 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/86a6bede-0b85-4f92-8e96-5c7c04e5e8dd-webhook-certs\") pod \"openstack-operator-controller-manager-7bfbd85685-ckqkx\" (UID: \"86a6bede-0b85-4f92-8e96-5c7c04e5e8dd\") " pod="openstack-operators/openstack-operator-controller-manager-7bfbd85685-ckqkx"
Jan 27 14:47:21 crc kubenswrapper[4698]: I0127 14:47:21.121857 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/86a6bede-0b85-4f92-8e96-5c7c04e5e8dd-metrics-certs\") pod \"openstack-operator-controller-manager-7bfbd85685-ckqkx\" (UID: \"86a6bede-0b85-4f92-8e96-5c7c04e5e8dd\") " pod="openstack-operators/openstack-operator-controller-manager-7bfbd85685-ckqkx"
Jan 27 14:47:21 crc kubenswrapper[4698]: I0127 14:47:21.121912 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-66ls5\" (UniqueName: \"kubernetes.io/projected/5a115396-53db-4c99-80f1-abb7aad7fde5-kube-api-access-66ls5\") pod \"rabbitmq-cluster-operator-manager-668c99d594-zxcbl\" (UID: \"5a115396-53db-4c99-80f1-abb7aad7fde5\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-zxcbl"
Jan 27 14:47:21 crc kubenswrapper[4698]: I0127 14:47:21.122063 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4swhw\" (UniqueName: \"kubernetes.io/projected/86a6bede-0b85-4f92-8e96-5c7c04e5e8dd-kube-api-access-4swhw\") pod \"openstack-operator-controller-manager-7bfbd85685-ckqkx\" (UID: \"86a6bede-0b85-4f92-8e96-5c7c04e5e8dd\") " pod="openstack-operators/openstack-operator-controller-manager-7bfbd85685-ckqkx"
Jan 27 14:47:21 crc kubenswrapper[4698]: I0127 14:47:21.155668 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vsfgl\" (UniqueName: \"kubernetes.io/projected/5dbd886b-472c-41c0-b779-652e4f3121fd-kube-api-access-vsfgl\") pod \"watcher-operator-controller-manager-65d56bd854-4kv98\" (UID: \"5dbd886b-472c-41c0-b779-652e4f3121fd\") " pod="openstack-operators/watcher-operator-controller-manager-65d56bd854-4kv98"
Jan 27 14:47:21 crc kubenswrapper[4698]: I0127 14:47:21.194657 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-4chkq"
Jan 27 14:47:21 crc kubenswrapper[4698]: I0127 14:47:21.225088 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4swhw\" (UniqueName: \"kubernetes.io/projected/86a6bede-0b85-4f92-8e96-5c7c04e5e8dd-kube-api-access-4swhw\") pod \"openstack-operator-controller-manager-7bfbd85685-ckqkx\" (UID: \"86a6bede-0b85-4f92-8e96-5c7c04e5e8dd\") " pod="openstack-operators/openstack-operator-controller-manager-7bfbd85685-ckqkx"
Jan 27 14:47:21 crc kubenswrapper[4698]: I0127 14:47:21.225158 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/6bc80c1e-debd-4c6a-b45d-595c733af1ac-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854498hc\" (UID: \"6bc80c1e-debd-4c6a-b45d-595c733af1ac\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854498hc"
Jan 27 14:47:21 crc kubenswrapper[4698]: I0127 14:47:21.225223 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/86a6bede-0b85-4f92-8e96-5c7c04e5e8dd-webhook-certs\") pod \"openstack-operator-controller-manager-7bfbd85685-ckqkx\" (UID: \"86a6bede-0b85-4f92-8e96-5c7c04e5e8dd\") " pod="openstack-operators/openstack-operator-controller-manager-7bfbd85685-ckqkx"
Jan 27 14:47:21 crc kubenswrapper[4698]: I0127 14:47:21.225255 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/86a6bede-0b85-4f92-8e96-5c7c04e5e8dd-metrics-certs\") pod \"openstack-operator-controller-manager-7bfbd85685-ckqkx\" (UID: \"86a6bede-0b85-4f92-8e96-5c7c04e5e8dd\") " pod="openstack-operators/openstack-operator-controller-manager-7bfbd85685-ckqkx"
Jan 27 14:47:21 crc kubenswrapper[4698]: I0127 14:47:21.225281 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-66ls5\" (UniqueName: \"kubernetes.io/projected/5a115396-53db-4c99-80f1-abb7aad7fde5-kube-api-access-66ls5\") pod \"rabbitmq-cluster-operator-manager-668c99d594-zxcbl\" (UID: \"5a115396-53db-4c99-80f1-abb7aad7fde5\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-zxcbl"
Jan 27 14:47:21 crc kubenswrapper[4698]: E0127 14:47:21.225416 4698 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found
Jan 27 14:47:21 crc kubenswrapper[4698]: E0127 14:47:21.225418 4698 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found
Jan 27 14:47:21 crc kubenswrapper[4698]: E0127 14:47:21.225455 4698 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found
Jan 27 14:47:21 crc kubenswrapper[4698]: E0127 14:47:21.225478 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/86a6bede-0b85-4f92-8e96-5c7c04e5e8dd-metrics-certs podName:86a6bede-0b85-4f92-8e96-5c7c04e5e8dd nodeName:}" failed. No retries permitted until 2026-01-27 14:47:21.725464279 +0000 UTC m=+1097.402241734 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/86a6bede-0b85-4f92-8e96-5c7c04e5e8dd-metrics-certs") pod "openstack-operator-controller-manager-7bfbd85685-ckqkx" (UID: "86a6bede-0b85-4f92-8e96-5c7c04e5e8dd") : secret "metrics-server-cert" not found
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/86a6bede-0b85-4f92-8e96-5c7c04e5e8dd-metrics-certs") pod "openstack-operator-controller-manager-7bfbd85685-ckqkx" (UID: "86a6bede-0b85-4f92-8e96-5c7c04e5e8dd") : secret "metrics-server-cert" not found Jan 27 14:47:21 crc kubenswrapper[4698]: E0127 14:47:21.225564 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6bc80c1e-debd-4c6a-b45d-595c733af1ac-cert podName:6bc80c1e-debd-4c6a-b45d-595c733af1ac nodeName:}" failed. No retries permitted until 2026-01-27 14:47:22.225542451 +0000 UTC m=+1097.902319906 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/6bc80c1e-debd-4c6a-b45d-595c733af1ac-cert") pod "openstack-baremetal-operator-controller-manager-6b68b8b854498hc" (UID: "6bc80c1e-debd-4c6a-b45d-595c733af1ac") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 27 14:47:21 crc kubenswrapper[4698]: E0127 14:47:21.225578 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/86a6bede-0b85-4f92-8e96-5c7c04e5e8dd-webhook-certs podName:86a6bede-0b85-4f92-8e96-5c7c04e5e8dd nodeName:}" failed. No retries permitted until 2026-01-27 14:47:21.725572111 +0000 UTC m=+1097.402349576 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/86a6bede-0b85-4f92-8e96-5c7c04e5e8dd-webhook-certs") pod "openstack-operator-controller-manager-7bfbd85685-ckqkx" (UID: "86a6bede-0b85-4f92-8e96-5c7c04e5e8dd") : secret "webhook-server-cert" not found Jan 27 14:47:21 crc kubenswrapper[4698]: I0127 14:47:21.244776 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4swhw\" (UniqueName: \"kubernetes.io/projected/86a6bede-0b85-4f92-8e96-5c7c04e5e8dd-kube-api-access-4swhw\") pod \"openstack-operator-controller-manager-7bfbd85685-ckqkx\" (UID: \"86a6bede-0b85-4f92-8e96-5c7c04e5e8dd\") " pod="openstack-operators/openstack-operator-controller-manager-7bfbd85685-ckqkx" Jan 27 14:47:21 crc kubenswrapper[4698]: I0127 14:47:21.254155 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-66ls5\" (UniqueName: \"kubernetes.io/projected/5a115396-53db-4c99-80f1-abb7aad7fde5-kube-api-access-66ls5\") pod \"rabbitmq-cluster-operator-manager-668c99d594-zxcbl\" (UID: \"5a115396-53db-4c99-80f1-abb7aad7fde5\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-zxcbl" Jan 27 14:47:21 crc kubenswrapper[4698]: I0127 14:47:21.336170 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-65d56bd854-4kv98" Jan 27 14:47:21 crc kubenswrapper[4698]: I0127 14:47:21.372654 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-594c8c9d5d-bfxhx"] Jan 27 14:47:21 crc kubenswrapper[4698]: I0127 14:47:21.380253 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-zxcbl" Jan 27 14:47:21 crc kubenswrapper[4698]: I0127 14:47:21.394819 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-77d5c5b54f-7b68z"] Jan 27 14:47:21 crc kubenswrapper[4698]: I0127 14:47:21.408378 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-b8b6d4659-tv9vl"] Jan 27 14:47:21 crc kubenswrapper[4698]: W0127 14:47:21.479974 4698 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod55c2e67b_e60f_4e4f_8322_35cc46986b8c.slice/crio-3c6c3422a79402458236cdf7b371d26b91f99ab9cd8ddce5b9ac8c6c4dc15061 WatchSource:0}: Error finding container 3c6c3422a79402458236cdf7b371d26b91f99ab9cd8ddce5b9ac8c6c4dc15061: Status 404 returned error can't find the container with id 3c6c3422a79402458236cdf7b371d26b91f99ab9cd8ddce5b9ac8c6c4dc15061 Jan 27 14:47:21 crc kubenswrapper[4698]: I0127 14:47:21.495620 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-l7sxf" event={"ID":"cd843e79-28e5-483b-8368-b344b5fc42ed","Type":"ContainerStarted","Data":"a720e0c2d67fb23f05d02c437afadef4590add35d7200e9422ade52cc99c54a9"} Jan 27 14:47:21 crc kubenswrapper[4698]: I0127 14:47:21.499010 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-bfxhx" event={"ID":"d77b4eac-bd81-41be-8a8c-6cb9c61bd242","Type":"ContainerStarted","Data":"d16d5b9205482c805da7e2845e65d521faec43fffaa160bf41156fc55cc3d58f"} Jan 27 14:47:21 crc kubenswrapper[4698]: I0127 14:47:21.502607 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-tv9vl" event={"ID":"55c2e67b-e60f-4e4f-8322-35cc46986b8c","Type":"ContainerStarted","Data":"3c6c3422a79402458236cdf7b371d26b91f99ab9cd8ddce5b9ac8c6c4dc15061"} Jan 27 14:47:21 crc kubenswrapper[4698]: I0127 14:47:21.629808 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-7478f7dbf9-zz49w"] Jan 27 14:47:21 crc kubenswrapper[4698]: W0127 14:47:21.641867 4698 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfdf128da_b514_46a4_ba2a_488ed77088c0.slice/crio-30a0aa9fec46f1e890dc19155a267f57756dee4215e8c4dbde8794dd0d84d8be WatchSource:0}: Error finding container 30a0aa9fec46f1e890dc19155a267f57756dee4215e8c4dbde8794dd0d84d8be: Status 404 returned error can't find the container with id 30a0aa9fec46f1e890dc19155a267f57756dee4215e8c4dbde8794dd0d84d8be Jan 27 14:47:21 crc kubenswrapper[4698]: I0127 14:47:21.736212 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/86a6bede-0b85-4f92-8e96-5c7c04e5e8dd-webhook-certs\") pod \"openstack-operator-controller-manager-7bfbd85685-ckqkx\" (UID: \"86a6bede-0b85-4f92-8e96-5c7c04e5e8dd\") " pod="openstack-operators/openstack-operator-controller-manager-7bfbd85685-ckqkx" Jan 27 14:47:21 crc kubenswrapper[4698]: I0127 14:47:21.736285 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/86a6bede-0b85-4f92-8e96-5c7c04e5e8dd-metrics-certs\") pod 
\"openstack-operator-controller-manager-7bfbd85685-ckqkx\" (UID: \"86a6bede-0b85-4f92-8e96-5c7c04e5e8dd\") " pod="openstack-operators/openstack-operator-controller-manager-7bfbd85685-ckqkx" Jan 27 14:47:21 crc kubenswrapper[4698]: E0127 14:47:21.736406 4698 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 27 14:47:21 crc kubenswrapper[4698]: E0127 14:47:21.736508 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/86a6bede-0b85-4f92-8e96-5c7c04e5e8dd-webhook-certs podName:86a6bede-0b85-4f92-8e96-5c7c04e5e8dd nodeName:}" failed. No retries permitted until 2026-01-27 14:47:22.736490739 +0000 UTC m=+1098.413268204 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/86a6bede-0b85-4f92-8e96-5c7c04e5e8dd-webhook-certs") pod "openstack-operator-controller-manager-7bfbd85685-ckqkx" (UID: "86a6bede-0b85-4f92-8e96-5c7c04e5e8dd") : secret "webhook-server-cert" not found Jan 27 14:47:21 crc kubenswrapper[4698]: E0127 14:47:21.736534 4698 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 27 14:47:21 crc kubenswrapper[4698]: E0127 14:47:21.736621 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/86a6bede-0b85-4f92-8e96-5c7c04e5e8dd-metrics-certs podName:86a6bede-0b85-4f92-8e96-5c7c04e5e8dd nodeName:}" failed. No retries permitted until 2026-01-27 14:47:22.736600432 +0000 UTC m=+1098.413377957 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/86a6bede-0b85-4f92-8e96-5c7c04e5e8dd-metrics-certs") pod "openstack-operator-controller-manager-7bfbd85685-ckqkx" (UID: "86a6bede-0b85-4f92-8e96-5c7c04e5e8dd") : secret "metrics-server-cert" not found Jan 27 14:47:21 crc kubenswrapper[4698]: I0127 14:47:21.838042 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/095ba028-5504-4533-b759-edaa313a8e80-cert\") pod \"infra-operator-controller-manager-694cf4f878-t9jb8\" (UID: \"095ba028-5504-4533-b759-edaa313a8e80\") " pod="openstack-operators/infra-operator-controller-manager-694cf4f878-t9jb8" Jan 27 14:47:21 crc kubenswrapper[4698]: E0127 14:47:21.838190 4698 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 27 14:47:21 crc kubenswrapper[4698]: E0127 14:47:21.838236 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/095ba028-5504-4533-b759-edaa313a8e80-cert podName:095ba028-5504-4533-b759-edaa313a8e80 nodeName:}" failed. No retries permitted until 2026-01-27 14:47:23.838222052 +0000 UTC m=+1099.514999517 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/095ba028-5504-4533-b759-edaa313a8e80-cert") pod "infra-operator-controller-manager-694cf4f878-t9jb8" (UID: "095ba028-5504-4533-b759-edaa313a8e80") : secret "infra-operator-webhook-server-cert" not found Jan 27 14:47:22 crc kubenswrapper[4698]: I0127 14:47:22.084935 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-b45d7bf98-rwj7x"] Jan 27 14:47:22 crc kubenswrapper[4698]: W0127 14:47:22.088207 4698 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod84e6b7df_451a_421d_9128_a73ee95124ca.slice/crio-da066135db0153cae912febb9f07c3e3187b347693e76899f4debc1fb7723990 WatchSource:0}: Error finding container da066135db0153cae912febb9f07c3e3187b347693e76899f4debc1fb7723990: Status 404 returned error can't find the container with id da066135db0153cae912febb9f07c3e3187b347693e76899f4debc1fb7723990 Jan 27 14:47:22 crc kubenswrapper[4698]: W0127 14:47:22.088821 4698 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7bcb0020_f358_4d29_8fb1_78c62d473485.slice/crio-3bfe1f05f6a32ec34c2f25d30f5a354f23e75f04c38c1f79b5b247213f03e508 WatchSource:0}: Error finding container 3bfe1f05f6a32ec34c2f25d30f5a354f23e75f04c38c1f79b5b247213f03e508: Status 404 returned error can't find the container with id 3bfe1f05f6a32ec34c2f25d30f5a354f23e75f04c38c1f79b5b247213f03e508 Jan 27 14:47:22 crc kubenswrapper[4698]: I0127 14:47:22.091601 4698 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 27 14:47:22 crc kubenswrapper[4698]: I0127 14:47:22.093152 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-7bdb645866-6mfh4"] Jan 27 14:47:22 crc kubenswrapper[4698]: I0127 14:47:22.131931 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-547cbdb99f-bm8c6"] Jan 27 14:47:22 crc kubenswrapper[4698]: E0127 14:47:22.184880 4698 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/mariadb-operator@sha256:b673f00227298dcfa89abb46f8296a0825add42da41e8a4bf4dd13367c738d84,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-x8ksj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod mariadb-operator-controller-manager-6b9fb5fdcb-fq777_openstack-operators(0b7db176-d2e8-4e0d-b769-e4cc9f1ef32b): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 27 14:47:22 crc kubenswrapper[4698]: E0127 14:47:22.185398 4698 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:operator,Image:quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2,Command:[/manager],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:9782,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{524288000 0} {} 500Mi BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-66ls5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cluster-operator-manager-668c99d594-zxcbl_openstack-operators(5a115396-53db-4c99-80f1-abb7aad7fde5): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 27 14:47:22 crc kubenswrapper[4698]: E0127 14:47:22.185436 4698 
kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/octavia-operator@sha256:ed489f21a0c72557d2da5a271808f19b7c7b85ef32fd9f4aa91bdbfc5bca3bdd,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-4dsmq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod octavia-operator-controller-manager-5f4cd88d46-b5zrs_openstack-operators(de5834ee-7dcd-4642-a6f6-4c5d04f1f1c3): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 27 14:47:22 crc kubenswrapper[4698]: E0127 14:47:22.186055 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-fq777" podUID="0b7db176-d2e8-4e0d-b769-e4cc9f1ef32b" Jan 27 14:47:22 crc kubenswrapper[4698]: I0127 14:47:22.187132 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7f86f8796f-zjw77"] Jan 27 14:47:22 crc kubenswrapper[4698]: E0127 14:47:22.187918 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/octavia-operator-controller-manager-5f4cd88d46-b5zrs" podUID="de5834ee-7dcd-4642-a6f6-4c5d04f1f1c3" Jan 27 14:47:22 crc kubenswrapper[4698]: E0127 14:47:22.188326 4698 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:manager,Image:quay.io/openstack-k8s-operators/test-operator@sha256:c8dde42dafd41026ed2e4cfc26efc0fff63c4ba9d31326ae7dc644ccceaafa9d,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-bmp6x,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod test-operator-controller-manager-69797bbcbd-5v2tj_openstack-operators(fdc4f026-5fd3-4519-8d47-aeede547de6d): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 27 14:47:22 crc kubenswrapper[4698]: E0127 14:47:22.188447 4698 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/telemetry-operator@sha256:e02722d7581bfe1c5fc13e2fa6811d8665102ba86635c77547abf6b933cde127,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-4sr78,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod telemetry-operator-controller-manager-85cd9769bb-4chkq_openstack-operators(47971c2b-520a-4088-a172-cc689e975fb9): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 27 14:47:22 crc kubenswrapper[4698]: E0127 14:47:22.188579 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-zxcbl" podUID="5a115396-53db-4c99-80f1-abb7aad7fde5" Jan 27 14:47:22 crc kubenswrapper[4698]: E0127 14:47:22.189513 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-4chkq" podUID="47971c2b-520a-4088-a172-cc689e975fb9" Jan 27 14:47:22 crc kubenswrapper[4698]: E0127 14:47:22.189579 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-5v2tj" podUID="fdc4f026-5fd3-4519-8d47-aeede547de6d" Jan 27 14:47:22 crc kubenswrapper[4698]: E0127 14:47:22.191361 4698 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:38.102.83.111:5001/openstack-k8s-operators/watcher-operator:fff505f956f14308ed9dc10b024aabee3e262435,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-vsfgl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod watcher-operator-controller-manager-65d56bd854-4kv98_openstack-operators(5dbd886b-472c-41c0-b779-652e4f3121fd): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 27 14:47:22 crc kubenswrapper[4698]: E0127 14:47:22.194289 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/watcher-operator-controller-manager-65d56bd854-4kv98" podUID="5dbd886b-472c-41c0-b779-652e4f3121fd" Jan 27 14:47:22 crc kubenswrapper[4698]: I0127 14:47:22.195121 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-78c6999f6f-plttm"] Jan 27 14:47:22 crc kubenswrapper[4698]: I0127 14:47:22.203338 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-598f7747c9-ppdfb"] Jan 27 14:47:22 crc kubenswrapper[4698]: I0127 14:47:22.212208 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-6f75f45d54-786cc"] Jan 27 14:47:22 crc kubenswrapper[4698]: I0127 14:47:22.219511 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-5f4cd88d46-b5zrs"] Jan 27 14:47:22 crc kubenswrapper[4698]: I0127 14:47:22.225768 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-79d5ccc684-992bb"] Jan 27 14:47:22 crc kubenswrapper[4698]: I0127 14:47:22.231344 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-78d58447c5-g9n9r"] Jan 27 14:47:22 crc kubenswrapper[4698]: I0127 14:47:22.250588 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/6bc80c1e-debd-4c6a-b45d-595c733af1ac-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854498hc\" (UID: \"6bc80c1e-debd-4c6a-b45d-595c733af1ac\") " 
pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854498hc" Jan 27 14:47:22 crc kubenswrapper[4698]: E0127 14:47:22.251057 4698 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 27 14:47:22 crc kubenswrapper[4698]: E0127 14:47:22.251228 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6bc80c1e-debd-4c6a-b45d-595c733af1ac-cert podName:6bc80c1e-debd-4c6a-b45d-595c733af1ac nodeName:}" failed. No retries permitted until 2026-01-27 14:47:24.251209795 +0000 UTC m=+1099.927987260 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/6bc80c1e-debd-4c6a-b45d-595c733af1ac-cert") pod "openstack-baremetal-operator-controller-manager-6b68b8b854498hc" (UID: "6bc80c1e-debd-4c6a-b45d-595c733af1ac") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 27 14:47:22 crc kubenswrapper[4698]: I0127 14:47:22.252717 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-fq777"] Jan 27 14:47:22 crc kubenswrapper[4698]: I0127 14:47:22.263542 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-85cd9769bb-4chkq"] Jan 27 14:47:22 crc kubenswrapper[4698]: I0127 14:47:22.268599 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-zxcbl"] Jan 27 14:47:22 crc kubenswrapper[4698]: I0127 14:47:22.273687 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-69797bbcbd-5v2tj"] Jan 27 14:47:22 crc kubenswrapper[4698]: I0127 14:47:22.278252 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-65d56bd854-4kv98"] Jan 27 14:47:22 crc kubenswrapper[4698]: I0127 14:47:22.510196 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-7478f7dbf9-zz49w" event={"ID":"fdf128da-b514-46a4-ba2a-488ed77088c0","Type":"ContainerStarted","Data":"30a0aa9fec46f1e890dc19155a267f57756dee4215e8c4dbde8794dd0d84d8be"} Jan 27 14:47:22 crc kubenswrapper[4698]: I0127 14:47:22.511278 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-786cc" event={"ID":"21f4e075-c740-4e05-a70c-d5e8a14acd45","Type":"ContainerStarted","Data":"816701934d11d1bf9d2be07b0bd9ffd66e23f8b4bf114b1595ac9ad9bec21e74"} Jan 27 14:47:22 crc kubenswrapper[4698]: I0127 14:47:22.512011 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-zxcbl" event={"ID":"5a115396-53db-4c99-80f1-abb7aad7fde5","Type":"ContainerStarted","Data":"b6fb71783ec1bb78f45001dbe3afd2ef9cae0218d8686cc596bae1c9477d6cc5"} Jan 27 14:47:22 crc kubenswrapper[4698]: I0127 14:47:22.512976 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-5v2tj" event={"ID":"fdc4f026-5fd3-4519-8d47-aeede547de6d","Type":"ContainerStarted","Data":"e55a6275ab00aeaf2da3d382830866505e51a3b691498a58eee19f13d7b66f6f"} Jan 27 14:47:22 crc kubenswrapper[4698]: E0127 14:47:22.513270 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" 
with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-zxcbl" podUID="5a115396-53db-4c99-80f1-abb7aad7fde5" Jan 27 14:47:22 crc kubenswrapper[4698]: I0127 14:47:22.514217 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-992bb" event={"ID":"2b7b5c45-dace-452f-bb89-08c886ecfe35","Type":"ContainerStarted","Data":"2d405d40539508530c9ca99d179ff0b29eab774c74ca813468ded9318ae3adeb"} Jan 27 14:47:22 crc kubenswrapper[4698]: E0127 14:47:22.515377 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:c8dde42dafd41026ed2e4cfc26efc0fff63c4ba9d31326ae7dc644ccceaafa9d\\\"\"" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-5v2tj" podUID="fdc4f026-5fd3-4519-8d47-aeede547de6d" Jan 27 14:47:22 crc kubenswrapper[4698]: I0127 14:47:22.516008 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-fq777" event={"ID":"0b7db176-d2e8-4e0d-b769-e4cc9f1ef32b","Type":"ContainerStarted","Data":"3e556dbcabb26c1e2a126fa42363996f8424377ea50f3c37389d26ea3ac93b35"} Jan 27 14:47:22 crc kubenswrapper[4698]: E0127 14:47:22.517030 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/mariadb-operator@sha256:b673f00227298dcfa89abb46f8296a0825add42da41e8a4bf4dd13367c738d84\\\"\"" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-fq777" podUID="0b7db176-d2e8-4e0d-b769-e4cc9f1ef32b" Jan 27 14:47:22 crc kubenswrapper[4698]: I0127 14:47:22.517326 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-5f4cd88d46-b5zrs" event={"ID":"de5834ee-7dcd-4642-a6f6-4c5d04f1f1c3","Type":"ContainerStarted","Data":"d0acd2c0b0fb8666c5d7cd4ba94a91810ace3e54d5cc4fa69c98dcd7ccecdb59"} Jan 27 14:47:22 crc kubenswrapper[4698]: E0127 14:47:22.518244 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/octavia-operator@sha256:ed489f21a0c72557d2da5a271808f19b7c7b85ef32fd9f4aa91bdbfc5bca3bdd\\\"\"" pod="openstack-operators/octavia-operator-controller-manager-5f4cd88d46-b5zrs" podUID="de5834ee-7dcd-4642-a6f6-4c5d04f1f1c3" Jan 27 14:47:22 crc kubenswrapper[4698]: I0127 14:47:22.518654 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-7f86f8796f-zjw77" event={"ID":"00d7ded4-a39f-4261-8f42-5762a7d28314","Type":"ContainerStarted","Data":"bc1fa00a5106f478b890edc80b967eddfa7932a28215b4060c295af06b427e72"} Jan 27 14:47:22 crc kubenswrapper[4698]: I0127 14:47:22.519659 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-plttm" event={"ID":"68bcfa84-c19a-4686-b103-3164e0733af1","Type":"ContainerStarted","Data":"ada97656965fc15c8c8475e82715ec9b1bb307542c987f6cdb41fb73fa79d356"} Jan 27 14:47:22 crc kubenswrapper[4698]: I0127 14:47:22.520891 4698 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-7b68z" event={"ID":"a037e7f8-75bb-4a3a-a60e-e378b79e7a2c","Type":"ContainerStarted","Data":"db128cfa715a5f376674b037724d0157d6588ce5014d4b56b48c68cd5e3084b0"} Jan 27 14:47:22 crc kubenswrapper[4698]: I0127 14:47:22.522088 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-7bdb645866-6mfh4" event={"ID":"7bcb0020-f358-4d29-8fb1-78c62d473485","Type":"ContainerStarted","Data":"3bfe1f05f6a32ec34c2f25d30f5a354f23e75f04c38c1f79b5b247213f03e508"} Jan 27 14:47:22 crc kubenswrapper[4698]: I0127 14:47:22.523381 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-bm8c6" event={"ID":"ee3e8394-b329-49c2-bee1-eb0ba9d4f023","Type":"ContainerStarted","Data":"acf999c4695de5839e4858e7877cb67f1e18f52dc67b9d8efcf31074ed239394"} Jan 27 14:47:22 crc kubenswrapper[4698]: I0127 14:47:22.524766 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-rwj7x" event={"ID":"84e6b7df-451a-421d-9128-a73ee95124ca","Type":"ContainerStarted","Data":"da066135db0153cae912febb9f07c3e3187b347693e76899f4debc1fb7723990"} Jan 27 14:47:22 crc kubenswrapper[4698]: I0127 14:47:22.526158 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-65d56bd854-4kv98" event={"ID":"5dbd886b-472c-41c0-b779-652e4f3121fd","Type":"ContainerStarted","Data":"f1aaa6f219e634a8812177e71606b4ed4a1d22cefe3af60e58abfcc2e1b06bef"} Jan 27 14:47:22 crc kubenswrapper[4698]: E0127 14:47:22.527889 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.111:5001/openstack-k8s-operators/watcher-operator:fff505f956f14308ed9dc10b024aabee3e262435\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-65d56bd854-4kv98" podUID="5dbd886b-472c-41c0-b779-652e4f3121fd" Jan 27 14:47:22 crc kubenswrapper[4698]: I0127 14:47:22.528183 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-4chkq" event={"ID":"47971c2b-520a-4088-a172-cc689e975fb9","Type":"ContainerStarted","Data":"8c673c80b70721ccaa8236c370b14b14aba1516d11f21ccd74b1775f31919915"} Jan 27 14:47:22 crc kubenswrapper[4698]: E0127 14:47:22.530832 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/telemetry-operator@sha256:e02722d7581bfe1c5fc13e2fa6811d8665102ba86635c77547abf6b933cde127\\\"\"" pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-4chkq" podUID="47971c2b-520a-4088-a172-cc689e975fb9" Jan 27 14:47:22 crc kubenswrapper[4698]: I0127 14:47:22.544479 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-ppdfb" event={"ID":"3f9d43a0-d759-4627-9ac2-d48d281e6daf","Type":"ContainerStarted","Data":"1c9fbf5832e50bba25f5997926df9ef6628d053cc28729dd2fe3b9b945be2350"} Jan 27 14:47:22 crc kubenswrapper[4698]: I0127 14:47:22.549748 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-g9n9r" 
event={"ID":"f9f70b91-3596-4b3a-92b7-38db144afae1","Type":"ContainerStarted","Data":"3164c9cfbf9034d6217036699fed7dab107f39445511fb53eed132aeeeb52bac"} Jan 27 14:47:22 crc kubenswrapper[4698]: I0127 14:47:22.763338 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/86a6bede-0b85-4f92-8e96-5c7c04e5e8dd-webhook-certs\") pod \"openstack-operator-controller-manager-7bfbd85685-ckqkx\" (UID: \"86a6bede-0b85-4f92-8e96-5c7c04e5e8dd\") " pod="openstack-operators/openstack-operator-controller-manager-7bfbd85685-ckqkx" Jan 27 14:47:22 crc kubenswrapper[4698]: I0127 14:47:22.763419 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/86a6bede-0b85-4f92-8e96-5c7c04e5e8dd-metrics-certs\") pod \"openstack-operator-controller-manager-7bfbd85685-ckqkx\" (UID: \"86a6bede-0b85-4f92-8e96-5c7c04e5e8dd\") " pod="openstack-operators/openstack-operator-controller-manager-7bfbd85685-ckqkx" Jan 27 14:47:22 crc kubenswrapper[4698]: E0127 14:47:22.763576 4698 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 27 14:47:22 crc kubenswrapper[4698]: E0127 14:47:22.763669 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/86a6bede-0b85-4f92-8e96-5c7c04e5e8dd-metrics-certs podName:86a6bede-0b85-4f92-8e96-5c7c04e5e8dd nodeName:}" failed. No retries permitted until 2026-01-27 14:47:24.763650703 +0000 UTC m=+1100.440428168 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/86a6bede-0b85-4f92-8e96-5c7c04e5e8dd-metrics-certs") pod "openstack-operator-controller-manager-7bfbd85685-ckqkx" (UID: "86a6bede-0b85-4f92-8e96-5c7c04e5e8dd") : secret "metrics-server-cert" not found Jan 27 14:47:22 crc kubenswrapper[4698]: E0127 14:47:22.764344 4698 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 27 14:47:22 crc kubenswrapper[4698]: E0127 14:47:22.764390 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/86a6bede-0b85-4f92-8e96-5c7c04e5e8dd-webhook-certs podName:86a6bede-0b85-4f92-8e96-5c7c04e5e8dd nodeName:}" failed. No retries permitted until 2026-01-27 14:47:24.764378542 +0000 UTC m=+1100.441156007 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/86a6bede-0b85-4f92-8e96-5c7c04e5e8dd-webhook-certs") pod "openstack-operator-controller-manager-7bfbd85685-ckqkx" (UID: "86a6bede-0b85-4f92-8e96-5c7c04e5e8dd") : secret "webhook-server-cert" not found Jan 27 14:47:23 crc kubenswrapper[4698]: E0127 14:47:23.556781 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/octavia-operator@sha256:ed489f21a0c72557d2da5a271808f19b7c7b85ef32fd9f4aa91bdbfc5bca3bdd\\\"\"" pod="openstack-operators/octavia-operator-controller-manager-5f4cd88d46-b5zrs" podUID="de5834ee-7dcd-4642-a6f6-4c5d04f1f1c3" Jan 27 14:47:23 crc kubenswrapper[4698]: E0127 14:47:23.556951 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:c8dde42dafd41026ed2e4cfc26efc0fff63c4ba9d31326ae7dc644ccceaafa9d\\\"\"" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-5v2tj" podUID="fdc4f026-5fd3-4519-8d47-aeede547de6d" Jan 27 14:47:23 crc kubenswrapper[4698]: E0127 14:47:23.557058 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/mariadb-operator@sha256:b673f00227298dcfa89abb46f8296a0825add42da41e8a4bf4dd13367c738d84\\\"\"" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-fq777" podUID="0b7db176-d2e8-4e0d-b769-e4cc9f1ef32b" Jan 27 14:47:23 crc kubenswrapper[4698]: E0127 14:47:23.557187 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-zxcbl" podUID="5a115396-53db-4c99-80f1-abb7aad7fde5" Jan 27 14:47:23 crc kubenswrapper[4698]: E0127 14:47:23.557299 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.111:5001/openstack-k8s-operators/watcher-operator:fff505f956f14308ed9dc10b024aabee3e262435\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-65d56bd854-4kv98" podUID="5dbd886b-472c-41c0-b779-652e4f3121fd" Jan 27 14:47:23 crc kubenswrapper[4698]: E0127 14:47:23.557505 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/telemetry-operator@sha256:e02722d7581bfe1c5fc13e2fa6811d8665102ba86635c77547abf6b933cde127\\\"\"" pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-4chkq" podUID="47971c2b-520a-4088-a172-cc689e975fb9" Jan 27 14:47:23 crc kubenswrapper[4698]: I0127 14:47:23.879920 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/095ba028-5504-4533-b759-edaa313a8e80-cert\") pod \"infra-operator-controller-manager-694cf4f878-t9jb8\" (UID: \"095ba028-5504-4533-b759-edaa313a8e80\") " pod="openstack-operators/infra-operator-controller-manager-694cf4f878-t9jb8" Jan 27 
14:47:23 crc kubenswrapper[4698]: E0127 14:47:23.880084 4698 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 27 14:47:23 crc kubenswrapper[4698]: E0127 14:47:23.880169 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/095ba028-5504-4533-b759-edaa313a8e80-cert podName:095ba028-5504-4533-b759-edaa313a8e80 nodeName:}" failed. No retries permitted until 2026-01-27 14:47:27.880149146 +0000 UTC m=+1103.556926611 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/095ba028-5504-4533-b759-edaa313a8e80-cert") pod "infra-operator-controller-manager-694cf4f878-t9jb8" (UID: "095ba028-5504-4533-b759-edaa313a8e80") : secret "infra-operator-webhook-server-cert" not found Jan 27 14:47:24 crc kubenswrapper[4698]: I0127 14:47:24.286060 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/6bc80c1e-debd-4c6a-b45d-595c733af1ac-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854498hc\" (UID: \"6bc80c1e-debd-4c6a-b45d-595c733af1ac\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854498hc" Jan 27 14:47:24 crc kubenswrapper[4698]: E0127 14:47:24.286397 4698 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 27 14:47:24 crc kubenswrapper[4698]: E0127 14:47:24.286523 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6bc80c1e-debd-4c6a-b45d-595c733af1ac-cert podName:6bc80c1e-debd-4c6a-b45d-595c733af1ac nodeName:}" failed. No retries permitted until 2026-01-27 14:47:28.286493274 +0000 UTC m=+1103.963270769 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/6bc80c1e-debd-4c6a-b45d-595c733af1ac-cert") pod "openstack-baremetal-operator-controller-manager-6b68b8b854498hc" (UID: "6bc80c1e-debd-4c6a-b45d-595c733af1ac") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 27 14:47:24 crc kubenswrapper[4698]: I0127 14:47:24.795001 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/86a6bede-0b85-4f92-8e96-5c7c04e5e8dd-metrics-certs\") pod \"openstack-operator-controller-manager-7bfbd85685-ckqkx\" (UID: \"86a6bede-0b85-4f92-8e96-5c7c04e5e8dd\") " pod="openstack-operators/openstack-operator-controller-manager-7bfbd85685-ckqkx" Jan 27 14:47:24 crc kubenswrapper[4698]: I0127 14:47:24.795142 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/86a6bede-0b85-4f92-8e96-5c7c04e5e8dd-webhook-certs\") pod \"openstack-operator-controller-manager-7bfbd85685-ckqkx\" (UID: \"86a6bede-0b85-4f92-8e96-5c7c04e5e8dd\") " pod="openstack-operators/openstack-operator-controller-manager-7bfbd85685-ckqkx" Jan 27 14:47:24 crc kubenswrapper[4698]: E0127 14:47:24.795170 4698 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 27 14:47:24 crc kubenswrapper[4698]: E0127 14:47:24.795233 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/86a6bede-0b85-4f92-8e96-5c7c04e5e8dd-metrics-certs podName:86a6bede-0b85-4f92-8e96-5c7c04e5e8dd nodeName:}" failed. No retries permitted until 2026-01-27 14:47:28.795215794 +0000 UTC m=+1104.471993249 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/86a6bede-0b85-4f92-8e96-5c7c04e5e8dd-metrics-certs") pod "openstack-operator-controller-manager-7bfbd85685-ckqkx" (UID: "86a6bede-0b85-4f92-8e96-5c7c04e5e8dd") : secret "metrics-server-cert" not found Jan 27 14:47:24 crc kubenswrapper[4698]: E0127 14:47:24.795258 4698 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 27 14:47:24 crc kubenswrapper[4698]: E0127 14:47:24.795305 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/86a6bede-0b85-4f92-8e96-5c7c04e5e8dd-webhook-certs podName:86a6bede-0b85-4f92-8e96-5c7c04e5e8dd nodeName:}" failed. No retries permitted until 2026-01-27 14:47:28.795291336 +0000 UTC m=+1104.472068801 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/86a6bede-0b85-4f92-8e96-5c7c04e5e8dd-webhook-certs") pod "openstack-operator-controller-manager-7bfbd85685-ckqkx" (UID: "86a6bede-0b85-4f92-8e96-5c7c04e5e8dd") : secret "webhook-server-cert" not found Jan 27 14:47:27 crc kubenswrapper[4698]: I0127 14:47:27.951198 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/095ba028-5504-4533-b759-edaa313a8e80-cert\") pod \"infra-operator-controller-manager-694cf4f878-t9jb8\" (UID: \"095ba028-5504-4533-b759-edaa313a8e80\") " pod="openstack-operators/infra-operator-controller-manager-694cf4f878-t9jb8" Jan 27 14:47:27 crc kubenswrapper[4698]: E0127 14:47:27.951368 4698 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 27 14:47:27 crc kubenswrapper[4698]: E0127 14:47:27.951857 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/095ba028-5504-4533-b759-edaa313a8e80-cert podName:095ba028-5504-4533-b759-edaa313a8e80 nodeName:}" failed. No retries permitted until 2026-01-27 14:47:35.951836382 +0000 UTC m=+1111.628613847 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/095ba028-5504-4533-b759-edaa313a8e80-cert") pod "infra-operator-controller-manager-694cf4f878-t9jb8" (UID: "095ba028-5504-4533-b759-edaa313a8e80") : secret "infra-operator-webhook-server-cert" not found Jan 27 14:47:28 crc kubenswrapper[4698]: I0127 14:47:28.358413 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/6bc80c1e-debd-4c6a-b45d-595c733af1ac-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854498hc\" (UID: \"6bc80c1e-debd-4c6a-b45d-595c733af1ac\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854498hc" Jan 27 14:47:28 crc kubenswrapper[4698]: E0127 14:47:28.358605 4698 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 27 14:47:28 crc kubenswrapper[4698]: E0127 14:47:28.358944 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6bc80c1e-debd-4c6a-b45d-595c733af1ac-cert podName:6bc80c1e-debd-4c6a-b45d-595c733af1ac nodeName:}" failed. No retries permitted until 2026-01-27 14:47:36.35892192 +0000 UTC m=+1112.035699385 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/6bc80c1e-debd-4c6a-b45d-595c733af1ac-cert") pod "openstack-baremetal-operator-controller-manager-6b68b8b854498hc" (UID: "6bc80c1e-debd-4c6a-b45d-595c733af1ac") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 27 14:47:28 crc kubenswrapper[4698]: I0127 14:47:28.865089 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/86a6bede-0b85-4f92-8e96-5c7c04e5e8dd-metrics-certs\") pod \"openstack-operator-controller-manager-7bfbd85685-ckqkx\" (UID: \"86a6bede-0b85-4f92-8e96-5c7c04e5e8dd\") " pod="openstack-operators/openstack-operator-controller-manager-7bfbd85685-ckqkx" Jan 27 14:47:28 crc kubenswrapper[4698]: I0127 14:47:28.865484 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/86a6bede-0b85-4f92-8e96-5c7c04e5e8dd-webhook-certs\") pod \"openstack-operator-controller-manager-7bfbd85685-ckqkx\" (UID: \"86a6bede-0b85-4f92-8e96-5c7c04e5e8dd\") " pod="openstack-operators/openstack-operator-controller-manager-7bfbd85685-ckqkx" Jan 27 14:47:28 crc kubenswrapper[4698]: E0127 14:47:28.865854 4698 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 27 14:47:28 crc kubenswrapper[4698]: E0127 14:47:28.865959 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/86a6bede-0b85-4f92-8e96-5c7c04e5e8dd-webhook-certs podName:86a6bede-0b85-4f92-8e96-5c7c04e5e8dd nodeName:}" failed. No retries permitted until 2026-01-27 14:47:36.865936795 +0000 UTC m=+1112.542714300 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/86a6bede-0b85-4f92-8e96-5c7c04e5e8dd-webhook-certs") pod "openstack-operator-controller-manager-7bfbd85685-ckqkx" (UID: "86a6bede-0b85-4f92-8e96-5c7c04e5e8dd") : secret "webhook-server-cert" not found Jan 27 14:47:28 crc kubenswrapper[4698]: E0127 14:47:28.866595 4698 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 27 14:47:28 crc kubenswrapper[4698]: E0127 14:47:28.866670 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/86a6bede-0b85-4f92-8e96-5c7c04e5e8dd-metrics-certs podName:86a6bede-0b85-4f92-8e96-5c7c04e5e8dd nodeName:}" failed. No retries permitted until 2026-01-27 14:47:36.866656084 +0000 UTC m=+1112.543433549 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/86a6bede-0b85-4f92-8e96-5c7c04e5e8dd-metrics-certs") pod "openstack-operator-controller-manager-7bfbd85685-ckqkx" (UID: "86a6bede-0b85-4f92-8e96-5c7c04e5e8dd") : secret "metrics-server-cert" not found Jan 27 14:47:36 crc kubenswrapper[4698]: I0127 14:47:35.999561 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/095ba028-5504-4533-b759-edaa313a8e80-cert\") pod \"infra-operator-controller-manager-694cf4f878-t9jb8\" (UID: \"095ba028-5504-4533-b759-edaa313a8e80\") " pod="openstack-operators/infra-operator-controller-manager-694cf4f878-t9jb8" Jan 27 14:47:36 crc kubenswrapper[4698]: I0127 14:47:36.009783 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/095ba028-5504-4533-b759-edaa313a8e80-cert\") pod \"infra-operator-controller-manager-694cf4f878-t9jb8\" (UID: \"095ba028-5504-4533-b759-edaa313a8e80\") " pod="openstack-operators/infra-operator-controller-manager-694cf4f878-t9jb8" Jan 27 14:47:36 crc kubenswrapper[4698]: I0127 14:47:36.035197 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-694cf4f878-t9jb8" Jan 27 14:47:36 crc kubenswrapper[4698]: I0127 14:47:36.405802 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/6bc80c1e-debd-4c6a-b45d-595c733af1ac-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854498hc\" (UID: \"6bc80c1e-debd-4c6a-b45d-595c733af1ac\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854498hc" Jan 27 14:47:36 crc kubenswrapper[4698]: I0127 14:47:36.409708 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/6bc80c1e-debd-4c6a-b45d-595c733af1ac-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854498hc\" (UID: \"6bc80c1e-debd-4c6a-b45d-595c733af1ac\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854498hc" Jan 27 14:47:36 crc kubenswrapper[4698]: I0127 14:47:36.638581 4698 util.go:30] "No sandbox for pod can be found. 
Jan 27 14:47:36 crc kubenswrapper[4698]: I0127 14:47:36.913521 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/86a6bede-0b85-4f92-8e96-5c7c04e5e8dd-webhook-certs\") pod \"openstack-operator-controller-manager-7bfbd85685-ckqkx\" (UID: \"86a6bede-0b85-4f92-8e96-5c7c04e5e8dd\") " pod="openstack-operators/openstack-operator-controller-manager-7bfbd85685-ckqkx"
Jan 27 14:47:36 crc kubenswrapper[4698]: I0127 14:47:36.913924 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/86a6bede-0b85-4f92-8e96-5c7c04e5e8dd-metrics-certs\") pod \"openstack-operator-controller-manager-7bfbd85685-ckqkx\" (UID: \"86a6bede-0b85-4f92-8e96-5c7c04e5e8dd\") " pod="openstack-operators/openstack-operator-controller-manager-7bfbd85685-ckqkx"
Jan 27 14:47:36 crc kubenswrapper[4698]: E0127 14:47:36.913746 4698 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found
Jan 27 14:47:36 crc kubenswrapper[4698]: E0127 14:47:36.914039 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/86a6bede-0b85-4f92-8e96-5c7c04e5e8dd-webhook-certs podName:86a6bede-0b85-4f92-8e96-5c7c04e5e8dd nodeName:}" failed. No retries permitted until 2026-01-27 14:47:52.914020654 +0000 UTC m=+1128.590798119 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/86a6bede-0b85-4f92-8e96-5c7c04e5e8dd-webhook-certs") pod "openstack-operator-controller-manager-7bfbd85685-ckqkx" (UID: "86a6bede-0b85-4f92-8e96-5c7c04e5e8dd") : secret "webhook-server-cert" not found
Jan 27 14:47:36 crc kubenswrapper[4698]: E0127 14:47:36.914101 4698 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found
Jan 27 14:47:36 crc kubenswrapper[4698]: E0127 14:47:36.914149 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/86a6bede-0b85-4f92-8e96-5c7c04e5e8dd-metrics-certs podName:86a6bede-0b85-4f92-8e96-5c7c04e5e8dd nodeName:}" failed. No retries permitted until 2026-01-27 14:47:52.914134807 +0000 UTC m=+1128.590912272 (durationBeforeRetry 16s).
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/86a6bede-0b85-4f92-8e96-5c7c04e5e8dd-metrics-certs") pod "openstack-operator-controller-manager-7bfbd85685-ckqkx" (UID: "86a6bede-0b85-4f92-8e96-5c7c04e5e8dd") : secret "metrics-server-cert" not found Jan 27 14:47:41 crc kubenswrapper[4698]: E0127 14:47:41.921716 4698 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/manila-operator@sha256:8bee4480babd6fd8f686e0ba52a304acb6ffb90f09c7c57e7f5df5f7658836d8" Jan 27 14:47:41 crc kubenswrapper[4698]: E0127 14:47:41.922404 4698 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/manila-operator@sha256:8bee4480babd6fd8f686e0ba52a304acb6ffb90f09c7c57e7f5df5f7658836d8,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-vql78,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod manila-operator-controller-manager-78c6999f6f-plttm_openstack-operators(68bcfa84-c19a-4686-b103-3164e0733af1): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 27 14:47:41 crc kubenswrapper[4698]: E0127 14:47:41.923631 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-plttm" 
podUID="68bcfa84-c19a-4686-b103-3164e0733af1" Jan 27 14:47:42 crc kubenswrapper[4698]: E0127 14:47:42.487947 4698 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/designate-operator@sha256:6c88312afa9673f7b72c558368034d7a488ead73080cdcdf581fe85b99263ece" Jan 27 14:47:42 crc kubenswrapper[4698]: E0127 14:47:42.488180 4698 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/designate-operator@sha256:6c88312afa9673f7b72c558368034d7a488ead73080cdcdf581fe85b99263ece,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-fczqt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod designate-operator-controller-manager-b45d7bf98-rwj7x_openstack-operators(84e6b7df-451a-421d-9128-a73ee95124ca): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 27 14:47:42 crc kubenswrapper[4698]: E0127 14:47:42.489402 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-rwj7x" podUID="84e6b7df-451a-421d-9128-a73ee95124ca" Jan 27 14:47:42 crc kubenswrapper[4698]: E0127 14:47:42.683394 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: 
\"Back-off pulling image \\\"quay.io/openstack-k8s-operators/manila-operator@sha256:8bee4480babd6fd8f686e0ba52a304acb6ffb90f09c7c57e7f5df5f7658836d8\\\"\"" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-plttm" podUID="68bcfa84-c19a-4686-b103-3164e0733af1" Jan 27 14:47:42 crc kubenswrapper[4698]: E0127 14:47:42.683517 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/designate-operator@sha256:6c88312afa9673f7b72c558368034d7a488ead73080cdcdf581fe85b99263ece\\\"\"" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-rwj7x" podUID="84e6b7df-451a-421d-9128-a73ee95124ca" Jan 27 14:47:43 crc kubenswrapper[4698]: E0127 14:47:43.021576 4698 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/swift-operator@sha256:445e951df2f21df6d33a466f75917e0f6103052ae751ae11887136e8ab165922" Jan 27 14:47:43 crc kubenswrapper[4698]: E0127 14:47:43.021762 4698 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/swift-operator@sha256:445e951df2f21df6d33a466f75917e0f6103052ae751ae11887136e8ab165922,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-pnrtn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
swift-operator-controller-manager-547cbdb99f-bm8c6_openstack-operators(ee3e8394-b329-49c2-bee1-eb0ba9d4f023): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 27 14:47:43 crc kubenswrapper[4698]: E0127 14:47:43.023300 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-bm8c6" podUID="ee3e8394-b329-49c2-bee1-eb0ba9d4f023" Jan 27 14:47:43 crc kubenswrapper[4698]: E0127 14:47:43.690440 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/swift-operator@sha256:445e951df2f21df6d33a466f75917e0f6103052ae751ae11887136e8ab165922\\\"\"" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-bm8c6" podUID="ee3e8394-b329-49c2-bee1-eb0ba9d4f023" Jan 27 14:47:44 crc kubenswrapper[4698]: E0127 14:47:44.168701 4698 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/keystone-operator@sha256:8e340ff11922b38e811261de96982e1aff5f4eb8f225d1d9f5973025a4fe8349" Jan 27 14:47:44 crc kubenswrapper[4698]: E0127 14:47:44.168887 4698 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/keystone-operator@sha256:8e340ff11922b38e811261de96982e1aff5f4eb8f225d1d9f5973025a4fe8349,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-f9dxt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod keystone-operator-controller-manager-b8b6d4659-tv9vl_openstack-operators(55c2e67b-e60f-4e4f-8322-35cc46986b8c): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 27 14:47:44 crc kubenswrapper[4698]: E0127 14:47:44.170057 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-tv9vl" podUID="55c2e67b-e60f-4e4f-8322-35cc46986b8c" Jan 27 14:47:44 crc kubenswrapper[4698]: E0127 14:47:44.700210 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/keystone-operator@sha256:8e340ff11922b38e811261de96982e1aff5f4eb8f225d1d9f5973025a4fe8349\\\"\"" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-tv9vl" podUID="55c2e67b-e60f-4e4f-8322-35cc46986b8c" Jan 27 14:47:44 crc kubenswrapper[4698]: E0127 14:47:44.864392 4698 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/nova-operator@sha256:8abfbec47f0119a6c22c61a0ff80a4b1c6c14439a327bc75d4c529c5d8f59658" Jan 27 14:47:44 crc kubenswrapper[4698]: E0127 14:47:44.864595 4698 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/nova-operator@sha256:8abfbec47f0119a6c22c61a0ff80a4b1c6c14439a327bc75d4c529c5d8f59658,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-94kc9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod nova-operator-controller-manager-7bdb645866-6mfh4_openstack-operators(7bcb0020-f358-4d29-8fb1-78c62d473485): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 27 14:47:44 crc kubenswrapper[4698]: E0127 14:47:44.865807 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/nova-operator-controller-manager-7bdb645866-6mfh4" podUID="7bcb0020-f358-4d29-8fb1-78c62d473485" Jan 27 14:47:45 crc kubenswrapper[4698]: E0127 14:47:45.705673 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/nova-operator@sha256:8abfbec47f0119a6c22c61a0ff80a4b1c6c14439a327bc75d4c529c5d8f59658\\\"\"" pod="openstack-operators/nova-operator-controller-manager-7bdb645866-6mfh4" podUID="7bcb0020-f358-4d29-8fb1-78c62d473485" Jan 27 14:47:50 crc kubenswrapper[4698]: I0127 14:47:50.378081 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854498hc"] Jan 27 14:47:50 crc kubenswrapper[4698]: I0127 14:47:50.446488 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-694cf4f878-t9jb8"] Jan 27 14:47:50 crc kubenswrapper[4698]: W0127 14:47:50.613461 4698 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6bc80c1e_debd_4c6a_b45d_595c733af1ac.slice/crio-0e91d37a6c4ab2756ba4c28fa6ddf2d2071e40de87a287ab35642d3e0c06d226 WatchSource:0}: Error finding container 0e91d37a6c4ab2756ba4c28fa6ddf2d2071e40de87a287ab35642d3e0c06d226: Status 404 returned error can't find the container with id 0e91d37a6c4ab2756ba4c28fa6ddf2d2071e40de87a287ab35642d3e0c06d226 Jan 27 14:47:50 crc kubenswrapper[4698]: W0127 14:47:50.614761 4698 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod095ba028_5504_4533_b759_edaa313a8e80.slice/crio-f7cec4ebbc63f4445dbf72f9b9382ae01489012cbf0b93423c51b2c29086fd04 WatchSource:0}: Error finding container f7cec4ebbc63f4445dbf72f9b9382ae01489012cbf0b93423c51b2c29086fd04: Status 404 
returned error can't find the container with id f7cec4ebbc63f4445dbf72f9b9382ae01489012cbf0b93423c51b2c29086fd04 Jan 27 14:47:50 crc kubenswrapper[4698]: I0127 14:47:50.741851 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-694cf4f878-t9jb8" event={"ID":"095ba028-5504-4533-b759-edaa313a8e80","Type":"ContainerStarted","Data":"f7cec4ebbc63f4445dbf72f9b9382ae01489012cbf0b93423c51b2c29086fd04"} Jan 27 14:47:50 crc kubenswrapper[4698]: I0127 14:47:50.744473 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854498hc" event={"ID":"6bc80c1e-debd-4c6a-b45d-595c733af1ac","Type":"ContainerStarted","Data":"0e91d37a6c4ab2756ba4c28fa6ddf2d2071e40de87a287ab35642d3e0c06d226"} Jan 27 14:47:51 crc kubenswrapper[4698]: I0127 14:47:51.761921 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-7478f7dbf9-zz49w" event={"ID":"fdf128da-b514-46a4-ba2a-488ed77088c0","Type":"ContainerStarted","Data":"da6482d89cd353a8727c65c9118364fd9cc64ceeb984b43691d1cf5539f94813"} Jan 27 14:47:51 crc kubenswrapper[4698]: I0127 14:47:51.763215 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/cinder-operator-controller-manager-7478f7dbf9-zz49w" Jan 27 14:47:51 crc kubenswrapper[4698]: I0127 14:47:51.764407 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-7f86f8796f-zjw77" event={"ID":"00d7ded4-a39f-4261-8f42-5762a7d28314","Type":"ContainerStarted","Data":"53eba6ae32bea89deefeaa9a94fb2f087ab5626a914ad52a4f136c6f6b938a81"} Jan 27 14:47:51 crc kubenswrapper[4698]: I0127 14:47:51.764842 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/barbican-operator-controller-manager-7f86f8796f-zjw77" Jan 27 14:47:51 crc kubenswrapper[4698]: I0127 14:47:51.783260 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-7b68z" event={"ID":"a037e7f8-75bb-4a3a-a60e-e378b79e7a2c","Type":"ContainerStarted","Data":"d04a7c6c96ff40a01141f73de86ef19d0428d44f6ab9bbf8454ad223e122f7f5"} Jan 27 14:47:51 crc kubenswrapper[4698]: I0127 14:47:51.784138 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-7b68z" Jan 27 14:47:51 crc kubenswrapper[4698]: I0127 14:47:51.791037 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-5v2tj" event={"ID":"fdc4f026-5fd3-4519-8d47-aeede547de6d","Type":"ContainerStarted","Data":"61735fef07fd1fea607ae56ca38555a7b04132422c56e09192e92316910919f5"} Jan 27 14:47:51 crc kubenswrapper[4698]: I0127 14:47:51.791838 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-5v2tj" Jan 27 14:47:51 crc kubenswrapper[4698]: I0127 14:47:51.812016 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/cinder-operator-controller-manager-7478f7dbf9-zz49w" podStartSLOduration=9.857268316 podStartE2EDuration="32.811993432s" podCreationTimestamp="2026-01-27 14:47:19 +0000 UTC" firstStartedPulling="2026-01-27 14:47:21.64373441 +0000 UTC m=+1097.320511875" lastFinishedPulling="2026-01-27 14:47:44.598459526 +0000 UTC m=+1120.275236991" 
observedRunningTime="2026-01-27 14:47:51.804853944 +0000 UTC m=+1127.481631409" watchObservedRunningTime="2026-01-27 14:47:51.811993432 +0000 UTC m=+1127.488770897" Jan 27 14:47:51 crc kubenswrapper[4698]: I0127 14:47:51.816910 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-g9n9r" event={"ID":"f9f70b91-3596-4b3a-92b7-38db144afae1","Type":"ContainerStarted","Data":"b7b45b33ec7440d4446e9adaf611c581885c63390cb945ede7d18b7967c5c31a"} Jan 27 14:47:51 crc kubenswrapper[4698]: I0127 14:47:51.817622 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-g9n9r" Jan 27 14:47:51 crc kubenswrapper[4698]: I0127 14:47:51.834904 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-zxcbl" event={"ID":"5a115396-53db-4c99-80f1-abb7aad7fde5","Type":"ContainerStarted","Data":"b6ecaded58c702c85cfb891a327b765b230080f84fb731f3d2924b2df29bc71b"} Jan 27 14:47:51 crc kubenswrapper[4698]: I0127 14:47:51.853092 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-7b68z" podStartSLOduration=9.742743657 podStartE2EDuration="32.853068911s" podCreationTimestamp="2026-01-27 14:47:19 +0000 UTC" firstStartedPulling="2026-01-27 14:47:21.488038949 +0000 UTC m=+1097.164816414" lastFinishedPulling="2026-01-27 14:47:44.598364203 +0000 UTC m=+1120.275141668" observedRunningTime="2026-01-27 14:47:51.842080452 +0000 UTC m=+1127.518857947" watchObservedRunningTime="2026-01-27 14:47:51.853068911 +0000 UTC m=+1127.529846376" Jan 27 14:47:51 crc kubenswrapper[4698]: I0127 14:47:51.853977 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-992bb" event={"ID":"2b7b5c45-dace-452f-bb89-08c886ecfe35","Type":"ContainerStarted","Data":"c946883c716148c996d59574dcfdcbcadfc86006fc5d9f8bc5a802287c425974"} Jan 27 14:47:51 crc kubenswrapper[4698]: I0127 14:47:51.854777 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-992bb" Jan 27 14:47:51 crc kubenswrapper[4698]: I0127 14:47:51.878558 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-65d56bd854-4kv98" event={"ID":"5dbd886b-472c-41c0-b779-652e4f3121fd","Type":"ContainerStarted","Data":"c7f326c1fc17ff1da37936d8139dc0aeb3a91848702ad5a134f6a334c02d9413"} Jan 27 14:47:51 crc kubenswrapper[4698]: I0127 14:47:51.879330 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-controller-manager-65d56bd854-4kv98" Jan 27 14:47:51 crc kubenswrapper[4698]: I0127 14:47:51.904983 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-l7sxf" event={"ID":"cd843e79-28e5-483b-8368-b344b5fc42ed","Type":"ContainerStarted","Data":"8db583050eedb9732e38df94857abce8a4292244701a865998286cc4f329b8c0"} Jan 27 14:47:51 crc kubenswrapper[4698]: I0127 14:47:51.915784 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-fq777" event={"ID":"0b7db176-d2e8-4e0d-b769-e4cc9f1ef32b","Type":"ContainerStarted","Data":"78624c5f7ccca8539515000c6f361b228150799c607a2c1e8bebbd5d3a9fbb15"} 
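Note: each pod_startup_latency_tracker entry is internally consistent: podStartE2EDuration = observedRunningTime - podCreationTimestamp, and podStartSLOduration = podStartE2EDuration - (lastFinishedPulling - firstStartedPulling), i.e. the SLO figure excludes image-pull time. A quick Go check against the cinder-operator entry above, using the logged monotonic (m=+) offsets:

// Sanity-check of the cinder-operator startup entry in this log:
// podStartSLOduration = podStartE2EDuration - (lastFinishedPulling - firstStartedPulling).
package main

import "fmt"

func main() {
	e2e := 32.811993432                   // podStartE2EDuration, seconds
	firstStartedPulling := 1097.320511875 // m=+ offset, seconds
	lastFinishedPulling := 1120.275236991 // m=+ offset, seconds

	slo := e2e - (lastFinishedPulling - firstStartedPulling)
	fmt.Printf("%.9f\n", slo) // prints 9.857268316, matching podStartSLOduration
}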
Jan 27 14:47:51 crc kubenswrapper[4698]: I0127 14:47:51.916564 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-fq777" Jan 27 14:47:51 crc kubenswrapper[4698]: I0127 14:47:51.925549 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/barbican-operator-controller-manager-7f86f8796f-zjw77" podStartSLOduration=10.476292064 podStartE2EDuration="32.925526935s" podCreationTimestamp="2026-01-27 14:47:19 +0000 UTC" firstStartedPulling="2026-01-27 14:47:22.149242985 +0000 UTC m=+1097.826020450" lastFinishedPulling="2026-01-27 14:47:44.598477846 +0000 UTC m=+1120.275255321" observedRunningTime="2026-01-27 14:47:51.88916161 +0000 UTC m=+1127.565939075" watchObservedRunningTime="2026-01-27 14:47:51.925526935 +0000 UTC m=+1127.602304400" Jan 27 14:47:51 crc kubenswrapper[4698]: I0127 14:47:51.939270 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-786cc" event={"ID":"21f4e075-c740-4e05-a70c-d5e8a14acd45","Type":"ContainerStarted","Data":"af2c7acf1436d8645289224cb6114743f045200d4d04aaa89be9cc3530662528"} Jan 27 14:47:51 crc kubenswrapper[4698]: I0127 14:47:51.940091 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-786cc" Jan 27 14:47:51 crc kubenswrapper[4698]: I0127 14:47:51.948513 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-g9n9r" podStartSLOduration=8.057320983 podStartE2EDuration="31.948495379s" podCreationTimestamp="2026-01-27 14:47:20 +0000 UTC" firstStartedPulling="2026-01-27 14:47:22.184070421 +0000 UTC m=+1097.860847886" lastFinishedPulling="2026-01-27 14:47:46.075244817 +0000 UTC m=+1121.752022282" observedRunningTime="2026-01-27 14:47:51.94739615 +0000 UTC m=+1127.624173615" watchObservedRunningTime="2026-01-27 14:47:51.948495379 +0000 UTC m=+1127.625272844" Jan 27 14:47:51 crc kubenswrapper[4698]: I0127 14:47:51.952110 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-5v2tj" podStartSLOduration=3.371447656 podStartE2EDuration="31.952093464s" podCreationTimestamp="2026-01-27 14:47:20 +0000 UTC" firstStartedPulling="2026-01-27 14:47:22.18820908 +0000 UTC m=+1097.864986545" lastFinishedPulling="2026-01-27 14:47:50.768854878 +0000 UTC m=+1126.445632353" observedRunningTime="2026-01-27 14:47:51.924481718 +0000 UTC m=+1127.601259183" watchObservedRunningTime="2026-01-27 14:47:51.952093464 +0000 UTC m=+1127.628870929" Jan 27 14:47:51 crc kubenswrapper[4698]: I0127 14:47:51.963941 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-bfxhx" event={"ID":"d77b4eac-bd81-41be-8a8c-6cb9c61bd242","Type":"ContainerStarted","Data":"8aa9ebebd0789d8b14668036a0ea4e4f300a9c326cb1122af3884c4394c05c3a"} Jan 27 14:47:51 crc kubenswrapper[4698]: I0127 14:47:51.964694 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-bfxhx" Jan 27 14:47:51 crc kubenswrapper[4698]: I0127 14:47:51.981444 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-5f4cd88d46-b5zrs" 
event={"ID":"de5834ee-7dcd-4642-a6f6-4c5d04f1f1c3","Type":"ContainerStarted","Data":"e3bbf0624a8897933f7ac22a1ce84a89fca2b5b986a6780da3ff51d3faef5101"} Jan 27 14:47:51 crc kubenswrapper[4698]: I0127 14:47:51.982211 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/octavia-operator-controller-manager-5f4cd88d46-b5zrs" Jan 27 14:47:52 crc kubenswrapper[4698]: I0127 14:47:52.004210 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-992bb" podStartSLOduration=9.578928621 podStartE2EDuration="32.004186552s" podCreationTimestamp="2026-01-27 14:47:20 +0000 UTC" firstStartedPulling="2026-01-27 14:47:22.173089492 +0000 UTC m=+1097.849866947" lastFinishedPulling="2026-01-27 14:47:44.598347413 +0000 UTC m=+1120.275124878" observedRunningTime="2026-01-27 14:47:51.97785205 +0000 UTC m=+1127.654629515" watchObservedRunningTime="2026-01-27 14:47:52.004186552 +0000 UTC m=+1127.680964017" Jan 27 14:47:52 crc kubenswrapper[4698]: I0127 14:47:52.004810 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-4chkq" event={"ID":"47971c2b-520a-4088-a172-cc689e975fb9","Type":"ContainerStarted","Data":"de80827e3438173ee0519908be162438ae5d193054809d0d9b52913f58091282"} Jan 27 14:47:52 crc kubenswrapper[4698]: I0127 14:47:52.005514 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-4chkq" Jan 27 14:47:52 crc kubenswrapper[4698]: I0127 14:47:52.030926 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-zxcbl" podStartSLOduration=3.433261449 podStartE2EDuration="32.030907254s" podCreationTimestamp="2026-01-27 14:47:20 +0000 UTC" firstStartedPulling="2026-01-27 14:47:22.185256782 +0000 UTC m=+1097.862034247" lastFinishedPulling="2026-01-27 14:47:50.782902587 +0000 UTC m=+1126.459680052" observedRunningTime="2026-01-27 14:47:52.003824853 +0000 UTC m=+1127.680602328" watchObservedRunningTime="2026-01-27 14:47:52.030907254 +0000 UTC m=+1127.707684719" Jan 27 14:47:52 crc kubenswrapper[4698]: I0127 14:47:52.032528 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-ppdfb" event={"ID":"3f9d43a0-d759-4627-9ac2-d48d281e6daf","Type":"ContainerStarted","Data":"990f9fdb2e04cc099ac17afe3f47878c37ba88888d7f0c1ccac637308f786598"} Jan 27 14:47:52 crc kubenswrapper[4698]: I0127 14:47:52.033726 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-ppdfb" Jan 27 14:47:52 crc kubenswrapper[4698]: I0127 14:47:52.037134 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-fq777" podStartSLOduration=3.549068544 podStartE2EDuration="32.037120748s" podCreationTimestamp="2026-01-27 14:47:20 +0000 UTC" firstStartedPulling="2026-01-27 14:47:22.184731189 +0000 UTC m=+1097.861508664" lastFinishedPulling="2026-01-27 14:47:50.672783403 +0000 UTC m=+1126.349560868" observedRunningTime="2026-01-27 14:47:52.020790119 +0000 UTC m=+1127.697567594" watchObservedRunningTime="2026-01-27 14:47:52.037120748 +0000 UTC m=+1127.713898223" Jan 27 14:47:52 crc kubenswrapper[4698]: I0127 14:47:52.064540 4698 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/watcher-operator-controller-manager-65d56bd854-4kv98" podStartSLOduration=3.438075456 podStartE2EDuration="32.064517758s" podCreationTimestamp="2026-01-27 14:47:20 +0000 UTC" firstStartedPulling="2026-01-27 14:47:22.191238589 +0000 UTC m=+1097.868016054" lastFinishedPulling="2026-01-27 14:47:50.817680891 +0000 UTC m=+1126.494458356" observedRunningTime="2026-01-27 14:47:52.051092956 +0000 UTC m=+1127.727870441" watchObservedRunningTime="2026-01-27 14:47:52.064517758 +0000 UTC m=+1127.741295223" Jan 27 14:47:52 crc kubenswrapper[4698]: I0127 14:47:52.081910 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-l7sxf" podStartSLOduration=9.519557621 podStartE2EDuration="33.081887155s" podCreationTimestamp="2026-01-27 14:47:19 +0000 UTC" firstStartedPulling="2026-01-27 14:47:21.036086501 +0000 UTC m=+1096.712863966" lastFinishedPulling="2026-01-27 14:47:44.598416035 +0000 UTC m=+1120.275193500" observedRunningTime="2026-01-27 14:47:52.080782846 +0000 UTC m=+1127.757560331" watchObservedRunningTime="2026-01-27 14:47:52.081887155 +0000 UTC m=+1127.758664620" Jan 27 14:47:52 crc kubenswrapper[4698]: I0127 14:47:52.128284 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/octavia-operator-controller-manager-5f4cd88d46-b5zrs" podStartSLOduration=3.703230666 podStartE2EDuration="32.128265904s" podCreationTimestamp="2026-01-27 14:47:20 +0000 UTC" firstStartedPulling="2026-01-27 14:47:22.184589495 +0000 UTC m=+1097.861366960" lastFinishedPulling="2026-01-27 14:47:50.609624733 +0000 UTC m=+1126.286402198" observedRunningTime="2026-01-27 14:47:52.119632207 +0000 UTC m=+1127.796409672" watchObservedRunningTime="2026-01-27 14:47:52.128265904 +0000 UTC m=+1127.805043369" Jan 27 14:47:52 crc kubenswrapper[4698]: I0127 14:47:52.153240 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-786cc" podStartSLOduration=9.696229806 podStartE2EDuration="32.15322136s" podCreationTimestamp="2026-01-27 14:47:20 +0000 UTC" firstStartedPulling="2026-01-27 14:47:22.141334898 +0000 UTC m=+1097.818112363" lastFinishedPulling="2026-01-27 14:47:44.598326452 +0000 UTC m=+1120.275103917" observedRunningTime="2026-01-27 14:47:52.148459234 +0000 UTC m=+1127.825236709" watchObservedRunningTime="2026-01-27 14:47:52.15322136 +0000 UTC m=+1127.829998825" Jan 27 14:47:52 crc kubenswrapper[4698]: I0127 14:47:52.181856 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-4chkq" podStartSLOduration=3.6013487570000002 podStartE2EDuration="32.181835341s" podCreationTimestamp="2026-01-27 14:47:20 +0000 UTC" firstStartedPulling="2026-01-27 14:47:22.188386644 +0000 UTC m=+1097.865164109" lastFinishedPulling="2026-01-27 14:47:50.768873228 +0000 UTC m=+1126.445650693" observedRunningTime="2026-01-27 14:47:52.180004083 +0000 UTC m=+1127.856781558" watchObservedRunningTime="2026-01-27 14:47:52.181835341 +0000 UTC m=+1127.858612806" Jan 27 14:47:52 crc kubenswrapper[4698]: I0127 14:47:52.217682 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-bfxhx" podStartSLOduration=10.079719213 podStartE2EDuration="33.217621702s" podCreationTimestamp="2026-01-27 
14:47:19 +0000 UTC" firstStartedPulling="2026-01-27 14:47:21.460426113 +0000 UTC m=+1097.137203578" lastFinishedPulling="2026-01-27 14:47:44.598328602 +0000 UTC m=+1120.275106067" observedRunningTime="2026-01-27 14:47:52.217015436 +0000 UTC m=+1127.893792911" watchObservedRunningTime="2026-01-27 14:47:52.217621702 +0000 UTC m=+1127.894399167"
Jan 27 14:47:52 crc kubenswrapper[4698]: I0127 14:47:52.248421 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-ppdfb" podStartSLOduration=10.822857683 podStartE2EDuration="33.248400311s" podCreationTimestamp="2026-01-27 14:47:19 +0000 UTC" firstStartedPulling="2026-01-27 14:47:22.172931878 +0000 UTC m=+1097.849709343" lastFinishedPulling="2026-01-27 14:47:44.598474506 +0000 UTC m=+1120.275251971" observedRunningTime="2026-01-27 14:47:52.243510962 +0000 UTC m=+1127.920288437" watchObservedRunningTime="2026-01-27 14:47:52.248400311 +0000 UTC m=+1127.925177776"
Jan 27 14:47:53 crc kubenswrapper[4698]: I0127 14:47:53.007362 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/86a6bede-0b85-4f92-8e96-5c7c04e5e8dd-webhook-certs\") pod \"openstack-operator-controller-manager-7bfbd85685-ckqkx\" (UID: \"86a6bede-0b85-4f92-8e96-5c7c04e5e8dd\") " pod="openstack-operators/openstack-operator-controller-manager-7bfbd85685-ckqkx"
Jan 27 14:47:53 crc kubenswrapper[4698]: I0127 14:47:53.007733 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/86a6bede-0b85-4f92-8e96-5c7c04e5e8dd-metrics-certs\") pod \"openstack-operator-controller-manager-7bfbd85685-ckqkx\" (UID: \"86a6bede-0b85-4f92-8e96-5c7c04e5e8dd\") " pod="openstack-operators/openstack-operator-controller-manager-7bfbd85685-ckqkx"
Jan 27 14:47:53 crc kubenswrapper[4698]: I0127 14:47:53.027037 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/86a6bede-0b85-4f92-8e96-5c7c04e5e8dd-webhook-certs\") pod \"openstack-operator-controller-manager-7bfbd85685-ckqkx\" (UID: \"86a6bede-0b85-4f92-8e96-5c7c04e5e8dd\") " pod="openstack-operators/openstack-operator-controller-manager-7bfbd85685-ckqkx"
Jan 27 14:47:53 crc kubenswrapper[4698]: I0127 14:47:53.027952 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/86a6bede-0b85-4f92-8e96-5c7c04e5e8dd-metrics-certs\") pod \"openstack-operator-controller-manager-7bfbd85685-ckqkx\" (UID: \"86a6bede-0b85-4f92-8e96-5c7c04e5e8dd\") " pod="openstack-operators/openstack-operator-controller-manager-7bfbd85685-ckqkx"
Jan 27 14:47:53 crc kubenswrapper[4698]: I0127 14:47:53.043899 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-l7sxf"
Jan 27 14:47:53 crc kubenswrapper[4698]: I0127 14:47:53.141852 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-7bfbd85685-ckqkx"
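Note: the webhook-certs and metrics-certs mounts for openstack-operator-controller-manager only succeed here, at 14:47:53, once the webhook-server-cert and metrics-server-cert secrets finally exist in openstack-operators. When diagnosing such a stall, polling for the secret with client-go mirrors what the kubelet's MountVolume retries are effectively waiting on. A sketch under the assumption of in-cluster credentials; the namespace and secret name are taken from the log:

// Poll until a secret exists, mirroring what the kubelet's MountVolume
// retries above are blocked on. Assumes in-cluster configuration.
package main

import (
	"context"
	"fmt"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

func main() {
	cfg, err := rest.InClusterConfig()
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	for {
		_, err := cs.CoreV1().Secrets("openstack-operators").
			Get(context.TODO(), "webhook-server-cert", metav1.GetOptions{})
		if err == nil {
			fmt.Println("secret present; the volume mount can proceed")
			return
		}
		fmt.Println("still waiting:", err)
		time.Sleep(2 * time.Second) // arbitrary poll interval for the sketch
	}
}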
Jan 27 14:47:53 crc kubenswrapper[4698]: I0127 14:47:53.678126 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-7bfbd85685-ckqkx"]
Jan 27 14:47:54 crc kubenswrapper[4698]: W0127 14:47:54.120557 4698 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod86a6bede_0b85_4f92_8e96_5c7c04e5e8dd.slice/crio-cb61c530444d1d322c5ae0759bd38dd6faef01cc1774d5ec41f704232b626ec3 WatchSource:0}: Error finding container cb61c530444d1d322c5ae0759bd38dd6faef01cc1774d5ec41f704232b626ec3: Status 404 returned error can't find the container with id cb61c530444d1d322c5ae0759bd38dd6faef01cc1774d5ec41f704232b626ec3
Jan 27 14:47:55 crc kubenswrapper[4698]: I0127 14:47:55.058180 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-694cf4f878-t9jb8" event={"ID":"095ba028-5504-4533-b759-edaa313a8e80","Type":"ContainerStarted","Data":"2e4236bbcb11ae87154761259513f66c9c8c1ddf4d254cda324a32772df506f6"}
Jan 27 14:47:55 crc kubenswrapper[4698]: I0127 14:47:55.058512 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-694cf4f878-t9jb8"
Jan 27 14:47:55 crc kubenswrapper[4698]: I0127 14:47:55.059555 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-7bfbd85685-ckqkx" event={"ID":"86a6bede-0b85-4f92-8e96-5c7c04e5e8dd","Type":"ContainerStarted","Data":"120991fff5a3fb5d4c22f1b9056ff3b8aff3478fca10df42a807a0525714b3a6"}
Jan 27 14:47:55 crc kubenswrapper[4698]: I0127 14:47:55.059594 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-7bfbd85685-ckqkx" event={"ID":"86a6bede-0b85-4f92-8e96-5c7c04e5e8dd","Type":"ContainerStarted","Data":"cb61c530444d1d322c5ae0759bd38dd6faef01cc1774d5ec41f704232b626ec3"}
Jan 27 14:47:55 crc kubenswrapper[4698]: I0127 14:47:55.059688 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-manager-7bfbd85685-ckqkx"
Jan 27 14:47:55 crc kubenswrapper[4698]: I0127 14:47:55.060575 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854498hc" event={"ID":"6bc80c1e-debd-4c6a-b45d-595c733af1ac","Type":"ContainerStarted","Data":"fced9ed1e4518a20d6302d32632e3faea14e05dba5200f16131b0c889ee9e55d"}
Jan 27 14:47:55 crc kubenswrapper[4698]: I0127 14:47:55.060762 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854498hc"
Jan 27 14:47:55 crc kubenswrapper[4698]: I0127 14:47:55.126295 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/infra-operator-controller-manager-694cf4f878-t9jb8" podStartSLOduration=32.09693369 podStartE2EDuration="36.126279274s" podCreationTimestamp="2026-01-27 14:47:19 +0000 UTC" firstStartedPulling="2026-01-27 14:47:50.665364778 +0000 UTC m=+1126.342142243" lastFinishedPulling="2026-01-27 14:47:54.694710362 +0000 UTC m=+1130.371487827" observedRunningTime="2026-01-27 14:47:55.122469503 +0000 UTC m=+1130.799246968" watchObservedRunningTime="2026-01-27 14:47:55.126279274 +0000 UTC
m=+1130.803056739" Jan 27 14:47:55 crc kubenswrapper[4698]: I0127 14:47:55.162010 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-manager-7bfbd85685-ckqkx" podStartSLOduration=35.161991903 podStartE2EDuration="35.161991903s" podCreationTimestamp="2026-01-27 14:47:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:47:55.152877533 +0000 UTC m=+1130.829655018" watchObservedRunningTime="2026-01-27 14:47:55.161991903 +0000 UTC m=+1130.838769368" Jan 27 14:47:55 crc kubenswrapper[4698]: I0127 14:47:55.183109 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854498hc" podStartSLOduration=31.153459547 podStartE2EDuration="35.183089477s" podCreationTimestamp="2026-01-27 14:47:20 +0000 UTC" firstStartedPulling="2026-01-27 14:47:50.665662456 +0000 UTC m=+1126.342439921" lastFinishedPulling="2026-01-27 14:47:54.695292386 +0000 UTC m=+1130.372069851" observedRunningTime="2026-01-27 14:47:55.181354481 +0000 UTC m=+1130.858131956" watchObservedRunningTime="2026-01-27 14:47:55.183089477 +0000 UTC m=+1130.859866942" Jan 27 14:47:56 crc kubenswrapper[4698]: I0127 14:47:56.067953 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-plttm" event={"ID":"68bcfa84-c19a-4686-b103-3164e0733af1","Type":"ContainerStarted","Data":"9a71ac7896b6df3945691f180ef5664ce1e7eb036b58ec51a9eb57bbe64086bf"} Jan 27 14:47:56 crc kubenswrapper[4698]: I0127 14:47:56.084526 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-plttm" podStartSLOduration=2.672655991 podStartE2EDuration="36.084505176s" podCreationTimestamp="2026-01-27 14:47:20 +0000 UTC" firstStartedPulling="2026-01-27 14:47:22.141486212 +0000 UTC m=+1097.818263677" lastFinishedPulling="2026-01-27 14:47:55.553335397 +0000 UTC m=+1131.230112862" observedRunningTime="2026-01-27 14:47:56.082490224 +0000 UTC m=+1131.759267689" watchObservedRunningTime="2026-01-27 14:47:56.084505176 +0000 UTC m=+1131.761282641" Jan 27 14:47:57 crc kubenswrapper[4698]: I0127 14:47:57.452717 4698 patch_prober.go:28] interesting pod/machine-config-daemon-ndrd6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 14:47:57 crc kubenswrapper[4698]: I0127 14:47:57.452823 4698 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ndrd6" podUID="3e403fc5-7005-474c-8c75-b7906b481677" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 14:47:59 crc kubenswrapper[4698]: I0127 14:47:59.097440 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-bm8c6" event={"ID":"ee3e8394-b329-49c2-bee1-eb0ba9d4f023","Type":"ContainerStarted","Data":"3d04428719db5c3370dcdf6c08df4656c2edea008afd536f12f073032eaa6811"} Jan 27 14:47:59 crc kubenswrapper[4698]: I0127 14:47:59.097986 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-bm8c6" Jan 27 14:47:59 crc kubenswrapper[4698]: I0127 14:47:59.099585 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-rwj7x" event={"ID":"84e6b7df-451a-421d-9128-a73ee95124ca","Type":"ContainerStarted","Data":"203b9661b151d3230e65369ffed38960c162ff2f4eac944ba3e81ddf97b4734e"} Jan 27 14:47:59 crc kubenswrapper[4698]: I0127 14:47:59.099853 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-rwj7x" Jan 27 14:47:59 crc kubenswrapper[4698]: I0127 14:47:59.101220 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-tv9vl" event={"ID":"55c2e67b-e60f-4e4f-8322-35cc46986b8c","Type":"ContainerStarted","Data":"69c05ccbff545a8b682116e96c91fd314b41113c390e2fc90075530aa9a5d3fe"} Jan 27 14:47:59 crc kubenswrapper[4698]: I0127 14:47:59.101678 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-tv9vl" Jan 27 14:47:59 crc kubenswrapper[4698]: I0127 14:47:59.120073 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-bm8c6" podStartSLOduration=2.555784089 podStartE2EDuration="39.120052062s" podCreationTimestamp="2026-01-27 14:47:20 +0000 UTC" firstStartedPulling="2026-01-27 14:47:22.097048774 +0000 UTC m=+1097.773826239" lastFinishedPulling="2026-01-27 14:47:58.661316737 +0000 UTC m=+1134.338094212" observedRunningTime="2026-01-27 14:47:59.117578518 +0000 UTC m=+1134.794356003" watchObservedRunningTime="2026-01-27 14:47:59.120052062 +0000 UTC m=+1134.796829527" Jan 27 14:47:59 crc kubenswrapper[4698]: I0127 14:47:59.146342 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-rwj7x" podStartSLOduration=3.576652048 podStartE2EDuration="40.146307122s" podCreationTimestamp="2026-01-27 14:47:19 +0000 UTC" firstStartedPulling="2026-01-27 14:47:22.091363775 +0000 UTC m=+1097.768141240" lastFinishedPulling="2026-01-27 14:47:58.661018849 +0000 UTC m=+1134.337796314" observedRunningTime="2026-01-27 14:47:59.138277441 +0000 UTC m=+1134.815054906" watchObservedRunningTime="2026-01-27 14:47:59.146307122 +0000 UTC m=+1134.823084587" Jan 27 14:47:59 crc kubenswrapper[4698]: I0127 14:47:59.167426 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-tv9vl" podStartSLOduration=2.990819162 podStartE2EDuration="40.167400937s" podCreationTimestamp="2026-01-27 14:47:19 +0000 UTC" firstStartedPulling="2026-01-27 14:47:21.485125552 +0000 UTC m=+1097.161903017" lastFinishedPulling="2026-01-27 14:47:58.661707327 +0000 UTC m=+1134.338484792" observedRunningTime="2026-01-27 14:47:59.16257154 +0000 UTC m=+1134.839349005" watchObservedRunningTime="2026-01-27 14:47:59.167400937 +0000 UTC m=+1134.844178402" Jan 27 14:48:00 crc kubenswrapper[4698]: I0127 14:48:00.108565 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-7bdb645866-6mfh4" event={"ID":"7bcb0020-f358-4d29-8fb1-78c62d473485","Type":"ContainerStarted","Data":"b56fa30ca1af8967fe0b8a3e64e9f74030c13789c0e4267c9422ddfec825ede3"} Jan 27 14:48:00 crc 
kubenswrapper[4698]: I0127 14:48:00.109513 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/nova-operator-controller-manager-7bdb645866-6mfh4"
Jan 27 14:48:00 crc kubenswrapper[4698]: I0127 14:48:00.130802 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/nova-operator-controller-manager-7bdb645866-6mfh4" podStartSLOduration=2.452447175 podStartE2EDuration="40.130779725s" podCreationTimestamp="2026-01-27 14:47:20 +0000 UTC" firstStartedPulling="2026-01-27 14:47:22.091400416 +0000 UTC m=+1097.768177881" lastFinishedPulling="2026-01-27 14:47:59.769732956 +0000 UTC m=+1135.446510431" observedRunningTime="2026-01-27 14:48:00.127791817 +0000 UTC m=+1135.804569282" watchObservedRunningTime="2026-01-27 14:48:00.130779725 +0000 UTC m=+1135.807557190"
Jan 27 14:48:00 crc kubenswrapper[4698]: I0127 14:48:00.233794 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-l7sxf"
Jan 27 14:48:00 crc kubenswrapper[4698]: I0127 14:48:00.285540 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-bfxhx"
Jan 27 14:48:00 crc kubenswrapper[4698]: I0127 14:48:00.362333 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-7b68z"
Jan 27 14:48:00 crc kubenswrapper[4698]: I0127 14:48:00.470444 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/barbican-operator-controller-manager-7f86f8796f-zjw77"
Jan 27 14:48:00 crc kubenswrapper[4698]: I0127 14:48:00.490099 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/cinder-operator-controller-manager-7478f7dbf9-zz49w"
Jan 27 14:48:00 crc kubenswrapper[4698]: I0127 14:48:00.555472 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-ppdfb"
Jan 27 14:48:00 crc kubenswrapper[4698]: I0127 14:48:00.637810 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-plttm"
Jan 27 14:48:00 crc kubenswrapper[4698]: I0127 14:48:00.639612 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-plttm"
Jan 27 14:48:00 crc kubenswrapper[4698]: I0127 14:48:00.670748 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-fq777"
Jan 27 14:48:00 crc kubenswrapper[4698]: I0127 14:48:00.691109 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-g9n9r"
Jan 27 14:48:00 crc kubenswrapper[4698]: I0127 14:48:00.719911 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/octavia-operator-controller-manager-5f4cd88d46-b5zrs"
Jan 27 14:48:00 crc kubenswrapper[4698]: I0127 14:48:00.733799 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-786cc"
Jan 27 14:48:00 crc kubenswrapper[4698]: I0127 14:48:00.769203 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-992bb"
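Note: these readiness transitions come from the kubelet running the HTTP probes declared in the container specs dumped earlier in this log (HTTPGet /readyz on port 8081, TimeoutSeconds:1, PeriodSeconds:10). A rough Go approximation of a single probe round; the pod IP below is a placeholder, and treating any 2xx/3xx status as success follows kubelet HTTPGet probe semantics:

// Approximation of one kubelet HTTP readiness-probe round, using the
// /readyz:8081 endpoint and 1s timeout from the container specs logged
// earlier. Any 2xx/3xx response counts as success.
package main

import (
	"fmt"
	"net/http"
	"time"
)

func probeOnce(url string) bool {
	client := &http.Client{Timeout: 1 * time.Second} // TimeoutSeconds:1
	resp, err := client.Get(url)
	if err != nil {
		return false // e.g. "connect: connection refused", as seen above
	}
	defer resp.Body.Close()
	return resp.StatusCode >= 200 && resp.StatusCode < 400
}

func main() {
	// Pod IP is a placeholder; path and port come from the logged spec.
	ready := probeOnce("http://10.0.0.1:8081/readyz")
	fmt.Println("ready:", ready)
}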
status="ready" pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-992bb" Jan 27 14:48:01 crc kubenswrapper[4698]: I0127 14:48:01.075097 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-5v2tj" Jan 27 14:48:01 crc kubenswrapper[4698]: I0127 14:48:01.197521 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-4chkq" Jan 27 14:48:01 crc kubenswrapper[4698]: I0127 14:48:01.340748 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/watcher-operator-controller-manager-65d56bd854-4kv98" Jan 27 14:48:03 crc kubenswrapper[4698]: I0127 14:48:03.147844 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-manager-7bfbd85685-ckqkx" Jan 27 14:48:06 crc kubenswrapper[4698]: I0127 14:48:06.040894 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-controller-manager-694cf4f878-t9jb8" Jan 27 14:48:06 crc kubenswrapper[4698]: I0127 14:48:06.646265 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854498hc" Jan 27 14:48:10 crc kubenswrapper[4698]: I0127 14:48:10.326937 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-tv9vl" Jan 27 14:48:10 crc kubenswrapper[4698]: I0127 14:48:10.504356 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-rwj7x" Jan 27 14:48:10 crc kubenswrapper[4698]: I0127 14:48:10.694247 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/nova-operator-controller-manager-7bdb645866-6mfh4" Jan 27 14:48:10 crc kubenswrapper[4698]: I0127 14:48:10.791228 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-bm8c6" Jan 27 14:48:27 crc kubenswrapper[4698]: I0127 14:48:27.451892 4698 patch_prober.go:28] interesting pod/machine-config-daemon-ndrd6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 14:48:27 crc kubenswrapper[4698]: I0127 14:48:27.452684 4698 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ndrd6" podUID="3e403fc5-7005-474c-8c75-b7906b481677" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 14:48:31 crc kubenswrapper[4698]: I0127 14:48:31.685529 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6c598958d5-bn4jw"] Jan 27 14:48:31 crc kubenswrapper[4698]: I0127 14:48:31.687171 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6c598958d5-bn4jw" Jan 27 14:48:31 crc kubenswrapper[4698]: I0127 14:48:31.692176 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dnsmasq-dns-dockercfg-xdlgj" Jan 27 14:48:31 crc kubenswrapper[4698]: I0127 14:48:31.692790 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openshift-service-ca.crt" Jan 27 14:48:31 crc kubenswrapper[4698]: I0127 14:48:31.692933 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns" Jan 27 14:48:31 crc kubenswrapper[4698]: I0127 14:48:31.693085 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"kube-root-ca.crt" Jan 27 14:48:31 crc kubenswrapper[4698]: I0127 14:48:31.700179 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6c598958d5-bn4jw"] Jan 27 14:48:31 crc kubenswrapper[4698]: I0127 14:48:31.738392 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7f9cbd7fdf-b8wg6"] Jan 27 14:48:31 crc kubenswrapper[4698]: I0127 14:48:31.739807 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7f9cbd7fdf-b8wg6" Jan 27 14:48:31 crc kubenswrapper[4698]: I0127 14:48:31.742460 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-svc" Jan 27 14:48:31 crc kubenswrapper[4698]: I0127 14:48:31.747449 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7f9cbd7fdf-b8wg6"] Jan 27 14:48:31 crc kubenswrapper[4698]: I0127 14:48:31.785387 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/91906e64-eedf-41fc-9ef5-21c06b269c3d-config\") pod \"dnsmasq-dns-6c598958d5-bn4jw\" (UID: \"91906e64-eedf-41fc-9ef5-21c06b269c3d\") " pod="openstack/dnsmasq-dns-6c598958d5-bn4jw" Jan 27 14:48:31 crc kubenswrapper[4698]: I0127 14:48:31.785449 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9n9gq\" (UniqueName: \"kubernetes.io/projected/91906e64-eedf-41fc-9ef5-21c06b269c3d-kube-api-access-9n9gq\") pod \"dnsmasq-dns-6c598958d5-bn4jw\" (UID: \"91906e64-eedf-41fc-9ef5-21c06b269c3d\") " pod="openstack/dnsmasq-dns-6c598958d5-bn4jw" Jan 27 14:48:31 crc kubenswrapper[4698]: I0127 14:48:31.887384 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b531c374-5d26-4959-adc6-f03b56783b0c-config\") pod \"dnsmasq-dns-7f9cbd7fdf-b8wg6\" (UID: \"b531c374-5d26-4959-adc6-f03b56783b0c\") " pod="openstack/dnsmasq-dns-7f9cbd7fdf-b8wg6" Jan 27 14:48:31 crc kubenswrapper[4698]: I0127 14:48:31.887455 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b531c374-5d26-4959-adc6-f03b56783b0c-dns-svc\") pod \"dnsmasq-dns-7f9cbd7fdf-b8wg6\" (UID: \"b531c374-5d26-4959-adc6-f03b56783b0c\") " pod="openstack/dnsmasq-dns-7f9cbd7fdf-b8wg6" Jan 27 14:48:31 crc kubenswrapper[4698]: I0127 14:48:31.887512 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qtgdx\" (UniqueName: \"kubernetes.io/projected/b531c374-5d26-4959-adc6-f03b56783b0c-kube-api-access-qtgdx\") pod \"dnsmasq-dns-7f9cbd7fdf-b8wg6\" (UID: \"b531c374-5d26-4959-adc6-f03b56783b0c\") " 
pod="openstack/dnsmasq-dns-7f9cbd7fdf-b8wg6" Jan 27 14:48:31 crc kubenswrapper[4698]: I0127 14:48:31.887714 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/91906e64-eedf-41fc-9ef5-21c06b269c3d-config\") pod \"dnsmasq-dns-6c598958d5-bn4jw\" (UID: \"91906e64-eedf-41fc-9ef5-21c06b269c3d\") " pod="openstack/dnsmasq-dns-6c598958d5-bn4jw" Jan 27 14:48:31 crc kubenswrapper[4698]: I0127 14:48:31.887806 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9n9gq\" (UniqueName: \"kubernetes.io/projected/91906e64-eedf-41fc-9ef5-21c06b269c3d-kube-api-access-9n9gq\") pod \"dnsmasq-dns-6c598958d5-bn4jw\" (UID: \"91906e64-eedf-41fc-9ef5-21c06b269c3d\") " pod="openstack/dnsmasq-dns-6c598958d5-bn4jw" Jan 27 14:48:31 crc kubenswrapper[4698]: I0127 14:48:31.889125 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/91906e64-eedf-41fc-9ef5-21c06b269c3d-config\") pod \"dnsmasq-dns-6c598958d5-bn4jw\" (UID: \"91906e64-eedf-41fc-9ef5-21c06b269c3d\") " pod="openstack/dnsmasq-dns-6c598958d5-bn4jw" Jan 27 14:48:31 crc kubenswrapper[4698]: I0127 14:48:31.909655 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9n9gq\" (UniqueName: \"kubernetes.io/projected/91906e64-eedf-41fc-9ef5-21c06b269c3d-kube-api-access-9n9gq\") pod \"dnsmasq-dns-6c598958d5-bn4jw\" (UID: \"91906e64-eedf-41fc-9ef5-21c06b269c3d\") " pod="openstack/dnsmasq-dns-6c598958d5-bn4jw" Jan 27 14:48:31 crc kubenswrapper[4698]: I0127 14:48:31.989097 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b531c374-5d26-4959-adc6-f03b56783b0c-config\") pod \"dnsmasq-dns-7f9cbd7fdf-b8wg6\" (UID: \"b531c374-5d26-4959-adc6-f03b56783b0c\") " pod="openstack/dnsmasq-dns-7f9cbd7fdf-b8wg6" Jan 27 14:48:31 crc kubenswrapper[4698]: I0127 14:48:31.989813 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b531c374-5d26-4959-adc6-f03b56783b0c-dns-svc\") pod \"dnsmasq-dns-7f9cbd7fdf-b8wg6\" (UID: \"b531c374-5d26-4959-adc6-f03b56783b0c\") " pod="openstack/dnsmasq-dns-7f9cbd7fdf-b8wg6" Jan 27 14:48:31 crc kubenswrapper[4698]: I0127 14:48:31.989971 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qtgdx\" (UniqueName: \"kubernetes.io/projected/b531c374-5d26-4959-adc6-f03b56783b0c-kube-api-access-qtgdx\") pod \"dnsmasq-dns-7f9cbd7fdf-b8wg6\" (UID: \"b531c374-5d26-4959-adc6-f03b56783b0c\") " pod="openstack/dnsmasq-dns-7f9cbd7fdf-b8wg6" Jan 27 14:48:31 crc kubenswrapper[4698]: I0127 14:48:31.990104 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b531c374-5d26-4959-adc6-f03b56783b0c-config\") pod \"dnsmasq-dns-7f9cbd7fdf-b8wg6\" (UID: \"b531c374-5d26-4959-adc6-f03b56783b0c\") " pod="openstack/dnsmasq-dns-7f9cbd7fdf-b8wg6" Jan 27 14:48:31 crc kubenswrapper[4698]: I0127 14:48:31.990691 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b531c374-5d26-4959-adc6-f03b56783b0c-dns-svc\") pod \"dnsmasq-dns-7f9cbd7fdf-b8wg6\" (UID: \"b531c374-5d26-4959-adc6-f03b56783b0c\") " pod="openstack/dnsmasq-dns-7f9cbd7fdf-b8wg6" Jan 27 14:48:32 crc kubenswrapper[4698]: I0127 14:48:32.012536 4698 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6c598958d5-bn4jw" Jan 27 14:48:32 crc kubenswrapper[4698]: I0127 14:48:32.013422 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qtgdx\" (UniqueName: \"kubernetes.io/projected/b531c374-5d26-4959-adc6-f03b56783b0c-kube-api-access-qtgdx\") pod \"dnsmasq-dns-7f9cbd7fdf-b8wg6\" (UID: \"b531c374-5d26-4959-adc6-f03b56783b0c\") " pod="openstack/dnsmasq-dns-7f9cbd7fdf-b8wg6" Jan 27 14:48:32 crc kubenswrapper[4698]: I0127 14:48:32.062953 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7f9cbd7fdf-b8wg6" Jan 27 14:48:32 crc kubenswrapper[4698]: I0127 14:48:32.481094 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6c598958d5-bn4jw"] Jan 27 14:48:32 crc kubenswrapper[4698]: I0127 14:48:32.531534 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7f9cbd7fdf-b8wg6"] Jan 27 14:48:33 crc kubenswrapper[4698]: I0127 14:48:33.338596 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6c598958d5-bn4jw" event={"ID":"91906e64-eedf-41fc-9ef5-21c06b269c3d","Type":"ContainerStarted","Data":"dfbbd1524129f7ece688d948c9ece5292dc48d57795a95331dbd86af3d3735d3"} Jan 27 14:48:33 crc kubenswrapper[4698]: I0127 14:48:33.342125 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7f9cbd7fdf-b8wg6" event={"ID":"b531c374-5d26-4959-adc6-f03b56783b0c","Type":"ContainerStarted","Data":"f5a8d29d436c5913ee9ede95064f6337a6c56daa5bd75c6329740ad5e1ea7b0c"} Jan 27 14:48:35 crc kubenswrapper[4698]: I0127 14:48:35.442874 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6c598958d5-bn4jw"] Jan 27 14:48:35 crc kubenswrapper[4698]: I0127 14:48:35.468719 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-64886f97d9-f2p4s"] Jan 27 14:48:35 crc kubenswrapper[4698]: I0127 14:48:35.471373 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-64886f97d9-f2p4s" Jan 27 14:48:35 crc kubenswrapper[4698]: I0127 14:48:35.488215 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-64886f97d9-f2p4s"] Jan 27 14:48:35 crc kubenswrapper[4698]: I0127 14:48:35.571246 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/64738940-4d7e-484c-92f9-d6a686fd2696-dns-svc\") pod \"dnsmasq-dns-64886f97d9-f2p4s\" (UID: \"64738940-4d7e-484c-92f9-d6a686fd2696\") " pod="openstack/dnsmasq-dns-64886f97d9-f2p4s" Jan 27 14:48:35 crc kubenswrapper[4698]: I0127 14:48:35.571315 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-twptp\" (UniqueName: \"kubernetes.io/projected/64738940-4d7e-484c-92f9-d6a686fd2696-kube-api-access-twptp\") pod \"dnsmasq-dns-64886f97d9-f2p4s\" (UID: \"64738940-4d7e-484c-92f9-d6a686fd2696\") " pod="openstack/dnsmasq-dns-64886f97d9-f2p4s" Jan 27 14:48:35 crc kubenswrapper[4698]: I0127 14:48:35.571422 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/64738940-4d7e-484c-92f9-d6a686fd2696-config\") pod \"dnsmasq-dns-64886f97d9-f2p4s\" (UID: \"64738940-4d7e-484c-92f9-d6a686fd2696\") " pod="openstack/dnsmasq-dns-64886f97d9-f2p4s" Jan 27 14:48:35 crc kubenswrapper[4698]: I0127 14:48:35.673468 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/64738940-4d7e-484c-92f9-d6a686fd2696-dns-svc\") pod \"dnsmasq-dns-64886f97d9-f2p4s\" (UID: \"64738940-4d7e-484c-92f9-d6a686fd2696\") " pod="openstack/dnsmasq-dns-64886f97d9-f2p4s" Jan 27 14:48:35 crc kubenswrapper[4698]: I0127 14:48:35.673557 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-twptp\" (UniqueName: \"kubernetes.io/projected/64738940-4d7e-484c-92f9-d6a686fd2696-kube-api-access-twptp\") pod \"dnsmasq-dns-64886f97d9-f2p4s\" (UID: \"64738940-4d7e-484c-92f9-d6a686fd2696\") " pod="openstack/dnsmasq-dns-64886f97d9-f2p4s" Jan 27 14:48:35 crc kubenswrapper[4698]: I0127 14:48:35.673665 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/64738940-4d7e-484c-92f9-d6a686fd2696-config\") pod \"dnsmasq-dns-64886f97d9-f2p4s\" (UID: \"64738940-4d7e-484c-92f9-d6a686fd2696\") " pod="openstack/dnsmasq-dns-64886f97d9-f2p4s" Jan 27 14:48:35 crc kubenswrapper[4698]: I0127 14:48:35.675017 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/64738940-4d7e-484c-92f9-d6a686fd2696-dns-svc\") pod \"dnsmasq-dns-64886f97d9-f2p4s\" (UID: \"64738940-4d7e-484c-92f9-d6a686fd2696\") " pod="openstack/dnsmasq-dns-64886f97d9-f2p4s" Jan 27 14:48:35 crc kubenswrapper[4698]: I0127 14:48:35.675260 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/64738940-4d7e-484c-92f9-d6a686fd2696-config\") pod \"dnsmasq-dns-64886f97d9-f2p4s\" (UID: \"64738940-4d7e-484c-92f9-d6a686fd2696\") " pod="openstack/dnsmasq-dns-64886f97d9-f2p4s" Jan 27 14:48:35 crc kubenswrapper[4698]: I0127 14:48:35.724312 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-twptp\" (UniqueName: 
\"kubernetes.io/projected/64738940-4d7e-484c-92f9-d6a686fd2696-kube-api-access-twptp\") pod \"dnsmasq-dns-64886f97d9-f2p4s\" (UID: \"64738940-4d7e-484c-92f9-d6a686fd2696\") " pod="openstack/dnsmasq-dns-64886f97d9-f2p4s" Jan 27 14:48:35 crc kubenswrapper[4698]: I0127 14:48:35.753432 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7f9cbd7fdf-b8wg6"] Jan 27 14:48:35 crc kubenswrapper[4698]: I0127 14:48:35.791350 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-59c95fdd89-9cn6z"] Jan 27 14:48:35 crc kubenswrapper[4698]: I0127 14:48:35.793236 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-59c95fdd89-9cn6z" Jan 27 14:48:35 crc kubenswrapper[4698]: I0127 14:48:35.798238 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-64886f97d9-f2p4s" Jan 27 14:48:35 crc kubenswrapper[4698]: I0127 14:48:35.806334 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-59c95fdd89-9cn6z"] Jan 27 14:48:35 crc kubenswrapper[4698]: I0127 14:48:35.881077 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2def0e4c-a384-453a-b67c-28004f50c9c6-config\") pod \"dnsmasq-dns-59c95fdd89-9cn6z\" (UID: \"2def0e4c-a384-453a-b67c-28004f50c9c6\") " pod="openstack/dnsmasq-dns-59c95fdd89-9cn6z" Jan 27 14:48:35 crc kubenswrapper[4698]: I0127 14:48:35.882542 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gjlx9\" (UniqueName: \"kubernetes.io/projected/2def0e4c-a384-453a-b67c-28004f50c9c6-kube-api-access-gjlx9\") pod \"dnsmasq-dns-59c95fdd89-9cn6z\" (UID: \"2def0e4c-a384-453a-b67c-28004f50c9c6\") " pod="openstack/dnsmasq-dns-59c95fdd89-9cn6z" Jan 27 14:48:35 crc kubenswrapper[4698]: I0127 14:48:35.882597 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2def0e4c-a384-453a-b67c-28004f50c9c6-dns-svc\") pod \"dnsmasq-dns-59c95fdd89-9cn6z\" (UID: \"2def0e4c-a384-453a-b67c-28004f50c9c6\") " pod="openstack/dnsmasq-dns-59c95fdd89-9cn6z" Jan 27 14:48:35 crc kubenswrapper[4698]: I0127 14:48:35.983490 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2def0e4c-a384-453a-b67c-28004f50c9c6-config\") pod \"dnsmasq-dns-59c95fdd89-9cn6z\" (UID: \"2def0e4c-a384-453a-b67c-28004f50c9c6\") " pod="openstack/dnsmasq-dns-59c95fdd89-9cn6z" Jan 27 14:48:35 crc kubenswrapper[4698]: I0127 14:48:35.983981 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gjlx9\" (UniqueName: \"kubernetes.io/projected/2def0e4c-a384-453a-b67c-28004f50c9c6-kube-api-access-gjlx9\") pod \"dnsmasq-dns-59c95fdd89-9cn6z\" (UID: \"2def0e4c-a384-453a-b67c-28004f50c9c6\") " pod="openstack/dnsmasq-dns-59c95fdd89-9cn6z" Jan 27 14:48:35 crc kubenswrapper[4698]: I0127 14:48:35.984131 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2def0e4c-a384-453a-b67c-28004f50c9c6-dns-svc\") pod \"dnsmasq-dns-59c95fdd89-9cn6z\" (UID: \"2def0e4c-a384-453a-b67c-28004f50c9c6\") " pod="openstack/dnsmasq-dns-59c95fdd89-9cn6z" Jan 27 14:48:35 crc kubenswrapper[4698]: I0127 14:48:35.984631 4698 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2def0e4c-a384-453a-b67c-28004f50c9c6-config\") pod \"dnsmasq-dns-59c95fdd89-9cn6z\" (UID: \"2def0e4c-a384-453a-b67c-28004f50c9c6\") " pod="openstack/dnsmasq-dns-59c95fdd89-9cn6z" Jan 27 14:48:35 crc kubenswrapper[4698]: I0127 14:48:35.985151 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2def0e4c-a384-453a-b67c-28004f50c9c6-dns-svc\") pod \"dnsmasq-dns-59c95fdd89-9cn6z\" (UID: \"2def0e4c-a384-453a-b67c-28004f50c9c6\") " pod="openstack/dnsmasq-dns-59c95fdd89-9cn6z" Jan 27 14:48:36 crc kubenswrapper[4698]: I0127 14:48:36.010761 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gjlx9\" (UniqueName: \"kubernetes.io/projected/2def0e4c-a384-453a-b67c-28004f50c9c6-kube-api-access-gjlx9\") pod \"dnsmasq-dns-59c95fdd89-9cn6z\" (UID: \"2def0e4c-a384-453a-b67c-28004f50c9c6\") " pod="openstack/dnsmasq-dns-59c95fdd89-9cn6z" Jan 27 14:48:36 crc kubenswrapper[4698]: I0127 14:48:36.070893 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-59c95fdd89-9cn6z"] Jan 27 14:48:36 crc kubenswrapper[4698]: I0127 14:48:36.071387 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-59c95fdd89-9cn6z" Jan 27 14:48:36 crc kubenswrapper[4698]: I0127 14:48:36.100745 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6699786569-xgz55"] Jan 27 14:48:36 crc kubenswrapper[4698]: I0127 14:48:36.109062 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6699786569-xgz55" Jan 27 14:48:36 crc kubenswrapper[4698]: I0127 14:48:36.122273 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6699786569-xgz55"] Jan 27 14:48:36 crc kubenswrapper[4698]: I0127 14:48:36.189625 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4e94ae22-f128-4a13-8c1e-7beadbfee471-dns-svc\") pod \"dnsmasq-dns-6699786569-xgz55\" (UID: \"4e94ae22-f128-4a13-8c1e-7beadbfee471\") " pod="openstack/dnsmasq-dns-6699786569-xgz55" Jan 27 14:48:36 crc kubenswrapper[4698]: I0127 14:48:36.189845 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4e94ae22-f128-4a13-8c1e-7beadbfee471-config\") pod \"dnsmasq-dns-6699786569-xgz55\" (UID: \"4e94ae22-f128-4a13-8c1e-7beadbfee471\") " pod="openstack/dnsmasq-dns-6699786569-xgz55" Jan 27 14:48:36 crc kubenswrapper[4698]: I0127 14:48:36.189907 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wlvzz\" (UniqueName: \"kubernetes.io/projected/4e94ae22-f128-4a13-8c1e-7beadbfee471-kube-api-access-wlvzz\") pod \"dnsmasq-dns-6699786569-xgz55\" (UID: \"4e94ae22-f128-4a13-8c1e-7beadbfee471\") " pod="openstack/dnsmasq-dns-6699786569-xgz55" Jan 27 14:48:36 crc kubenswrapper[4698]: I0127 14:48:36.290759 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4e94ae22-f128-4a13-8c1e-7beadbfee471-config\") pod \"dnsmasq-dns-6699786569-xgz55\" (UID: \"4e94ae22-f128-4a13-8c1e-7beadbfee471\") " pod="openstack/dnsmasq-dns-6699786569-xgz55" Jan 27 14:48:36 crc kubenswrapper[4698]: I0127 14:48:36.290802 4698 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-wlvzz\" (UniqueName: \"kubernetes.io/projected/4e94ae22-f128-4a13-8c1e-7beadbfee471-kube-api-access-wlvzz\") pod \"dnsmasq-dns-6699786569-xgz55\" (UID: \"4e94ae22-f128-4a13-8c1e-7beadbfee471\") " pod="openstack/dnsmasq-dns-6699786569-xgz55" Jan 27 14:48:36 crc kubenswrapper[4698]: I0127 14:48:36.290879 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4e94ae22-f128-4a13-8c1e-7beadbfee471-dns-svc\") pod \"dnsmasq-dns-6699786569-xgz55\" (UID: \"4e94ae22-f128-4a13-8c1e-7beadbfee471\") " pod="openstack/dnsmasq-dns-6699786569-xgz55" Jan 27 14:48:36 crc kubenswrapper[4698]: I0127 14:48:36.291690 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4e94ae22-f128-4a13-8c1e-7beadbfee471-dns-svc\") pod \"dnsmasq-dns-6699786569-xgz55\" (UID: \"4e94ae22-f128-4a13-8c1e-7beadbfee471\") " pod="openstack/dnsmasq-dns-6699786569-xgz55" Jan 27 14:48:36 crc kubenswrapper[4698]: I0127 14:48:36.292215 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4e94ae22-f128-4a13-8c1e-7beadbfee471-config\") pod \"dnsmasq-dns-6699786569-xgz55\" (UID: \"4e94ae22-f128-4a13-8c1e-7beadbfee471\") " pod="openstack/dnsmasq-dns-6699786569-xgz55" Jan 27 14:48:36 crc kubenswrapper[4698]: I0127 14:48:36.317461 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wlvzz\" (UniqueName: \"kubernetes.io/projected/4e94ae22-f128-4a13-8c1e-7beadbfee471-kube-api-access-wlvzz\") pod \"dnsmasq-dns-6699786569-xgz55\" (UID: \"4e94ae22-f128-4a13-8c1e-7beadbfee471\") " pod="openstack/dnsmasq-dns-6699786569-xgz55" Jan 27 14:48:36 crc kubenswrapper[4698]: I0127 14:48:36.432045 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6699786569-xgz55" Jan 27 14:48:36 crc kubenswrapper[4698]: I0127 14:48:36.627534 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Jan 27 14:48:36 crc kubenswrapper[4698]: I0127 14:48:36.628825 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 27 14:48:36 crc kubenswrapper[4698]: I0127 14:48:36.633212 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Jan 27 14:48:36 crc kubenswrapper[4698]: I0127 14:48:36.633239 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Jan 27 14:48:36 crc kubenswrapper[4698]: I0127 14:48:36.633370 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Jan 27 14:48:36 crc kubenswrapper[4698]: I0127 14:48:36.633551 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-m8gvx" Jan 27 14:48:36 crc kubenswrapper[4698]: I0127 14:48:36.639103 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Jan 27 14:48:36 crc kubenswrapper[4698]: I0127 14:48:36.640004 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Jan 27 14:48:36 crc kubenswrapper[4698]: I0127 14:48:36.640004 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Jan 27 14:48:36 crc kubenswrapper[4698]: I0127 14:48:36.645808 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 27 14:48:36 crc kubenswrapper[4698]: I0127 14:48:36.696903 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tvc26\" (UniqueName: \"kubernetes.io/projected/c686e168-f607-4b7f-a81d-f33ac8bdf513-kube-api-access-tvc26\") pod \"rabbitmq-server-0\" (UID: \"c686e168-f607-4b7f-a81d-f33ac8bdf513\") " pod="openstack/rabbitmq-server-0" Jan 27 14:48:36 crc kubenswrapper[4698]: I0127 14:48:36.696963 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/c686e168-f607-4b7f-a81d-f33ac8bdf513-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"c686e168-f607-4b7f-a81d-f33ac8bdf513\") " pod="openstack/rabbitmq-server-0" Jan 27 14:48:36 crc kubenswrapper[4698]: I0127 14:48:36.696983 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/c686e168-f607-4b7f-a81d-f33ac8bdf513-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"c686e168-f607-4b7f-a81d-f33ac8bdf513\") " pod="openstack/rabbitmq-server-0" Jan 27 14:48:36 crc kubenswrapper[4698]: I0127 14:48:36.697033 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/c686e168-f607-4b7f-a81d-f33ac8bdf513-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"c686e168-f607-4b7f-a81d-f33ac8bdf513\") " pod="openstack/rabbitmq-server-0" Jan 27 14:48:36 crc kubenswrapper[4698]: I0127 14:48:36.697162 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/c686e168-f607-4b7f-a81d-f33ac8bdf513-pod-info\") pod \"rabbitmq-server-0\" (UID: \"c686e168-f607-4b7f-a81d-f33ac8bdf513\") " pod="openstack/rabbitmq-server-0" Jan 27 14:48:36 crc kubenswrapper[4698]: I0127 14:48:36.697195 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/configmap/c686e168-f607-4b7f-a81d-f33ac8bdf513-config-data\") pod \"rabbitmq-server-0\" (UID: \"c686e168-f607-4b7f-a81d-f33ac8bdf513\") " pod="openstack/rabbitmq-server-0" Jan 27 14:48:36 crc kubenswrapper[4698]: I0127 14:48:36.697225 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/c686e168-f607-4b7f-a81d-f33ac8bdf513-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"c686e168-f607-4b7f-a81d-f33ac8bdf513\") " pod="openstack/rabbitmq-server-0" Jan 27 14:48:36 crc kubenswrapper[4698]: I0127 14:48:36.697250 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"rabbitmq-server-0\" (UID: \"c686e168-f607-4b7f-a81d-f33ac8bdf513\") " pod="openstack/rabbitmq-server-0" Jan 27 14:48:36 crc kubenswrapper[4698]: I0127 14:48:36.697604 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/c686e168-f607-4b7f-a81d-f33ac8bdf513-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"c686e168-f607-4b7f-a81d-f33ac8bdf513\") " pod="openstack/rabbitmq-server-0" Jan 27 14:48:36 crc kubenswrapper[4698]: I0127 14:48:36.697692 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/c686e168-f607-4b7f-a81d-f33ac8bdf513-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"c686e168-f607-4b7f-a81d-f33ac8bdf513\") " pod="openstack/rabbitmq-server-0" Jan 27 14:48:36 crc kubenswrapper[4698]: I0127 14:48:36.697782 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/c686e168-f607-4b7f-a81d-f33ac8bdf513-server-conf\") pod \"rabbitmq-server-0\" (UID: \"c686e168-f607-4b7f-a81d-f33ac8bdf513\") " pod="openstack/rabbitmq-server-0" Jan 27 14:48:36 crc kubenswrapper[4698]: I0127 14:48:36.798815 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c686e168-f607-4b7f-a81d-f33ac8bdf513-config-data\") pod \"rabbitmq-server-0\" (UID: \"c686e168-f607-4b7f-a81d-f33ac8bdf513\") " pod="openstack/rabbitmq-server-0" Jan 27 14:48:36 crc kubenswrapper[4698]: I0127 14:48:36.798888 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/c686e168-f607-4b7f-a81d-f33ac8bdf513-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"c686e168-f607-4b7f-a81d-f33ac8bdf513\") " pod="openstack/rabbitmq-server-0" Jan 27 14:48:36 crc kubenswrapper[4698]: I0127 14:48:36.798915 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"rabbitmq-server-0\" (UID: \"c686e168-f607-4b7f-a81d-f33ac8bdf513\") " pod="openstack/rabbitmq-server-0" Jan 27 14:48:36 crc kubenswrapper[4698]: I0127 14:48:36.798938 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/c686e168-f607-4b7f-a81d-f33ac8bdf513-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"c686e168-f607-4b7f-a81d-f33ac8bdf513\") " 
pod="openstack/rabbitmq-server-0" Jan 27 14:48:36 crc kubenswrapper[4698]: I0127 14:48:36.798963 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/c686e168-f607-4b7f-a81d-f33ac8bdf513-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"c686e168-f607-4b7f-a81d-f33ac8bdf513\") " pod="openstack/rabbitmq-server-0" Jan 27 14:48:36 crc kubenswrapper[4698]: I0127 14:48:36.798997 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/c686e168-f607-4b7f-a81d-f33ac8bdf513-server-conf\") pod \"rabbitmq-server-0\" (UID: \"c686e168-f607-4b7f-a81d-f33ac8bdf513\") " pod="openstack/rabbitmq-server-0" Jan 27 14:48:36 crc kubenswrapper[4698]: I0127 14:48:36.799027 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tvc26\" (UniqueName: \"kubernetes.io/projected/c686e168-f607-4b7f-a81d-f33ac8bdf513-kube-api-access-tvc26\") pod \"rabbitmq-server-0\" (UID: \"c686e168-f607-4b7f-a81d-f33ac8bdf513\") " pod="openstack/rabbitmq-server-0" Jan 27 14:48:36 crc kubenswrapper[4698]: I0127 14:48:36.799052 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/c686e168-f607-4b7f-a81d-f33ac8bdf513-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"c686e168-f607-4b7f-a81d-f33ac8bdf513\") " pod="openstack/rabbitmq-server-0" Jan 27 14:48:36 crc kubenswrapper[4698]: I0127 14:48:36.799076 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/c686e168-f607-4b7f-a81d-f33ac8bdf513-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"c686e168-f607-4b7f-a81d-f33ac8bdf513\") " pod="openstack/rabbitmq-server-0" Jan 27 14:48:36 crc kubenswrapper[4698]: I0127 14:48:36.799122 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/c686e168-f607-4b7f-a81d-f33ac8bdf513-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"c686e168-f607-4b7f-a81d-f33ac8bdf513\") " pod="openstack/rabbitmq-server-0" Jan 27 14:48:36 crc kubenswrapper[4698]: I0127 14:48:36.799156 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/c686e168-f607-4b7f-a81d-f33ac8bdf513-pod-info\") pod \"rabbitmq-server-0\" (UID: \"c686e168-f607-4b7f-a81d-f33ac8bdf513\") " pod="openstack/rabbitmq-server-0" Jan 27 14:48:36 crc kubenswrapper[4698]: I0127 14:48:36.799860 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c686e168-f607-4b7f-a81d-f33ac8bdf513-config-data\") pod \"rabbitmq-server-0\" (UID: \"c686e168-f607-4b7f-a81d-f33ac8bdf513\") " pod="openstack/rabbitmq-server-0" Jan 27 14:48:36 crc kubenswrapper[4698]: I0127 14:48:36.800601 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/c686e168-f607-4b7f-a81d-f33ac8bdf513-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"c686e168-f607-4b7f-a81d-f33ac8bdf513\") " pod="openstack/rabbitmq-server-0" Jan 27 14:48:36 crc kubenswrapper[4698]: I0127 14:48:36.801329 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: 
\"kubernetes.io/configmap/c686e168-f607-4b7f-a81d-f33ac8bdf513-server-conf\") pod \"rabbitmq-server-0\" (UID: \"c686e168-f607-4b7f-a81d-f33ac8bdf513\") " pod="openstack/rabbitmq-server-0" Jan 27 14:48:36 crc kubenswrapper[4698]: I0127 14:48:36.801784 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/c686e168-f607-4b7f-a81d-f33ac8bdf513-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"c686e168-f607-4b7f-a81d-f33ac8bdf513\") " pod="openstack/rabbitmq-server-0" Jan 27 14:48:36 crc kubenswrapper[4698]: I0127 14:48:36.802173 4698 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"rabbitmq-server-0\" (UID: \"c686e168-f607-4b7f-a81d-f33ac8bdf513\") device mount path \"/mnt/openstack/pv04\"" pod="openstack/rabbitmq-server-0" Jan 27 14:48:36 crc kubenswrapper[4698]: I0127 14:48:36.805464 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/c686e168-f607-4b7f-a81d-f33ac8bdf513-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"c686e168-f607-4b7f-a81d-f33ac8bdf513\") " pod="openstack/rabbitmq-server-0" Jan 27 14:48:36 crc kubenswrapper[4698]: I0127 14:48:36.811385 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/c686e168-f607-4b7f-a81d-f33ac8bdf513-pod-info\") pod \"rabbitmq-server-0\" (UID: \"c686e168-f607-4b7f-a81d-f33ac8bdf513\") " pod="openstack/rabbitmq-server-0" Jan 27 14:48:36 crc kubenswrapper[4698]: I0127 14:48:36.837245 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/c686e168-f607-4b7f-a81d-f33ac8bdf513-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"c686e168-f607-4b7f-a81d-f33ac8bdf513\") " pod="openstack/rabbitmq-server-0" Jan 27 14:48:36 crc kubenswrapper[4698]: I0127 14:48:36.837747 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/c686e168-f607-4b7f-a81d-f33ac8bdf513-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"c686e168-f607-4b7f-a81d-f33ac8bdf513\") " pod="openstack/rabbitmq-server-0" Jan 27 14:48:36 crc kubenswrapper[4698]: I0127 14:48:36.838189 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/c686e168-f607-4b7f-a81d-f33ac8bdf513-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"c686e168-f607-4b7f-a81d-f33ac8bdf513\") " pod="openstack/rabbitmq-server-0" Jan 27 14:48:36 crc kubenswrapper[4698]: I0127 14:48:36.852508 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tvc26\" (UniqueName: \"kubernetes.io/projected/c686e168-f607-4b7f-a81d-f33ac8bdf513-kube-api-access-tvc26\") pod \"rabbitmq-server-0\" (UID: \"c686e168-f607-4b7f-a81d-f33ac8bdf513\") " pod="openstack/rabbitmq-server-0" Jan 27 14:48:36 crc kubenswrapper[4698]: I0127 14:48:36.870965 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"rabbitmq-server-0\" (UID: \"c686e168-f607-4b7f-a81d-f33ac8bdf513\") " pod="openstack/rabbitmq-server-0" Jan 27 14:48:36 crc kubenswrapper[4698]: I0127 14:48:36.919773 4698 kubelet.go:2421] "SyncLoop ADD" 
source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 27 14:48:36 crc kubenswrapper[4698]: I0127 14:48:36.921060 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 27 14:48:36 crc kubenswrapper[4698]: I0127 14:48:36.934763 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Jan 27 14:48:36 crc kubenswrapper[4698]: I0127 14:48:36.935061 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Jan 27 14:48:36 crc kubenswrapper[4698]: I0127 14:48:36.935234 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Jan 27 14:48:36 crc kubenswrapper[4698]: I0127 14:48:36.935393 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Jan 27 14:48:36 crc kubenswrapper[4698]: I0127 14:48:36.935519 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Jan 27 14:48:36 crc kubenswrapper[4698]: I0127 14:48:36.935726 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-blqsv" Jan 27 14:48:36 crc kubenswrapper[4698]: I0127 14:48:36.935884 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Jan 27 14:48:36 crc kubenswrapper[4698]: I0127 14:48:36.946164 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 27 14:48:36 crc kubenswrapper[4698]: I0127 14:48:36.965094 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 27 14:48:37 crc kubenswrapper[4698]: I0127 14:48:37.008809 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8bgjv\" (UniqueName: \"kubernetes.io/projected/764b6b7b-3664-40e6-a24b-dc0f9db827db-kube-api-access-8bgjv\") pod \"rabbitmq-cell1-server-0\" (UID: \"764b6b7b-3664-40e6-a24b-dc0f9db827db\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 14:48:37 crc kubenswrapper[4698]: I0127 14:48:37.009122 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/764b6b7b-3664-40e6-a24b-dc0f9db827db-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"764b6b7b-3664-40e6-a24b-dc0f9db827db\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 14:48:37 crc kubenswrapper[4698]: I0127 14:48:37.009188 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"764b6b7b-3664-40e6-a24b-dc0f9db827db\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 14:48:37 crc kubenswrapper[4698]: I0127 14:48:37.009208 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/764b6b7b-3664-40e6-a24b-dc0f9db827db-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"764b6b7b-3664-40e6-a24b-dc0f9db827db\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 14:48:37 crc kubenswrapper[4698]: I0127 14:48:37.009254 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/764b6b7b-3664-40e6-a24b-dc0f9db827db-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"764b6b7b-3664-40e6-a24b-dc0f9db827db\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 14:48:37 crc kubenswrapper[4698]: I0127 14:48:37.009301 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/764b6b7b-3664-40e6-a24b-dc0f9db827db-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"764b6b7b-3664-40e6-a24b-dc0f9db827db\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 14:48:37 crc kubenswrapper[4698]: I0127 14:48:37.009320 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/764b6b7b-3664-40e6-a24b-dc0f9db827db-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"764b6b7b-3664-40e6-a24b-dc0f9db827db\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 14:48:37 crc kubenswrapper[4698]: I0127 14:48:37.009335 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/764b6b7b-3664-40e6-a24b-dc0f9db827db-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"764b6b7b-3664-40e6-a24b-dc0f9db827db\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 14:48:37 crc kubenswrapper[4698]: I0127 14:48:37.009359 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/764b6b7b-3664-40e6-a24b-dc0f9db827db-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"764b6b7b-3664-40e6-a24b-dc0f9db827db\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 14:48:37 crc kubenswrapper[4698]: I0127 14:48:37.009376 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/764b6b7b-3664-40e6-a24b-dc0f9db827db-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"764b6b7b-3664-40e6-a24b-dc0f9db827db\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 14:48:37 crc kubenswrapper[4698]: I0127 14:48:37.009392 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/764b6b7b-3664-40e6-a24b-dc0f9db827db-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"764b6b7b-3664-40e6-a24b-dc0f9db827db\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 14:48:37 crc kubenswrapper[4698]: I0127 14:48:37.111102 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/764b6b7b-3664-40e6-a24b-dc0f9db827db-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"764b6b7b-3664-40e6-a24b-dc0f9db827db\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 14:48:37 crc kubenswrapper[4698]: I0127 14:48:37.111186 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/764b6b7b-3664-40e6-a24b-dc0f9db827db-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"764b6b7b-3664-40e6-a24b-dc0f9db827db\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 14:48:37 crc kubenswrapper[4698]: I0127 14:48:37.111215 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/configmap/764b6b7b-3664-40e6-a24b-dc0f9db827db-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"764b6b7b-3664-40e6-a24b-dc0f9db827db\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 14:48:37 crc kubenswrapper[4698]: I0127 14:48:37.111235 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/764b6b7b-3664-40e6-a24b-dc0f9db827db-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"764b6b7b-3664-40e6-a24b-dc0f9db827db\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 14:48:37 crc kubenswrapper[4698]: I0127 14:48:37.111273 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/764b6b7b-3664-40e6-a24b-dc0f9db827db-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"764b6b7b-3664-40e6-a24b-dc0f9db827db\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 14:48:37 crc kubenswrapper[4698]: I0127 14:48:37.111299 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/764b6b7b-3664-40e6-a24b-dc0f9db827db-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"764b6b7b-3664-40e6-a24b-dc0f9db827db\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 14:48:37 crc kubenswrapper[4698]: I0127 14:48:37.111321 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/764b6b7b-3664-40e6-a24b-dc0f9db827db-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"764b6b7b-3664-40e6-a24b-dc0f9db827db\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 14:48:37 crc kubenswrapper[4698]: I0127 14:48:37.111351 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8bgjv\" (UniqueName: \"kubernetes.io/projected/764b6b7b-3664-40e6-a24b-dc0f9db827db-kube-api-access-8bgjv\") pod \"rabbitmq-cell1-server-0\" (UID: \"764b6b7b-3664-40e6-a24b-dc0f9db827db\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 14:48:37 crc kubenswrapper[4698]: I0127 14:48:37.111368 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/764b6b7b-3664-40e6-a24b-dc0f9db827db-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"764b6b7b-3664-40e6-a24b-dc0f9db827db\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 14:48:37 crc kubenswrapper[4698]: I0127 14:48:37.111406 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"764b6b7b-3664-40e6-a24b-dc0f9db827db\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 14:48:37 crc kubenswrapper[4698]: I0127 14:48:37.111425 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/764b6b7b-3664-40e6-a24b-dc0f9db827db-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"764b6b7b-3664-40e6-a24b-dc0f9db827db\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 14:48:37 crc kubenswrapper[4698]: I0127 14:48:37.111552 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/764b6b7b-3664-40e6-a24b-dc0f9db827db-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: 
\"764b6b7b-3664-40e6-a24b-dc0f9db827db\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 14:48:37 crc kubenswrapper[4698]: I0127 14:48:37.111992 4698 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"764b6b7b-3664-40e6-a24b-dc0f9db827db\") device mount path \"/mnt/openstack/pv01\"" pod="openstack/rabbitmq-cell1-server-0" Jan 27 14:48:37 crc kubenswrapper[4698]: I0127 14:48:37.114458 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/764b6b7b-3664-40e6-a24b-dc0f9db827db-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"764b6b7b-3664-40e6-a24b-dc0f9db827db\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 14:48:37 crc kubenswrapper[4698]: I0127 14:48:37.114536 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/764b6b7b-3664-40e6-a24b-dc0f9db827db-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"764b6b7b-3664-40e6-a24b-dc0f9db827db\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 14:48:37 crc kubenswrapper[4698]: I0127 14:48:37.115193 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/764b6b7b-3664-40e6-a24b-dc0f9db827db-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"764b6b7b-3664-40e6-a24b-dc0f9db827db\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 14:48:37 crc kubenswrapper[4698]: I0127 14:48:37.115357 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/764b6b7b-3664-40e6-a24b-dc0f9db827db-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"764b6b7b-3664-40e6-a24b-dc0f9db827db\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 14:48:37 crc kubenswrapper[4698]: I0127 14:48:37.116931 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/764b6b7b-3664-40e6-a24b-dc0f9db827db-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"764b6b7b-3664-40e6-a24b-dc0f9db827db\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 14:48:37 crc kubenswrapper[4698]: I0127 14:48:37.117070 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/764b6b7b-3664-40e6-a24b-dc0f9db827db-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"764b6b7b-3664-40e6-a24b-dc0f9db827db\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 14:48:37 crc kubenswrapper[4698]: I0127 14:48:37.118378 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/764b6b7b-3664-40e6-a24b-dc0f9db827db-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"764b6b7b-3664-40e6-a24b-dc0f9db827db\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 14:48:37 crc kubenswrapper[4698]: I0127 14:48:37.119926 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/764b6b7b-3664-40e6-a24b-dc0f9db827db-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"764b6b7b-3664-40e6-a24b-dc0f9db827db\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 14:48:37 crc kubenswrapper[4698]: I0127 14:48:37.130960 4698 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-8bgjv\" (UniqueName: \"kubernetes.io/projected/764b6b7b-3664-40e6-a24b-dc0f9db827db-kube-api-access-8bgjv\") pod \"rabbitmq-cell1-server-0\" (UID: \"764b6b7b-3664-40e6-a24b-dc0f9db827db\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 14:48:37 crc kubenswrapper[4698]: I0127 14:48:37.137479 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"764b6b7b-3664-40e6-a24b-dc0f9db827db\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 14:48:37 crc kubenswrapper[4698]: I0127 14:48:37.224947 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-notifications-server-0"] Jan 27 14:48:37 crc kubenswrapper[4698]: I0127 14:48:37.226931 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-notifications-server-0" Jan 27 14:48:37 crc kubenswrapper[4698]: I0127 14:48:37.229202 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-notifications-erlang-cookie" Jan 27 14:48:37 crc kubenswrapper[4698]: I0127 14:48:37.237177 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-notifications-server-conf" Jan 27 14:48:37 crc kubenswrapper[4698]: I0127 14:48:37.237361 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-notifications-default-user" Jan 27 14:48:37 crc kubenswrapper[4698]: I0127 14:48:37.237428 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-notifications-config-data" Jan 27 14:48:37 crc kubenswrapper[4698]: I0127 14:48:37.237459 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-notifications-plugins-conf" Jan 27 14:48:37 crc kubenswrapper[4698]: I0127 14:48:37.237576 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-notifications-server-dockercfg-c4q6n" Jan 27 14:48:37 crc kubenswrapper[4698]: I0127 14:48:37.237906 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-notifications-svc" Jan 27 14:48:37 crc kubenswrapper[4698]: I0127 14:48:37.241549 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-notifications-server-0"] Jan 27 14:48:37 crc kubenswrapper[4698]: I0127 14:48:37.292985 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 27 14:48:37 crc kubenswrapper[4698]: I0127 14:48:37.313586 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"rabbitmq-notifications-server-0\" (UID: \"5d6f607c-3a31-4135-9eb4-3193e722d112\") " pod="openstack/rabbitmq-notifications-server-0" Jan 27 14:48:37 crc kubenswrapper[4698]: I0127 14:48:37.313683 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/5d6f607c-3a31-4135-9eb4-3193e722d112-server-conf\") pod \"rabbitmq-notifications-server-0\" (UID: \"5d6f607c-3a31-4135-9eb4-3193e722d112\") " pod="openstack/rabbitmq-notifications-server-0" Jan 27 14:48:37 crc kubenswrapper[4698]: I0127 14:48:37.313737 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/5d6f607c-3a31-4135-9eb4-3193e722d112-config-data\") pod \"rabbitmq-notifications-server-0\" (UID: \"5d6f607c-3a31-4135-9eb4-3193e722d112\") " pod="openstack/rabbitmq-notifications-server-0" Jan 27 14:48:37 crc kubenswrapper[4698]: I0127 14:48:37.313760 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vwhnd\" (UniqueName: \"kubernetes.io/projected/5d6f607c-3a31-4135-9eb4-3193e722d112-kube-api-access-vwhnd\") pod \"rabbitmq-notifications-server-0\" (UID: \"5d6f607c-3a31-4135-9eb4-3193e722d112\") " pod="openstack/rabbitmq-notifications-server-0" Jan 27 14:48:37 crc kubenswrapper[4698]: I0127 14:48:37.313794 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/5d6f607c-3a31-4135-9eb4-3193e722d112-rabbitmq-plugins\") pod \"rabbitmq-notifications-server-0\" (UID: \"5d6f607c-3a31-4135-9eb4-3193e722d112\") " pod="openstack/rabbitmq-notifications-server-0" Jan 27 14:48:37 crc kubenswrapper[4698]: I0127 14:48:37.313816 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/5d6f607c-3a31-4135-9eb4-3193e722d112-rabbitmq-tls\") pod \"rabbitmq-notifications-server-0\" (UID: \"5d6f607c-3a31-4135-9eb4-3193e722d112\") " pod="openstack/rabbitmq-notifications-server-0" Jan 27 14:48:37 crc kubenswrapper[4698]: I0127 14:48:37.313855 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/5d6f607c-3a31-4135-9eb4-3193e722d112-pod-info\") pod \"rabbitmq-notifications-server-0\" (UID: \"5d6f607c-3a31-4135-9eb4-3193e722d112\") " pod="openstack/rabbitmq-notifications-server-0" Jan 27 14:48:37 crc kubenswrapper[4698]: I0127 14:48:37.313873 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/5d6f607c-3a31-4135-9eb4-3193e722d112-rabbitmq-confd\") pod \"rabbitmq-notifications-server-0\" (UID: \"5d6f607c-3a31-4135-9eb4-3193e722d112\") " pod="openstack/rabbitmq-notifications-server-0" Jan 27 14:48:37 crc kubenswrapper[4698]: I0127 14:48:37.313893 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: 
\"kubernetes.io/configmap/5d6f607c-3a31-4135-9eb4-3193e722d112-plugins-conf\") pod \"rabbitmq-notifications-server-0\" (UID: \"5d6f607c-3a31-4135-9eb4-3193e722d112\") " pod="openstack/rabbitmq-notifications-server-0" Jan 27 14:48:37 crc kubenswrapper[4698]: I0127 14:48:37.313925 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/5d6f607c-3a31-4135-9eb4-3193e722d112-erlang-cookie-secret\") pod \"rabbitmq-notifications-server-0\" (UID: \"5d6f607c-3a31-4135-9eb4-3193e722d112\") " pod="openstack/rabbitmq-notifications-server-0" Jan 27 14:48:37 crc kubenswrapper[4698]: I0127 14:48:37.313948 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/5d6f607c-3a31-4135-9eb4-3193e722d112-rabbitmq-erlang-cookie\") pod \"rabbitmq-notifications-server-0\" (UID: \"5d6f607c-3a31-4135-9eb4-3193e722d112\") " pod="openstack/rabbitmq-notifications-server-0" Jan 27 14:48:37 crc kubenswrapper[4698]: I0127 14:48:37.416679 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"rabbitmq-notifications-server-0\" (UID: \"5d6f607c-3a31-4135-9eb4-3193e722d112\") " pod="openstack/rabbitmq-notifications-server-0" Jan 27 14:48:37 crc kubenswrapper[4698]: I0127 14:48:37.416896 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/5d6f607c-3a31-4135-9eb4-3193e722d112-server-conf\") pod \"rabbitmq-notifications-server-0\" (UID: \"5d6f607c-3a31-4135-9eb4-3193e722d112\") " pod="openstack/rabbitmq-notifications-server-0" Jan 27 14:48:37 crc kubenswrapper[4698]: I0127 14:48:37.416930 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/5d6f607c-3a31-4135-9eb4-3193e722d112-config-data\") pod \"rabbitmq-notifications-server-0\" (UID: \"5d6f607c-3a31-4135-9eb4-3193e722d112\") " pod="openstack/rabbitmq-notifications-server-0" Jan 27 14:48:37 crc kubenswrapper[4698]: I0127 14:48:37.416952 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vwhnd\" (UniqueName: \"kubernetes.io/projected/5d6f607c-3a31-4135-9eb4-3193e722d112-kube-api-access-vwhnd\") pod \"rabbitmq-notifications-server-0\" (UID: \"5d6f607c-3a31-4135-9eb4-3193e722d112\") " pod="openstack/rabbitmq-notifications-server-0" Jan 27 14:48:37 crc kubenswrapper[4698]: I0127 14:48:37.416999 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/5d6f607c-3a31-4135-9eb4-3193e722d112-rabbitmq-plugins\") pod \"rabbitmq-notifications-server-0\" (UID: \"5d6f607c-3a31-4135-9eb4-3193e722d112\") " pod="openstack/rabbitmq-notifications-server-0" Jan 27 14:48:37 crc kubenswrapper[4698]: I0127 14:48:37.417018 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/5d6f607c-3a31-4135-9eb4-3193e722d112-rabbitmq-tls\") pod \"rabbitmq-notifications-server-0\" (UID: \"5d6f607c-3a31-4135-9eb4-3193e722d112\") " pod="openstack/rabbitmq-notifications-server-0" Jan 27 14:48:37 crc kubenswrapper[4698]: I0127 14:48:37.417081 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/5d6f607c-3a31-4135-9eb4-3193e722d112-pod-info\") pod \"rabbitmq-notifications-server-0\" (UID: \"5d6f607c-3a31-4135-9eb4-3193e722d112\") " pod="openstack/rabbitmq-notifications-server-0" Jan 27 14:48:37 crc kubenswrapper[4698]: I0127 14:48:37.417113 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/5d6f607c-3a31-4135-9eb4-3193e722d112-rabbitmq-confd\") pod \"rabbitmq-notifications-server-0\" (UID: \"5d6f607c-3a31-4135-9eb4-3193e722d112\") " pod="openstack/rabbitmq-notifications-server-0" Jan 27 14:48:37 crc kubenswrapper[4698]: I0127 14:48:37.417140 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/5d6f607c-3a31-4135-9eb4-3193e722d112-plugins-conf\") pod \"rabbitmq-notifications-server-0\" (UID: \"5d6f607c-3a31-4135-9eb4-3193e722d112\") " pod="openstack/rabbitmq-notifications-server-0" Jan 27 14:48:37 crc kubenswrapper[4698]: I0127 14:48:37.417179 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/5d6f607c-3a31-4135-9eb4-3193e722d112-erlang-cookie-secret\") pod \"rabbitmq-notifications-server-0\" (UID: \"5d6f607c-3a31-4135-9eb4-3193e722d112\") " pod="openstack/rabbitmq-notifications-server-0" Jan 27 14:48:37 crc kubenswrapper[4698]: I0127 14:48:37.417204 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/5d6f607c-3a31-4135-9eb4-3193e722d112-rabbitmq-erlang-cookie\") pod \"rabbitmq-notifications-server-0\" (UID: \"5d6f607c-3a31-4135-9eb4-3193e722d112\") " pod="openstack/rabbitmq-notifications-server-0" Jan 27 14:48:37 crc kubenswrapper[4698]: I0127 14:48:37.418315 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/5d6f607c-3a31-4135-9eb4-3193e722d112-rabbitmq-plugins\") pod \"rabbitmq-notifications-server-0\" (UID: \"5d6f607c-3a31-4135-9eb4-3193e722d112\") " pod="openstack/rabbitmq-notifications-server-0" Jan 27 14:48:37 crc kubenswrapper[4698]: I0127 14:48:37.418419 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/5d6f607c-3a31-4135-9eb4-3193e722d112-rabbitmq-erlang-cookie\") pod \"rabbitmq-notifications-server-0\" (UID: \"5d6f607c-3a31-4135-9eb4-3193e722d112\") " pod="openstack/rabbitmq-notifications-server-0" Jan 27 14:48:37 crc kubenswrapper[4698]: I0127 14:48:37.419151 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/5d6f607c-3a31-4135-9eb4-3193e722d112-plugins-conf\") pod \"rabbitmq-notifications-server-0\" (UID: \"5d6f607c-3a31-4135-9eb4-3193e722d112\") " pod="openstack/rabbitmq-notifications-server-0" Jan 27 14:48:37 crc kubenswrapper[4698]: I0127 14:48:37.419267 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/5d6f607c-3a31-4135-9eb4-3193e722d112-config-data\") pod \"rabbitmq-notifications-server-0\" (UID: \"5d6f607c-3a31-4135-9eb4-3193e722d112\") " pod="openstack/rabbitmq-notifications-server-0" Jan 27 14:48:37 crc kubenswrapper[4698]: I0127 14:48:37.419445 4698 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume 
\"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"rabbitmq-notifications-server-0\" (UID: \"5d6f607c-3a31-4135-9eb4-3193e722d112\") device mount path \"/mnt/openstack/pv08\"" pod="openstack/rabbitmq-notifications-server-0" Jan 27 14:48:37 crc kubenswrapper[4698]: I0127 14:48:37.419704 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/5d6f607c-3a31-4135-9eb4-3193e722d112-server-conf\") pod \"rabbitmq-notifications-server-0\" (UID: \"5d6f607c-3a31-4135-9eb4-3193e722d112\") " pod="openstack/rabbitmq-notifications-server-0" Jan 27 14:48:37 crc kubenswrapper[4698]: I0127 14:48:37.426093 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/5d6f607c-3a31-4135-9eb4-3193e722d112-rabbitmq-tls\") pod \"rabbitmq-notifications-server-0\" (UID: \"5d6f607c-3a31-4135-9eb4-3193e722d112\") " pod="openstack/rabbitmq-notifications-server-0" Jan 27 14:48:37 crc kubenswrapper[4698]: I0127 14:48:37.426101 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/5d6f607c-3a31-4135-9eb4-3193e722d112-erlang-cookie-secret\") pod \"rabbitmq-notifications-server-0\" (UID: \"5d6f607c-3a31-4135-9eb4-3193e722d112\") " pod="openstack/rabbitmq-notifications-server-0" Jan 27 14:48:37 crc kubenswrapper[4698]: I0127 14:48:37.426494 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/5d6f607c-3a31-4135-9eb4-3193e722d112-pod-info\") pod \"rabbitmq-notifications-server-0\" (UID: \"5d6f607c-3a31-4135-9eb4-3193e722d112\") " pod="openstack/rabbitmq-notifications-server-0" Jan 27 14:48:37 crc kubenswrapper[4698]: I0127 14:48:37.426813 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/5d6f607c-3a31-4135-9eb4-3193e722d112-rabbitmq-confd\") pod \"rabbitmq-notifications-server-0\" (UID: \"5d6f607c-3a31-4135-9eb4-3193e722d112\") " pod="openstack/rabbitmq-notifications-server-0" Jan 27 14:48:37 crc kubenswrapper[4698]: I0127 14:48:37.440625 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vwhnd\" (UniqueName: \"kubernetes.io/projected/5d6f607c-3a31-4135-9eb4-3193e722d112-kube-api-access-vwhnd\") pod \"rabbitmq-notifications-server-0\" (UID: \"5d6f607c-3a31-4135-9eb4-3193e722d112\") " pod="openstack/rabbitmq-notifications-server-0" Jan 27 14:48:37 crc kubenswrapper[4698]: I0127 14:48:37.452176 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"rabbitmq-notifications-server-0\" (UID: \"5d6f607c-3a31-4135-9eb4-3193e722d112\") " pod="openstack/rabbitmq-notifications-server-0" Jan 27 14:48:37 crc kubenswrapper[4698]: I0127 14:48:37.560879 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-notifications-server-0" Jan 27 14:48:38 crc kubenswrapper[4698]: I0127 14:48:38.693876 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-galera-0"] Jan 27 14:48:38 crc kubenswrapper[4698]: I0127 14:48:38.695429 4698 util.go:30] "No sandbox for pod can be found. 
Jan 27 14:48:38 crc kubenswrapper[4698]: I0127 14:48:38.700735 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-dockercfg-ss84w"
Jan 27 14:48:38 crc kubenswrapper[4698]: I0127 14:48:38.705365 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config-data"
Jan 27 14:48:38 crc kubenswrapper[4698]: I0127 14:48:38.705594 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-svc"
Jan 27 14:48:38 crc kubenswrapper[4698]: I0127 14:48:38.706498 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-scripts"
Jan 27 14:48:38 crc kubenswrapper[4698]: I0127 14:48:38.711774 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"combined-ca-bundle"
Jan 27 14:48:38 crc kubenswrapper[4698]: I0127 14:48:38.712404 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"]
Jan 27 14:48:38 crc kubenswrapper[4698]: I0127 14:48:38.737125 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/9374a29e-348e-43ec-9321-b0a13aeb6c4b-config-data-default\") pod \"openstack-galera-0\" (UID: \"9374a29e-348e-43ec-9321-b0a13aeb6c4b\") " pod="openstack/openstack-galera-0"
Jan 27 14:48:38 crc kubenswrapper[4698]: I0127 14:48:38.737185 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/9374a29e-348e-43ec-9321-b0a13aeb6c4b-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"9374a29e-348e-43ec-9321-b0a13aeb6c4b\") " pod="openstack/openstack-galera-0"
Jan 27 14:48:38 crc kubenswrapper[4698]: I0127 14:48:38.737220 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9374a29e-348e-43ec-9321-b0a13aeb6c4b-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"9374a29e-348e-43ec-9321-b0a13aeb6c4b\") " pod="openstack/openstack-galera-0"
Jan 27 14:48:38 crc kubenswrapper[4698]: I0127 14:48:38.737262 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/9374a29e-348e-43ec-9321-b0a13aeb6c4b-config-data-generated\") pod \"openstack-galera-0\" (UID: \"9374a29e-348e-43ec-9321-b0a13aeb6c4b\") " pod="openstack/openstack-galera-0"
Jan 27 14:48:38 crc kubenswrapper[4698]: I0127 14:48:38.737309 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d629h\" (UniqueName: \"kubernetes.io/projected/9374a29e-348e-43ec-9321-b0a13aeb6c4b-kube-api-access-d629h\") pod \"openstack-galera-0\" (UID: \"9374a29e-348e-43ec-9321-b0a13aeb6c4b\") " pod="openstack/openstack-galera-0"
Jan 27 14:48:38 crc kubenswrapper[4698]: I0127 14:48:38.737339 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/9374a29e-348e-43ec-9321-b0a13aeb6c4b-kolla-config\") pod \"openstack-galera-0\" (UID: \"9374a29e-348e-43ec-9321-b0a13aeb6c4b\") " pod="openstack/openstack-galera-0"
Jan 27 14:48:38 crc kubenswrapper[4698]: I0127 14:48:38.737384 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"openstack-galera-0\" (UID: \"9374a29e-348e-43ec-9321-b0a13aeb6c4b\") " pod="openstack/openstack-galera-0"
Jan 27 14:48:38 crc kubenswrapper[4698]: I0127 14:48:38.737496 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9374a29e-348e-43ec-9321-b0a13aeb6c4b-operator-scripts\") pod \"openstack-galera-0\" (UID: \"9374a29e-348e-43ec-9321-b0a13aeb6c4b\") " pod="openstack/openstack-galera-0"
Jan 27 14:48:38 crc kubenswrapper[4698]: I0127 14:48:38.838439 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/9374a29e-348e-43ec-9321-b0a13aeb6c4b-config-data-generated\") pod \"openstack-galera-0\" (UID: \"9374a29e-348e-43ec-9321-b0a13aeb6c4b\") " pod="openstack/openstack-galera-0"
Jan 27 14:48:38 crc kubenswrapper[4698]: I0127 14:48:38.838793 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d629h\" (UniqueName: \"kubernetes.io/projected/9374a29e-348e-43ec-9321-b0a13aeb6c4b-kube-api-access-d629h\") pod \"openstack-galera-0\" (UID: \"9374a29e-348e-43ec-9321-b0a13aeb6c4b\") " pod="openstack/openstack-galera-0"
Jan 27 14:48:38 crc kubenswrapper[4698]: I0127 14:48:38.838826 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/9374a29e-348e-43ec-9321-b0a13aeb6c4b-kolla-config\") pod \"openstack-galera-0\" (UID: \"9374a29e-348e-43ec-9321-b0a13aeb6c4b\") " pod="openstack/openstack-galera-0"
Jan 27 14:48:38 crc kubenswrapper[4698]: I0127 14:48:38.838846 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"openstack-galera-0\" (UID: \"9374a29e-348e-43ec-9321-b0a13aeb6c4b\") " pod="openstack/openstack-galera-0"
Jan 27 14:48:38 crc kubenswrapper[4698]: I0127 14:48:38.838925 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9374a29e-348e-43ec-9321-b0a13aeb6c4b-operator-scripts\") pod \"openstack-galera-0\" (UID: \"9374a29e-348e-43ec-9321-b0a13aeb6c4b\") " pod="openstack/openstack-galera-0"
Jan 27 14:48:38 crc kubenswrapper[4698]: I0127 14:48:38.838951 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/9374a29e-348e-43ec-9321-b0a13aeb6c4b-config-data-default\") pod \"openstack-galera-0\" (UID: \"9374a29e-348e-43ec-9321-b0a13aeb6c4b\") " pod="openstack/openstack-galera-0"
Jan 27 14:48:38 crc kubenswrapper[4698]: I0127 14:48:38.838971 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/9374a29e-348e-43ec-9321-b0a13aeb6c4b-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"9374a29e-348e-43ec-9321-b0a13aeb6c4b\") " pod="openstack/openstack-galera-0"
Jan 27 14:48:38 crc kubenswrapper[4698]: I0127 14:48:38.838994 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9374a29e-348e-43ec-9321-b0a13aeb6c4b-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"9374a29e-348e-43ec-9321-b0a13aeb6c4b\") " pod="openstack/openstack-galera-0"
Jan 27 14:48:38 crc kubenswrapper[4698]: I0127 14:48:38.839155 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/9374a29e-348e-43ec-9321-b0a13aeb6c4b-config-data-generated\") pod \"openstack-galera-0\" (UID: \"9374a29e-348e-43ec-9321-b0a13aeb6c4b\") " pod="openstack/openstack-galera-0"
Jan 27 14:48:38 crc kubenswrapper[4698]: I0127 14:48:38.839409 4698 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"openstack-galera-0\" (UID: \"9374a29e-348e-43ec-9321-b0a13aeb6c4b\") device mount path \"/mnt/openstack/pv09\"" pod="openstack/openstack-galera-0"
Jan 27 14:48:38 crc kubenswrapper[4698]: I0127 14:48:38.840014 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/9374a29e-348e-43ec-9321-b0a13aeb6c4b-kolla-config\") pod \"openstack-galera-0\" (UID: \"9374a29e-348e-43ec-9321-b0a13aeb6c4b\") " pod="openstack/openstack-galera-0"
Jan 27 14:48:38 crc kubenswrapper[4698]: I0127 14:48:38.840073 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/9374a29e-348e-43ec-9321-b0a13aeb6c4b-config-data-default\") pod \"openstack-galera-0\" (UID: \"9374a29e-348e-43ec-9321-b0a13aeb6c4b\") " pod="openstack/openstack-galera-0"
Jan 27 14:48:38 crc kubenswrapper[4698]: I0127 14:48:38.840966 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9374a29e-348e-43ec-9321-b0a13aeb6c4b-operator-scripts\") pod \"openstack-galera-0\" (UID: \"9374a29e-348e-43ec-9321-b0a13aeb6c4b\") " pod="openstack/openstack-galera-0"
Jan 27 14:48:38 crc kubenswrapper[4698]: I0127 14:48:38.843308 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/9374a29e-348e-43ec-9321-b0a13aeb6c4b-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"9374a29e-348e-43ec-9321-b0a13aeb6c4b\") " pod="openstack/openstack-galera-0"
Jan 27 14:48:38 crc kubenswrapper[4698]: I0127 14:48:38.843592 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9374a29e-348e-43ec-9321-b0a13aeb6c4b-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"9374a29e-348e-43ec-9321-b0a13aeb6c4b\") " pod="openstack/openstack-galera-0"
Jan 27 14:48:38 crc kubenswrapper[4698]: I0127 14:48:38.857402 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d629h\" (UniqueName: \"kubernetes.io/projected/9374a29e-348e-43ec-9321-b0a13aeb6c4b-kube-api-access-d629h\") pod \"openstack-galera-0\" (UID: \"9374a29e-348e-43ec-9321-b0a13aeb6c4b\") " pod="openstack/openstack-galera-0"
Jan 27 14:48:38 crc kubenswrapper[4698]: I0127 14:48:38.857566 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"openstack-galera-0\" (UID: \"9374a29e-348e-43ec-9321-b0a13aeb6c4b\") " pod="openstack/openstack-galera-0"
Jan 27 14:48:39 crc kubenswrapper[4698]: I0127 14:48:39.017572 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0"
Jan 27 14:48:39 crc kubenswrapper[4698]: I0127 14:48:39.916486 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-cell1-galera-0"]
Jan 27 14:48:39 crc kubenswrapper[4698]: I0127 14:48:39.917965 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0"
Jan 27 14:48:39 crc kubenswrapper[4698]: I0127 14:48:39.919767 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-config-data"
Jan 27 14:48:39 crc kubenswrapper[4698]: I0127 14:48:39.920059 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-cell1-dockercfg-4cx25"
Jan 27 14:48:39 crc kubenswrapper[4698]: I0127 14:48:39.920620 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-cell1-svc"
Jan 27 14:48:39 crc kubenswrapper[4698]: I0127 14:48:39.920666 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-scripts"
Jan 27 14:48:39 crc kubenswrapper[4698]: I0127 14:48:39.929374 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"]
Jan 27 14:48:39 crc kubenswrapper[4698]: I0127 14:48:39.955396 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/a30e344d-b5c4-40f6-8bdb-7af9c1df7449-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"a30e344d-b5c4-40f6-8bdb-7af9c1df7449\") " pod="openstack/openstack-cell1-galera-0"
Jan 27 14:48:39 crc kubenswrapper[4698]: I0127 14:48:39.955455 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/a30e344d-b5c4-40f6-8bdb-7af9c1df7449-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"a30e344d-b5c4-40f6-8bdb-7af9c1df7449\") " pod="openstack/openstack-cell1-galera-0"
Jan 27 14:48:39 crc kubenswrapper[4698]: I0127 14:48:39.955484 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/a30e344d-b5c4-40f6-8bdb-7af9c1df7449-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"a30e344d-b5c4-40f6-8bdb-7af9c1df7449\") " pod="openstack/openstack-cell1-galera-0"
Jan 27 14:48:39 crc kubenswrapper[4698]: I0127 14:48:39.955529 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q4k7v\" (UniqueName: \"kubernetes.io/projected/a30e344d-b5c4-40f6-8bdb-7af9c1df7449-kube-api-access-q4k7v\") pod \"openstack-cell1-galera-0\" (UID: \"a30e344d-b5c4-40f6-8bdb-7af9c1df7449\") " pod="openstack/openstack-cell1-galera-0"
Jan 27 14:48:39 crc kubenswrapper[4698]: I0127 14:48:39.955604 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/a30e344d-b5c4-40f6-8bdb-7af9c1df7449-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"a30e344d-b5c4-40f6-8bdb-7af9c1df7449\") " pod="openstack/openstack-cell1-galera-0"
Jan 27 14:48:39 crc kubenswrapper[4698]: I0127 14:48:39.955685 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a30e344d-b5c4-40f6-8bdb-7af9c1df7449-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"a30e344d-b5c4-40f6-8bdb-7af9c1df7449\") " pod="openstack/openstack-cell1-galera-0"
Jan 27 14:48:39 crc kubenswrapper[4698]: I0127 14:48:39.955871 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a30e344d-b5c4-40f6-8bdb-7af9c1df7449-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"a30e344d-b5c4-40f6-8bdb-7af9c1df7449\") " pod="openstack/openstack-cell1-galera-0"
Jan 27 14:48:39 crc kubenswrapper[4698]: I0127 14:48:39.955983 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"openstack-cell1-galera-0\" (UID: \"a30e344d-b5c4-40f6-8bdb-7af9c1df7449\") " pod="openstack/openstack-cell1-galera-0"
Jan 27 14:48:40 crc kubenswrapper[4698]: I0127 14:48:40.057165 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"openstack-cell1-galera-0\" (UID: \"a30e344d-b5c4-40f6-8bdb-7af9c1df7449\") " pod="openstack/openstack-cell1-galera-0"
Jan 27 14:48:40 crc kubenswrapper[4698]: I0127 14:48:40.057296 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/a30e344d-b5c4-40f6-8bdb-7af9c1df7449-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"a30e344d-b5c4-40f6-8bdb-7af9c1df7449\") " pod="openstack/openstack-cell1-galera-0"
Jan 27 14:48:40 crc kubenswrapper[4698]: I0127 14:48:40.057321 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/a30e344d-b5c4-40f6-8bdb-7af9c1df7449-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"a30e344d-b5c4-40f6-8bdb-7af9c1df7449\") " pod="openstack/openstack-cell1-galera-0"
Jan 27 14:48:40 crc kubenswrapper[4698]: I0127 14:48:40.057349 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/a30e344d-b5c4-40f6-8bdb-7af9c1df7449-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"a30e344d-b5c4-40f6-8bdb-7af9c1df7449\") " pod="openstack/openstack-cell1-galera-0"
Jan 27 14:48:40 crc kubenswrapper[4698]: I0127 14:48:40.057368 4698 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"openstack-cell1-galera-0\" (UID: \"a30e344d-b5c4-40f6-8bdb-7af9c1df7449\") device mount path \"/mnt/openstack/pv02\"" pod="openstack/openstack-cell1-galera-0"
Jan 27 14:48:40 crc kubenswrapper[4698]: I0127 14:48:40.057788 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q4k7v\" (UniqueName: \"kubernetes.io/projected/a30e344d-b5c4-40f6-8bdb-7af9c1df7449-kube-api-access-q4k7v\") pod \"openstack-cell1-galera-0\" (UID: \"a30e344d-b5c4-40f6-8bdb-7af9c1df7449\") " pod="openstack/openstack-cell1-galera-0"
Jan 27 14:48:40 crc kubenswrapper[4698]: I0127 14:48:40.057933 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/a30e344d-b5c4-40f6-8bdb-7af9c1df7449-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"a30e344d-b5c4-40f6-8bdb-7af9c1df7449\") " pod="openstack/openstack-cell1-galera-0"
Jan 27 14:48:40 crc kubenswrapper[4698]: I0127 14:48:40.057965 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a30e344d-b5c4-40f6-8bdb-7af9c1df7449-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"a30e344d-b5c4-40f6-8bdb-7af9c1df7449\") " pod="openstack/openstack-cell1-galera-0"
Jan 27 14:48:40 crc kubenswrapper[4698]: I0127 14:48:40.058057 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/a30e344d-b5c4-40f6-8bdb-7af9c1df7449-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"a30e344d-b5c4-40f6-8bdb-7af9c1df7449\") " pod="openstack/openstack-cell1-galera-0"
Jan 27 14:48:40 crc kubenswrapper[4698]: I0127 14:48:40.058071 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a30e344d-b5c4-40f6-8bdb-7af9c1df7449-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"a30e344d-b5c4-40f6-8bdb-7af9c1df7449\") " pod="openstack/openstack-cell1-galera-0"
Jan 27 14:48:40 crc kubenswrapper[4698]: I0127 14:48:40.058418 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/a30e344d-b5c4-40f6-8bdb-7af9c1df7449-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"a30e344d-b5c4-40f6-8bdb-7af9c1df7449\") " pod="openstack/openstack-cell1-galera-0"
Jan 27 14:48:40 crc kubenswrapper[4698]: I0127 14:48:40.058883 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/a30e344d-b5c4-40f6-8bdb-7af9c1df7449-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"a30e344d-b5c4-40f6-8bdb-7af9c1df7449\") " pod="openstack/openstack-cell1-galera-0"
Jan 27 14:48:40 crc kubenswrapper[4698]: I0127 14:48:40.060122 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a30e344d-b5c4-40f6-8bdb-7af9c1df7449-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"a30e344d-b5c4-40f6-8bdb-7af9c1df7449\") " pod="openstack/openstack-cell1-galera-0"
Jan 27 14:48:40 crc kubenswrapper[4698]: I0127 14:48:40.063880 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a30e344d-b5c4-40f6-8bdb-7af9c1df7449-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"a30e344d-b5c4-40f6-8bdb-7af9c1df7449\") " pod="openstack/openstack-cell1-galera-0"
Jan 27 14:48:40 crc kubenswrapper[4698]: I0127 14:48:40.066178 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/a30e344d-b5c4-40f6-8bdb-7af9c1df7449-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"a30e344d-b5c4-40f6-8bdb-7af9c1df7449\") " pod="openstack/openstack-cell1-galera-0"
Jan 27 14:48:40 crc kubenswrapper[4698]: I0127 14:48:40.083255 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q4k7v\" (UniqueName: \"kubernetes.io/projected/a30e344d-b5c4-40f6-8bdb-7af9c1df7449-kube-api-access-q4k7v\") pod \"openstack-cell1-galera-0\" (UID: \"a30e344d-b5c4-40f6-8bdb-7af9c1df7449\") " pod="openstack/openstack-cell1-galera-0"
Jan 27 14:48:40 crc kubenswrapper[4698]: I0127 14:48:40.085957 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"openstack-cell1-galera-0\" (UID: \"a30e344d-b5c4-40f6-8bdb-7af9c1df7449\") " pod="openstack/openstack-cell1-galera-0"
Jan 27 14:48:40 crc kubenswrapper[4698]: I0127 14:48:40.209578 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/memcached-0"]
Jan 27 14:48:40 crc kubenswrapper[4698]: I0127 14:48:40.210529 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0"
Jan 27 14:48:40 crc kubenswrapper[4698]: I0127 14:48:40.213117 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"memcached-config-data"
Jan 27 14:48:40 crc kubenswrapper[4698]: I0127 14:48:40.214073 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-memcached-svc"
Jan 27 14:48:40 crc kubenswrapper[4698]: I0127 14:48:40.215075 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"memcached-memcached-dockercfg-nqnz4"
Jan 27 14:48:40 crc kubenswrapper[4698]: I0127 14:48:40.223176 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"]
Jan 27 14:48:40 crc kubenswrapper[4698]: I0127 14:48:40.241080 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0"
Jan 27 14:48:40 crc kubenswrapper[4698]: I0127 14:48:40.261115 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/589d54fa-234d-41b2-b030-91101d03c978-config-data\") pod \"memcached-0\" (UID: \"589d54fa-234d-41b2-b030-91101d03c978\") " pod="openstack/memcached-0"
Jan 27 14:48:40 crc kubenswrapper[4698]: I0127 14:48:40.261466 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/589d54fa-234d-41b2-b030-91101d03c978-combined-ca-bundle\") pod \"memcached-0\" (UID: \"589d54fa-234d-41b2-b030-91101d03c978\") " pod="openstack/memcached-0"
Jan 27 14:48:40 crc kubenswrapper[4698]: I0127 14:48:40.261571 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/589d54fa-234d-41b2-b030-91101d03c978-memcached-tls-certs\") pod \"memcached-0\" (UID: \"589d54fa-234d-41b2-b030-91101d03c978\") " pod="openstack/memcached-0"
Jan 27 14:48:40 crc kubenswrapper[4698]: I0127 14:48:40.261709 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/589d54fa-234d-41b2-b030-91101d03c978-kolla-config\") pod \"memcached-0\" (UID: \"589d54fa-234d-41b2-b030-91101d03c978\") " pod="openstack/memcached-0"
Jan 27 14:48:40 crc kubenswrapper[4698]: I0127 14:48:40.261731 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8c7s2\" (UniqueName: \"kubernetes.io/projected/589d54fa-234d-41b2-b030-91101d03c978-kube-api-access-8c7s2\") pod \"memcached-0\" (UID: \"589d54fa-234d-41b2-b030-91101d03c978\") " pod="openstack/memcached-0"
Jan 27 14:48:40 crc kubenswrapper[4698]: I0127 14:48:40.362917 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/589d54fa-234d-41b2-b030-91101d03c978-memcached-tls-certs\") pod \"memcached-0\" (UID: \"589d54fa-234d-41b2-b030-91101d03c978\") " pod="openstack/memcached-0"
Jan 27 14:48:40 crc kubenswrapper[4698]: I0127 14:48:40.362991 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/589d54fa-234d-41b2-b030-91101d03c978-kolla-config\") pod \"memcached-0\" (UID: \"589d54fa-234d-41b2-b030-91101d03c978\") " pod="openstack/memcached-0"
Jan 27 14:48:40 crc kubenswrapper[4698]: I0127 14:48:40.363017 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8c7s2\" (UniqueName: \"kubernetes.io/projected/589d54fa-234d-41b2-b030-91101d03c978-kube-api-access-8c7s2\") pod \"memcached-0\" (UID: \"589d54fa-234d-41b2-b030-91101d03c978\") " pod="openstack/memcached-0"
Jan 27 14:48:40 crc kubenswrapper[4698]: I0127 14:48:40.363128 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/589d54fa-234d-41b2-b030-91101d03c978-config-data\") pod \"memcached-0\" (UID: \"589d54fa-234d-41b2-b030-91101d03c978\") " pod="openstack/memcached-0"
Jan 27 14:48:40 crc kubenswrapper[4698]: I0127 14:48:40.363160 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/589d54fa-234d-41b2-b030-91101d03c978-combined-ca-bundle\") pod \"memcached-0\" (UID: \"589d54fa-234d-41b2-b030-91101d03c978\") " pod="openstack/memcached-0"
Jan 27 14:48:40 crc kubenswrapper[4698]: I0127 14:48:40.364609 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/589d54fa-234d-41b2-b030-91101d03c978-config-data\") pod \"memcached-0\" (UID: \"589d54fa-234d-41b2-b030-91101d03c978\") " pod="openstack/memcached-0"
Jan 27 14:48:40 crc kubenswrapper[4698]: I0127 14:48:40.364706 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/589d54fa-234d-41b2-b030-91101d03c978-kolla-config\") pod \"memcached-0\" (UID: \"589d54fa-234d-41b2-b030-91101d03c978\") " pod="openstack/memcached-0"
Jan 27 14:48:40 crc kubenswrapper[4698]: I0127 14:48:40.367031 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/589d54fa-234d-41b2-b030-91101d03c978-combined-ca-bundle\") pod \"memcached-0\" (UID: \"589d54fa-234d-41b2-b030-91101d03c978\") " pod="openstack/memcached-0"
Jan 27 14:48:40 crc kubenswrapper[4698]: I0127 14:48:40.367031 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/589d54fa-234d-41b2-b030-91101d03c978-memcached-tls-certs\") pod \"memcached-0\" (UID: \"589d54fa-234d-41b2-b030-91101d03c978\") " pod="openstack/memcached-0"
Jan 27 14:48:40 crc kubenswrapper[4698]: I0127 14:48:40.388170 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8c7s2\" (UniqueName: \"kubernetes.io/projected/589d54fa-234d-41b2-b030-91101d03c978-kube-api-access-8c7s2\") pod \"memcached-0\" (UID: \"589d54fa-234d-41b2-b030-91101d03c978\") " pod="openstack/memcached-0"
Jan 27 14:48:40 crc kubenswrapper[4698]: I0127 14:48:40.540977 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0"
Jan 27 14:48:41 crc kubenswrapper[4698]: I0127 14:48:41.996979 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"]
Jan 27 14:48:42 crc kubenswrapper[4698]: I0127 14:48:42.000601 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0"
Jan 27 14:48:42 crc kubenswrapper[4698]: I0127 14:48:42.011822 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-ceilometer-dockercfg-w4lgz"
Jan 27 14:48:42 crc kubenswrapper[4698]: I0127 14:48:42.022208 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"]
Jan 27 14:48:42 crc kubenswrapper[4698]: I0127 14:48:42.090407 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t7c48\" (UniqueName: \"kubernetes.io/projected/5d70f810-b592-4abf-b587-4ff75b743944-kube-api-access-t7c48\") pod \"kube-state-metrics-0\" (UID: \"5d70f810-b592-4abf-b587-4ff75b743944\") " pod="openstack/kube-state-metrics-0"
Jan 27 14:48:42 crc kubenswrapper[4698]: I0127 14:48:42.191792 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t7c48\" (UniqueName: \"kubernetes.io/projected/5d70f810-b592-4abf-b587-4ff75b743944-kube-api-access-t7c48\") pod \"kube-state-metrics-0\" (UID: \"5d70f810-b592-4abf-b587-4ff75b743944\") " pod="openstack/kube-state-metrics-0"
Jan 27 14:48:42 crc kubenswrapper[4698]: I0127 14:48:42.245761 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t7c48\" (UniqueName: \"kubernetes.io/projected/5d70f810-b592-4abf-b587-4ff75b743944-kube-api-access-t7c48\") pod \"kube-state-metrics-0\" (UID: \"5d70f810-b592-4abf-b587-4ff75b743944\") " pod="openstack/kube-state-metrics-0"
Jan 27 14:48:42 crc kubenswrapper[4698]: I0127 14:48:42.333784 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0"
Jan 27 14:48:43 crc kubenswrapper[4698]: I0127 14:48:43.409136 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/prometheus-metric-storage-0"]
Jan 27 14:48:43 crc kubenswrapper[4698]: I0127 14:48:43.411498 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0"
Jan 27 14:48:43 crc kubenswrapper[4698]: I0127 14:48:43.414703 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-web-config"
Jan 27 14:48:43 crc kubenswrapper[4698]: I0127 14:48:43.414759 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-thanos-prometheus-http-client-file"
Jan 27 14:48:43 crc kubenswrapper[4698]: I0127 14:48:43.414770 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-1"
Jan 27 14:48:43 crc kubenswrapper[4698]: I0127 14:48:43.415012 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-2"
Jan 27 14:48:43 crc kubenswrapper[4698]: I0127 14:48:43.415599 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage"
Jan 27 14:48:43 crc kubenswrapper[4698]: I0127 14:48:43.417030 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"metric-storage-prometheus-dockercfg-tg7cd"
Jan 27 14:48:43 crc kubenswrapper[4698]: I0127 14:48:43.418023 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-0"
Jan 27 14:48:43 crc kubenswrapper[4698]: I0127 14:48:43.425321 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"]
Jan 27 14:48:43 crc kubenswrapper[4698]: I0127 14:48:43.427234 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-tls-assets-0"
Jan 27 14:48:43 crc kubenswrapper[4698]: I0127 14:48:43.512706 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/03d06c18-e82f-417e-b3bd-6365030bee53-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"03d06c18-e82f-417e-b3bd-6365030bee53\") " pod="openstack/prometheus-metric-storage-0"
Jan 27 14:48:43 crc kubenswrapper[4698]: I0127 14:48:43.512840 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/03d06c18-e82f-417e-b3bd-6365030bee53-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"03d06c18-e82f-417e-b3bd-6365030bee53\") " pod="openstack/prometheus-metric-storage-0"
Jan 27 14:48:43 crc kubenswrapper[4698]: I0127 14:48:43.512878 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vhxgk\" (UniqueName: \"kubernetes.io/projected/03d06c18-e82f-417e-b3bd-6365030bee53-kube-api-access-vhxgk\") pod \"prometheus-metric-storage-0\" (UID: \"03d06c18-e82f-417e-b3bd-6365030bee53\") " pod="openstack/prometheus-metric-storage-0"
Jan 27 14:48:43 crc kubenswrapper[4698]: I0127 14:48:43.513103 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-7c238611-e381-428d-ba0c-da485ec04e87\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-7c238611-e381-428d-ba0c-da485ec04e87\") pod \"prometheus-metric-storage-0\" (UID: \"03d06c18-e82f-417e-b3bd-6365030bee53\") " pod="openstack/prometheus-metric-storage-0"
Jan 27 14:48:43 crc kubenswrapper[4698]: I0127 14:48:43.513153 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/03d06c18-e82f-417e-b3bd-6365030bee53-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"03d06c18-e82f-417e-b3bd-6365030bee53\") " pod="openstack/prometheus-metric-storage-0"
Jan 27 14:48:43 crc kubenswrapper[4698]: I0127 14:48:43.513182 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/03d06c18-e82f-417e-b3bd-6365030bee53-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"03d06c18-e82f-417e-b3bd-6365030bee53\") " pod="openstack/prometheus-metric-storage-0"
Jan 27 14:48:43 crc kubenswrapper[4698]: I0127 14:48:43.513206 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/03d06c18-e82f-417e-b3bd-6365030bee53-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"03d06c18-e82f-417e-b3bd-6365030bee53\") " pod="openstack/prometheus-metric-storage-0"
Jan 27 14:48:43 crc kubenswrapper[4698]: I0127 14:48:43.513292 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/03d06c18-e82f-417e-b3bd-6365030bee53-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"03d06c18-e82f-417e-b3bd-6365030bee53\") " pod="openstack/prometheus-metric-storage-0"
Jan 27 14:48:43 crc kubenswrapper[4698]: I0127 14:48:43.513324 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/03d06c18-e82f-417e-b3bd-6365030bee53-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"03d06c18-e82f-417e-b3bd-6365030bee53\") " pod="openstack/prometheus-metric-storage-0"
Jan 27 14:48:43 crc kubenswrapper[4698]: I0127 14:48:43.513360 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/03d06c18-e82f-417e-b3bd-6365030bee53-config\") pod \"prometheus-metric-storage-0\" (UID: \"03d06c18-e82f-417e-b3bd-6365030bee53\") " pod="openstack/prometheus-metric-storage-0"
Jan 27 14:48:43 crc kubenswrapper[4698]: I0127 14:48:43.615133 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/03d06c18-e82f-417e-b3bd-6365030bee53-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"03d06c18-e82f-417e-b3bd-6365030bee53\") " pod="openstack/prometheus-metric-storage-0"
Jan 27 14:48:43 crc kubenswrapper[4698]: I0127 14:48:43.615190 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vhxgk\" (UniqueName: \"kubernetes.io/projected/03d06c18-e82f-417e-b3bd-6365030bee53-kube-api-access-vhxgk\") pod \"prometheus-metric-storage-0\" (UID: \"03d06c18-e82f-417e-b3bd-6365030bee53\") " pod="openstack/prometheus-metric-storage-0"
Jan 27 14:48:43 crc kubenswrapper[4698]: I0127 14:48:43.615233 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-7c238611-e381-428d-ba0c-da485ec04e87\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-7c238611-e381-428d-ba0c-da485ec04e87\") pod \"prometheus-metric-storage-0\" (UID: \"03d06c18-e82f-417e-b3bd-6365030bee53\") " pod="openstack/prometheus-metric-storage-0"
Jan 27 14:48:43 crc kubenswrapper[4698]: I0127 14:48:43.615266 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/03d06c18-e82f-417e-b3bd-6365030bee53-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"03d06c18-e82f-417e-b3bd-6365030bee53\") " pod="openstack/prometheus-metric-storage-0"
Jan 27 14:48:43 crc kubenswrapper[4698]: I0127 14:48:43.615290 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/03d06c18-e82f-417e-b3bd-6365030bee53-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"03d06c18-e82f-417e-b3bd-6365030bee53\") " pod="openstack/prometheus-metric-storage-0"
Jan 27 14:48:43 crc kubenswrapper[4698]: I0127 14:48:43.615317 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/03d06c18-e82f-417e-b3bd-6365030bee53-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"03d06c18-e82f-417e-b3bd-6365030bee53\") " pod="openstack/prometheus-metric-storage-0"
Jan 27 14:48:43 crc kubenswrapper[4698]: I0127 14:48:43.615370 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/03d06c18-e82f-417e-b3bd-6365030bee53-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"03d06c18-e82f-417e-b3bd-6365030bee53\") " pod="openstack/prometheus-metric-storage-0"
Jan 27 14:48:43 crc kubenswrapper[4698]: I0127 14:48:43.615395 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/03d06c18-e82f-417e-b3bd-6365030bee53-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"03d06c18-e82f-417e-b3bd-6365030bee53\") " pod="openstack/prometheus-metric-storage-0"
Jan 27 14:48:43 crc kubenswrapper[4698]: I0127 14:48:43.615946 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/03d06c18-e82f-417e-b3bd-6365030bee53-config\") pod \"prometheus-metric-storage-0\" (UID: \"03d06c18-e82f-417e-b3bd-6365030bee53\") " pod="openstack/prometheus-metric-storage-0"
Jan 27 14:48:43 crc kubenswrapper[4698]: I0127 14:48:43.616007 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/03d06c18-e82f-417e-b3bd-6365030bee53-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"03d06c18-e82f-417e-b3bd-6365030bee53\") " pod="openstack/prometheus-metric-storage-0"
Jan 27 14:48:43 crc kubenswrapper[4698]: I0127 14:48:43.616448 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/03d06c18-e82f-417e-b3bd-6365030bee53-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"03d06c18-e82f-417e-b3bd-6365030bee53\") " pod="openstack/prometheus-metric-storage-0"
Jan 27 14:48:43 crc kubenswrapper[4698]: I0127 14:48:43.616449 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/03d06c18-e82f-417e-b3bd-6365030bee53-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"03d06c18-e82f-417e-b3bd-6365030bee53\") " pod="openstack/prometheus-metric-storage-0"
Jan 27 14:48:43 crc kubenswrapper[4698]: I0127 14:48:43.616727 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/03d06c18-e82f-417e-b3bd-6365030bee53-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"03d06c18-e82f-417e-b3bd-6365030bee53\") " pod="openstack/prometheus-metric-storage-0"
Jan 27 14:48:43 crc kubenswrapper[4698]: I0127 14:48:43.618997 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/03d06c18-e82f-417e-b3bd-6365030bee53-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"03d06c18-e82f-417e-b3bd-6365030bee53\") " pod="openstack/prometheus-metric-storage-0"
Jan 27 14:48:43 crc kubenswrapper[4698]: I0127 14:48:43.619188 4698 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Jan 27 14:48:43 crc kubenswrapper[4698]: I0127 14:48:43.619224 4698 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-7c238611-e381-428d-ba0c-da485ec04e87\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-7c238611-e381-428d-ba0c-da485ec04e87\") pod \"prometheus-metric-storage-0\" (UID: \"03d06c18-e82f-417e-b3bd-6365030bee53\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/416585bac8fd590967dd124189b0e9e15cec9d1b1d071795cce77f5a8944215e/globalmount\"" pod="openstack/prometheus-metric-storage-0"
Jan 27 14:48:43 crc kubenswrapper[4698]: I0127 14:48:43.619615 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/03d06c18-e82f-417e-b3bd-6365030bee53-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"03d06c18-e82f-417e-b3bd-6365030bee53\") " pod="openstack/prometheus-metric-storage-0"
Jan 27 14:48:43 crc kubenswrapper[4698]: I0127 14:48:43.619761 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/03d06c18-e82f-417e-b3bd-6365030bee53-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"03d06c18-e82f-417e-b3bd-6365030bee53\") " pod="openstack/prometheus-metric-storage-0"
Jan 27 14:48:43 crc kubenswrapper[4698]: I0127 14:48:43.620293 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/03d06c18-e82f-417e-b3bd-6365030bee53-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"03d06c18-e82f-417e-b3bd-6365030bee53\") " pod="openstack/prometheus-metric-storage-0"
Jan 27 14:48:43 crc kubenswrapper[4698]: I0127 14:48:43.633665 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vhxgk\" (UniqueName: \"kubernetes.io/projected/03d06c18-e82f-417e-b3bd-6365030bee53-kube-api-access-vhxgk\") pod \"prometheus-metric-storage-0\" (UID: \"03d06c18-e82f-417e-b3bd-6365030bee53\") " pod="openstack/prometheus-metric-storage-0"
Jan 27 14:48:43 crc kubenswrapper[4698]: I0127 14:48:43.646071 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-7c238611-e381-428d-ba0c-da485ec04e87\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-7c238611-e381-428d-ba0c-da485ec04e87\") pod \"prometheus-metric-storage-0\" (UID: \"03d06c18-e82f-417e-b3bd-6365030bee53\") " pod="openstack/prometheus-metric-storage-0"
Jan 27 14:48:45 crc kubenswrapper[4698]: I0127 14:48:45.655236 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-7swbz"]
Jan 27 14:48:45 crc kubenswrapper[4698]: I0127 14:48:45.658984 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-7swbz"
Jan 27 14:48:45 crc kubenswrapper[4698]: I0127 14:48:45.661523 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-scripts"
Jan 27 14:48:45 crc kubenswrapper[4698]: I0127 14:48:45.661841 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovncontroller-ovndbs"
Jan 27 14:48:45 crc kubenswrapper[4698]: I0127 14:48:45.662098 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncontroller-ovncontroller-dockercfg-s6cq7"
Jan 27 14:48:45 crc kubenswrapper[4698]: I0127 14:48:45.676776 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-ovs-cn5z6"]
Jan 27 14:48:45 crc kubenswrapper[4698]: I0127 14:48:45.683177 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-cn5z6"
Jan 27 14:48:45 crc kubenswrapper[4698]: I0127 14:48:45.690928 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-7swbz"]
Jan 27 14:48:45 crc kubenswrapper[4698]: I0127 14:48:45.706168 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-cn5z6"]
Jan 27 14:48:45 crc kubenswrapper[4698]: I0127 14:48:45.849200 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xmkhp\" (UniqueName: \"kubernetes.io/projected/ca935cab-9c0b-4b5c-9754-f5bafb3a0037-kube-api-access-xmkhp\") pod \"ovn-controller-ovs-cn5z6\" (UID: \"ca935cab-9c0b-4b5c-9754-f5bafb3a0037\") " pod="openstack/ovn-controller-ovs-cn5z6"
Jan 27 14:48:45 crc kubenswrapper[4698]: I0127 14:48:45.849285 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/ca935cab-9c0b-4b5c-9754-f5bafb3a0037-etc-ovs\") pod \"ovn-controller-ovs-cn5z6\" (UID: \"ca935cab-9c0b-4b5c-9754-f5bafb3a0037\") " pod="openstack/ovn-controller-ovs-cn5z6"
Jan 27 14:48:45 crc kubenswrapper[4698]: I0127 14:48:45.849453 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f9e1f6c1-39e5-4d7a-ac6a-da53c6fe143b-combined-ca-bundle\") pod \"ovn-controller-7swbz\" (UID: \"f9e1f6c1-39e5-4d7a-ac6a-da53c6fe143b\") " pod="openstack/ovn-controller-7swbz"
Jan 27 14:48:45 crc kubenswrapper[4698]: I0127 14:48:45.849605 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/f9e1f6c1-39e5-4d7a-ac6a-da53c6fe143b-ovn-controller-tls-certs\") pod \"ovn-controller-7swbz\" (UID: \"f9e1f6c1-39e5-4d7a-ac6a-da53c6fe143b\") " pod="openstack/ovn-controller-7swbz"
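The two ovn-controller pods are the first in this trace to use kubernetes.io/host-path volumes (etc-ovs above; var-run, var-lib, var-log, and friends below), which pass node directories straight into the containers. The log names the volumes but not the directories they map to, so the path in this sketch is an assumption for illustration only:

```go
// Sketch: building a host-path volume/mount pair of the kind the
// ovn-controller entries show.
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func hostPathVolume(name, path string) (corev1.Volume, corev1.VolumeMount) {
	typ := corev1.HostPathDirectoryOrCreate // create the directory if absent
	vol := corev1.Volume{
		Name: name,
		VolumeSource: corev1.VolumeSource{
			HostPath: &corev1.HostPathVolumeSource{Path: path, Type: &typ},
		},
	}
	mount := corev1.VolumeMount{Name: name, MountPath: path}
	return vol, mount
}

func main() {
	// "var-run" appears in both ovn-controller-7swbz and ovn-controller-ovs-cn5z6;
	// /var/run/openvswitch is a plausible target, not taken from the log.
	v, m := hostPathVolume("var-run", "/var/run/openvswitch")
	fmt.Println(v.Name, v.HostPath.Path, m.MountPath)
}
```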
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/f9e1f6c1-39e5-4d7a-ac6a-da53c6fe143b-var-run\") pod \"ovn-controller-7swbz\" (UID: \"f9e1f6c1-39e5-4d7a-ac6a-da53c6fe143b\") " pod="openstack/ovn-controller-7swbz" Jan 27 14:48:45 crc kubenswrapper[4698]: I0127 14:48:45.849772 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/ca935cab-9c0b-4b5c-9754-f5bafb3a0037-var-run\") pod \"ovn-controller-ovs-cn5z6\" (UID: \"ca935cab-9c0b-4b5c-9754-f5bafb3a0037\") " pod="openstack/ovn-controller-ovs-cn5z6" Jan 27 14:48:45 crc kubenswrapper[4698]: I0127 14:48:45.849821 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/f9e1f6c1-39e5-4d7a-ac6a-da53c6fe143b-var-run-ovn\") pod \"ovn-controller-7swbz\" (UID: \"f9e1f6c1-39e5-4d7a-ac6a-da53c6fe143b\") " pod="openstack/ovn-controller-7swbz" Jan 27 14:48:45 crc kubenswrapper[4698]: I0127 14:48:45.849857 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/ca935cab-9c0b-4b5c-9754-f5bafb3a0037-var-log\") pod \"ovn-controller-ovs-cn5z6\" (UID: \"ca935cab-9c0b-4b5c-9754-f5bafb3a0037\") " pod="openstack/ovn-controller-ovs-cn5z6" Jan 27 14:48:45 crc kubenswrapper[4698]: I0127 14:48:45.849877 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9wv92\" (UniqueName: \"kubernetes.io/projected/f9e1f6c1-39e5-4d7a-ac6a-da53c6fe143b-kube-api-access-9wv92\") pod \"ovn-controller-7swbz\" (UID: \"f9e1f6c1-39e5-4d7a-ac6a-da53c6fe143b\") " pod="openstack/ovn-controller-7swbz" Jan 27 14:48:45 crc kubenswrapper[4698]: I0127 14:48:45.849907 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f9e1f6c1-39e5-4d7a-ac6a-da53c6fe143b-scripts\") pod \"ovn-controller-7swbz\" (UID: \"f9e1f6c1-39e5-4d7a-ac6a-da53c6fe143b\") " pod="openstack/ovn-controller-7swbz" Jan 27 14:48:45 crc kubenswrapper[4698]: I0127 14:48:45.849980 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ca935cab-9c0b-4b5c-9754-f5bafb3a0037-scripts\") pod \"ovn-controller-ovs-cn5z6\" (UID: \"ca935cab-9c0b-4b5c-9754-f5bafb3a0037\") " pod="openstack/ovn-controller-ovs-cn5z6" Jan 27 14:48:45 crc kubenswrapper[4698]: I0127 14:48:45.850034 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/ca935cab-9c0b-4b5c-9754-f5bafb3a0037-var-lib\") pod \"ovn-controller-ovs-cn5z6\" (UID: \"ca935cab-9c0b-4b5c-9754-f5bafb3a0037\") " pod="openstack/ovn-controller-ovs-cn5z6" Jan 27 14:48:45 crc kubenswrapper[4698]: I0127 14:48:45.850112 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/f9e1f6c1-39e5-4d7a-ac6a-da53c6fe143b-var-log-ovn\") pod \"ovn-controller-7swbz\" (UID: \"f9e1f6c1-39e5-4d7a-ac6a-da53c6fe143b\") " pod="openstack/ovn-controller-7swbz" Jan 27 14:48:45 crc kubenswrapper[4698]: I0127 14:48:45.951941 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/f9e1f6c1-39e5-4d7a-ac6a-da53c6fe143b-var-log-ovn\") pod \"ovn-controller-7swbz\" (UID: \"f9e1f6c1-39e5-4d7a-ac6a-da53c6fe143b\") " pod="openstack/ovn-controller-7swbz" Jan 27 14:48:45 crc kubenswrapper[4698]: I0127 14:48:45.952004 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xmkhp\" (UniqueName: \"kubernetes.io/projected/ca935cab-9c0b-4b5c-9754-f5bafb3a0037-kube-api-access-xmkhp\") pod \"ovn-controller-ovs-cn5z6\" (UID: \"ca935cab-9c0b-4b5c-9754-f5bafb3a0037\") " pod="openstack/ovn-controller-ovs-cn5z6" Jan 27 14:48:45 crc kubenswrapper[4698]: I0127 14:48:45.952044 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/ca935cab-9c0b-4b5c-9754-f5bafb3a0037-etc-ovs\") pod \"ovn-controller-ovs-cn5z6\" (UID: \"ca935cab-9c0b-4b5c-9754-f5bafb3a0037\") " pod="openstack/ovn-controller-ovs-cn5z6" Jan 27 14:48:45 crc kubenswrapper[4698]: I0127 14:48:45.952068 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f9e1f6c1-39e5-4d7a-ac6a-da53c6fe143b-combined-ca-bundle\") pod \"ovn-controller-7swbz\" (UID: \"f9e1f6c1-39e5-4d7a-ac6a-da53c6fe143b\") " pod="openstack/ovn-controller-7swbz" Jan 27 14:48:45 crc kubenswrapper[4698]: I0127 14:48:45.952096 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/f9e1f6c1-39e5-4d7a-ac6a-da53c6fe143b-ovn-controller-tls-certs\") pod \"ovn-controller-7swbz\" (UID: \"f9e1f6c1-39e5-4d7a-ac6a-da53c6fe143b\") " pod="openstack/ovn-controller-7swbz" Jan 27 14:48:45 crc kubenswrapper[4698]: I0127 14:48:45.952114 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/f9e1f6c1-39e5-4d7a-ac6a-da53c6fe143b-var-run\") pod \"ovn-controller-7swbz\" (UID: \"f9e1f6c1-39e5-4d7a-ac6a-da53c6fe143b\") " pod="openstack/ovn-controller-7swbz" Jan 27 14:48:45 crc kubenswrapper[4698]: I0127 14:48:45.952136 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/ca935cab-9c0b-4b5c-9754-f5bafb3a0037-var-run\") pod \"ovn-controller-ovs-cn5z6\" (UID: \"ca935cab-9c0b-4b5c-9754-f5bafb3a0037\") " pod="openstack/ovn-controller-ovs-cn5z6" Jan 27 14:48:45 crc kubenswrapper[4698]: I0127 14:48:45.952159 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/f9e1f6c1-39e5-4d7a-ac6a-da53c6fe143b-var-run-ovn\") pod \"ovn-controller-7swbz\" (UID: \"f9e1f6c1-39e5-4d7a-ac6a-da53c6fe143b\") " pod="openstack/ovn-controller-7swbz" Jan 27 14:48:45 crc kubenswrapper[4698]: I0127 14:48:45.952180 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/ca935cab-9c0b-4b5c-9754-f5bafb3a0037-var-log\") pod \"ovn-controller-ovs-cn5z6\" (UID: \"ca935cab-9c0b-4b5c-9754-f5bafb3a0037\") " pod="openstack/ovn-controller-ovs-cn5z6" Jan 27 14:48:45 crc kubenswrapper[4698]: I0127 14:48:45.952197 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9wv92\" (UniqueName: \"kubernetes.io/projected/f9e1f6c1-39e5-4d7a-ac6a-da53c6fe143b-kube-api-access-9wv92\") pod \"ovn-controller-7swbz\" (UID: 
\"f9e1f6c1-39e5-4d7a-ac6a-da53c6fe143b\") " pod="openstack/ovn-controller-7swbz" Jan 27 14:48:45 crc kubenswrapper[4698]: I0127 14:48:45.952215 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f9e1f6c1-39e5-4d7a-ac6a-da53c6fe143b-scripts\") pod \"ovn-controller-7swbz\" (UID: \"f9e1f6c1-39e5-4d7a-ac6a-da53c6fe143b\") " pod="openstack/ovn-controller-7swbz" Jan 27 14:48:45 crc kubenswrapper[4698]: I0127 14:48:45.952241 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ca935cab-9c0b-4b5c-9754-f5bafb3a0037-scripts\") pod \"ovn-controller-ovs-cn5z6\" (UID: \"ca935cab-9c0b-4b5c-9754-f5bafb3a0037\") " pod="openstack/ovn-controller-ovs-cn5z6" Jan 27 14:48:45 crc kubenswrapper[4698]: I0127 14:48:45.952278 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/ca935cab-9c0b-4b5c-9754-f5bafb3a0037-var-lib\") pod \"ovn-controller-ovs-cn5z6\" (UID: \"ca935cab-9c0b-4b5c-9754-f5bafb3a0037\") " pod="openstack/ovn-controller-ovs-cn5z6" Jan 27 14:48:45 crc kubenswrapper[4698]: I0127 14:48:45.953680 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/ca935cab-9c0b-4b5c-9754-f5bafb3a0037-etc-ovs\") pod \"ovn-controller-ovs-cn5z6\" (UID: \"ca935cab-9c0b-4b5c-9754-f5bafb3a0037\") " pod="openstack/ovn-controller-ovs-cn5z6" Jan 27 14:48:45 crc kubenswrapper[4698]: I0127 14:48:45.954039 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/f9e1f6c1-39e5-4d7a-ac6a-da53c6fe143b-var-log-ovn\") pod \"ovn-controller-7swbz\" (UID: \"f9e1f6c1-39e5-4d7a-ac6a-da53c6fe143b\") " pod="openstack/ovn-controller-7swbz" Jan 27 14:48:45 crc kubenswrapper[4698]: I0127 14:48:45.954145 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/ca935cab-9c0b-4b5c-9754-f5bafb3a0037-var-run\") pod \"ovn-controller-ovs-cn5z6\" (UID: \"ca935cab-9c0b-4b5c-9754-f5bafb3a0037\") " pod="openstack/ovn-controller-ovs-cn5z6" Jan 27 14:48:45 crc kubenswrapper[4698]: I0127 14:48:45.954176 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/ca935cab-9c0b-4b5c-9754-f5bafb3a0037-var-lib\") pod \"ovn-controller-ovs-cn5z6\" (UID: \"ca935cab-9c0b-4b5c-9754-f5bafb3a0037\") " pod="openstack/ovn-controller-ovs-cn5z6" Jan 27 14:48:45 crc kubenswrapper[4698]: I0127 14:48:45.954161 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/f9e1f6c1-39e5-4d7a-ac6a-da53c6fe143b-var-run\") pod \"ovn-controller-7swbz\" (UID: \"f9e1f6c1-39e5-4d7a-ac6a-da53c6fe143b\") " pod="openstack/ovn-controller-7swbz" Jan 27 14:48:45 crc kubenswrapper[4698]: I0127 14:48:45.954279 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/f9e1f6c1-39e5-4d7a-ac6a-da53c6fe143b-var-run-ovn\") pod \"ovn-controller-7swbz\" (UID: \"f9e1f6c1-39e5-4d7a-ac6a-da53c6fe143b\") " pod="openstack/ovn-controller-7swbz" Jan 27 14:48:45 crc kubenswrapper[4698]: I0127 14:48:45.954309 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/ca935cab-9c0b-4b5c-9754-f5bafb3a0037-var-log\") 
pod \"ovn-controller-ovs-cn5z6\" (UID: \"ca935cab-9c0b-4b5c-9754-f5bafb3a0037\") " pod="openstack/ovn-controller-ovs-cn5z6" Jan 27 14:48:45 crc kubenswrapper[4698]: I0127 14:48:45.955076 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ca935cab-9c0b-4b5c-9754-f5bafb3a0037-scripts\") pod \"ovn-controller-ovs-cn5z6\" (UID: \"ca935cab-9c0b-4b5c-9754-f5bafb3a0037\") " pod="openstack/ovn-controller-ovs-cn5z6" Jan 27 14:48:45 crc kubenswrapper[4698]: I0127 14:48:45.957083 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/f9e1f6c1-39e5-4d7a-ac6a-da53c6fe143b-ovn-controller-tls-certs\") pod \"ovn-controller-7swbz\" (UID: \"f9e1f6c1-39e5-4d7a-ac6a-da53c6fe143b\") " pod="openstack/ovn-controller-7swbz" Jan 27 14:48:45 crc kubenswrapper[4698]: I0127 14:48:45.957501 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f9e1f6c1-39e5-4d7a-ac6a-da53c6fe143b-combined-ca-bundle\") pod \"ovn-controller-7swbz\" (UID: \"f9e1f6c1-39e5-4d7a-ac6a-da53c6fe143b\") " pod="openstack/ovn-controller-7swbz" Jan 27 14:48:45 crc kubenswrapper[4698]: I0127 14:48:45.968353 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xmkhp\" (UniqueName: \"kubernetes.io/projected/ca935cab-9c0b-4b5c-9754-f5bafb3a0037-kube-api-access-xmkhp\") pod \"ovn-controller-ovs-cn5z6\" (UID: \"ca935cab-9c0b-4b5c-9754-f5bafb3a0037\") " pod="openstack/ovn-controller-ovs-cn5z6" Jan 27 14:48:45 crc kubenswrapper[4698]: I0127 14:48:45.972375 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9wv92\" (UniqueName: \"kubernetes.io/projected/f9e1f6c1-39e5-4d7a-ac6a-da53c6fe143b-kube-api-access-9wv92\") pod \"ovn-controller-7swbz\" (UID: \"f9e1f6c1-39e5-4d7a-ac6a-da53c6fe143b\") " pod="openstack/ovn-controller-7swbz" Jan 27 14:48:46 crc kubenswrapper[4698]: I0127 14:48:46.018479 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-cn5z6" Jan 27 14:48:46 crc kubenswrapper[4698]: I0127 14:48:46.330969 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f9e1f6c1-39e5-4d7a-ac6a-da53c6fe143b-scripts\") pod \"ovn-controller-7swbz\" (UID: \"f9e1f6c1-39e5-4d7a-ac6a-da53c6fe143b\") " pod="openstack/ovn-controller-7swbz" Jan 27 14:48:46 crc kubenswrapper[4698]: I0127 14:48:46.334036 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/03d06c18-e82f-417e-b3bd-6365030bee53-config\") pod \"prometheus-metric-storage-0\" (UID: \"03d06c18-e82f-417e-b3bd-6365030bee53\") " pod="openstack/prometheus-metric-storage-0" Jan 27 14:48:46 crc kubenswrapper[4698]: I0127 14:48:46.429505 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Jan 27 14:48:46 crc kubenswrapper[4698]: I0127 14:48:46.605045 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-7swbz" Jan 27 14:48:49 crc kubenswrapper[4698]: I0127 14:48:49.415007 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-0"] Jan 27 14:48:49 crc kubenswrapper[4698]: I0127 14:48:49.419714 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-nb-0" Jan 27 14:48:49 crc kubenswrapper[4698]: I0127 14:48:49.422741 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-scripts" Jan 27 14:48:49 crc kubenswrapper[4698]: I0127 14:48:49.422772 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-nb-ovndbs" Jan 27 14:48:49 crc kubenswrapper[4698]: I0127 14:48:49.422916 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-nb-dockercfg-rgl99" Jan 27 14:48:49 crc kubenswrapper[4698]: I0127 14:48:49.422949 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-config" Jan 27 14:48:49 crc kubenswrapper[4698]: I0127 14:48:49.422999 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovn-metrics" Jan 27 14:48:49 crc kubenswrapper[4698]: I0127 14:48:49.432751 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Jan 27 14:48:49 crc kubenswrapper[4698]: I0127 14:48:49.615380 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/35c385ce-4a6e-4a66-b607-89f47e40b6fc-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"35c385ce-4a6e-4a66-b607-89f47e40b6fc\") " pod="openstack/ovsdbserver-nb-0" Jan 27 14:48:49 crc kubenswrapper[4698]: I0127 14:48:49.615467 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/35c385ce-4a6e-4a66-b607-89f47e40b6fc-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"35c385ce-4a6e-4a66-b607-89f47e40b6fc\") " pod="openstack/ovsdbserver-nb-0" Jan 27 14:48:49 crc kubenswrapper[4698]: I0127 14:48:49.615489 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ntj9l\" (UniqueName: \"kubernetes.io/projected/35c385ce-4a6e-4a66-b607-89f47e40b6fc-kube-api-access-ntj9l\") pod \"ovsdbserver-nb-0\" (UID: \"35c385ce-4a6e-4a66-b607-89f47e40b6fc\") " pod="openstack/ovsdbserver-nb-0" Jan 27 14:48:49 crc kubenswrapper[4698]: I0127 14:48:49.615507 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"ovsdbserver-nb-0\" (UID: \"35c385ce-4a6e-4a66-b607-89f47e40b6fc\") " pod="openstack/ovsdbserver-nb-0" Jan 27 14:48:49 crc kubenswrapper[4698]: I0127 14:48:49.615709 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/35c385ce-4a6e-4a66-b607-89f47e40b6fc-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"35c385ce-4a6e-4a66-b607-89f47e40b6fc\") " pod="openstack/ovsdbserver-nb-0" Jan 27 14:48:49 crc kubenswrapper[4698]: I0127 14:48:49.615946 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/35c385ce-4a6e-4a66-b607-89f47e40b6fc-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"35c385ce-4a6e-4a66-b607-89f47e40b6fc\") " pod="openstack/ovsdbserver-nb-0" Jan 27 14:48:49 crc kubenswrapper[4698]: I0127 14:48:49.616060 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/35c385ce-4a6e-4a66-b607-89f47e40b6fc-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"35c385ce-4a6e-4a66-b607-89f47e40b6fc\") " pod="openstack/ovsdbserver-nb-0" Jan 27 14:48:49 crc kubenswrapper[4698]: I0127 14:48:49.616133 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/35c385ce-4a6e-4a66-b607-89f47e40b6fc-config\") pod \"ovsdbserver-nb-0\" (UID: \"35c385ce-4a6e-4a66-b607-89f47e40b6fc\") " pod="openstack/ovsdbserver-nb-0" Jan 27 14:48:49 crc kubenswrapper[4698]: I0127 14:48:49.627484 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-0"] Jan 27 14:48:49 crc kubenswrapper[4698]: I0127 14:48:49.629011 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0" Jan 27 14:48:49 crc kubenswrapper[4698]: I0127 14:48:49.632004 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-sb-dockercfg-7htxh" Jan 27 14:48:49 crc kubenswrapper[4698]: I0127 14:48:49.635816 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-config" Jan 27 14:48:49 crc kubenswrapper[4698]: I0127 14:48:49.636273 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-scripts" Jan 27 14:48:49 crc kubenswrapper[4698]: I0127 14:48:49.636413 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-sb-ovndbs" Jan 27 14:48:49 crc kubenswrapper[4698]: I0127 14:48:49.650586 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Jan 27 14:48:49 crc kubenswrapper[4698]: I0127 14:48:49.717558 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/35c385ce-4a6e-4a66-b607-89f47e40b6fc-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"35c385ce-4a6e-4a66-b607-89f47e40b6fc\") " pod="openstack/ovsdbserver-nb-0" Jan 27 14:48:49 crc kubenswrapper[4698]: I0127 14:48:49.718111 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/35c385ce-4a6e-4a66-b607-89f47e40b6fc-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"35c385ce-4a6e-4a66-b607-89f47e40b6fc\") " pod="openstack/ovsdbserver-nb-0" Jan 27 14:48:49 crc kubenswrapper[4698]: I0127 14:48:49.718207 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/35c385ce-4a6e-4a66-b607-89f47e40b6fc-config\") pod \"ovsdbserver-nb-0\" (UID: \"35c385ce-4a6e-4a66-b607-89f47e40b6fc\") " pod="openstack/ovsdbserver-nb-0" Jan 27 14:48:49 crc kubenswrapper[4698]: I0127 14:48:49.718302 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/35c385ce-4a6e-4a66-b607-89f47e40b6fc-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"35c385ce-4a6e-4a66-b607-89f47e40b6fc\") " pod="openstack/ovsdbserver-nb-0" Jan 27 14:48:49 crc kubenswrapper[4698]: I0127 14:48:49.718721 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/35c385ce-4a6e-4a66-b607-89f47e40b6fc-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: 
\"35c385ce-4a6e-4a66-b607-89f47e40b6fc\") " pod="openstack/ovsdbserver-nb-0" Jan 27 14:48:49 crc kubenswrapper[4698]: I0127 14:48:49.718854 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ntj9l\" (UniqueName: \"kubernetes.io/projected/35c385ce-4a6e-4a66-b607-89f47e40b6fc-kube-api-access-ntj9l\") pod \"ovsdbserver-nb-0\" (UID: \"35c385ce-4a6e-4a66-b607-89f47e40b6fc\") " pod="openstack/ovsdbserver-nb-0" Jan 27 14:48:49 crc kubenswrapper[4698]: I0127 14:48:49.718970 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"ovsdbserver-nb-0\" (UID: \"35c385ce-4a6e-4a66-b607-89f47e40b6fc\") " pod="openstack/ovsdbserver-nb-0" Jan 27 14:48:49 crc kubenswrapper[4698]: I0127 14:48:49.719097 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/35c385ce-4a6e-4a66-b607-89f47e40b6fc-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"35c385ce-4a6e-4a66-b607-89f47e40b6fc\") " pod="openstack/ovsdbserver-nb-0" Jan 27 14:48:49 crc kubenswrapper[4698]: I0127 14:48:49.719195 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/35c385ce-4a6e-4a66-b607-89f47e40b6fc-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"35c385ce-4a6e-4a66-b607-89f47e40b6fc\") " pod="openstack/ovsdbserver-nb-0" Jan 27 14:48:49 crc kubenswrapper[4698]: I0127 14:48:49.719289 4698 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"ovsdbserver-nb-0\" (UID: \"35c385ce-4a6e-4a66-b607-89f47e40b6fc\") device mount path \"/mnt/openstack/pv11\"" pod="openstack/ovsdbserver-nb-0" Jan 27 14:48:49 crc kubenswrapper[4698]: I0127 14:48:49.719888 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/35c385ce-4a6e-4a66-b607-89f47e40b6fc-config\") pod \"ovsdbserver-nb-0\" (UID: \"35c385ce-4a6e-4a66-b607-89f47e40b6fc\") " pod="openstack/ovsdbserver-nb-0" Jan 27 14:48:49 crc kubenswrapper[4698]: I0127 14:48:49.719977 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/35c385ce-4a6e-4a66-b607-89f47e40b6fc-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"35c385ce-4a6e-4a66-b607-89f47e40b6fc\") " pod="openstack/ovsdbserver-nb-0" Jan 27 14:48:49 crc kubenswrapper[4698]: I0127 14:48:49.723618 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/35c385ce-4a6e-4a66-b607-89f47e40b6fc-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"35c385ce-4a6e-4a66-b607-89f47e40b6fc\") " pod="openstack/ovsdbserver-nb-0" Jan 27 14:48:49 crc kubenswrapper[4698]: I0127 14:48:49.725128 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/35c385ce-4a6e-4a66-b607-89f47e40b6fc-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"35c385ce-4a6e-4a66-b607-89f47e40b6fc\") " pod="openstack/ovsdbserver-nb-0" Jan 27 14:48:49 crc kubenswrapper[4698]: I0127 14:48:49.726270 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/35c385ce-4a6e-4a66-b607-89f47e40b6fc-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"35c385ce-4a6e-4a66-b607-89f47e40b6fc\") " pod="openstack/ovsdbserver-nb-0" Jan 27 14:48:49 crc kubenswrapper[4698]: I0127 14:48:49.739934 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ntj9l\" (UniqueName: \"kubernetes.io/projected/35c385ce-4a6e-4a66-b607-89f47e40b6fc-kube-api-access-ntj9l\") pod \"ovsdbserver-nb-0\" (UID: \"35c385ce-4a6e-4a66-b607-89f47e40b6fc\") " pod="openstack/ovsdbserver-nb-0" Jan 27 14:48:49 crc kubenswrapper[4698]: I0127 14:48:49.764074 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"ovsdbserver-nb-0\" (UID: \"35c385ce-4a6e-4a66-b607-89f47e40b6fc\") " pod="openstack/ovsdbserver-nb-0" Jan 27 14:48:49 crc kubenswrapper[4698]: I0127 14:48:49.780035 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0" Jan 27 14:48:49 crc kubenswrapper[4698]: I0127 14:48:49.833220 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/74651d0e-02c7-4067-9fc1-eff4c90d33ac-config\") pod \"ovsdbserver-sb-0\" (UID: \"74651d0e-02c7-4067-9fc1-eff4c90d33ac\") " pod="openstack/ovsdbserver-sb-0" Jan 27 14:48:49 crc kubenswrapper[4698]: I0127 14:48:49.833342 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"ovsdbserver-sb-0\" (UID: \"74651d0e-02c7-4067-9fc1-eff4c90d33ac\") " pod="openstack/ovsdbserver-sb-0" Jan 27 14:48:49 crc kubenswrapper[4698]: I0127 14:48:49.833397 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/74651d0e-02c7-4067-9fc1-eff4c90d33ac-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"74651d0e-02c7-4067-9fc1-eff4c90d33ac\") " pod="openstack/ovsdbserver-sb-0" Jan 27 14:48:49 crc kubenswrapper[4698]: I0127 14:48:49.833424 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/74651d0e-02c7-4067-9fc1-eff4c90d33ac-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"74651d0e-02c7-4067-9fc1-eff4c90d33ac\") " pod="openstack/ovsdbserver-sb-0" Jan 27 14:48:49 crc kubenswrapper[4698]: I0127 14:48:49.833453 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/74651d0e-02c7-4067-9fc1-eff4c90d33ac-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"74651d0e-02c7-4067-9fc1-eff4c90d33ac\") " pod="openstack/ovsdbserver-sb-0" Jan 27 14:48:49 crc kubenswrapper[4698]: I0127 14:48:49.833485 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/74651d0e-02c7-4067-9fc1-eff4c90d33ac-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"74651d0e-02c7-4067-9fc1-eff4c90d33ac\") " pod="openstack/ovsdbserver-sb-0" Jan 27 14:48:49 crc kubenswrapper[4698]: I0127 14:48:49.833529 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: 
\"kubernetes.io/empty-dir/74651d0e-02c7-4067-9fc1-eff4c90d33ac-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"74651d0e-02c7-4067-9fc1-eff4c90d33ac\") " pod="openstack/ovsdbserver-sb-0" Jan 27 14:48:49 crc kubenswrapper[4698]: I0127 14:48:49.833567 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vrm4f\" (UniqueName: \"kubernetes.io/projected/74651d0e-02c7-4067-9fc1-eff4c90d33ac-kube-api-access-vrm4f\") pod \"ovsdbserver-sb-0\" (UID: \"74651d0e-02c7-4067-9fc1-eff4c90d33ac\") " pod="openstack/ovsdbserver-sb-0" Jan 27 14:48:49 crc kubenswrapper[4698]: I0127 14:48:49.934454 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vrm4f\" (UniqueName: \"kubernetes.io/projected/74651d0e-02c7-4067-9fc1-eff4c90d33ac-kube-api-access-vrm4f\") pod \"ovsdbserver-sb-0\" (UID: \"74651d0e-02c7-4067-9fc1-eff4c90d33ac\") " pod="openstack/ovsdbserver-sb-0" Jan 27 14:48:49 crc kubenswrapper[4698]: I0127 14:48:49.934520 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/74651d0e-02c7-4067-9fc1-eff4c90d33ac-config\") pod \"ovsdbserver-sb-0\" (UID: \"74651d0e-02c7-4067-9fc1-eff4c90d33ac\") " pod="openstack/ovsdbserver-sb-0" Jan 27 14:48:49 crc kubenswrapper[4698]: I0127 14:48:49.934563 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"ovsdbserver-sb-0\" (UID: \"74651d0e-02c7-4067-9fc1-eff4c90d33ac\") " pod="openstack/ovsdbserver-sb-0" Jan 27 14:48:49 crc kubenswrapper[4698]: I0127 14:48:49.934616 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/74651d0e-02c7-4067-9fc1-eff4c90d33ac-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"74651d0e-02c7-4067-9fc1-eff4c90d33ac\") " pod="openstack/ovsdbserver-sb-0" Jan 27 14:48:49 crc kubenswrapper[4698]: I0127 14:48:49.934743 4698 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"ovsdbserver-sb-0\" (UID: \"74651d0e-02c7-4067-9fc1-eff4c90d33ac\") device mount path \"/mnt/openstack/pv05\"" pod="openstack/ovsdbserver-sb-0" Jan 27 14:48:49 crc kubenswrapper[4698]: I0127 14:48:49.934792 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/74651d0e-02c7-4067-9fc1-eff4c90d33ac-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"74651d0e-02c7-4067-9fc1-eff4c90d33ac\") " pod="openstack/ovsdbserver-sb-0" Jan 27 14:48:49 crc kubenswrapper[4698]: I0127 14:48:49.934825 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/74651d0e-02c7-4067-9fc1-eff4c90d33ac-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"74651d0e-02c7-4067-9fc1-eff4c90d33ac\") " pod="openstack/ovsdbserver-sb-0" Jan 27 14:48:49 crc kubenswrapper[4698]: I0127 14:48:49.935682 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/74651d0e-02c7-4067-9fc1-eff4c90d33ac-config\") pod \"ovsdbserver-sb-0\" (UID: \"74651d0e-02c7-4067-9fc1-eff4c90d33ac\") " pod="openstack/ovsdbserver-sb-0" Jan 27 14:48:49 crc kubenswrapper[4698]: I0127 14:48:49.936105 4698 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/74651d0e-02c7-4067-9fc1-eff4c90d33ac-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"74651d0e-02c7-4067-9fc1-eff4c90d33ac\") " pod="openstack/ovsdbserver-sb-0" Jan 27 14:48:49 crc kubenswrapper[4698]: I0127 14:48:49.936214 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/74651d0e-02c7-4067-9fc1-eff4c90d33ac-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"74651d0e-02c7-4067-9fc1-eff4c90d33ac\") " pod="openstack/ovsdbserver-sb-0" Jan 27 14:48:49 crc kubenswrapper[4698]: I0127 14:48:49.936314 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/74651d0e-02c7-4067-9fc1-eff4c90d33ac-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"74651d0e-02c7-4067-9fc1-eff4c90d33ac\") " pod="openstack/ovsdbserver-sb-0" Jan 27 14:48:49 crc kubenswrapper[4698]: I0127 14:48:49.936865 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/74651d0e-02c7-4067-9fc1-eff4c90d33ac-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"74651d0e-02c7-4067-9fc1-eff4c90d33ac\") " pod="openstack/ovsdbserver-sb-0" Jan 27 14:48:49 crc kubenswrapper[4698]: I0127 14:48:49.938536 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/74651d0e-02c7-4067-9fc1-eff4c90d33ac-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"74651d0e-02c7-4067-9fc1-eff4c90d33ac\") " pod="openstack/ovsdbserver-sb-0" Jan 27 14:48:49 crc kubenswrapper[4698]: I0127 14:48:49.938713 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/74651d0e-02c7-4067-9fc1-eff4c90d33ac-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"74651d0e-02c7-4067-9fc1-eff4c90d33ac\") " pod="openstack/ovsdbserver-sb-0" Jan 27 14:48:49 crc kubenswrapper[4698]: I0127 14:48:49.943434 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/74651d0e-02c7-4067-9fc1-eff4c90d33ac-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"74651d0e-02c7-4067-9fc1-eff4c90d33ac\") " pod="openstack/ovsdbserver-sb-0" Jan 27 14:48:49 crc kubenswrapper[4698]: I0127 14:48:49.953564 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vrm4f\" (UniqueName: \"kubernetes.io/projected/74651d0e-02c7-4067-9fc1-eff4c90d33ac-kube-api-access-vrm4f\") pod \"ovsdbserver-sb-0\" (UID: \"74651d0e-02c7-4067-9fc1-eff4c90d33ac\") " pod="openstack/ovsdbserver-sb-0" Jan 27 14:48:49 crc kubenswrapper[4698]: I0127 14:48:49.960379 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"ovsdbserver-sb-0\" (UID: \"74651d0e-02c7-4067-9fc1-eff4c90d33ac\") " pod="openstack/ovsdbserver-sb-0" Jan 27 14:48:50 crc kubenswrapper[4698]: I0127 14:48:50.259541 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-sb-0" Jan 27 14:48:52 crc kubenswrapper[4698]: I0127 14:48:52.478867 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-59c95fdd89-9cn6z"] Jan 27 14:48:57 crc kubenswrapper[4698]: I0127 14:48:57.452191 4698 patch_prober.go:28] interesting pod/machine-config-daemon-ndrd6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 14:48:57 crc kubenswrapper[4698]: I0127 14:48:57.453621 4698 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ndrd6" podUID="3e403fc5-7005-474c-8c75-b7906b481677" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 14:48:57 crc kubenswrapper[4698]: I0127 14:48:57.453742 4698 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-ndrd6" Jan 27 14:48:57 crc kubenswrapper[4698]: I0127 14:48:57.454627 4698 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"12b905ab61ac76551a3e2b33bba7698de71a27292af8be5d463cd0b69aa90d97"} pod="openshift-machine-config-operator/machine-config-daemon-ndrd6" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 27 14:48:57 crc kubenswrapper[4698]: I0127 14:48:57.454794 4698 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-ndrd6" podUID="3e403fc5-7005-474c-8c75-b7906b481677" containerName="machine-config-daemon" containerID="cri-o://12b905ab61ac76551a3e2b33bba7698de71a27292af8be5d463cd0b69aa90d97" gracePeriod=600 Jan 27 14:48:58 crc kubenswrapper[4698]: I0127 14:48:58.534864 4698 generic.go:334] "Generic (PLEG): container finished" podID="3e403fc5-7005-474c-8c75-b7906b481677" containerID="12b905ab61ac76551a3e2b33bba7698de71a27292af8be5d463cd0b69aa90d97" exitCode=0 Jan 27 14:48:58 crc kubenswrapper[4698]: I0127 14:48:58.534922 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ndrd6" event={"ID":"3e403fc5-7005-474c-8c75-b7906b481677","Type":"ContainerDied","Data":"12b905ab61ac76551a3e2b33bba7698de71a27292af8be5d463cd0b69aa90d97"} Jan 27 14:48:58 crc kubenswrapper[4698]: I0127 14:48:58.534981 4698 scope.go:117] "RemoveContainer" containerID="ebfd43abe434a69d79a515882ba43f2e73b9ebc9b44891f2eec4f138ba47c9b0" Jan 27 14:49:00 crc kubenswrapper[4698]: I0127 14:49:00.296555 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-7swbz"] Jan 27 14:49:00 crc kubenswrapper[4698]: I0127 14:49:00.389249 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-64886f97d9-f2p4s"] Jan 27 14:49:00 crc kubenswrapper[4698]: I0127 14:49:00.397976 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Jan 27 14:49:00 crc kubenswrapper[4698]: I0127 14:49:00.404807 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-notifications-server-0"] Jan 27 14:49:00 crc kubenswrapper[4698]: I0127 14:49:00.509035 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/openstack-galera-0"] Jan 27 14:49:00 crc kubenswrapper[4698]: I0127 14:49:00.550468 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-59c95fdd89-9cn6z" event={"ID":"2def0e4c-a384-453a-b67c-28004f50c9c6","Type":"ContainerStarted","Data":"a59b02432d36996e881f575178661d93cacba1696ef230402ca218c731dde455"} Jan 27 14:49:00 crc kubenswrapper[4698]: E0127 14:49:00.659705 4698 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.111:5001/podified-master-centos10/openstack-neutron-server:watcher_latest" Jan 27 14:49:00 crc kubenswrapper[4698]: E0127 14:49:00.659756 4698 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.111:5001/podified-master-centos10/openstack-neutron-server:watcher_latest" Jan 27 14:49:00 crc kubenswrapper[4698]: E0127 14:49:00.659873 4698 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:38.102.83.111:5001/podified-master-centos10/openstack-neutron-server:watcher_latest,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nffh5bdhf4h5f8h79h55h77h58fh56dh7bh6fh578hbch55dh68h56bhd9h65dh57ch658hc9h566h666h688h58h65dh684h5d7h6ch575h5d6h88q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9n9gq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-6c598958d5-bn4jw_openstack(91906e64-eedf-41fc-9ef5-21c06b269c3d): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 27 14:49:00 crc kubenswrapper[4698]: E0127 14:49:00.663206 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-6c598958d5-bn4jw" podUID="91906e64-eedf-41fc-9ef5-21c06b269c3d" Jan 27 14:49:00 crc 
kubenswrapper[4698]: W0127 14:49:00.683926 4698 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda30e344d_b5c4_40f6_8bdb_7af9c1df7449.slice/crio-daa95e282ff1a00d3da13451c45c4daa4645c384c53522f6b3c5b7ca8ad3a3e2 WatchSource:0}: Error finding container daa95e282ff1a00d3da13451c45c4daa4645c384c53522f6b3c5b7ca8ad3a3e2: Status 404 returned error can't find the container with id daa95e282ff1a00d3da13451c45c4daa4645c384c53522f6b3c5b7ca8ad3a3e2 Jan 27 14:49:00 crc kubenswrapper[4698]: E0127 14:49:00.726682 4698 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.111:5001/podified-master-centos10/openstack-neutron-server:watcher_latest" Jan 27 14:49:00 crc kubenswrapper[4698]: E0127 14:49:00.726727 4698 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.111:5001/podified-master-centos10/openstack-neutron-server:watcher_latest" Jan 27 14:49:00 crc kubenswrapper[4698]: E0127 14:49:00.726849 4698 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:38.102.83.111:5001/podified-master-centos10/openstack-neutron-server:watcher_latest,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:ndfhb5h667h568h584h5f9h58dh565h664h587h597h577h64bh5c4h66fh647hbdh68ch5c5h68dh686h5f7h64hd7hc6h55fh57bh98h57fh87h5fh57fq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qtgdx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-7f9cbd7fdf-b8wg6_openstack(b531c374-5d26-4959-adc6-f03b56783b0c): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 27 14:49:00 crc kubenswrapper[4698]: 
E0127 14:49:00.728475 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-7f9cbd7fdf-b8wg6" podUID="b531c374-5d26-4959-adc6-f03b56783b0c" Jan 27 14:49:01 crc kubenswrapper[4698]: I0127 14:49:01.201738 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 27 14:49:01 crc kubenswrapper[4698]: I0127 14:49:01.226495 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Jan 27 14:49:01 crc kubenswrapper[4698]: I0127 14:49:01.232648 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Jan 27 14:49:01 crc kubenswrapper[4698]: I0127 14:49:01.562286 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"9374a29e-348e-43ec-9321-b0a13aeb6c4b","Type":"ContainerStarted","Data":"9c1fa503cf943e256a6ffc870ead6f0086eb476b334765ea926707540323af34"} Jan 27 14:49:01 crc kubenswrapper[4698]: I0127 14:49:01.565247 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"03d06c18-e82f-417e-b3bd-6365030bee53","Type":"ContainerStarted","Data":"4d2d4a587db2562d575f706211d8aad861ccf9d6087e984140b9ab1f919654f8"} Jan 27 14:49:01 crc kubenswrapper[4698]: I0127 14:49:01.573891 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ndrd6" event={"ID":"3e403fc5-7005-474c-8c75-b7906b481677","Type":"ContainerStarted","Data":"00b91e8534deca64edb3a0ddf67d35e5d274bc19ba7571ee5f99b20522a916c8"} Jan 27 14:49:01 crc kubenswrapper[4698]: I0127 14:49:01.577949 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-notifications-server-0" event={"ID":"5d6f607c-3a31-4135-9eb4-3193e722d112","Type":"ContainerStarted","Data":"58edd98520896998d009d62e4259e72225743b3199cb5e081d3efde6a9583af4"} Jan 27 14:49:01 crc kubenswrapper[4698]: I0127 14:49:01.600991 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"a30e344d-b5c4-40f6-8bdb-7af9c1df7449","Type":"ContainerStarted","Data":"daa95e282ff1a00d3da13451c45c4daa4645c384c53522f6b3c5b7ca8ad3a3e2"} Jan 27 14:49:01 crc kubenswrapper[4698]: I0127 14:49:01.613462 4698 generic.go:334] "Generic (PLEG): container finished" podID="2def0e4c-a384-453a-b67c-28004f50c9c6" containerID="de370dfb3496d56e013764f63c01d064574be83226daec4f6b036d8409cf7448" exitCode=0 Jan 27 14:49:01 crc kubenswrapper[4698]: I0127 14:49:01.613626 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-59c95fdd89-9cn6z" event={"ID":"2def0e4c-a384-453a-b67c-28004f50c9c6","Type":"ContainerDied","Data":"de370dfb3496d56e013764f63c01d064574be83226daec4f6b036d8409cf7448"} Jan 27 14:49:01 crc kubenswrapper[4698]: I0127 14:49:01.627443 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"5d70f810-b592-4abf-b587-4ff75b743944","Type":"ContainerStarted","Data":"0d24411367c4e28674967153a617fc7d0aa1187a94fbb1fe871c3cea7df3d590"} Jan 27 14:49:01 crc kubenswrapper[4698]: I0127 14:49:01.633797 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"589d54fa-234d-41b2-b030-91101d03c978","Type":"ContainerStarted","Data":"81aa03162e530f067261fd44e76918f69cb8852bb2726e649d176e910924ac72"} Jan 27 14:49:01 crc 
kubenswrapper[4698]: I0127 14:49:01.640018 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-7swbz" event={"ID":"f9e1f6c1-39e5-4d7a-ac6a-da53c6fe143b","Type":"ContainerStarted","Data":"7c7454d49f4d1c8ab39966cdc8e9e2ade56028b308acf9b31d65399d06361b0f"} Jan 27 14:49:01 crc kubenswrapper[4698]: I0127 14:49:01.642648 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6699786569-xgz55"] Jan 27 14:49:01 crc kubenswrapper[4698]: I0127 14:49:01.646711 4698 generic.go:334] "Generic (PLEG): container finished" podID="64738940-4d7e-484c-92f9-d6a686fd2696" containerID="a4a5d9281454ae8ae15e0feaac19012c33d46a4ea020d7409d2224527c3ca884" exitCode=0 Jan 27 14:49:01 crc kubenswrapper[4698]: I0127 14:49:01.647221 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-64886f97d9-f2p4s" event={"ID":"64738940-4d7e-484c-92f9-d6a686fd2696","Type":"ContainerDied","Data":"a4a5d9281454ae8ae15e0feaac19012c33d46a4ea020d7409d2224527c3ca884"} Jan 27 14:49:01 crc kubenswrapper[4698]: I0127 14:49:01.647289 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-64886f97d9-f2p4s" event={"ID":"64738940-4d7e-484c-92f9-d6a686fd2696","Type":"ContainerStarted","Data":"3ba2a729f9ff6413edcd8122d546ff6989de69f82fe23deff6ccdec58dfe5e88"} Jan 27 14:49:01 crc kubenswrapper[4698]: W0127 14:49:01.660701 4698 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4e94ae22_f128_4a13_8c1e_7beadbfee471.slice/crio-cbe4988a8a633a3ebb6e5713339b5895abb1bc91a7870e11ea89c5344ac44633 WatchSource:0}: Error finding container cbe4988a8a633a3ebb6e5713339b5895abb1bc91a7870e11ea89c5344ac44633: Status 404 returned error can't find the container with id cbe4988a8a633a3ebb6e5713339b5895abb1bc91a7870e11ea89c5344ac44633 Jan 27 14:49:01 crc kubenswrapper[4698]: I0127 14:49:01.674196 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 27 14:49:01 crc kubenswrapper[4698]: I0127 14:49:01.711431 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 27 14:49:01 crc kubenswrapper[4698]: I0127 14:49:01.793822 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Jan 27 14:49:02 crc kubenswrapper[4698]: E0127 14:49:02.011157 4698 log.go:32] "CreateContainer in sandbox from runtime service failed" err=< Jan 27 14:49:02 crc kubenswrapper[4698]: rpc error: code = Unknown desc = container create failed: mount `/var/lib/kubelet/pods/64738940-4d7e-484c-92f9-d6a686fd2696/volume-subpaths/dns-svc/dnsmasq-dns/1` to `etc/dnsmasq.d/hosts/dns-svc`: No such file or directory Jan 27 14:49:02 crc kubenswrapper[4698]: > podSandboxID="3ba2a729f9ff6413edcd8122d546ff6989de69f82fe23deff6ccdec58dfe5e88" Jan 27 14:49:02 crc kubenswrapper[4698]: E0127 14:49:02.011591 4698 kuberuntime_manager.go:1274] "Unhandled Error" err=< Jan 27 14:49:02 crc kubenswrapper[4698]: container &Container{Name:dnsmasq-dns,Image:38.102.83.111:5001/podified-master-centos10/openstack-neutron-server:watcher_latest,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv 
--log-queries],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n68chd6h679hbfh55fhc6h5ffh5d8h94h56ch589hb4hc5h57bh677hcdh655h8dh667h675h654h66ch567h8fh659h5b4h675h566h55bh54h67dh6dq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-twptp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 5353 },Host:,},GRPC:nil,},InitialDelaySeconds:3,TimeoutSeconds:5,PeriodSeconds:3,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 5353 },Host:,},GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-64886f97d9-f2p4s_openstack(64738940-4d7e-484c-92f9-d6a686fd2696): CreateContainerError: container create failed: mount `/var/lib/kubelet/pods/64738940-4d7e-484c-92f9-d6a686fd2696/volume-subpaths/dns-svc/dnsmasq-dns/1` to `etc/dnsmasq.d/hosts/dns-svc`: No such file or directory Jan 27 14:49:02 crc kubenswrapper[4698]: > logger="UnhandledError" Jan 27 14:49:02 crc kubenswrapper[4698]: E0127 14:49:02.012975 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dnsmasq-dns\" with CreateContainerError: \"container create failed: mount `/var/lib/kubelet/pods/64738940-4d7e-484c-92f9-d6a686fd2696/volume-subpaths/dns-svc/dnsmasq-dns/1` to `etc/dnsmasq.d/hosts/dns-svc`: No such file or directory\\n\"" pod="openstack/dnsmasq-dns-64886f97d9-f2p4s" podUID="64738940-4d7e-484c-92f9-d6a686fd2696" Jan 27 14:49:02 crc kubenswrapper[4698]: I0127 14:49:02.175623 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7f9cbd7fdf-b8wg6" Jan 27 14:49:02 crc kubenswrapper[4698]: I0127 14:49:02.207221 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-59c95fdd89-9cn6z" Jan 27 14:49:02 crc kubenswrapper[4698]: I0127 14:49:02.236306 4698 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6c598958d5-bn4jw" Jan 27 14:49:02 crc kubenswrapper[4698]: I0127 14:49:02.365907 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2def0e4c-a384-453a-b67c-28004f50c9c6-config\") pod \"2def0e4c-a384-453a-b67c-28004f50c9c6\" (UID: \"2def0e4c-a384-453a-b67c-28004f50c9c6\") " Jan 27 14:49:02 crc kubenswrapper[4698]: I0127 14:49:02.365980 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qtgdx\" (UniqueName: \"kubernetes.io/projected/b531c374-5d26-4959-adc6-f03b56783b0c-kube-api-access-qtgdx\") pod \"b531c374-5d26-4959-adc6-f03b56783b0c\" (UID: \"b531c374-5d26-4959-adc6-f03b56783b0c\") " Jan 27 14:49:02 crc kubenswrapper[4698]: I0127 14:49:02.366019 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b531c374-5d26-4959-adc6-f03b56783b0c-config\") pod \"b531c374-5d26-4959-adc6-f03b56783b0c\" (UID: \"b531c374-5d26-4959-adc6-f03b56783b0c\") " Jan 27 14:49:02 crc kubenswrapper[4698]: I0127 14:49:02.366094 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/91906e64-eedf-41fc-9ef5-21c06b269c3d-config\") pod \"91906e64-eedf-41fc-9ef5-21c06b269c3d\" (UID: \"91906e64-eedf-41fc-9ef5-21c06b269c3d\") " Jan 27 14:49:02 crc kubenswrapper[4698]: I0127 14:49:02.366124 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b531c374-5d26-4959-adc6-f03b56783b0c-dns-svc\") pod \"b531c374-5d26-4959-adc6-f03b56783b0c\" (UID: \"b531c374-5d26-4959-adc6-f03b56783b0c\") " Jan 27 14:49:02 crc kubenswrapper[4698]: I0127 14:49:02.366194 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2def0e4c-a384-453a-b67c-28004f50c9c6-dns-svc\") pod \"2def0e4c-a384-453a-b67c-28004f50c9c6\" (UID: \"2def0e4c-a384-453a-b67c-28004f50c9c6\") " Jan 27 14:49:02 crc kubenswrapper[4698]: I0127 14:49:02.366283 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gjlx9\" (UniqueName: \"kubernetes.io/projected/2def0e4c-a384-453a-b67c-28004f50c9c6-kube-api-access-gjlx9\") pod \"2def0e4c-a384-453a-b67c-28004f50c9c6\" (UID: \"2def0e4c-a384-453a-b67c-28004f50c9c6\") " Jan 27 14:49:02 crc kubenswrapper[4698]: I0127 14:49:02.366374 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9n9gq\" (UniqueName: \"kubernetes.io/projected/91906e64-eedf-41fc-9ef5-21c06b269c3d-kube-api-access-9n9gq\") pod \"91906e64-eedf-41fc-9ef5-21c06b269c3d\" (UID: \"91906e64-eedf-41fc-9ef5-21c06b269c3d\") " Jan 27 14:49:02 crc kubenswrapper[4698]: I0127 14:49:02.367338 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b531c374-5d26-4959-adc6-f03b56783b0c-config" (OuterVolumeSpecName: "config") pod "b531c374-5d26-4959-adc6-f03b56783b0c" (UID: "b531c374-5d26-4959-adc6-f03b56783b0c"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:49:02 crc kubenswrapper[4698]: I0127 14:49:02.369266 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/91906e64-eedf-41fc-9ef5-21c06b269c3d-config" (OuterVolumeSpecName: "config") pod "91906e64-eedf-41fc-9ef5-21c06b269c3d" (UID: "91906e64-eedf-41fc-9ef5-21c06b269c3d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:49:02 crc kubenswrapper[4698]: I0127 14:49:02.369853 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b531c374-5d26-4959-adc6-f03b56783b0c-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "b531c374-5d26-4959-adc6-f03b56783b0c" (UID: "b531c374-5d26-4959-adc6-f03b56783b0c"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:49:02 crc kubenswrapper[4698]: I0127 14:49:02.369989 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2def0e4c-a384-453a-b67c-28004f50c9c6-kube-api-access-gjlx9" (OuterVolumeSpecName: "kube-api-access-gjlx9") pod "2def0e4c-a384-453a-b67c-28004f50c9c6" (UID: "2def0e4c-a384-453a-b67c-28004f50c9c6"). InnerVolumeSpecName "kube-api-access-gjlx9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:49:02 crc kubenswrapper[4698]: I0127 14:49:02.370214 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/91906e64-eedf-41fc-9ef5-21c06b269c3d-kube-api-access-9n9gq" (OuterVolumeSpecName: "kube-api-access-9n9gq") pod "91906e64-eedf-41fc-9ef5-21c06b269c3d" (UID: "91906e64-eedf-41fc-9ef5-21c06b269c3d"). InnerVolumeSpecName "kube-api-access-9n9gq". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:49:02 crc kubenswrapper[4698]: I0127 14:49:02.370779 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b531c374-5d26-4959-adc6-f03b56783b0c-kube-api-access-qtgdx" (OuterVolumeSpecName: "kube-api-access-qtgdx") pod "b531c374-5d26-4959-adc6-f03b56783b0c" (UID: "b531c374-5d26-4959-adc6-f03b56783b0c"). InnerVolumeSpecName "kube-api-access-qtgdx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:49:02 crc kubenswrapper[4698]: I0127 14:49:02.390443 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2def0e4c-a384-453a-b67c-28004f50c9c6-config" (OuterVolumeSpecName: "config") pod "2def0e4c-a384-453a-b67c-28004f50c9c6" (UID: "2def0e4c-a384-453a-b67c-28004f50c9c6"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:49:02 crc kubenswrapper[4698]: I0127 14:49:02.391322 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2def0e4c-a384-453a-b67c-28004f50c9c6-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "2def0e4c-a384-453a-b67c-28004f50c9c6" (UID: "2def0e4c-a384-453a-b67c-28004f50c9c6"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:49:02 crc kubenswrapper[4698]: I0127 14:49:02.467921 4698 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2def0e4c-a384-453a-b67c-28004f50c9c6-config\") on node \"crc\" DevicePath \"\"" Jan 27 14:49:02 crc kubenswrapper[4698]: I0127 14:49:02.467961 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qtgdx\" (UniqueName: \"kubernetes.io/projected/b531c374-5d26-4959-adc6-f03b56783b0c-kube-api-access-qtgdx\") on node \"crc\" DevicePath \"\"" Jan 27 14:49:02 crc kubenswrapper[4698]: I0127 14:49:02.467976 4698 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b531c374-5d26-4959-adc6-f03b56783b0c-config\") on node \"crc\" DevicePath \"\"" Jan 27 14:49:02 crc kubenswrapper[4698]: I0127 14:49:02.467987 4698 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/91906e64-eedf-41fc-9ef5-21c06b269c3d-config\") on node \"crc\" DevicePath \"\"" Jan 27 14:49:02 crc kubenswrapper[4698]: I0127 14:49:02.467998 4698 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b531c374-5d26-4959-adc6-f03b56783b0c-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 27 14:49:02 crc kubenswrapper[4698]: I0127 14:49:02.468007 4698 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2def0e4c-a384-453a-b67c-28004f50c9c6-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 27 14:49:02 crc kubenswrapper[4698]: I0127 14:49:02.468018 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gjlx9\" (UniqueName: \"kubernetes.io/projected/2def0e4c-a384-453a-b67c-28004f50c9c6-kube-api-access-gjlx9\") on node \"crc\" DevicePath \"\"" Jan 27 14:49:02 crc kubenswrapper[4698]: I0127 14:49:02.468029 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9n9gq\" (UniqueName: \"kubernetes.io/projected/91906e64-eedf-41fc-9ef5-21c06b269c3d-kube-api-access-9n9gq\") on node \"crc\" DevicePath \"\"" Jan 27 14:49:02 crc kubenswrapper[4698]: I0127 14:49:02.486931 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Jan 27 14:49:02 crc kubenswrapper[4698]: I0127 14:49:02.599783 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-cn5z6"] Jan 27 14:49:02 crc kubenswrapper[4698]: I0127 14:49:02.656047 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"35c385ce-4a6e-4a66-b607-89f47e40b6fc","Type":"ContainerStarted","Data":"30ed9f500bb4422895e013ec0c72416829da371678232d5e6084f783159abddb"} Jan 27 14:49:02 crc kubenswrapper[4698]: I0127 14:49:02.657851 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6c598958d5-bn4jw" event={"ID":"91906e64-eedf-41fc-9ef5-21c06b269c3d","Type":"ContainerDied","Data":"dfbbd1524129f7ece688d948c9ece5292dc48d57795a95331dbd86af3d3735d3"} Jan 27 14:49:02 crc kubenswrapper[4698]: I0127 14:49:02.657892 4698 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6c598958d5-bn4jw" Jan 27 14:49:02 crc kubenswrapper[4698]: I0127 14:49:02.659123 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"764b6b7b-3664-40e6-a24b-dc0f9db827db","Type":"ContainerStarted","Data":"8393b9bca71dc744bd5d8047da580a7fdef11ba9a1dcd8947187b3634cd95d8d"} Jan 27 14:49:02 crc kubenswrapper[4698]: I0127 14:49:02.665487 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"c686e168-f607-4b7f-a81d-f33ac8bdf513","Type":"ContainerStarted","Data":"96b14172b80e4e893786cad925952012833df53e011932da80b8f09464ad5fb8"} Jan 27 14:49:02 crc kubenswrapper[4698]: I0127 14:49:02.669057 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7f9cbd7fdf-b8wg6" event={"ID":"b531c374-5d26-4959-adc6-f03b56783b0c","Type":"ContainerDied","Data":"f5a8d29d436c5913ee9ede95064f6337a6c56daa5bd75c6329740ad5e1ea7b0c"} Jan 27 14:49:02 crc kubenswrapper[4698]: I0127 14:49:02.669153 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7f9cbd7fdf-b8wg6" Jan 27 14:49:02 crc kubenswrapper[4698]: I0127 14:49:02.682840 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-59c95fdd89-9cn6z" event={"ID":"2def0e4c-a384-453a-b67c-28004f50c9c6","Type":"ContainerDied","Data":"a59b02432d36996e881f575178661d93cacba1696ef230402ca218c731dde455"} Jan 27 14:49:02 crc kubenswrapper[4698]: I0127 14:49:02.682858 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-59c95fdd89-9cn6z" Jan 27 14:49:02 crc kubenswrapper[4698]: I0127 14:49:02.682905 4698 scope.go:117] "RemoveContainer" containerID="de370dfb3496d56e013764f63c01d064574be83226daec4f6b036d8409cf7448" Jan 27 14:49:02 crc kubenswrapper[4698]: I0127 14:49:02.685782 4698 generic.go:334] "Generic (PLEG): container finished" podID="4e94ae22-f128-4a13-8c1e-7beadbfee471" containerID="a9c710e0bdd3e48c28abbc0a0fbf4eb7315f0ce4f4f756e9aeb3a9ba61e6e362" exitCode=0 Jan 27 14:49:02 crc kubenswrapper[4698]: I0127 14:49:02.687226 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6699786569-xgz55" event={"ID":"4e94ae22-f128-4a13-8c1e-7beadbfee471","Type":"ContainerDied","Data":"a9c710e0bdd3e48c28abbc0a0fbf4eb7315f0ce4f4f756e9aeb3a9ba61e6e362"} Jan 27 14:49:02 crc kubenswrapper[4698]: I0127 14:49:02.687262 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6699786569-xgz55" event={"ID":"4e94ae22-f128-4a13-8c1e-7beadbfee471","Type":"ContainerStarted","Data":"cbe4988a8a633a3ebb6e5713339b5895abb1bc91a7870e11ea89c5344ac44633"} Jan 27 14:49:02 crc kubenswrapper[4698]: I0127 14:49:02.743103 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6c598958d5-bn4jw"] Jan 27 14:49:02 crc kubenswrapper[4698]: I0127 14:49:02.749598 4698 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6c598958d5-bn4jw"] Jan 27 14:49:02 crc kubenswrapper[4698]: I0127 14:49:02.839214 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-59c95fdd89-9cn6z"] Jan 27 14:49:02 crc kubenswrapper[4698]: I0127 14:49:02.870330 4698 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-59c95fdd89-9cn6z"] Jan 27 14:49:02 crc kubenswrapper[4698]: I0127 14:49:02.904260 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/dnsmasq-dns-7f9cbd7fdf-b8wg6"] Jan 27 14:49:02 crc kubenswrapper[4698]: I0127 14:49:02.916107 4698 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7f9cbd7fdf-b8wg6"] Jan 27 14:49:03 crc kubenswrapper[4698]: I0127 14:49:03.003139 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2def0e4c-a384-453a-b67c-28004f50c9c6" path="/var/lib/kubelet/pods/2def0e4c-a384-453a-b67c-28004f50c9c6/volumes" Jan 27 14:49:03 crc kubenswrapper[4698]: I0127 14:49:03.003733 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="91906e64-eedf-41fc-9ef5-21c06b269c3d" path="/var/lib/kubelet/pods/91906e64-eedf-41fc-9ef5-21c06b269c3d/volumes" Jan 27 14:49:03 crc kubenswrapper[4698]: I0127 14:49:03.004191 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b531c374-5d26-4959-adc6-f03b56783b0c" path="/var/lib/kubelet/pods/b531c374-5d26-4959-adc6-f03b56783b0c/volumes" Jan 27 14:49:03 crc kubenswrapper[4698]: W0127 14:49:03.907938 4698 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podca935cab_9c0b_4b5c_9754_f5bafb3a0037.slice/crio-7deba953ae12bb44fdc94cb86e3ed86647f80b64d0f80e1a7301b4f565d8ff82 WatchSource:0}: Error finding container 7deba953ae12bb44fdc94cb86e3ed86647f80b64d0f80e1a7301b4f565d8ff82: Status 404 returned error can't find the container with id 7deba953ae12bb44fdc94cb86e3ed86647f80b64d0f80e1a7301b4f565d8ff82 Jan 27 14:49:03 crc kubenswrapper[4698]: W0127 14:49:03.908974 4698 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod74651d0e_02c7_4067_9fc1_eff4c90d33ac.slice/crio-db5b2158ab727bc9263c8e45346626e659e1a495ee54aafe9934c3048036592f WatchSource:0}: Error finding container db5b2158ab727bc9263c8e45346626e659e1a495ee54aafe9934c3048036592f: Status 404 returned error can't find the container with id db5b2158ab727bc9263c8e45346626e659e1a495ee54aafe9934c3048036592f Jan 27 14:49:04 crc kubenswrapper[4698]: I0127 14:49:04.708088 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-cn5z6" event={"ID":"ca935cab-9c0b-4b5c-9754-f5bafb3a0037","Type":"ContainerStarted","Data":"7deba953ae12bb44fdc94cb86e3ed86647f80b64d0f80e1a7301b4f565d8ff82"} Jan 27 14:49:04 crc kubenswrapper[4698]: I0127 14:49:04.709213 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"74651d0e-02c7-4067-9fc1-eff4c90d33ac","Type":"ContainerStarted","Data":"db5b2158ab727bc9263c8e45346626e659e1a495ee54aafe9934c3048036592f"} Jan 27 14:49:13 crc kubenswrapper[4698]: I0127 14:49:13.787419 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"589d54fa-234d-41b2-b030-91101d03c978","Type":"ContainerStarted","Data":"98af0b5e06ead00e6aafcf335482114ee8de9d818098c6ff91159494283a4ba0"} Jan 27 14:49:13 crc kubenswrapper[4698]: I0127 14:49:13.788089 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/memcached-0" Jan 27 14:49:13 crc kubenswrapper[4698]: I0127 14:49:13.792469 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-64886f97d9-f2p4s" event={"ID":"64738940-4d7e-484c-92f9-d6a686fd2696","Type":"ContainerStarted","Data":"a409253d7a8678bb92600c42e76bfb9793d40a0e39b84add598025ec292bf018"} Jan 27 14:49:13 crc kubenswrapper[4698]: I0127 14:49:13.792615 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="" pod="openstack/dnsmasq-dns-64886f97d9-f2p4s" Jan 27 14:49:13 crc kubenswrapper[4698]: I0127 14:49:13.795322 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6699786569-xgz55" event={"ID":"4e94ae22-f128-4a13-8c1e-7beadbfee471","Type":"ContainerStarted","Data":"3ab2c80ad85298c4781744be8fd8125bb8ada5d129c9265859def1e4df485012"} Jan 27 14:49:13 crc kubenswrapper[4698]: I0127 14:49:13.795738 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6699786569-xgz55" Jan 27 14:49:13 crc kubenswrapper[4698]: I0127 14:49:13.798620 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"9374a29e-348e-43ec-9321-b0a13aeb6c4b","Type":"ContainerStarted","Data":"f6f7825cc551aa6513e87931bd6d22a1c931a2fd46b3d582eea5e25160a65401"} Jan 27 14:49:13 crc kubenswrapper[4698]: I0127 14:49:13.805867 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"a30e344d-b5c4-40f6-8bdb-7af9c1df7449","Type":"ContainerStarted","Data":"d8b91a61006f4e6e0104a26d88356fcce75d71b7fb64aa0a5caef7fc0f355acb"} Jan 27 14:49:13 crc kubenswrapper[4698]: I0127 14:49:13.808689 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/memcached-0" podStartSLOduration=22.664380931 podStartE2EDuration="33.808661723s" podCreationTimestamp="2026-01-27 14:48:40 +0000 UTC" firstStartedPulling="2026-01-27 14:49:01.239745093 +0000 UTC m=+1196.916522558" lastFinishedPulling="2026-01-27 14:49:12.384025885 +0000 UTC m=+1208.060803350" observedRunningTime="2026-01-27 14:49:13.805528151 +0000 UTC m=+1209.482305616" watchObservedRunningTime="2026-01-27 14:49:13.808661723 +0000 UTC m=+1209.485439208" Jan 27 14:49:13 crc kubenswrapper[4698]: I0127 14:49:13.853085 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-64886f97d9-f2p4s" podStartSLOduration=38.603103501 podStartE2EDuration="38.853068863s" podCreationTimestamp="2026-01-27 14:48:35 +0000 UTC" firstStartedPulling="2026-01-27 14:49:00.751728292 +0000 UTC m=+1196.428505757" lastFinishedPulling="2026-01-27 14:49:01.001693654 +0000 UTC m=+1196.678471119" observedRunningTime="2026-01-27 14:49:13.848831761 +0000 UTC m=+1209.525609236" watchObservedRunningTime="2026-01-27 14:49:13.853068863 +0000 UTC m=+1209.529846328" Jan 27 14:49:13 crc kubenswrapper[4698]: I0127 14:49:13.864867 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6699786569-xgz55" podStartSLOduration=37.864848503 podStartE2EDuration="37.864848503s" podCreationTimestamp="2026-01-27 14:48:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:49:13.864150325 +0000 UTC m=+1209.540927790" watchObservedRunningTime="2026-01-27 14:49:13.864848503 +0000 UTC m=+1209.541625988" Jan 27 14:49:14 crc kubenswrapper[4698]: I0127 14:49:14.810659 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-cn5z6" event={"ID":"ca935cab-9c0b-4b5c-9754-f5bafb3a0037","Type":"ContainerStarted","Data":"1a3f23602c31ef313b4c4a3025524920f5608961b5ce99bb0c358b8f01d5cbbf"} Jan 27 14:49:14 crc kubenswrapper[4698]: I0127 14:49:14.814404 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" 
event={"ID":"764b6b7b-3664-40e6-a24b-dc0f9db827db","Type":"ContainerStarted","Data":"581af515f0476829ce603fe1c8555dd8bb4e19b489fc159e1a8ed2f59811c5e5"} Jan 27 14:49:14 crc kubenswrapper[4698]: I0127 14:49:14.816947 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"74651d0e-02c7-4067-9fc1-eff4c90d33ac","Type":"ContainerStarted","Data":"24563f3e434326272d9fe4dc1e9b4d701eb26d956d39ecad167f595ae468ada9"} Jan 27 14:49:14 crc kubenswrapper[4698]: I0127 14:49:14.819609 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-7swbz" event={"ID":"f9e1f6c1-39e5-4d7a-ac6a-da53c6fe143b","Type":"ContainerStarted","Data":"cce1c6036cd2d11648aef0e498929093404ea7e3a738016d2301a97c13c1e3cf"} Jan 27 14:49:14 crc kubenswrapper[4698]: I0127 14:49:14.820087 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-7swbz" Jan 27 14:49:14 crc kubenswrapper[4698]: I0127 14:49:14.884910 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-7swbz" podStartSLOduration=17.594529912 podStartE2EDuration="29.884888476s" podCreationTimestamp="2026-01-27 14:48:45 +0000 UTC" firstStartedPulling="2026-01-27 14:49:00.752133602 +0000 UTC m=+1196.428911067" lastFinishedPulling="2026-01-27 14:49:13.042492166 +0000 UTC m=+1208.719269631" observedRunningTime="2026-01-27 14:49:14.850152901 +0000 UTC m=+1210.526930366" watchObservedRunningTime="2026-01-27 14:49:14.884888476 +0000 UTC m=+1210.561665941" Jan 27 14:49:15 crc kubenswrapper[4698]: I0127 14:49:15.829254 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"35c385ce-4a6e-4a66-b607-89f47e40b6fc","Type":"ContainerStarted","Data":"161e303048668b0916526bae83ce045239d1b2990a0f029593f10c71d93b223c"} Jan 27 14:49:15 crc kubenswrapper[4698]: I0127 14:49:15.832743 4698 generic.go:334] "Generic (PLEG): container finished" podID="ca935cab-9c0b-4b5c-9754-f5bafb3a0037" containerID="1a3f23602c31ef313b4c4a3025524920f5608961b5ce99bb0c358b8f01d5cbbf" exitCode=0 Jan 27 14:49:15 crc kubenswrapper[4698]: I0127 14:49:15.832809 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-cn5z6" event={"ID":"ca935cab-9c0b-4b5c-9754-f5bafb3a0037","Type":"ContainerDied","Data":"1a3f23602c31ef313b4c4a3025524920f5608961b5ce99bb0c358b8f01d5cbbf"} Jan 27 14:49:16 crc kubenswrapper[4698]: I0127 14:49:16.845871 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"03d06c18-e82f-417e-b3bd-6365030bee53","Type":"ContainerStarted","Data":"31d868c1e1393239086f38c63c9950a69ffbfbc96fdfdc18edfdc5dc6bab0a58"} Jan 27 14:49:16 crc kubenswrapper[4698]: I0127 14:49:16.847702 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"5d70f810-b592-4abf-b587-4ff75b743944","Type":"ContainerStarted","Data":"0726419db0f961bba21296320695af51ff8e1cbd4aa57ac86253c88afbcf1b9f"} Jan 27 14:49:16 crc kubenswrapper[4698]: I0127 14:49:16.848275 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Jan 27 14:49:16 crc kubenswrapper[4698]: I0127 14:49:16.851162 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-notifications-server-0" event={"ID":"5d6f607c-3a31-4135-9eb4-3193e722d112","Type":"ContainerStarted","Data":"8b10145eea493ec749905b9ddc64f9a97a043a0f6550bbe1b6c6bdd5fd7bfd58"} Jan 27 14:49:16 crc 
kubenswrapper[4698]: I0127 14:49:16.913191 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=21.882086803 podStartE2EDuration="35.913171759s" podCreationTimestamp="2026-01-27 14:48:41 +0000 UTC" firstStartedPulling="2026-01-27 14:49:01.239415184 +0000 UTC m=+1196.916192659" lastFinishedPulling="2026-01-27 14:49:15.27050015 +0000 UTC m=+1210.947277615" observedRunningTime="2026-01-27 14:49:16.911995798 +0000 UTC m=+1212.588773263" watchObservedRunningTime="2026-01-27 14:49:16.913171759 +0000 UTC m=+1212.589949214" Jan 27 14:49:17 crc kubenswrapper[4698]: I0127 14:49:17.860706 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"c686e168-f607-4b7f-a81d-f33ac8bdf513","Type":"ContainerStarted","Data":"bddaebda662a6d871fff02dfad71a498259b13e0f29d6a737909aa266958ebc4"} Jan 27 14:49:17 crc kubenswrapper[4698]: I0127 14:49:17.864439 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-cn5z6" event={"ID":"ca935cab-9c0b-4b5c-9754-f5bafb3a0037","Type":"ContainerStarted","Data":"f28899f8cc9d7588c70a6920dc8cbfd7d3bd156665ca9eebf5134bc79ffafa4d"} Jan 27 14:49:19 crc kubenswrapper[4698]: I0127 14:49:19.878851 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"35c385ce-4a6e-4a66-b607-89f47e40b6fc","Type":"ContainerStarted","Data":"a35663e3f03c60aa62dc07d70c3ac0c7df7ee3abdbcf1053be97373e01b67ff5"} Jan 27 14:49:19 crc kubenswrapper[4698]: I0127 14:49:19.881244 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-cn5z6" event={"ID":"ca935cab-9c0b-4b5c-9754-f5bafb3a0037","Type":"ContainerStarted","Data":"a80cb80195bac9711b22836f493c2478e0d52849c41508448ff65774cbc9ac12"} Jan 27 14:49:19 crc kubenswrapper[4698]: I0127 14:49:19.882661 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-cn5z6" Jan 27 14:49:19 crc kubenswrapper[4698]: I0127 14:49:19.883824 4698 generic.go:334] "Generic (PLEG): container finished" podID="a30e344d-b5c4-40f6-8bdb-7af9c1df7449" containerID="d8b91a61006f4e6e0104a26d88356fcce75d71b7fb64aa0a5caef7fc0f355acb" exitCode=0 Jan 27 14:49:19 crc kubenswrapper[4698]: I0127 14:49:19.883896 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"a30e344d-b5c4-40f6-8bdb-7af9c1df7449","Type":"ContainerDied","Data":"d8b91a61006f4e6e0104a26d88356fcce75d71b7fb64aa0a5caef7fc0f355acb"} Jan 27 14:49:19 crc kubenswrapper[4698]: I0127 14:49:19.885906 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"74651d0e-02c7-4067-9fc1-eff4c90d33ac","Type":"ContainerStarted","Data":"7d8978c0036c051a422f712789c313f580e6da92c03126671be313a0d4066a52"} Jan 27 14:49:19 crc kubenswrapper[4698]: I0127 14:49:19.904598 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-0" podStartSLOduration=14.804534082 podStartE2EDuration="31.904578258s" podCreationTimestamp="2026-01-27 14:48:48 +0000 UTC" firstStartedPulling="2026-01-27 14:49:01.831950279 +0000 UTC m=+1197.508727744" lastFinishedPulling="2026-01-27 14:49:18.931994455 +0000 UTC m=+1214.608771920" observedRunningTime="2026-01-27 14:49:19.899086053 +0000 UTC m=+1215.575863548" watchObservedRunningTime="2026-01-27 14:49:19.904578258 +0000 UTC m=+1215.581355723" Jan 27 14:49:19 crc kubenswrapper[4698]: I0127 14:49:19.921937 4698 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-0" podStartSLOduration=16.891975903 podStartE2EDuration="31.921919614s" podCreationTimestamp="2026-01-27 14:48:48 +0000 UTC" firstStartedPulling="2026-01-27 14:49:03.911590725 +0000 UTC m=+1199.588368200" lastFinishedPulling="2026-01-27 14:49:18.941534446 +0000 UTC m=+1214.618311911" observedRunningTime="2026-01-27 14:49:19.918575507 +0000 UTC m=+1215.595352992" watchObservedRunningTime="2026-01-27 14:49:19.921919614 +0000 UTC m=+1215.598697079" Jan 27 14:49:19 crc kubenswrapper[4698]: I0127 14:49:19.966120 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-ovs-cn5z6" podStartSLOduration=25.929303515 podStartE2EDuration="34.966103608s" podCreationTimestamp="2026-01-27 14:48:45 +0000 UTC" firstStartedPulling="2026-01-27 14:49:03.9102398 +0000 UTC m=+1199.587017265" lastFinishedPulling="2026-01-27 14:49:12.947039893 +0000 UTC m=+1208.623817358" observedRunningTime="2026-01-27 14:49:19.959569726 +0000 UTC m=+1215.636347211" watchObservedRunningTime="2026-01-27 14:49:19.966103608 +0000 UTC m=+1215.642881073" Jan 27 14:49:20 crc kubenswrapper[4698]: I0127 14:49:20.260675 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-0" Jan 27 14:49:20 crc kubenswrapper[4698]: I0127 14:49:20.261014 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-0" Jan 27 14:49:20 crc kubenswrapper[4698]: I0127 14:49:20.297091 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-0" Jan 27 14:49:20 crc kubenswrapper[4698]: I0127 14:49:20.543200 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/memcached-0" Jan 27 14:49:20 crc kubenswrapper[4698]: I0127 14:49:20.799791 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-64886f97d9-f2p4s" Jan 27 14:49:20 crc kubenswrapper[4698]: I0127 14:49:20.894448 4698 generic.go:334] "Generic (PLEG): container finished" podID="9374a29e-348e-43ec-9321-b0a13aeb6c4b" containerID="f6f7825cc551aa6513e87931bd6d22a1c931a2fd46b3d582eea5e25160a65401" exitCode=0 Jan 27 14:49:20 crc kubenswrapper[4698]: I0127 14:49:20.895435 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"9374a29e-348e-43ec-9321-b0a13aeb6c4b","Type":"ContainerDied","Data":"f6f7825cc551aa6513e87931bd6d22a1c931a2fd46b3d582eea5e25160a65401"} Jan 27 14:49:20 crc kubenswrapper[4698]: I0127 14:49:20.896443 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"a30e344d-b5c4-40f6-8bdb-7af9c1df7449","Type":"ContainerStarted","Data":"cf841eaa249eeff1932d4cbde8065f37a05328608db74efd61f35f5f66ba1c95"} Jan 27 14:49:20 crc kubenswrapper[4698]: I0127 14:49:20.896957 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-cn5z6" Jan 27 14:49:20 crc kubenswrapper[4698]: I0127 14:49:20.945851 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-0" Jan 27 14:49:20 crc kubenswrapper[4698]: I0127 14:49:20.953686 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-cell1-galera-0" podStartSLOduration=31.267602186 podStartE2EDuration="42.953667525s" podCreationTimestamp="2026-01-27 14:48:38 +0000 UTC" 
firstStartedPulling="2026-01-27 14:49:00.697939475 +0000 UTC m=+1196.374716930" lastFinishedPulling="2026-01-27 14:49:12.384004794 +0000 UTC m=+1208.060782269" observedRunningTime="2026-01-27 14:49:20.945837859 +0000 UTC m=+1216.622615344" watchObservedRunningTime="2026-01-27 14:49:20.953667525 +0000 UTC m=+1216.630445010" Jan 27 14:49:21 crc kubenswrapper[4698]: I0127 14:49:21.199245 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6699786569-xgz55"] Jan 27 14:49:21 crc kubenswrapper[4698]: I0127 14:49:21.199457 4698 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6699786569-xgz55" podUID="4e94ae22-f128-4a13-8c1e-7beadbfee471" containerName="dnsmasq-dns" containerID="cri-o://3ab2c80ad85298c4781744be8fd8125bb8ada5d129c9265859def1e4df485012" gracePeriod=10 Jan 27 14:49:21 crc kubenswrapper[4698]: I0127 14:49:21.200823 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6699786569-xgz55" Jan 27 14:49:21 crc kubenswrapper[4698]: I0127 14:49:21.249027 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-57bbd57f7-vpqbz"] Jan 27 14:49:21 crc kubenswrapper[4698]: E0127 14:49:21.249436 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2def0e4c-a384-453a-b67c-28004f50c9c6" containerName="init" Jan 27 14:49:21 crc kubenswrapper[4698]: I0127 14:49:21.249461 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="2def0e4c-a384-453a-b67c-28004f50c9c6" containerName="init" Jan 27 14:49:21 crc kubenswrapper[4698]: I0127 14:49:21.249677 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="2def0e4c-a384-453a-b67c-28004f50c9c6" containerName="init" Jan 27 14:49:21 crc kubenswrapper[4698]: I0127 14:49:21.250781 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57bbd57f7-vpqbz" Jan 27 14:49:21 crc kubenswrapper[4698]: I0127 14:49:21.255916 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-metrics-6zb85"] Jan 27 14:49:21 crc kubenswrapper[4698]: I0127 14:49:21.256689 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-sb" Jan 27 14:49:21 crc kubenswrapper[4698]: I0127 14:49:21.256955 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-metrics-6zb85" Jan 27 14:49:21 crc kubenswrapper[4698]: I0127 14:49:21.259487 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-metrics-config" Jan 27 14:49:21 crc kubenswrapper[4698]: I0127 14:49:21.272492 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57bbd57f7-vpqbz"] Jan 27 14:49:21 crc kubenswrapper[4698]: I0127 14:49:21.279815 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-6zb85"] Jan 27 14:49:21 crc kubenswrapper[4698]: I0127 14:49:21.391562 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/487e1754-14d1-494d-97fd-495520f0c8e0-config\") pod \"ovn-controller-metrics-6zb85\" (UID: \"487e1754-14d1-494d-97fd-495520f0c8e0\") " pod="openstack/ovn-controller-metrics-6zb85" Jan 27 14:49:21 crc kubenswrapper[4698]: I0127 14:49:21.391972 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/487e1754-14d1-494d-97fd-495520f0c8e0-ovn-rundir\") pod \"ovn-controller-metrics-6zb85\" (UID: \"487e1754-14d1-494d-97fd-495520f0c8e0\") " pod="openstack/ovn-controller-metrics-6zb85" Jan 27 14:49:21 crc kubenswrapper[4698]: I0127 14:49:21.392016 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0dd0a57c-30a4-4dd8-8e30-456f2316bd22-config\") pod \"dnsmasq-dns-57bbd57f7-vpqbz\" (UID: \"0dd0a57c-30a4-4dd8-8e30-456f2316bd22\") " pod="openstack/dnsmasq-dns-57bbd57f7-vpqbz" Jan 27 14:49:21 crc kubenswrapper[4698]: I0127 14:49:21.392041 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0dd0a57c-30a4-4dd8-8e30-456f2316bd22-dns-svc\") pod \"dnsmasq-dns-57bbd57f7-vpqbz\" (UID: \"0dd0a57c-30a4-4dd8-8e30-456f2316bd22\") " pod="openstack/dnsmasq-dns-57bbd57f7-vpqbz" Jan 27 14:49:21 crc kubenswrapper[4698]: I0127 14:49:21.392066 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fpmmc\" (UniqueName: \"kubernetes.io/projected/0dd0a57c-30a4-4dd8-8e30-456f2316bd22-kube-api-access-fpmmc\") pod \"dnsmasq-dns-57bbd57f7-vpqbz\" (UID: \"0dd0a57c-30a4-4dd8-8e30-456f2316bd22\") " pod="openstack/dnsmasq-dns-57bbd57f7-vpqbz" Jan 27 14:49:21 crc kubenswrapper[4698]: I0127 14:49:21.392152 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/487e1754-14d1-494d-97fd-495520f0c8e0-ovs-rundir\") pod \"ovn-controller-metrics-6zb85\" (UID: \"487e1754-14d1-494d-97fd-495520f0c8e0\") " pod="openstack/ovn-controller-metrics-6zb85" Jan 27 14:49:21 crc kubenswrapper[4698]: I0127 14:49:21.392221 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/487e1754-14d1-494d-97fd-495520f0c8e0-combined-ca-bundle\") pod \"ovn-controller-metrics-6zb85\" (UID: \"487e1754-14d1-494d-97fd-495520f0c8e0\") " pod="openstack/ovn-controller-metrics-6zb85" Jan 27 14:49:21 crc kubenswrapper[4698]: I0127 14:49:21.392258 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0dd0a57c-30a4-4dd8-8e30-456f2316bd22-ovsdbserver-sb\") pod \"dnsmasq-dns-57bbd57f7-vpqbz\" (UID: \"0dd0a57c-30a4-4dd8-8e30-456f2316bd22\") " pod="openstack/dnsmasq-dns-57bbd57f7-vpqbz" Jan 27 14:49:21 crc kubenswrapper[4698]: I0127 14:49:21.392285 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6zw5p\" (UniqueName: \"kubernetes.io/projected/487e1754-14d1-494d-97fd-495520f0c8e0-kube-api-access-6zw5p\") pod \"ovn-controller-metrics-6zb85\" (UID: \"487e1754-14d1-494d-97fd-495520f0c8e0\") " pod="openstack/ovn-controller-metrics-6zb85" Jan 27 14:49:21 crc kubenswrapper[4698]: I0127 14:49:21.392335 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/487e1754-14d1-494d-97fd-495520f0c8e0-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-6zb85\" (UID: \"487e1754-14d1-494d-97fd-495520f0c8e0\") " pod="openstack/ovn-controller-metrics-6zb85" Jan 27 14:49:21 crc kubenswrapper[4698]: I0127 14:49:21.494995 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/487e1754-14d1-494d-97fd-495520f0c8e0-ovs-rundir\") pod \"ovn-controller-metrics-6zb85\" (UID: \"487e1754-14d1-494d-97fd-495520f0c8e0\") " pod="openstack/ovn-controller-metrics-6zb85" Jan 27 14:49:21 crc kubenswrapper[4698]: I0127 14:49:21.495085 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/487e1754-14d1-494d-97fd-495520f0c8e0-combined-ca-bundle\") pod \"ovn-controller-metrics-6zb85\" (UID: \"487e1754-14d1-494d-97fd-495520f0c8e0\") " pod="openstack/ovn-controller-metrics-6zb85" Jan 27 14:49:21 crc kubenswrapper[4698]: I0127 14:49:21.495122 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0dd0a57c-30a4-4dd8-8e30-456f2316bd22-ovsdbserver-sb\") pod \"dnsmasq-dns-57bbd57f7-vpqbz\" (UID: \"0dd0a57c-30a4-4dd8-8e30-456f2316bd22\") " pod="openstack/dnsmasq-dns-57bbd57f7-vpqbz" Jan 27 14:49:21 crc kubenswrapper[4698]: I0127 14:49:21.495149 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6zw5p\" (UniqueName: \"kubernetes.io/projected/487e1754-14d1-494d-97fd-495520f0c8e0-kube-api-access-6zw5p\") pod \"ovn-controller-metrics-6zb85\" (UID: \"487e1754-14d1-494d-97fd-495520f0c8e0\") " pod="openstack/ovn-controller-metrics-6zb85" Jan 27 14:49:21 crc kubenswrapper[4698]: I0127 14:49:21.495245 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/487e1754-14d1-494d-97fd-495520f0c8e0-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-6zb85\" (UID: \"487e1754-14d1-494d-97fd-495520f0c8e0\") " pod="openstack/ovn-controller-metrics-6zb85" Jan 27 14:49:21 crc kubenswrapper[4698]: I0127 14:49:21.495325 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/487e1754-14d1-494d-97fd-495520f0c8e0-config\") pod \"ovn-controller-metrics-6zb85\" (UID: \"487e1754-14d1-494d-97fd-495520f0c8e0\") " pod="openstack/ovn-controller-metrics-6zb85" Jan 27 14:49:21 crc kubenswrapper[4698]: I0127 14:49:21.495368 4698 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/487e1754-14d1-494d-97fd-495520f0c8e0-ovn-rundir\") pod \"ovn-controller-metrics-6zb85\" (UID: \"487e1754-14d1-494d-97fd-495520f0c8e0\") " pod="openstack/ovn-controller-metrics-6zb85" Jan 27 14:49:21 crc kubenswrapper[4698]: I0127 14:49:21.495750 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0dd0a57c-30a4-4dd8-8e30-456f2316bd22-config\") pod \"dnsmasq-dns-57bbd57f7-vpqbz\" (UID: \"0dd0a57c-30a4-4dd8-8e30-456f2316bd22\") " pod="openstack/dnsmasq-dns-57bbd57f7-vpqbz" Jan 27 14:49:21 crc kubenswrapper[4698]: I0127 14:49:21.495793 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0dd0a57c-30a4-4dd8-8e30-456f2316bd22-dns-svc\") pod \"dnsmasq-dns-57bbd57f7-vpqbz\" (UID: \"0dd0a57c-30a4-4dd8-8e30-456f2316bd22\") " pod="openstack/dnsmasq-dns-57bbd57f7-vpqbz" Jan 27 14:49:21 crc kubenswrapper[4698]: I0127 14:49:21.495822 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fpmmc\" (UniqueName: \"kubernetes.io/projected/0dd0a57c-30a4-4dd8-8e30-456f2316bd22-kube-api-access-fpmmc\") pod \"dnsmasq-dns-57bbd57f7-vpqbz\" (UID: \"0dd0a57c-30a4-4dd8-8e30-456f2316bd22\") " pod="openstack/dnsmasq-dns-57bbd57f7-vpqbz" Jan 27 14:49:21 crc kubenswrapper[4698]: I0127 14:49:21.496135 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/487e1754-14d1-494d-97fd-495520f0c8e0-ovn-rundir\") pod \"ovn-controller-metrics-6zb85\" (UID: \"487e1754-14d1-494d-97fd-495520f0c8e0\") " pod="openstack/ovn-controller-metrics-6zb85" Jan 27 14:49:21 crc kubenswrapper[4698]: I0127 14:49:21.496280 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0dd0a57c-30a4-4dd8-8e30-456f2316bd22-ovsdbserver-sb\") pod \"dnsmasq-dns-57bbd57f7-vpqbz\" (UID: \"0dd0a57c-30a4-4dd8-8e30-456f2316bd22\") " pod="openstack/dnsmasq-dns-57bbd57f7-vpqbz" Jan 27 14:49:21 crc kubenswrapper[4698]: I0127 14:49:21.496600 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0dd0a57c-30a4-4dd8-8e30-456f2316bd22-config\") pod \"dnsmasq-dns-57bbd57f7-vpqbz\" (UID: \"0dd0a57c-30a4-4dd8-8e30-456f2316bd22\") " pod="openstack/dnsmasq-dns-57bbd57f7-vpqbz" Jan 27 14:49:21 crc kubenswrapper[4698]: I0127 14:49:21.496818 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0dd0a57c-30a4-4dd8-8e30-456f2316bd22-dns-svc\") pod \"dnsmasq-dns-57bbd57f7-vpqbz\" (UID: \"0dd0a57c-30a4-4dd8-8e30-456f2316bd22\") " pod="openstack/dnsmasq-dns-57bbd57f7-vpqbz" Jan 27 14:49:21 crc kubenswrapper[4698]: I0127 14:49:21.497034 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/487e1754-14d1-494d-97fd-495520f0c8e0-ovs-rundir\") pod \"ovn-controller-metrics-6zb85\" (UID: \"487e1754-14d1-494d-97fd-495520f0c8e0\") " pod="openstack/ovn-controller-metrics-6zb85" Jan 27 14:49:21 crc kubenswrapper[4698]: I0127 14:49:21.498619 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/487e1754-14d1-494d-97fd-495520f0c8e0-config\") pod \"ovn-controller-metrics-6zb85\" (UID: 
\"487e1754-14d1-494d-97fd-495520f0c8e0\") " pod="openstack/ovn-controller-metrics-6zb85" Jan 27 14:49:21 crc kubenswrapper[4698]: I0127 14:49:21.500896 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/487e1754-14d1-494d-97fd-495520f0c8e0-combined-ca-bundle\") pod \"ovn-controller-metrics-6zb85\" (UID: \"487e1754-14d1-494d-97fd-495520f0c8e0\") " pod="openstack/ovn-controller-metrics-6zb85" Jan 27 14:49:21 crc kubenswrapper[4698]: I0127 14:49:21.501985 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/487e1754-14d1-494d-97fd-495520f0c8e0-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-6zb85\" (UID: \"487e1754-14d1-494d-97fd-495520f0c8e0\") " pod="openstack/ovn-controller-metrics-6zb85" Jan 27 14:49:21 crc kubenswrapper[4698]: I0127 14:49:21.518332 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fpmmc\" (UniqueName: \"kubernetes.io/projected/0dd0a57c-30a4-4dd8-8e30-456f2316bd22-kube-api-access-fpmmc\") pod \"dnsmasq-dns-57bbd57f7-vpqbz\" (UID: \"0dd0a57c-30a4-4dd8-8e30-456f2316bd22\") " pod="openstack/dnsmasq-dns-57bbd57f7-vpqbz" Jan 27 14:49:21 crc kubenswrapper[4698]: I0127 14:49:21.521404 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6zw5p\" (UniqueName: \"kubernetes.io/projected/487e1754-14d1-494d-97fd-495520f0c8e0-kube-api-access-6zw5p\") pod \"ovn-controller-metrics-6zb85\" (UID: \"487e1754-14d1-494d-97fd-495520f0c8e0\") " pod="openstack/ovn-controller-metrics-6zb85" Jan 27 14:49:21 crc kubenswrapper[4698]: I0127 14:49:21.572335 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57bbd57f7-vpqbz" Jan 27 14:49:21 crc kubenswrapper[4698]: I0127 14:49:21.581352 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-6zb85" Jan 27 14:49:21 crc kubenswrapper[4698]: I0127 14:49:21.615056 4698 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6699786569-xgz55" Jan 27 14:49:21 crc kubenswrapper[4698]: I0127 14:49:21.679464 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57bbd57f7-vpqbz"] Jan 27 14:49:21 crc kubenswrapper[4698]: I0127 14:49:21.704598 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4e94ae22-f128-4a13-8c1e-7beadbfee471-dns-svc\") pod \"4e94ae22-f128-4a13-8c1e-7beadbfee471\" (UID: \"4e94ae22-f128-4a13-8c1e-7beadbfee471\") " Jan 27 14:49:21 crc kubenswrapper[4698]: I0127 14:49:21.704756 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wlvzz\" (UniqueName: \"kubernetes.io/projected/4e94ae22-f128-4a13-8c1e-7beadbfee471-kube-api-access-wlvzz\") pod \"4e94ae22-f128-4a13-8c1e-7beadbfee471\" (UID: \"4e94ae22-f128-4a13-8c1e-7beadbfee471\") " Jan 27 14:49:21 crc kubenswrapper[4698]: I0127 14:49:21.704848 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4e94ae22-f128-4a13-8c1e-7beadbfee471-config\") pod \"4e94ae22-f128-4a13-8c1e-7beadbfee471\" (UID: \"4e94ae22-f128-4a13-8c1e-7beadbfee471\") " Jan 27 14:49:21 crc kubenswrapper[4698]: I0127 14:49:21.715276 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7776c98d8c-nj7ns"] Jan 27 14:49:21 crc kubenswrapper[4698]: E0127 14:49:21.715728 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4e94ae22-f128-4a13-8c1e-7beadbfee471" containerName="init" Jan 27 14:49:21 crc kubenswrapper[4698]: I0127 14:49:21.715750 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="4e94ae22-f128-4a13-8c1e-7beadbfee471" containerName="init" Jan 27 14:49:21 crc kubenswrapper[4698]: E0127 14:49:21.715780 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4e94ae22-f128-4a13-8c1e-7beadbfee471" containerName="dnsmasq-dns" Jan 27 14:49:21 crc kubenswrapper[4698]: I0127 14:49:21.715788 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="4e94ae22-f128-4a13-8c1e-7beadbfee471" containerName="dnsmasq-dns" Jan 27 14:49:21 crc kubenswrapper[4698]: I0127 14:49:21.716001 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="4e94ae22-f128-4a13-8c1e-7beadbfee471" containerName="dnsmasq-dns" Jan 27 14:49:21 crc kubenswrapper[4698]: I0127 14:49:21.717111 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7776c98d8c-nj7ns" Jan 27 14:49:21 crc kubenswrapper[4698]: I0127 14:49:21.723000 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4e94ae22-f128-4a13-8c1e-7beadbfee471-kube-api-access-wlvzz" (OuterVolumeSpecName: "kube-api-access-wlvzz") pod "4e94ae22-f128-4a13-8c1e-7beadbfee471" (UID: "4e94ae22-f128-4a13-8c1e-7beadbfee471"). InnerVolumeSpecName "kube-api-access-wlvzz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:49:21 crc kubenswrapper[4698]: I0127 14:49:21.732707 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-nb" Jan 27 14:49:21 crc kubenswrapper[4698]: I0127 14:49:21.744158 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7776c98d8c-nj7ns"] Jan 27 14:49:21 crc kubenswrapper[4698]: I0127 14:49:21.760393 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4e94ae22-f128-4a13-8c1e-7beadbfee471-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "4e94ae22-f128-4a13-8c1e-7beadbfee471" (UID: "4e94ae22-f128-4a13-8c1e-7beadbfee471"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:49:21 crc kubenswrapper[4698]: I0127 14:49:21.791381 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4e94ae22-f128-4a13-8c1e-7beadbfee471-config" (OuterVolumeSpecName: "config") pod "4e94ae22-f128-4a13-8c1e-7beadbfee471" (UID: "4e94ae22-f128-4a13-8c1e-7beadbfee471"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:49:21 crc kubenswrapper[4698]: I0127 14:49:21.806747 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bzdwm\" (UniqueName: \"kubernetes.io/projected/a7235262-0f2b-4b25-86de-131f5f56431b-kube-api-access-bzdwm\") pod \"dnsmasq-dns-7776c98d8c-nj7ns\" (UID: \"a7235262-0f2b-4b25-86de-131f5f56431b\") " pod="openstack/dnsmasq-dns-7776c98d8c-nj7ns" Jan 27 14:49:21 crc kubenswrapper[4698]: I0127 14:49:21.806875 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a7235262-0f2b-4b25-86de-131f5f56431b-ovsdbserver-nb\") pod \"dnsmasq-dns-7776c98d8c-nj7ns\" (UID: \"a7235262-0f2b-4b25-86de-131f5f56431b\") " pod="openstack/dnsmasq-dns-7776c98d8c-nj7ns" Jan 27 14:49:21 crc kubenswrapper[4698]: I0127 14:49:21.806998 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a7235262-0f2b-4b25-86de-131f5f56431b-dns-svc\") pod \"dnsmasq-dns-7776c98d8c-nj7ns\" (UID: \"a7235262-0f2b-4b25-86de-131f5f56431b\") " pod="openstack/dnsmasq-dns-7776c98d8c-nj7ns" Jan 27 14:49:21 crc kubenswrapper[4698]: I0127 14:49:21.807112 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a7235262-0f2b-4b25-86de-131f5f56431b-ovsdbserver-sb\") pod \"dnsmasq-dns-7776c98d8c-nj7ns\" (UID: \"a7235262-0f2b-4b25-86de-131f5f56431b\") " pod="openstack/dnsmasq-dns-7776c98d8c-nj7ns" Jan 27 14:49:21 crc kubenswrapper[4698]: I0127 14:49:21.807228 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a7235262-0f2b-4b25-86de-131f5f56431b-config\") pod \"dnsmasq-dns-7776c98d8c-nj7ns\" (UID: \"a7235262-0f2b-4b25-86de-131f5f56431b\") " pod="openstack/dnsmasq-dns-7776c98d8c-nj7ns" Jan 27 14:49:21 crc kubenswrapper[4698]: I0127 14:49:21.807295 4698 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4e94ae22-f128-4a13-8c1e-7beadbfee471-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 27 14:49:21 crc kubenswrapper[4698]: I0127 
14:49:21.807306 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wlvzz\" (UniqueName: \"kubernetes.io/projected/4e94ae22-f128-4a13-8c1e-7beadbfee471-kube-api-access-wlvzz\") on node \"crc\" DevicePath \"\"" Jan 27 14:49:21 crc kubenswrapper[4698]: I0127 14:49:21.807317 4698 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4e94ae22-f128-4a13-8c1e-7beadbfee471-config\") on node \"crc\" DevicePath \"\"" Jan 27 14:49:21 crc kubenswrapper[4698]: I0127 14:49:21.908331 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a7235262-0f2b-4b25-86de-131f5f56431b-ovsdbserver-sb\") pod \"dnsmasq-dns-7776c98d8c-nj7ns\" (UID: \"a7235262-0f2b-4b25-86de-131f5f56431b\") " pod="openstack/dnsmasq-dns-7776c98d8c-nj7ns" Jan 27 14:49:21 crc kubenswrapper[4698]: I0127 14:49:21.908402 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a7235262-0f2b-4b25-86de-131f5f56431b-config\") pod \"dnsmasq-dns-7776c98d8c-nj7ns\" (UID: \"a7235262-0f2b-4b25-86de-131f5f56431b\") " pod="openstack/dnsmasq-dns-7776c98d8c-nj7ns" Jan 27 14:49:21 crc kubenswrapper[4698]: I0127 14:49:21.908487 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bzdwm\" (UniqueName: \"kubernetes.io/projected/a7235262-0f2b-4b25-86de-131f5f56431b-kube-api-access-bzdwm\") pod \"dnsmasq-dns-7776c98d8c-nj7ns\" (UID: \"a7235262-0f2b-4b25-86de-131f5f56431b\") " pod="openstack/dnsmasq-dns-7776c98d8c-nj7ns" Jan 27 14:49:21 crc kubenswrapper[4698]: I0127 14:49:21.908525 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a7235262-0f2b-4b25-86de-131f5f56431b-ovsdbserver-nb\") pod \"dnsmasq-dns-7776c98d8c-nj7ns\" (UID: \"a7235262-0f2b-4b25-86de-131f5f56431b\") " pod="openstack/dnsmasq-dns-7776c98d8c-nj7ns" Jan 27 14:49:21 crc kubenswrapper[4698]: I0127 14:49:21.908610 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a7235262-0f2b-4b25-86de-131f5f56431b-dns-svc\") pod \"dnsmasq-dns-7776c98d8c-nj7ns\" (UID: \"a7235262-0f2b-4b25-86de-131f5f56431b\") " pod="openstack/dnsmasq-dns-7776c98d8c-nj7ns" Jan 27 14:49:21 crc kubenswrapper[4698]: I0127 14:49:21.909585 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a7235262-0f2b-4b25-86de-131f5f56431b-dns-svc\") pod \"dnsmasq-dns-7776c98d8c-nj7ns\" (UID: \"a7235262-0f2b-4b25-86de-131f5f56431b\") " pod="openstack/dnsmasq-dns-7776c98d8c-nj7ns" Jan 27 14:49:21 crc kubenswrapper[4698]: I0127 14:49:21.911117 4698 generic.go:334] "Generic (PLEG): container finished" podID="4e94ae22-f128-4a13-8c1e-7beadbfee471" containerID="3ab2c80ad85298c4781744be8fd8125bb8ada5d129c9265859def1e4df485012" exitCode=0 Jan 27 14:49:21 crc kubenswrapper[4698]: I0127 14:49:21.911148 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a7235262-0f2b-4b25-86de-131f5f56431b-ovsdbserver-sb\") pod \"dnsmasq-dns-7776c98d8c-nj7ns\" (UID: \"a7235262-0f2b-4b25-86de-131f5f56431b\") " pod="openstack/dnsmasq-dns-7776c98d8c-nj7ns" Jan 27 14:49:21 crc kubenswrapper[4698]: I0127 14:49:21.911207 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/dnsmasq-dns-6699786569-xgz55" event={"ID":"4e94ae22-f128-4a13-8c1e-7beadbfee471","Type":"ContainerDied","Data":"3ab2c80ad85298c4781744be8fd8125bb8ada5d129c9265859def1e4df485012"} Jan 27 14:49:21 crc kubenswrapper[4698]: I0127 14:49:21.911234 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6699786569-xgz55" event={"ID":"4e94ae22-f128-4a13-8c1e-7beadbfee471","Type":"ContainerDied","Data":"cbe4988a8a633a3ebb6e5713339b5895abb1bc91a7870e11ea89c5344ac44633"} Jan 27 14:49:21 crc kubenswrapper[4698]: I0127 14:49:21.911236 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6699786569-xgz55" Jan 27 14:49:21 crc kubenswrapper[4698]: I0127 14:49:21.911251 4698 scope.go:117] "RemoveContainer" containerID="3ab2c80ad85298c4781744be8fd8125bb8ada5d129c9265859def1e4df485012" Jan 27 14:49:21 crc kubenswrapper[4698]: I0127 14:49:21.911260 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a7235262-0f2b-4b25-86de-131f5f56431b-ovsdbserver-nb\") pod \"dnsmasq-dns-7776c98d8c-nj7ns\" (UID: \"a7235262-0f2b-4b25-86de-131f5f56431b\") " pod="openstack/dnsmasq-dns-7776c98d8c-nj7ns" Jan 27 14:49:21 crc kubenswrapper[4698]: I0127 14:49:21.911437 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a7235262-0f2b-4b25-86de-131f5f56431b-config\") pod \"dnsmasq-dns-7776c98d8c-nj7ns\" (UID: \"a7235262-0f2b-4b25-86de-131f5f56431b\") " pod="openstack/dnsmasq-dns-7776c98d8c-nj7ns" Jan 27 14:49:21 crc kubenswrapper[4698]: I0127 14:49:21.923293 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"9374a29e-348e-43ec-9321-b0a13aeb6c4b","Type":"ContainerStarted","Data":"b9141a68c203507498bdcf24d168acc5ad4819ad13e134feac19598da1253a5c"} Jan 27 14:49:21 crc kubenswrapper[4698]: I0127 14:49:21.942416 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bzdwm\" (UniqueName: \"kubernetes.io/projected/a7235262-0f2b-4b25-86de-131f5f56431b-kube-api-access-bzdwm\") pod \"dnsmasq-dns-7776c98d8c-nj7ns\" (UID: \"a7235262-0f2b-4b25-86de-131f5f56431b\") " pod="openstack/dnsmasq-dns-7776c98d8c-nj7ns" Jan 27 14:49:21 crc kubenswrapper[4698]: I0127 14:49:21.961229 4698 scope.go:117] "RemoveContainer" containerID="a9c710e0bdd3e48c28abbc0a0fbf4eb7315f0ce4f4f756e9aeb3a9ba61e6e362" Jan 27 14:49:21 crc kubenswrapper[4698]: I0127 14:49:21.962718 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-galera-0" podStartSLOduration=32.626683301 podStartE2EDuration="44.962694538s" podCreationTimestamp="2026-01-27 14:48:37 +0000 UTC" firstStartedPulling="2026-01-27 14:49:00.700708897 +0000 UTC m=+1196.377486362" lastFinishedPulling="2026-01-27 14:49:13.036720134 +0000 UTC m=+1208.713497599" observedRunningTime="2026-01-27 14:49:21.956016432 +0000 UTC m=+1217.632793897" watchObservedRunningTime="2026-01-27 14:49:21.962694538 +0000 UTC m=+1217.639472013" Jan 27 14:49:21 crc kubenswrapper[4698]: I0127 14:49:21.996125 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6699786569-xgz55"] Jan 27 14:49:22 crc kubenswrapper[4698]: I0127 14:49:22.005719 4698 scope.go:117] "RemoveContainer" containerID="3ab2c80ad85298c4781744be8fd8125bb8ada5d129c9265859def1e4df485012" Jan 27 14:49:22 crc kubenswrapper[4698]: E0127 14:49:22.006848 4698 log.go:32] 
"ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3ab2c80ad85298c4781744be8fd8125bb8ada5d129c9265859def1e4df485012\": container with ID starting with 3ab2c80ad85298c4781744be8fd8125bb8ada5d129c9265859def1e4df485012 not found: ID does not exist" containerID="3ab2c80ad85298c4781744be8fd8125bb8ada5d129c9265859def1e4df485012" Jan 27 14:49:22 crc kubenswrapper[4698]: I0127 14:49:22.006888 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3ab2c80ad85298c4781744be8fd8125bb8ada5d129c9265859def1e4df485012"} err="failed to get container status \"3ab2c80ad85298c4781744be8fd8125bb8ada5d129c9265859def1e4df485012\": rpc error: code = NotFound desc = could not find container \"3ab2c80ad85298c4781744be8fd8125bb8ada5d129c9265859def1e4df485012\": container with ID starting with 3ab2c80ad85298c4781744be8fd8125bb8ada5d129c9265859def1e4df485012 not found: ID does not exist" Jan 27 14:49:22 crc kubenswrapper[4698]: I0127 14:49:22.006912 4698 scope.go:117] "RemoveContainer" containerID="a9c710e0bdd3e48c28abbc0a0fbf4eb7315f0ce4f4f756e9aeb3a9ba61e6e362" Jan 27 14:49:22 crc kubenswrapper[4698]: I0127 14:49:22.007231 4698 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6699786569-xgz55"] Jan 27 14:49:22 crc kubenswrapper[4698]: E0127 14:49:22.007250 4698 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a9c710e0bdd3e48c28abbc0a0fbf4eb7315f0ce4f4f756e9aeb3a9ba61e6e362\": container with ID starting with a9c710e0bdd3e48c28abbc0a0fbf4eb7315f0ce4f4f756e9aeb3a9ba61e6e362 not found: ID does not exist" containerID="a9c710e0bdd3e48c28abbc0a0fbf4eb7315f0ce4f4f756e9aeb3a9ba61e6e362" Jan 27 14:49:22 crc kubenswrapper[4698]: I0127 14:49:22.007273 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a9c710e0bdd3e48c28abbc0a0fbf4eb7315f0ce4f4f756e9aeb3a9ba61e6e362"} err="failed to get container status \"a9c710e0bdd3e48c28abbc0a0fbf4eb7315f0ce4f4f756e9aeb3a9ba61e6e362\": rpc error: code = NotFound desc = could not find container \"a9c710e0bdd3e48c28abbc0a0fbf4eb7315f0ce4f4f756e9aeb3a9ba61e6e362\": container with ID starting with a9c710e0bdd3e48c28abbc0a0fbf4eb7315f0ce4f4f756e9aeb3a9ba61e6e362 not found: ID does not exist" Jan 27 14:49:22 crc kubenswrapper[4698]: I0127 14:49:22.056415 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7776c98d8c-nj7ns" Jan 27 14:49:22 crc kubenswrapper[4698]: I0127 14:49:22.132593 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-6zb85"] Jan 27 14:49:22 crc kubenswrapper[4698]: W0127 14:49:22.147667 4698 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod487e1754_14d1_494d_97fd_495520f0c8e0.slice/crio-0d5abe429d69551949b634c4c4db639c2aa77de920363415856d01f740d5f1ad WatchSource:0}: Error finding container 0d5abe429d69551949b634c4c4db639c2aa77de920363415856d01f740d5f1ad: Status 404 returned error can't find the container with id 0d5abe429d69551949b634c4c4db639c2aa77de920363415856d01f740d5f1ad Jan 27 14:49:22 crc kubenswrapper[4698]: I0127 14:49:22.236315 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57bbd57f7-vpqbz"] Jan 27 14:49:22 crc kubenswrapper[4698]: I0127 14:49:22.358849 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Jan 27 14:49:22 crc kubenswrapper[4698]: I0127 14:49:22.470862 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7776c98d8c-nj7ns"] Jan 27 14:49:22 crc kubenswrapper[4698]: I0127 14:49:22.532993 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-54f87864f5-svgk4"] Jan 27 14:49:22 crc kubenswrapper[4698]: I0127 14:49:22.534564 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-54f87864f5-svgk4" Jan 27 14:49:22 crc kubenswrapper[4698]: I0127 14:49:22.552878 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-54f87864f5-svgk4"] Jan 27 14:49:22 crc kubenswrapper[4698]: I0127 14:49:22.576734 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7776c98d8c-nj7ns"] Jan 27 14:49:22 crc kubenswrapper[4698]: W0127 14:49:22.591461 4698 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda7235262_0f2b_4b25_86de_131f5f56431b.slice/crio-9dfe17607b8e7a4b73ab8ac86e492e4ec03b2d84ee76a19c374f4ad9c23b939f WatchSource:0}: Error finding container 9dfe17607b8e7a4b73ab8ac86e492e4ec03b2d84ee76a19c374f4ad9c23b939f: Status 404 returned error can't find the container with id 9dfe17607b8e7a4b73ab8ac86e492e4ec03b2d84ee76a19c374f4ad9c23b939f Jan 27 14:49:22 crc kubenswrapper[4698]: I0127 14:49:22.631864 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pjhln\" (UniqueName: \"kubernetes.io/projected/711fbf65-f112-4da8-8475-534064efe051-kube-api-access-pjhln\") pod \"dnsmasq-dns-54f87864f5-svgk4\" (UID: \"711fbf65-f112-4da8-8475-534064efe051\") " pod="openstack/dnsmasq-dns-54f87864f5-svgk4" Jan 27 14:49:22 crc kubenswrapper[4698]: I0127 14:49:22.631921 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/711fbf65-f112-4da8-8475-534064efe051-ovsdbserver-nb\") pod \"dnsmasq-dns-54f87864f5-svgk4\" (UID: \"711fbf65-f112-4da8-8475-534064efe051\") " pod="openstack/dnsmasq-dns-54f87864f5-svgk4" Jan 27 14:49:22 crc kubenswrapper[4698]: I0127 14:49:22.631944 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/711fbf65-f112-4da8-8475-534064efe051-dns-svc\") pod \"dnsmasq-dns-54f87864f5-svgk4\" (UID: \"711fbf65-f112-4da8-8475-534064efe051\") " pod="openstack/dnsmasq-dns-54f87864f5-svgk4" Jan 27 14:49:22 crc kubenswrapper[4698]: I0127 14:49:22.632007 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/711fbf65-f112-4da8-8475-534064efe051-ovsdbserver-sb\") pod \"dnsmasq-dns-54f87864f5-svgk4\" (UID: \"711fbf65-f112-4da8-8475-534064efe051\") " pod="openstack/dnsmasq-dns-54f87864f5-svgk4" Jan 27 14:49:22 crc kubenswrapper[4698]: I0127 14:49:22.632031 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/711fbf65-f112-4da8-8475-534064efe051-config\") pod \"dnsmasq-dns-54f87864f5-svgk4\" (UID: \"711fbf65-f112-4da8-8475-534064efe051\") " pod="openstack/dnsmasq-dns-54f87864f5-svgk4" Jan 27 14:49:22 crc kubenswrapper[4698]: I0127 14:49:22.734878 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pjhln\" (UniqueName: \"kubernetes.io/projected/711fbf65-f112-4da8-8475-534064efe051-kube-api-access-pjhln\") pod \"dnsmasq-dns-54f87864f5-svgk4\" (UID: \"711fbf65-f112-4da8-8475-534064efe051\") " pod="openstack/dnsmasq-dns-54f87864f5-svgk4" Jan 27 14:49:22 crc kubenswrapper[4698]: I0127 14:49:22.734943 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/711fbf65-f112-4da8-8475-534064efe051-ovsdbserver-nb\") pod \"dnsmasq-dns-54f87864f5-svgk4\" (UID: \"711fbf65-f112-4da8-8475-534064efe051\") " pod="openstack/dnsmasq-dns-54f87864f5-svgk4" Jan 27 14:49:22 crc kubenswrapper[4698]: I0127 14:49:22.734969 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/711fbf65-f112-4da8-8475-534064efe051-dns-svc\") pod \"dnsmasq-dns-54f87864f5-svgk4\" (UID: \"711fbf65-f112-4da8-8475-534064efe051\") " pod="openstack/dnsmasq-dns-54f87864f5-svgk4" Jan 27 14:49:22 crc kubenswrapper[4698]: I0127 14:49:22.735070 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/711fbf65-f112-4da8-8475-534064efe051-ovsdbserver-sb\") pod \"dnsmasq-dns-54f87864f5-svgk4\" (UID: \"711fbf65-f112-4da8-8475-534064efe051\") " pod="openstack/dnsmasq-dns-54f87864f5-svgk4" Jan 27 14:49:22 crc kubenswrapper[4698]: I0127 14:49:22.735117 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/711fbf65-f112-4da8-8475-534064efe051-config\") pod \"dnsmasq-dns-54f87864f5-svgk4\" (UID: \"711fbf65-f112-4da8-8475-534064efe051\") " pod="openstack/dnsmasq-dns-54f87864f5-svgk4" Jan 27 14:49:22 crc kubenswrapper[4698]: I0127 14:49:22.736143 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/711fbf65-f112-4da8-8475-534064efe051-config\") pod \"dnsmasq-dns-54f87864f5-svgk4\" (UID: \"711fbf65-f112-4da8-8475-534064efe051\") " pod="openstack/dnsmasq-dns-54f87864f5-svgk4" Jan 27 14:49:22 crc kubenswrapper[4698]: I0127 14:49:22.736374 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/711fbf65-f112-4da8-8475-534064efe051-dns-svc\") pod 
\"dnsmasq-dns-54f87864f5-svgk4\" (UID: \"711fbf65-f112-4da8-8475-534064efe051\") " pod="openstack/dnsmasq-dns-54f87864f5-svgk4" Jan 27 14:49:22 crc kubenswrapper[4698]: I0127 14:49:22.737378 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/711fbf65-f112-4da8-8475-534064efe051-ovsdbserver-nb\") pod \"dnsmasq-dns-54f87864f5-svgk4\" (UID: \"711fbf65-f112-4da8-8475-534064efe051\") " pod="openstack/dnsmasq-dns-54f87864f5-svgk4" Jan 27 14:49:22 crc kubenswrapper[4698]: I0127 14:49:22.741780 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/711fbf65-f112-4da8-8475-534064efe051-ovsdbserver-sb\") pod \"dnsmasq-dns-54f87864f5-svgk4\" (UID: \"711fbf65-f112-4da8-8475-534064efe051\") " pod="openstack/dnsmasq-dns-54f87864f5-svgk4" Jan 27 14:49:22 crc kubenswrapper[4698]: I0127 14:49:22.755341 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pjhln\" (UniqueName: \"kubernetes.io/projected/711fbf65-f112-4da8-8475-534064efe051-kube-api-access-pjhln\") pod \"dnsmasq-dns-54f87864f5-svgk4\" (UID: \"711fbf65-f112-4da8-8475-534064efe051\") " pod="openstack/dnsmasq-dns-54f87864f5-svgk4" Jan 27 14:49:22 crc kubenswrapper[4698]: I0127 14:49:22.780223 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-0" Jan 27 14:49:22 crc kubenswrapper[4698]: I0127 14:49:22.825856 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-0" Jan 27 14:49:22 crc kubenswrapper[4698]: I0127 14:49:22.932745 4698 generic.go:334] "Generic (PLEG): container finished" podID="a7235262-0f2b-4b25-86de-131f5f56431b" containerID="44424409700d2c217377ccc64f46f683417a69fbd01f968500d4f6be85e1d2bd" exitCode=0 Jan 27 14:49:22 crc kubenswrapper[4698]: I0127 14:49:22.932829 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7776c98d8c-nj7ns" event={"ID":"a7235262-0f2b-4b25-86de-131f5f56431b","Type":"ContainerDied","Data":"44424409700d2c217377ccc64f46f683417a69fbd01f968500d4f6be85e1d2bd"} Jan 27 14:49:22 crc kubenswrapper[4698]: I0127 14:49:22.932885 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7776c98d8c-nj7ns" event={"ID":"a7235262-0f2b-4b25-86de-131f5f56431b","Type":"ContainerStarted","Data":"9dfe17607b8e7a4b73ab8ac86e492e4ec03b2d84ee76a19c374f4ad9c23b939f"} Jan 27 14:49:22 crc kubenswrapper[4698]: I0127 14:49:22.934626 4698 generic.go:334] "Generic (PLEG): container finished" podID="0dd0a57c-30a4-4dd8-8e30-456f2316bd22" containerID="9fb4f85aeb7bf89fec9b56b2f12be28c615adca0a24118867420fa3ba48a3859" exitCode=0 Jan 27 14:49:22 crc kubenswrapper[4698]: I0127 14:49:22.934701 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57bbd57f7-vpqbz" event={"ID":"0dd0a57c-30a4-4dd8-8e30-456f2316bd22","Type":"ContainerDied","Data":"9fb4f85aeb7bf89fec9b56b2f12be28c615adca0a24118867420fa3ba48a3859"} Jan 27 14:49:22 crc kubenswrapper[4698]: I0127 14:49:22.934723 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57bbd57f7-vpqbz" event={"ID":"0dd0a57c-30a4-4dd8-8e30-456f2316bd22","Type":"ContainerStarted","Data":"3b067e41bdc9ea028d9f6d29d000b87c5bdcc9ba9d7b3092f8d7c72a6f44135d"} Jan 27 14:49:22 crc kubenswrapper[4698]: I0127 14:49:22.936297 4698 generic.go:334] "Generic (PLEG): container finished" 
podID="03d06c18-e82f-417e-b3bd-6365030bee53" containerID="31d868c1e1393239086f38c63c9950a69ffbfbc96fdfdc18edfdc5dc6bab0a58" exitCode=0 Jan 27 14:49:22 crc kubenswrapper[4698]: I0127 14:49:22.936334 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"03d06c18-e82f-417e-b3bd-6365030bee53","Type":"ContainerDied","Data":"31d868c1e1393239086f38c63c9950a69ffbfbc96fdfdc18edfdc5dc6bab0a58"} Jan 27 14:49:22 crc kubenswrapper[4698]: I0127 14:49:22.938178 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-6zb85" event={"ID":"487e1754-14d1-494d-97fd-495520f0c8e0","Type":"ContainerStarted","Data":"2589e7340da815d8cbd430cce3a0cb16a936ea35180f18798fd4b088c89eb6b4"} Jan 27 14:49:22 crc kubenswrapper[4698]: I0127 14:49:22.938232 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-0" Jan 27 14:49:22 crc kubenswrapper[4698]: I0127 14:49:22.938248 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-6zb85" event={"ID":"487e1754-14d1-494d-97fd-495520f0c8e0","Type":"ContainerStarted","Data":"0d5abe429d69551949b634c4c4db639c2aa77de920363415856d01f740d5f1ad"} Jan 27 14:49:22 crc kubenswrapper[4698]: I0127 14:49:22.945272 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-54f87864f5-svgk4" Jan 27 14:49:23 crc kubenswrapper[4698]: I0127 14:49:23.012169 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4e94ae22-f128-4a13-8c1e-7beadbfee471" path="/var/lib/kubelet/pods/4e94ae22-f128-4a13-8c1e-7beadbfee471/volumes" Jan 27 14:49:23 crc kubenswrapper[4698]: I0127 14:49:23.012918 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-0" Jan 27 14:49:23 crc kubenswrapper[4698]: I0127 14:49:23.027859 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-metrics-6zb85" podStartSLOduration=2.027844518 podStartE2EDuration="2.027844518s" podCreationTimestamp="2026-01-27 14:49:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:49:23.026141073 +0000 UTC m=+1218.702918538" watchObservedRunningTime="2026-01-27 14:49:23.027844518 +0000 UTC m=+1218.704621983" Jan 27 14:49:23 crc kubenswrapper[4698]: I0127 14:49:23.418817 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-northd-0"] Jan 27 14:49:23 crc kubenswrapper[4698]: I0127 14:49:23.420676 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-northd-0" Jan 27 14:49:23 crc kubenswrapper[4698]: I0127 14:49:23.428095 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-config" Jan 27 14:49:23 crc kubenswrapper[4698]: I0127 14:49:23.428394 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-scripts" Jan 27 14:49:23 crc kubenswrapper[4698]: I0127 14:49:23.433038 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovnnorthd-ovndbs" Jan 27 14:49:23 crc kubenswrapper[4698]: I0127 14:49:23.433242 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovnnorthd-ovnnorthd-dockercfg-kbj6m" Jan 27 14:49:23 crc kubenswrapper[4698]: I0127 14:49:23.445016 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Jan 27 14:49:23 crc kubenswrapper[4698]: I0127 14:49:23.555999 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/1fbe86d1-6225-4de5-81a1-9222e08bcec5-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"1fbe86d1-6225-4de5-81a1-9222e08bcec5\") " pod="openstack/ovn-northd-0" Jan 27 14:49:23 crc kubenswrapper[4698]: I0127 14:49:23.556048 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/1fbe86d1-6225-4de5-81a1-9222e08bcec5-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"1fbe86d1-6225-4de5-81a1-9222e08bcec5\") " pod="openstack/ovn-northd-0" Jan 27 14:49:23 crc kubenswrapper[4698]: I0127 14:49:23.556073 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dzshp\" (UniqueName: \"kubernetes.io/projected/1fbe86d1-6225-4de5-81a1-9222e08bcec5-kube-api-access-dzshp\") pod \"ovn-northd-0\" (UID: \"1fbe86d1-6225-4de5-81a1-9222e08bcec5\") " pod="openstack/ovn-northd-0" Jan 27 14:49:23 crc kubenswrapper[4698]: I0127 14:49:23.556117 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/1fbe86d1-6225-4de5-81a1-9222e08bcec5-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"1fbe86d1-6225-4de5-81a1-9222e08bcec5\") " pod="openstack/ovn-northd-0" Jan 27 14:49:23 crc kubenswrapper[4698]: I0127 14:49:23.556147 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1fbe86d1-6225-4de5-81a1-9222e08bcec5-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"1fbe86d1-6225-4de5-81a1-9222e08bcec5\") " pod="openstack/ovn-northd-0" Jan 27 14:49:23 crc kubenswrapper[4698]: I0127 14:49:23.556202 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1fbe86d1-6225-4de5-81a1-9222e08bcec5-scripts\") pod \"ovn-northd-0\" (UID: \"1fbe86d1-6225-4de5-81a1-9222e08bcec5\") " pod="openstack/ovn-northd-0" Jan 27 14:49:23 crc kubenswrapper[4698]: I0127 14:49:23.556233 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1fbe86d1-6225-4de5-81a1-9222e08bcec5-config\") pod \"ovn-northd-0\" (UID: \"1fbe86d1-6225-4de5-81a1-9222e08bcec5\") " pod="openstack/ovn-northd-0" Jan 27 14:49:23 crc kubenswrapper[4698]: 
I0127 14:49:23.597422 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7776c98d8c-nj7ns" Jan 27 14:49:23 crc kubenswrapper[4698]: I0127 14:49:23.631035 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-storage-0"] Jan 27 14:49:23 crc kubenswrapper[4698]: E0127 14:49:23.631442 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a7235262-0f2b-4b25-86de-131f5f56431b" containerName="init" Jan 27 14:49:23 crc kubenswrapper[4698]: I0127 14:49:23.631465 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="a7235262-0f2b-4b25-86de-131f5f56431b" containerName="init" Jan 27 14:49:23 crc kubenswrapper[4698]: I0127 14:49:23.631699 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="a7235262-0f2b-4b25-86de-131f5f56431b" containerName="init" Jan 27 14:49:23 crc kubenswrapper[4698]: I0127 14:49:23.638976 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Jan 27 14:49:23 crc kubenswrapper[4698]: I0127 14:49:23.639156 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-storage-0" Jan 27 14:49:23 crc kubenswrapper[4698]: I0127 14:49:23.657731 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/1fbe86d1-6225-4de5-81a1-9222e08bcec5-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"1fbe86d1-6225-4de5-81a1-9222e08bcec5\") " pod="openstack/ovn-northd-0" Jan 27 14:49:23 crc kubenswrapper[4698]: I0127 14:49:23.657785 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/1fbe86d1-6225-4de5-81a1-9222e08bcec5-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"1fbe86d1-6225-4de5-81a1-9222e08bcec5\") " pod="openstack/ovn-northd-0" Jan 27 14:49:23 crc kubenswrapper[4698]: I0127 14:49:23.657810 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dzshp\" (UniqueName: \"kubernetes.io/projected/1fbe86d1-6225-4de5-81a1-9222e08bcec5-kube-api-access-dzshp\") pod \"ovn-northd-0\" (UID: \"1fbe86d1-6225-4de5-81a1-9222e08bcec5\") " pod="openstack/ovn-northd-0" Jan 27 14:49:23 crc kubenswrapper[4698]: I0127 14:49:23.657874 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/1fbe86d1-6225-4de5-81a1-9222e08bcec5-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"1fbe86d1-6225-4de5-81a1-9222e08bcec5\") " pod="openstack/ovn-northd-0" Jan 27 14:49:23 crc kubenswrapper[4698]: I0127 14:49:23.657907 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1fbe86d1-6225-4de5-81a1-9222e08bcec5-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"1fbe86d1-6225-4de5-81a1-9222e08bcec5\") " pod="openstack/ovn-northd-0" Jan 27 14:49:23 crc kubenswrapper[4698]: I0127 14:49:23.657966 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1fbe86d1-6225-4de5-81a1-9222e08bcec5-scripts\") pod \"ovn-northd-0\" (UID: \"1fbe86d1-6225-4de5-81a1-9222e08bcec5\") " pod="openstack/ovn-northd-0" Jan 27 14:49:23 crc kubenswrapper[4698]: I0127 14:49:23.657999 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/1fbe86d1-6225-4de5-81a1-9222e08bcec5-config\") pod \"ovn-northd-0\" (UID: \"1fbe86d1-6225-4de5-81a1-9222e08bcec5\") " pod="openstack/ovn-northd-0" Jan 27 14:49:23 crc kubenswrapper[4698]: I0127 14:49:23.658763 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-files" Jan 27 14:49:23 crc kubenswrapper[4698]: I0127 14:49:23.658938 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1fbe86d1-6225-4de5-81a1-9222e08bcec5-config\") pod \"ovn-northd-0\" (UID: \"1fbe86d1-6225-4de5-81a1-9222e08bcec5\") " pod="openstack/ovn-northd-0" Jan 27 14:49:23 crc kubenswrapper[4698]: I0127 14:49:23.659085 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-swift-dockercfg-2dthd" Jan 27 14:49:23 crc kubenswrapper[4698]: I0127 14:49:23.659223 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-conf" Jan 27 14:49:23 crc kubenswrapper[4698]: I0127 14:49:23.659248 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/1fbe86d1-6225-4de5-81a1-9222e08bcec5-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"1fbe86d1-6225-4de5-81a1-9222e08bcec5\") " pod="openstack/ovn-northd-0" Jan 27 14:49:23 crc kubenswrapper[4698]: I0127 14:49:23.659362 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-storage-config-data" Jan 27 14:49:23 crc kubenswrapper[4698]: I0127 14:49:23.662536 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1fbe86d1-6225-4de5-81a1-9222e08bcec5-scripts\") pod \"ovn-northd-0\" (UID: \"1fbe86d1-6225-4de5-81a1-9222e08bcec5\") " pod="openstack/ovn-northd-0" Jan 27 14:49:23 crc kubenswrapper[4698]: I0127 14:49:23.667452 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/1fbe86d1-6225-4de5-81a1-9222e08bcec5-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"1fbe86d1-6225-4de5-81a1-9222e08bcec5\") " pod="openstack/ovn-northd-0" Jan 27 14:49:23 crc kubenswrapper[4698]: I0127 14:49:23.685334 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/1fbe86d1-6225-4de5-81a1-9222e08bcec5-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"1fbe86d1-6225-4de5-81a1-9222e08bcec5\") " pod="openstack/ovn-northd-0" Jan 27 14:49:23 crc kubenswrapper[4698]: I0127 14:49:23.686041 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1fbe86d1-6225-4de5-81a1-9222e08bcec5-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"1fbe86d1-6225-4de5-81a1-9222e08bcec5\") " pod="openstack/ovn-northd-0" Jan 27 14:49:23 crc kubenswrapper[4698]: I0127 14:49:23.694860 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dzshp\" (UniqueName: \"kubernetes.io/projected/1fbe86d1-6225-4de5-81a1-9222e08bcec5-kube-api-access-dzshp\") pod \"ovn-northd-0\" (UID: \"1fbe86d1-6225-4de5-81a1-9222e08bcec5\") " pod="openstack/ovn-northd-0" Jan 27 14:49:23 crc kubenswrapper[4698]: I0127 14:49:23.757391 4698 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-57bbd57f7-vpqbz" Jan 27 14:49:23 crc kubenswrapper[4698]: I0127 14:49:23.759597 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bzdwm\" (UniqueName: \"kubernetes.io/projected/a7235262-0f2b-4b25-86de-131f5f56431b-kube-api-access-bzdwm\") pod \"a7235262-0f2b-4b25-86de-131f5f56431b\" (UID: \"a7235262-0f2b-4b25-86de-131f5f56431b\") " Jan 27 14:49:23 crc kubenswrapper[4698]: I0127 14:49:23.759795 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a7235262-0f2b-4b25-86de-131f5f56431b-ovsdbserver-nb\") pod \"a7235262-0f2b-4b25-86de-131f5f56431b\" (UID: \"a7235262-0f2b-4b25-86de-131f5f56431b\") " Jan 27 14:49:23 crc kubenswrapper[4698]: I0127 14:49:23.759894 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a7235262-0f2b-4b25-86de-131f5f56431b-config\") pod \"a7235262-0f2b-4b25-86de-131f5f56431b\" (UID: \"a7235262-0f2b-4b25-86de-131f5f56431b\") " Jan 27 14:49:23 crc kubenswrapper[4698]: I0127 14:49:23.759950 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a7235262-0f2b-4b25-86de-131f5f56431b-ovsdbserver-sb\") pod \"a7235262-0f2b-4b25-86de-131f5f56431b\" (UID: \"a7235262-0f2b-4b25-86de-131f5f56431b\") " Jan 27 14:49:23 crc kubenswrapper[4698]: I0127 14:49:23.759967 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a7235262-0f2b-4b25-86de-131f5f56431b-dns-svc\") pod \"a7235262-0f2b-4b25-86de-131f5f56431b\" (UID: \"a7235262-0f2b-4b25-86de-131f5f56431b\") " Jan 27 14:49:23 crc kubenswrapper[4698]: I0127 14:49:23.760568 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/f15487de-4580-4abf-a96c-3c5d364fe2d5-cache\") pod \"swift-storage-0\" (UID: \"f15487de-4580-4abf-a96c-3c5d364fe2d5\") " pod="openstack/swift-storage-0" Jan 27 14:49:23 crc kubenswrapper[4698]: I0127 14:49:23.760668 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/f15487de-4580-4abf-a96c-3c5d364fe2d5-lock\") pod \"swift-storage-0\" (UID: \"f15487de-4580-4abf-a96c-3c5d364fe2d5\") " pod="openstack/swift-storage-0" Jan 27 14:49:23 crc kubenswrapper[4698]: I0127 14:49:23.760694 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nfwpj\" (UniqueName: \"kubernetes.io/projected/f15487de-4580-4abf-a96c-3c5d364fe2d5-kube-api-access-nfwpj\") pod \"swift-storage-0\" (UID: \"f15487de-4580-4abf-a96c-3c5d364fe2d5\") " pod="openstack/swift-storage-0" Jan 27 14:49:23 crc kubenswrapper[4698]: I0127 14:49:23.760785 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"swift-storage-0\" (UID: \"f15487de-4580-4abf-a96c-3c5d364fe2d5\") " pod="openstack/swift-storage-0" Jan 27 14:49:23 crc kubenswrapper[4698]: I0127 14:49:23.760842 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/f15487de-4580-4abf-a96c-3c5d364fe2d5-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"f15487de-4580-4abf-a96c-3c5d364fe2d5\") " pod="openstack/swift-storage-0" Jan 27 14:49:23 crc kubenswrapper[4698]: I0127 14:49:23.761720 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/f15487de-4580-4abf-a96c-3c5d364fe2d5-etc-swift\") pod \"swift-storage-0\" (UID: \"f15487de-4580-4abf-a96c-3c5d364fe2d5\") " pod="openstack/swift-storage-0" Jan 27 14:49:23 crc kubenswrapper[4698]: I0127 14:49:23.765314 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a7235262-0f2b-4b25-86de-131f5f56431b-kube-api-access-bzdwm" (OuterVolumeSpecName: "kube-api-access-bzdwm") pod "a7235262-0f2b-4b25-86de-131f5f56431b" (UID: "a7235262-0f2b-4b25-86de-131f5f56431b"). InnerVolumeSpecName "kube-api-access-bzdwm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:49:23 crc kubenswrapper[4698]: I0127 14:49:23.783120 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a7235262-0f2b-4b25-86de-131f5f56431b-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "a7235262-0f2b-4b25-86de-131f5f56431b" (UID: "a7235262-0f2b-4b25-86de-131f5f56431b"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:49:23 crc kubenswrapper[4698]: I0127 14:49:23.786178 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a7235262-0f2b-4b25-86de-131f5f56431b-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "a7235262-0f2b-4b25-86de-131f5f56431b" (UID: "a7235262-0f2b-4b25-86de-131f5f56431b"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:49:23 crc kubenswrapper[4698]: I0127 14:49:23.797388 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a7235262-0f2b-4b25-86de-131f5f56431b-config" (OuterVolumeSpecName: "config") pod "a7235262-0f2b-4b25-86de-131f5f56431b" (UID: "a7235262-0f2b-4b25-86de-131f5f56431b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:49:23 crc kubenswrapper[4698]: I0127 14:49:23.798038 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a7235262-0f2b-4b25-86de-131f5f56431b-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "a7235262-0f2b-4b25-86de-131f5f56431b" (UID: "a7235262-0f2b-4b25-86de-131f5f56431b"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:49:23 crc kubenswrapper[4698]: I0127 14:49:23.863802 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fpmmc\" (UniqueName: \"kubernetes.io/projected/0dd0a57c-30a4-4dd8-8e30-456f2316bd22-kube-api-access-fpmmc\") pod \"0dd0a57c-30a4-4dd8-8e30-456f2316bd22\" (UID: \"0dd0a57c-30a4-4dd8-8e30-456f2316bd22\") " Jan 27 14:49:23 crc kubenswrapper[4698]: I0127 14:49:23.863892 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0dd0a57c-30a4-4dd8-8e30-456f2316bd22-ovsdbserver-sb\") pod \"0dd0a57c-30a4-4dd8-8e30-456f2316bd22\" (UID: \"0dd0a57c-30a4-4dd8-8e30-456f2316bd22\") " Jan 27 14:49:23 crc kubenswrapper[4698]: I0127 14:49:23.864058 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0dd0a57c-30a4-4dd8-8e30-456f2316bd22-dns-svc\") pod \"0dd0a57c-30a4-4dd8-8e30-456f2316bd22\" (UID: \"0dd0a57c-30a4-4dd8-8e30-456f2316bd22\") " Jan 27 14:49:23 crc kubenswrapper[4698]: I0127 14:49:23.864123 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0dd0a57c-30a4-4dd8-8e30-456f2316bd22-config\") pod \"0dd0a57c-30a4-4dd8-8e30-456f2316bd22\" (UID: \"0dd0a57c-30a4-4dd8-8e30-456f2316bd22\") " Jan 27 14:49:23 crc kubenswrapper[4698]: I0127 14:49:23.864439 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/f15487de-4580-4abf-a96c-3c5d364fe2d5-lock\") pod \"swift-storage-0\" (UID: \"f15487de-4580-4abf-a96c-3c5d364fe2d5\") " pod="openstack/swift-storage-0" Jan 27 14:49:23 crc kubenswrapper[4698]: I0127 14:49:23.864498 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nfwpj\" (UniqueName: \"kubernetes.io/projected/f15487de-4580-4abf-a96c-3c5d364fe2d5-kube-api-access-nfwpj\") pod \"swift-storage-0\" (UID: \"f15487de-4580-4abf-a96c-3c5d364fe2d5\") " pod="openstack/swift-storage-0" Jan 27 14:49:23 crc kubenswrapper[4698]: I0127 14:49:23.864577 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"swift-storage-0\" (UID: \"f15487de-4580-4abf-a96c-3c5d364fe2d5\") " pod="openstack/swift-storage-0" Jan 27 14:49:23 crc kubenswrapper[4698]: I0127 14:49:23.864619 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f15487de-4580-4abf-a96c-3c5d364fe2d5-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"f15487de-4580-4abf-a96c-3c5d364fe2d5\") " pod="openstack/swift-storage-0" Jan 27 14:49:23 crc kubenswrapper[4698]: I0127 14:49:23.864659 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/f15487de-4580-4abf-a96c-3c5d364fe2d5-etc-swift\") pod \"swift-storage-0\" (UID: \"f15487de-4580-4abf-a96c-3c5d364fe2d5\") " pod="openstack/swift-storage-0" Jan 27 14:49:23 crc kubenswrapper[4698]: I0127 14:49:23.864751 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/f15487de-4580-4abf-a96c-3c5d364fe2d5-cache\") pod \"swift-storage-0\" (UID: \"f15487de-4580-4abf-a96c-3c5d364fe2d5\") " 
pod="openstack/swift-storage-0" Jan 27 14:49:23 crc kubenswrapper[4698]: I0127 14:49:23.864855 4698 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a7235262-0f2b-4b25-86de-131f5f56431b-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 27 14:49:23 crc kubenswrapper[4698]: I0127 14:49:23.864870 4698 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a7235262-0f2b-4b25-86de-131f5f56431b-config\") on node \"crc\" DevicePath \"\"" Jan 27 14:49:23 crc kubenswrapper[4698]: I0127 14:49:23.864882 4698 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a7235262-0f2b-4b25-86de-131f5f56431b-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 27 14:49:23 crc kubenswrapper[4698]: I0127 14:49:23.864895 4698 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a7235262-0f2b-4b25-86de-131f5f56431b-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 27 14:49:23 crc kubenswrapper[4698]: I0127 14:49:23.864906 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bzdwm\" (UniqueName: \"kubernetes.io/projected/a7235262-0f2b-4b25-86de-131f5f56431b-kube-api-access-bzdwm\") on node \"crc\" DevicePath \"\"" Jan 27 14:49:23 crc kubenswrapper[4698]: I0127 14:49:23.865026 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/f15487de-4580-4abf-a96c-3c5d364fe2d5-lock\") pod \"swift-storage-0\" (UID: \"f15487de-4580-4abf-a96c-3c5d364fe2d5\") " pod="openstack/swift-storage-0" Jan 27 14:49:23 crc kubenswrapper[4698]: I0127 14:49:23.865397 4698 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"swift-storage-0\" (UID: \"f15487de-4580-4abf-a96c-3c5d364fe2d5\") device mount path \"/mnt/openstack/pv07\"" pod="openstack/swift-storage-0" Jan 27 14:49:23 crc kubenswrapper[4698]: E0127 14:49:23.866088 4698 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 27 14:49:23 crc kubenswrapper[4698]: E0127 14:49:23.866109 4698 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 27 14:49:23 crc kubenswrapper[4698]: E0127 14:49:23.866157 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f15487de-4580-4abf-a96c-3c5d364fe2d5-etc-swift podName:f15487de-4580-4abf-a96c-3c5d364fe2d5 nodeName:}" failed. No retries permitted until 2026-01-27 14:49:24.366136284 +0000 UTC m=+1220.042913819 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/f15487de-4580-4abf-a96c-3c5d364fe2d5-etc-swift") pod "swift-storage-0" (UID: "f15487de-4580-4abf-a96c-3c5d364fe2d5") : configmap "swift-ring-files" not found Jan 27 14:49:23 crc kubenswrapper[4698]: I0127 14:49:23.866424 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/f15487de-4580-4abf-a96c-3c5d364fe2d5-cache\") pod \"swift-storage-0\" (UID: \"f15487de-4580-4abf-a96c-3c5d364fe2d5\") " pod="openstack/swift-storage-0" Jan 27 14:49:23 crc kubenswrapper[4698]: I0127 14:49:23.868998 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0dd0a57c-30a4-4dd8-8e30-456f2316bd22-kube-api-access-fpmmc" (OuterVolumeSpecName: "kube-api-access-fpmmc") pod "0dd0a57c-30a4-4dd8-8e30-456f2316bd22" (UID: "0dd0a57c-30a4-4dd8-8e30-456f2316bd22"). InnerVolumeSpecName "kube-api-access-fpmmc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:49:23 crc kubenswrapper[4698]: I0127 14:49:23.870723 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f15487de-4580-4abf-a96c-3c5d364fe2d5-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"f15487de-4580-4abf-a96c-3c5d364fe2d5\") " pod="openstack/swift-storage-0" Jan 27 14:49:23 crc kubenswrapper[4698]: I0127 14:49:23.886124 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0dd0a57c-30a4-4dd8-8e30-456f2316bd22-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "0dd0a57c-30a4-4dd8-8e30-456f2316bd22" (UID: "0dd0a57c-30a4-4dd8-8e30-456f2316bd22"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:49:23 crc kubenswrapper[4698]: I0127 14:49:23.887421 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nfwpj\" (UniqueName: \"kubernetes.io/projected/f15487de-4580-4abf-a96c-3c5d364fe2d5-kube-api-access-nfwpj\") pod \"swift-storage-0\" (UID: \"f15487de-4580-4abf-a96c-3c5d364fe2d5\") " pod="openstack/swift-storage-0" Jan 27 14:49:23 crc kubenswrapper[4698]: I0127 14:49:23.891143 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0" Jan 27 14:49:23 crc kubenswrapper[4698]: I0127 14:49:23.894305 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"swift-storage-0\" (UID: \"f15487de-4580-4abf-a96c-3c5d364fe2d5\") " pod="openstack/swift-storage-0" Jan 27 14:49:23 crc kubenswrapper[4698]: I0127 14:49:23.894604 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0dd0a57c-30a4-4dd8-8e30-456f2316bd22-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "0dd0a57c-30a4-4dd8-8e30-456f2316bd22" (UID: "0dd0a57c-30a4-4dd8-8e30-456f2316bd22"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:49:23 crc kubenswrapper[4698]: I0127 14:49:23.902181 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0dd0a57c-30a4-4dd8-8e30-456f2316bd22-config" (OuterVolumeSpecName: "config") pod "0dd0a57c-30a4-4dd8-8e30-456f2316bd22" (UID: "0dd0a57c-30a4-4dd8-8e30-456f2316bd22"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:49:23 crc kubenswrapper[4698]: I0127 14:49:23.948266 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57bbd57f7-vpqbz" event={"ID":"0dd0a57c-30a4-4dd8-8e30-456f2316bd22","Type":"ContainerDied","Data":"3b067e41bdc9ea028d9f6d29d000b87c5bdcc9ba9d7b3092f8d7c72a6f44135d"} Jan 27 14:49:23 crc kubenswrapper[4698]: I0127 14:49:23.948326 4698 scope.go:117] "RemoveContainer" containerID="9fb4f85aeb7bf89fec9b56b2f12be28c615adca0a24118867420fa3ba48a3859" Jan 27 14:49:23 crc kubenswrapper[4698]: I0127 14:49:23.948287 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57bbd57f7-vpqbz" Jan 27 14:49:23 crc kubenswrapper[4698]: I0127 14:49:23.950232 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7776c98d8c-nj7ns" event={"ID":"a7235262-0f2b-4b25-86de-131f5f56431b","Type":"ContainerDied","Data":"9dfe17607b8e7a4b73ab8ac86e492e4ec03b2d84ee76a19c374f4ad9c23b939f"} Jan 27 14:49:23 crc kubenswrapper[4698]: I0127 14:49:23.950432 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7776c98d8c-nj7ns" Jan 27 14:49:23 crc kubenswrapper[4698]: I0127 14:49:23.966105 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fpmmc\" (UniqueName: \"kubernetes.io/projected/0dd0a57c-30a4-4dd8-8e30-456f2316bd22-kube-api-access-fpmmc\") on node \"crc\" DevicePath \"\"" Jan 27 14:49:23 crc kubenswrapper[4698]: I0127 14:49:23.966138 4698 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0dd0a57c-30a4-4dd8-8e30-456f2316bd22-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 27 14:49:23 crc kubenswrapper[4698]: I0127 14:49:23.966147 4698 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0dd0a57c-30a4-4dd8-8e30-456f2316bd22-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 27 14:49:23 crc kubenswrapper[4698]: I0127 14:49:23.966157 4698 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0dd0a57c-30a4-4dd8-8e30-456f2316bd22-config\") on node \"crc\" DevicePath \"\"" Jan 27 14:49:24 crc kubenswrapper[4698]: I0127 14:49:24.017017 4698 scope.go:117] "RemoveContainer" containerID="44424409700d2c217377ccc64f46f683417a69fbd01f968500d4f6be85e1d2bd" Jan 27 14:49:24 crc kubenswrapper[4698]: I0127 14:49:24.107394 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-54f87864f5-svgk4"] Jan 27 14:49:24 crc kubenswrapper[4698]: I0127 14:49:24.117208 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57bbd57f7-vpqbz"] Jan 27 14:49:24 crc kubenswrapper[4698]: I0127 14:49:24.137443 4698 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-57bbd57f7-vpqbz"] Jan 27 14:49:24 crc kubenswrapper[4698]: I0127 14:49:24.191257 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7776c98d8c-nj7ns"] Jan 27 14:49:24 crc kubenswrapper[4698]: I0127 14:49:24.204767 4698 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7776c98d8c-nj7ns"] Jan 27 14:49:24 crc kubenswrapper[4698]: I0127 14:49:24.221713 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-ring-rebalance-52ttj"] Jan 27 14:49:24 crc kubenswrapper[4698]: E0127 14:49:24.222200 4698 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="0dd0a57c-30a4-4dd8-8e30-456f2316bd22" containerName="init" Jan 27 14:49:24 crc kubenswrapper[4698]: I0127 14:49:24.222218 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="0dd0a57c-30a4-4dd8-8e30-456f2316bd22" containerName="init" Jan 27 14:49:24 crc kubenswrapper[4698]: I0127 14:49:24.222434 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="0dd0a57c-30a4-4dd8-8e30-456f2316bd22" containerName="init" Jan 27 14:49:24 crc kubenswrapper[4698]: I0127 14:49:24.246026 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-52ttj"] Jan 27 14:49:24 crc kubenswrapper[4698]: I0127 14:49:24.246163 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-52ttj" Jan 27 14:49:24 crc kubenswrapper[4698]: I0127 14:49:24.250001 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-scripts" Jan 27 14:49:24 crc kubenswrapper[4698]: I0127 14:49:24.250104 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-config-data" Jan 27 14:49:24 crc kubenswrapper[4698]: I0127 14:49:24.252816 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Jan 27 14:49:24 crc kubenswrapper[4698]: I0127 14:49:24.383354 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/f15487de-4580-4abf-a96c-3c5d364fe2d5-etc-swift\") pod \"swift-storage-0\" (UID: \"f15487de-4580-4abf-a96c-3c5d364fe2d5\") " pod="openstack/swift-storage-0" Jan 27 14:49:24 crc kubenswrapper[4698]: I0127 14:49:24.383780 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/fd96ec0a-9e80-4f7e-b009-b83aaa6e726e-etc-swift\") pod \"swift-ring-rebalance-52ttj\" (UID: \"fd96ec0a-9e80-4f7e-b009-b83aaa6e726e\") " pod="openstack/swift-ring-rebalance-52ttj" Jan 27 14:49:24 crc kubenswrapper[4698]: I0127 14:49:24.383836 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-849lx\" (UniqueName: \"kubernetes.io/projected/fd96ec0a-9e80-4f7e-b009-b83aaa6e726e-kube-api-access-849lx\") pod \"swift-ring-rebalance-52ttj\" (UID: \"fd96ec0a-9e80-4f7e-b009-b83aaa6e726e\") " pod="openstack/swift-ring-rebalance-52ttj" Jan 27 14:49:24 crc kubenswrapper[4698]: I0127 14:49:24.383858 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/fd96ec0a-9e80-4f7e-b009-b83aaa6e726e-scripts\") pod \"swift-ring-rebalance-52ttj\" (UID: \"fd96ec0a-9e80-4f7e-b009-b83aaa6e726e\") " pod="openstack/swift-ring-rebalance-52ttj" Jan 27 14:49:24 crc kubenswrapper[4698]: I0127 14:49:24.383911 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/fd96ec0a-9e80-4f7e-b009-b83aaa6e726e-swiftconf\") pod \"swift-ring-rebalance-52ttj\" (UID: \"fd96ec0a-9e80-4f7e-b009-b83aaa6e726e\") " pod="openstack/swift-ring-rebalance-52ttj" Jan 27 14:49:24 crc kubenswrapper[4698]: I0127 14:49:24.383931 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/fd96ec0a-9e80-4f7e-b009-b83aaa6e726e-dispersionconf\") pod 
\"swift-ring-rebalance-52ttj\" (UID: \"fd96ec0a-9e80-4f7e-b009-b83aaa6e726e\") " pod="openstack/swift-ring-rebalance-52ttj" Jan 27 14:49:24 crc kubenswrapper[4698]: I0127 14:49:24.383958 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd96ec0a-9e80-4f7e-b009-b83aaa6e726e-combined-ca-bundle\") pod \"swift-ring-rebalance-52ttj\" (UID: \"fd96ec0a-9e80-4f7e-b009-b83aaa6e726e\") " pod="openstack/swift-ring-rebalance-52ttj" Jan 27 14:49:24 crc kubenswrapper[4698]: I0127 14:49:24.383981 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/fd96ec0a-9e80-4f7e-b009-b83aaa6e726e-ring-data-devices\") pod \"swift-ring-rebalance-52ttj\" (UID: \"fd96ec0a-9e80-4f7e-b009-b83aaa6e726e\") " pod="openstack/swift-ring-rebalance-52ttj" Jan 27 14:49:24 crc kubenswrapper[4698]: E0127 14:49:24.383689 4698 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 27 14:49:24 crc kubenswrapper[4698]: E0127 14:49:24.384114 4698 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 27 14:49:24 crc kubenswrapper[4698]: E0127 14:49:24.384166 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f15487de-4580-4abf-a96c-3c5d364fe2d5-etc-swift podName:f15487de-4580-4abf-a96c-3c5d364fe2d5 nodeName:}" failed. No retries permitted until 2026-01-27 14:49:25.384145926 +0000 UTC m=+1221.060923391 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/f15487de-4580-4abf-a96c-3c5d364fe2d5-etc-swift") pod "swift-storage-0" (UID: "f15487de-4580-4abf-a96c-3c5d364fe2d5") : configmap "swift-ring-files" not found Jan 27 14:49:24 crc kubenswrapper[4698]: I0127 14:49:24.485868 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-849lx\" (UniqueName: \"kubernetes.io/projected/fd96ec0a-9e80-4f7e-b009-b83aaa6e726e-kube-api-access-849lx\") pod \"swift-ring-rebalance-52ttj\" (UID: \"fd96ec0a-9e80-4f7e-b009-b83aaa6e726e\") " pod="openstack/swift-ring-rebalance-52ttj" Jan 27 14:49:24 crc kubenswrapper[4698]: I0127 14:49:24.485932 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/fd96ec0a-9e80-4f7e-b009-b83aaa6e726e-scripts\") pod \"swift-ring-rebalance-52ttj\" (UID: \"fd96ec0a-9e80-4f7e-b009-b83aaa6e726e\") " pod="openstack/swift-ring-rebalance-52ttj" Jan 27 14:49:24 crc kubenswrapper[4698]: I0127 14:49:24.486001 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/fd96ec0a-9e80-4f7e-b009-b83aaa6e726e-swiftconf\") pod \"swift-ring-rebalance-52ttj\" (UID: \"fd96ec0a-9e80-4f7e-b009-b83aaa6e726e\") " pod="openstack/swift-ring-rebalance-52ttj" Jan 27 14:49:24 crc kubenswrapper[4698]: I0127 14:49:24.486034 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/fd96ec0a-9e80-4f7e-b009-b83aaa6e726e-dispersionconf\") pod \"swift-ring-rebalance-52ttj\" (UID: \"fd96ec0a-9e80-4f7e-b009-b83aaa6e726e\") " pod="openstack/swift-ring-rebalance-52ttj" Jan 27 14:49:24 crc kubenswrapper[4698]: I0127 14:49:24.486071 4698 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd96ec0a-9e80-4f7e-b009-b83aaa6e726e-combined-ca-bundle\") pod \"swift-ring-rebalance-52ttj\" (UID: \"fd96ec0a-9e80-4f7e-b009-b83aaa6e726e\") " pod="openstack/swift-ring-rebalance-52ttj" Jan 27 14:49:24 crc kubenswrapper[4698]: I0127 14:49:24.486980 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/fd96ec0a-9e80-4f7e-b009-b83aaa6e726e-scripts\") pod \"swift-ring-rebalance-52ttj\" (UID: \"fd96ec0a-9e80-4f7e-b009-b83aaa6e726e\") " pod="openstack/swift-ring-rebalance-52ttj" Jan 27 14:49:24 crc kubenswrapper[4698]: I0127 14:49:24.487010 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/fd96ec0a-9e80-4f7e-b009-b83aaa6e726e-ring-data-devices\") pod \"swift-ring-rebalance-52ttj\" (UID: \"fd96ec0a-9e80-4f7e-b009-b83aaa6e726e\") " pod="openstack/swift-ring-rebalance-52ttj" Jan 27 14:49:24 crc kubenswrapper[4698]: I0127 14:49:24.487127 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/fd96ec0a-9e80-4f7e-b009-b83aaa6e726e-etc-swift\") pod \"swift-ring-rebalance-52ttj\" (UID: \"fd96ec0a-9e80-4f7e-b009-b83aaa6e726e\") " pod="openstack/swift-ring-rebalance-52ttj" Jan 27 14:49:24 crc kubenswrapper[4698]: I0127 14:49:24.487708 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/fd96ec0a-9e80-4f7e-b009-b83aaa6e726e-ring-data-devices\") pod \"swift-ring-rebalance-52ttj\" (UID: \"fd96ec0a-9e80-4f7e-b009-b83aaa6e726e\") " pod="openstack/swift-ring-rebalance-52ttj" Jan 27 14:49:24 crc kubenswrapper[4698]: I0127 14:49:24.487932 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/fd96ec0a-9e80-4f7e-b009-b83aaa6e726e-etc-swift\") pod \"swift-ring-rebalance-52ttj\" (UID: \"fd96ec0a-9e80-4f7e-b009-b83aaa6e726e\") " pod="openstack/swift-ring-rebalance-52ttj" Jan 27 14:49:24 crc kubenswrapper[4698]: I0127 14:49:24.492261 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/fd96ec0a-9e80-4f7e-b009-b83aaa6e726e-swiftconf\") pod \"swift-ring-rebalance-52ttj\" (UID: \"fd96ec0a-9e80-4f7e-b009-b83aaa6e726e\") " pod="openstack/swift-ring-rebalance-52ttj" Jan 27 14:49:24 crc kubenswrapper[4698]: I0127 14:49:24.492510 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/fd96ec0a-9e80-4f7e-b009-b83aaa6e726e-dispersionconf\") pod \"swift-ring-rebalance-52ttj\" (UID: \"fd96ec0a-9e80-4f7e-b009-b83aaa6e726e\") " pod="openstack/swift-ring-rebalance-52ttj" Jan 27 14:49:24 crc kubenswrapper[4698]: I0127 14:49:24.492850 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd96ec0a-9e80-4f7e-b009-b83aaa6e726e-combined-ca-bundle\") pod \"swift-ring-rebalance-52ttj\" (UID: \"fd96ec0a-9e80-4f7e-b009-b83aaa6e726e\") " pod="openstack/swift-ring-rebalance-52ttj" Jan 27 14:49:24 crc kubenswrapper[4698]: I0127 14:49:24.499713 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Jan 27 14:49:24 crc kubenswrapper[4698]: I0127 14:49:24.507500 4698 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-849lx\" (UniqueName: \"kubernetes.io/projected/fd96ec0a-9e80-4f7e-b009-b83aaa6e726e-kube-api-access-849lx\") pod \"swift-ring-rebalance-52ttj\" (UID: \"fd96ec0a-9e80-4f7e-b009-b83aaa6e726e\") " pod="openstack/swift-ring-rebalance-52ttj" Jan 27 14:49:24 crc kubenswrapper[4698]: I0127 14:49:24.579439 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-52ttj" Jan 27 14:49:24 crc kubenswrapper[4698]: I0127 14:49:24.969678 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"1fbe86d1-6225-4de5-81a1-9222e08bcec5","Type":"ContainerStarted","Data":"bd304ed6cc925f8494cb978dfe3327cc0ea521cd7abee805f5fd54352d29893a"} Jan 27 14:49:24 crc kubenswrapper[4698]: I0127 14:49:24.973728 4698 generic.go:334] "Generic (PLEG): container finished" podID="711fbf65-f112-4da8-8475-534064efe051" containerID="d638d8c72115155dcf1623d47e5baa258ebf8ca6c9c9cf4c0f14931d80eec58e" exitCode=0 Jan 27 14:49:24 crc kubenswrapper[4698]: I0127 14:49:24.973831 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-54f87864f5-svgk4" event={"ID":"711fbf65-f112-4da8-8475-534064efe051","Type":"ContainerDied","Data":"d638d8c72115155dcf1623d47e5baa258ebf8ca6c9c9cf4c0f14931d80eec58e"} Jan 27 14:49:24 crc kubenswrapper[4698]: I0127 14:49:24.973877 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-54f87864f5-svgk4" event={"ID":"711fbf65-f112-4da8-8475-534064efe051","Type":"ContainerStarted","Data":"b3d94900930c077d1d4e8b4241dc9ec66f7fe1d63093f4752c0400af89e49ea7"} Jan 27 14:49:25 crc kubenswrapper[4698]: I0127 14:49:25.017072 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0dd0a57c-30a4-4dd8-8e30-456f2316bd22" path="/var/lib/kubelet/pods/0dd0a57c-30a4-4dd8-8e30-456f2316bd22/volumes" Jan 27 14:49:25 crc kubenswrapper[4698]: I0127 14:49:25.018499 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a7235262-0f2b-4b25-86de-131f5f56431b" path="/var/lib/kubelet/pods/a7235262-0f2b-4b25-86de-131f5f56431b/volumes" Jan 27 14:49:25 crc kubenswrapper[4698]: I0127 14:49:25.103621 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-52ttj"] Jan 27 14:49:25 crc kubenswrapper[4698]: I0127 14:49:25.403472 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/f15487de-4580-4abf-a96c-3c5d364fe2d5-etc-swift\") pod \"swift-storage-0\" (UID: \"f15487de-4580-4abf-a96c-3c5d364fe2d5\") " pod="openstack/swift-storage-0" Jan 27 14:49:25 crc kubenswrapper[4698]: E0127 14:49:25.403667 4698 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 27 14:49:25 crc kubenswrapper[4698]: E0127 14:49:25.403692 4698 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 27 14:49:25 crc kubenswrapper[4698]: E0127 14:49:25.403755 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f15487de-4580-4abf-a96c-3c5d364fe2d5-etc-swift podName:f15487de-4580-4abf-a96c-3c5d364fe2d5 nodeName:}" failed. No retries permitted until 2026-01-27 14:49:27.403736727 +0000 UTC m=+1223.080514202 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/f15487de-4580-4abf-a96c-3c5d364fe2d5-etc-swift") pod "swift-storage-0" (UID: "f15487de-4580-4abf-a96c-3c5d364fe2d5") : configmap "swift-ring-files" not found Jan 27 14:49:25 crc kubenswrapper[4698]: I0127 14:49:25.988291 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-52ttj" event={"ID":"fd96ec0a-9e80-4f7e-b009-b83aaa6e726e","Type":"ContainerStarted","Data":"5c01e2ae8e3600e61e7c043602778616645fe8bc104c6708bb0a26cb2855dfc1"} Jan 27 14:49:25 crc kubenswrapper[4698]: I0127 14:49:25.991410 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-54f87864f5-svgk4" event={"ID":"711fbf65-f112-4da8-8475-534064efe051","Type":"ContainerStarted","Data":"63ca572b4b74f0f96d4538afa46128810ee1b526e454d7a6bd7f0a30c91927bd"} Jan 27 14:49:26 crc kubenswrapper[4698]: I0127 14:49:26.433686 4698 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-6699786569-xgz55" podUID="4e94ae22-f128-4a13-8c1e-7beadbfee471" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.104:5353: i/o timeout" Jan 27 14:49:27 crc kubenswrapper[4698]: I0127 14:49:27.002955 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-54f87864f5-svgk4" Jan 27 14:49:27 crc kubenswrapper[4698]: I0127 14:49:27.445346 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/f15487de-4580-4abf-a96c-3c5d364fe2d5-etc-swift\") pod \"swift-storage-0\" (UID: \"f15487de-4580-4abf-a96c-3c5d364fe2d5\") " pod="openstack/swift-storage-0" Jan 27 14:49:27 crc kubenswrapper[4698]: E0127 14:49:27.445613 4698 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 27 14:49:27 crc kubenswrapper[4698]: E0127 14:49:27.445680 4698 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 27 14:49:27 crc kubenswrapper[4698]: E0127 14:49:27.445748 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f15487de-4580-4abf-a96c-3c5d364fe2d5-etc-swift podName:f15487de-4580-4abf-a96c-3c5d364fe2d5 nodeName:}" failed. No retries permitted until 2026-01-27 14:49:31.445725633 +0000 UTC m=+1227.122503098 (durationBeforeRetry 4s). 
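
The etc-swift mount failures for swift-storage-0 in the records above and below all share one root cause: the projected volume sources a ConfigMap named swift-ring-files that does not exist yet. It is published by the swift-ring-rebalance-52ttj job that was just started, so the mount cannot succeed until that job finishes. A minimal client-go sketch for confirming the missing object from outside the node (kubeconfig location and error handling are assumptions, not from the log):

```go
package main

import (
	"context"
	"fmt"

	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Build a client from the default kubeconfig (~/.kube/config).
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// The projected volume "etc-swift" on swift-storage-0 sources this ConfigMap.
	_, err = cs.CoreV1().ConfigMaps("openstack").Get(context.TODO(),
		"swift-ring-files", metav1.GetOptions{})
	if apierrors.IsNotFound(err) {
		// Matches the kubelet error: configmap "swift-ring-files" not found.
		fmt.Println("swift-ring-files not published yet; waiting on swift-ring-rebalance")
	}
}
```
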
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/f15487de-4580-4abf-a96c-3c5d364fe2d5-etc-swift") pod "swift-storage-0" (UID: "f15487de-4580-4abf-a96c-3c5d364fe2d5") : configmap "swift-ring-files" not found Jan 27 14:49:28 crc kubenswrapper[4698]: E0127 14:49:28.430507 4698 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.212:45644->38.102.83.212:38847: write tcp 38.102.83.212:45644->38.102.83.212:38847: write: connection reset by peer Jan 27 14:49:29 crc kubenswrapper[4698]: I0127 14:49:29.017749 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0" Jan 27 14:49:29 crc kubenswrapper[4698]: I0127 14:49:29.017791 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-0" Jan 27 14:49:30 crc kubenswrapper[4698]: I0127 14:49:30.241755 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-cell1-galera-0" Jan 27 14:49:30 crc kubenswrapper[4698]: I0127 14:49:30.242120 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0" Jan 27 14:49:30 crc kubenswrapper[4698]: I0127 14:49:30.357375 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-cell1-galera-0" Jan 27 14:49:30 crc kubenswrapper[4698]: I0127 14:49:30.376696 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-54f87864f5-svgk4" podStartSLOduration=8.376681558 podStartE2EDuration="8.376681558s" podCreationTimestamp="2026-01-27 14:49:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:49:26.023748905 +0000 UTC m=+1221.700526370" watchObservedRunningTime="2026-01-27 14:49:30.376681558 +0000 UTC m=+1226.053459023" Jan 27 14:49:31 crc kubenswrapper[4698]: I0127 14:49:31.162451 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-cell1-galera-0" Jan 27 14:49:31 crc kubenswrapper[4698]: I0127 14:49:31.381257 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-galera-0" Jan 27 14:49:31 crc kubenswrapper[4698]: I0127 14:49:31.507889 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-galera-0" Jan 27 14:49:31 crc kubenswrapper[4698]: I0127 14:49:31.516360 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/f15487de-4580-4abf-a96c-3c5d364fe2d5-etc-swift\") pod \"swift-storage-0\" (UID: \"f15487de-4580-4abf-a96c-3c5d364fe2d5\") " pod="openstack/swift-storage-0" Jan 27 14:49:31 crc kubenswrapper[4698]: E0127 14:49:31.516570 4698 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 27 14:49:31 crc kubenswrapper[4698]: E0127 14:49:31.516604 4698 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 27 14:49:31 crc kubenswrapper[4698]: E0127 14:49:31.516669 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f15487de-4580-4abf-a96c-3c5d364fe2d5-etc-swift podName:f15487de-4580-4abf-a96c-3c5d364fe2d5 nodeName:}" failed. 
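
The readiness failure for dnsmasq-dns-6699786569-xgz55 above reports probeResult="failure" with output "dial tcp 10.217.0.104:5353: i/o timeout", i.e. a TCP connect to the DNS port that never completed. The pod's actual probe definition is not in the log; the sketch below is one probe spec of the kind that would produce exactly this output, with illustrative thresholds:

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

func main() {
	// A TCP readiness probe on the dnsmasq port. When the dial times out, the
	// kubelet prober logs exactly the "dial tcp ...:5353: i/o timeout" output
	// seen above. Period and timeout here are assumptions, not from the log.
	probe := corev1.Probe{
		ProbeHandler: corev1.ProbeHandler{
			TCPSocket: &corev1.TCPSocketAction{
				Port: intstr.FromInt(5353),
			},
		},
		PeriodSeconds:  10,
		TimeoutSeconds: 1,
	}
	fmt.Printf("%+v\n", probe.ProbeHandler.TCPSocket)
}
```
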
No retries permitted until 2026-01-27 14:49:39.516650549 +0000 UTC m=+1235.193428004 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/f15487de-4580-4abf-a96c-3c5d364fe2d5-etc-swift") pod "swift-storage-0" (UID: "f15487de-4580-4abf-a96c-3c5d364fe2d5") : configmap "swift-ring-files" not found Jan 27 14:49:32 crc kubenswrapper[4698]: I0127 14:49:32.620685 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/watcher-db-create-qk9bt"] Jan 27 14:49:32 crc kubenswrapper[4698]: I0127 14:49:32.621962 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-db-create-qk9bt" Jan 27 14:49:32 crc kubenswrapper[4698]: I0127 14:49:32.647576 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9grjl\" (UniqueName: \"kubernetes.io/projected/f4433013-b1ca-47c7-9b70-155cb05605a3-kube-api-access-9grjl\") pod \"watcher-db-create-qk9bt\" (UID: \"f4433013-b1ca-47c7-9b70-155cb05605a3\") " pod="openstack/watcher-db-create-qk9bt" Jan 27 14:49:32 crc kubenswrapper[4698]: I0127 14:49:32.647628 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f4433013-b1ca-47c7-9b70-155cb05605a3-operator-scripts\") pod \"watcher-db-create-qk9bt\" (UID: \"f4433013-b1ca-47c7-9b70-155cb05605a3\") " pod="openstack/watcher-db-create-qk9bt" Jan 27 14:49:32 crc kubenswrapper[4698]: I0127 14:49:32.652561 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/watcher-c51b-account-create-update-ddjfr"] Jan 27 14:49:32 crc kubenswrapper[4698]: I0127 14:49:32.653929 4698 util.go:30] "No sandbox for pod can be found. 
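
Note the retry spacing in these nestedpendingoperations records: durationBeforeRetry doubles on every failed attempt, 2s at 14:49:25, 4s at 14:49:27, 8s at 14:49:31, and 16s at 14:49:39 below. That is per-operation exponential backoff. A sketch of the same doubling schedule using apimachinery's wait package (the condition is a stub standing in for the ConfigMap check, not the kubelet's actual mount code):

```go
package main

import (
	"fmt"
	"time"

	"k8s.io/apimachinery/pkg/util/wait"
)

func main() {
	// Mirrors the 2s -> 4s -> 8s -> 16s spacing seen in the log.
	backoff := wait.Backoff{
		Duration: 2 * time.Second, // delay before the first retry
		Factor:   2.0,             // double after each failure
		Steps:    5,               // stop retrying after five attempts
	}
	attempt := 0
	err := wait.ExponentialBackoff(backoff, func() (bool, error) {
		attempt++
		fmt.Printf("attempt %d\n", attempt)
		// Stub: does the swift-ring-files ConfigMap exist yet?
		return false, nil // false, nil => retry after the next interval
	})
	fmt.Println(err) // wait.ErrWaitTimeout once Steps are exhausted
}
```
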
Need to start a new one" pod="openstack/watcher-c51b-account-create-update-ddjfr" Jan 27 14:49:32 crc kubenswrapper[4698]: I0127 14:49:32.672273 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-db-secret" Jan 27 14:49:32 crc kubenswrapper[4698]: I0127 14:49:32.702360 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-c51b-account-create-update-ddjfr"] Jan 27 14:49:32 crc kubenswrapper[4698]: I0127 14:49:32.739714 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-db-create-qk9bt"] Jan 27 14:49:32 crc kubenswrapper[4698]: I0127 14:49:32.750011 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2673531b-aee1-4a69-b3bb-255c3e331724-operator-scripts\") pod \"watcher-c51b-account-create-update-ddjfr\" (UID: \"2673531b-aee1-4a69-b3bb-255c3e331724\") " pod="openstack/watcher-c51b-account-create-update-ddjfr" Jan 27 14:49:32 crc kubenswrapper[4698]: I0127 14:49:32.750154 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9grjl\" (UniqueName: \"kubernetes.io/projected/f4433013-b1ca-47c7-9b70-155cb05605a3-kube-api-access-9grjl\") pod \"watcher-db-create-qk9bt\" (UID: \"f4433013-b1ca-47c7-9b70-155cb05605a3\") " pod="openstack/watcher-db-create-qk9bt" Jan 27 14:49:32 crc kubenswrapper[4698]: I0127 14:49:32.750203 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f4433013-b1ca-47c7-9b70-155cb05605a3-operator-scripts\") pod \"watcher-db-create-qk9bt\" (UID: \"f4433013-b1ca-47c7-9b70-155cb05605a3\") " pod="openstack/watcher-db-create-qk9bt" Jan 27 14:49:32 crc kubenswrapper[4698]: I0127 14:49:32.750296 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l2cb4\" (UniqueName: \"kubernetes.io/projected/2673531b-aee1-4a69-b3bb-255c3e331724-kube-api-access-l2cb4\") pod \"watcher-c51b-account-create-update-ddjfr\" (UID: \"2673531b-aee1-4a69-b3bb-255c3e331724\") " pod="openstack/watcher-c51b-account-create-update-ddjfr" Jan 27 14:49:32 crc kubenswrapper[4698]: I0127 14:49:32.751495 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f4433013-b1ca-47c7-9b70-155cb05605a3-operator-scripts\") pod \"watcher-db-create-qk9bt\" (UID: \"f4433013-b1ca-47c7-9b70-155cb05605a3\") " pod="openstack/watcher-db-create-qk9bt" Jan 27 14:49:32 crc kubenswrapper[4698]: I0127 14:49:32.782818 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9grjl\" (UniqueName: \"kubernetes.io/projected/f4433013-b1ca-47c7-9b70-155cb05605a3-kube-api-access-9grjl\") pod \"watcher-db-create-qk9bt\" (UID: \"f4433013-b1ca-47c7-9b70-155cb05605a3\") " pod="openstack/watcher-db-create-qk9bt" Jan 27 14:49:32 crc kubenswrapper[4698]: I0127 14:49:32.851757 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l2cb4\" (UniqueName: \"kubernetes.io/projected/2673531b-aee1-4a69-b3bb-255c3e331724-kube-api-access-l2cb4\") pod \"watcher-c51b-account-create-update-ddjfr\" (UID: \"2673531b-aee1-4a69-b3bb-255c3e331724\") " pod="openstack/watcher-c51b-account-create-update-ddjfr" Jan 27 14:49:32 crc kubenswrapper[4698]: I0127 14:49:32.852001 4698 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2673531b-aee1-4a69-b3bb-255c3e331724-operator-scripts\") pod \"watcher-c51b-account-create-update-ddjfr\" (UID: \"2673531b-aee1-4a69-b3bb-255c3e331724\") " pod="openstack/watcher-c51b-account-create-update-ddjfr" Jan 27 14:49:32 crc kubenswrapper[4698]: I0127 14:49:32.852987 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2673531b-aee1-4a69-b3bb-255c3e331724-operator-scripts\") pod \"watcher-c51b-account-create-update-ddjfr\" (UID: \"2673531b-aee1-4a69-b3bb-255c3e331724\") " pod="openstack/watcher-c51b-account-create-update-ddjfr" Jan 27 14:49:32 crc kubenswrapper[4698]: I0127 14:49:32.869374 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l2cb4\" (UniqueName: \"kubernetes.io/projected/2673531b-aee1-4a69-b3bb-255c3e331724-kube-api-access-l2cb4\") pod \"watcher-c51b-account-create-update-ddjfr\" (UID: \"2673531b-aee1-4a69-b3bb-255c3e331724\") " pod="openstack/watcher-c51b-account-create-update-ddjfr" Jan 27 14:49:32 crc kubenswrapper[4698]: I0127 14:49:32.946891 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-54f87864f5-svgk4" Jan 27 14:49:32 crc kubenswrapper[4698]: I0127 14:49:32.974440 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-db-create-qk9bt" Jan 27 14:49:33 crc kubenswrapper[4698]: I0127 14:49:33.001316 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-64886f97d9-f2p4s"] Jan 27 14:49:33 crc kubenswrapper[4698]: I0127 14:49:33.001606 4698 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-64886f97d9-f2p4s" podUID="64738940-4d7e-484c-92f9-d6a686fd2696" containerName="dnsmasq-dns" containerID="cri-o://a409253d7a8678bb92600c42e76bfb9793d40a0e39b84add598025ec292bf018" gracePeriod=10 Jan 27 14:49:33 crc kubenswrapper[4698]: I0127 14:49:33.006843 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-c51b-account-create-update-ddjfr" Jan 27 14:49:33 crc kubenswrapper[4698]: I0127 14:49:33.066450 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-52ttj" event={"ID":"fd96ec0a-9e80-4f7e-b009-b83aaa6e726e","Type":"ContainerStarted","Data":"3642ed493f2d430111ae937632c6a80c6c88eaa5a2b01f48c47ac7ee49d3f248"} Jan 27 14:49:33 crc kubenswrapper[4698]: I0127 14:49:33.079487 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"1fbe86d1-6225-4de5-81a1-9222e08bcec5","Type":"ContainerStarted","Data":"be89c95c52400b8953a1deb9c69afc1c8449d30498a6eb4b751224cc98025dc9"} Jan 27 14:49:33 crc kubenswrapper[4698]: I0127 14:49:33.079529 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"1fbe86d1-6225-4de5-81a1-9222e08bcec5","Type":"ContainerStarted","Data":"39e69a0c690f0849cf60e667bdf8d1fb00fb32c04266a7802058188e5ef8fdea"} Jan 27 14:49:33 crc kubenswrapper[4698]: I0127 14:49:33.080207 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-northd-0" Jan 27 14:49:33 crc kubenswrapper[4698]: I0127 14:49:33.093492 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-ring-rebalance-52ttj" podStartSLOduration=2.05240624 podStartE2EDuration="9.093473314s" podCreationTimestamp="2026-01-27 14:49:24 +0000 UTC" firstStartedPulling="2026-01-27 14:49:25.122815869 +0000 UTC m=+1220.799593334" lastFinishedPulling="2026-01-27 14:49:32.163882933 +0000 UTC m=+1227.840660408" observedRunningTime="2026-01-27 14:49:33.088781251 +0000 UTC m=+1228.765558726" watchObservedRunningTime="2026-01-27 14:49:33.093473314 +0000 UTC m=+1228.770250779" Jan 27 14:49:33 crc kubenswrapper[4698]: I0127 14:49:33.096486 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"03d06c18-e82f-417e-b3bd-6365030bee53","Type":"ContainerStarted","Data":"5b5a465ed67389e53b2512cb71e47ddda2e7ad448de9a0e9c62f170c48b36281"} Jan 27 14:49:33 crc kubenswrapper[4698]: I0127 14:49:33.115553 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-northd-0" podStartSLOduration=2.467004892 podStartE2EDuration="10.115531585s" podCreationTimestamp="2026-01-27 14:49:23 +0000 UTC" firstStartedPulling="2026-01-27 14:49:24.511713245 +0000 UTC m=+1220.188490710" lastFinishedPulling="2026-01-27 14:49:32.160239938 +0000 UTC m=+1227.837017403" observedRunningTime="2026-01-27 14:49:33.115115445 +0000 UTC m=+1228.791892910" watchObservedRunningTime="2026-01-27 14:49:33.115531585 +0000 UTC m=+1228.792309050" Jan 27 14:49:33 crc kubenswrapper[4698]: I0127 14:49:33.551618 4698 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-64886f97d9-f2p4s" Jan 27 14:49:33 crc kubenswrapper[4698]: I0127 14:49:33.671279 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/64738940-4d7e-484c-92f9-d6a686fd2696-config\") pod \"64738940-4d7e-484c-92f9-d6a686fd2696\" (UID: \"64738940-4d7e-484c-92f9-d6a686fd2696\") " Jan 27 14:49:33 crc kubenswrapper[4698]: I0127 14:49:33.671753 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-twptp\" (UniqueName: \"kubernetes.io/projected/64738940-4d7e-484c-92f9-d6a686fd2696-kube-api-access-twptp\") pod \"64738940-4d7e-484c-92f9-d6a686fd2696\" (UID: \"64738940-4d7e-484c-92f9-d6a686fd2696\") " Jan 27 14:49:33 crc kubenswrapper[4698]: I0127 14:49:33.672521 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-c51b-account-create-update-ddjfr"] Jan 27 14:49:33 crc kubenswrapper[4698]: I0127 14:49:33.672704 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/64738940-4d7e-484c-92f9-d6a686fd2696-dns-svc\") pod \"64738940-4d7e-484c-92f9-d6a686fd2696\" (UID: \"64738940-4d7e-484c-92f9-d6a686fd2696\") " Jan 27 14:49:33 crc kubenswrapper[4698]: I0127 14:49:33.686948 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/64738940-4d7e-484c-92f9-d6a686fd2696-kube-api-access-twptp" (OuterVolumeSpecName: "kube-api-access-twptp") pod "64738940-4d7e-484c-92f9-d6a686fd2696" (UID: "64738940-4d7e-484c-92f9-d6a686fd2696"). InnerVolumeSpecName "kube-api-access-twptp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:49:33 crc kubenswrapper[4698]: I0127 14:49:33.703169 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-db-create-qk9bt"] Jan 27 14:49:33 crc kubenswrapper[4698]: I0127 14:49:33.719909 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/64738940-4d7e-484c-92f9-d6a686fd2696-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "64738940-4d7e-484c-92f9-d6a686fd2696" (UID: "64738940-4d7e-484c-92f9-d6a686fd2696"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:49:33 crc kubenswrapper[4698]: I0127 14:49:33.724167 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/64738940-4d7e-484c-92f9-d6a686fd2696-config" (OuterVolumeSpecName: "config") pod "64738940-4d7e-484c-92f9-d6a686fd2696" (UID: "64738940-4d7e-484c-92f9-d6a686fd2696"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:49:33 crc kubenswrapper[4698]: I0127 14:49:33.776502 4698 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/64738940-4d7e-484c-92f9-d6a686fd2696-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 27 14:49:33 crc kubenswrapper[4698]: I0127 14:49:33.776542 4698 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/64738940-4d7e-484c-92f9-d6a686fd2696-config\") on node \"crc\" DevicePath \"\"" Jan 27 14:49:33 crc kubenswrapper[4698]: I0127 14:49:33.776555 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-twptp\" (UniqueName: \"kubernetes.io/projected/64738940-4d7e-484c-92f9-d6a686fd2696-kube-api-access-twptp\") on node \"crc\" DevicePath \"\"" Jan 27 14:49:34 crc kubenswrapper[4698]: I0127 14:49:34.120900 4698 generic.go:334] "Generic (PLEG): container finished" podID="64738940-4d7e-484c-92f9-d6a686fd2696" containerID="a409253d7a8678bb92600c42e76bfb9793d40a0e39b84add598025ec292bf018" exitCode=0 Jan 27 14:49:34 crc kubenswrapper[4698]: I0127 14:49:34.120957 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-64886f97d9-f2p4s" event={"ID":"64738940-4d7e-484c-92f9-d6a686fd2696","Type":"ContainerDied","Data":"a409253d7a8678bb92600c42e76bfb9793d40a0e39b84add598025ec292bf018"} Jan 27 14:49:34 crc kubenswrapper[4698]: I0127 14:49:34.120984 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-64886f97d9-f2p4s" event={"ID":"64738940-4d7e-484c-92f9-d6a686fd2696","Type":"ContainerDied","Data":"3ba2a729f9ff6413edcd8122d546ff6989de69f82fe23deff6ccdec58dfe5e88"} Jan 27 14:49:34 crc kubenswrapper[4698]: I0127 14:49:34.121001 4698 scope.go:117] "RemoveContainer" containerID="a409253d7a8678bb92600c42e76bfb9793d40a0e39b84add598025ec292bf018" Jan 27 14:49:34 crc kubenswrapper[4698]: I0127 14:49:34.121107 4698 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-64886f97d9-f2p4s" Jan 27 14:49:34 crc kubenswrapper[4698]: I0127 14:49:34.134078 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-c51b-account-create-update-ddjfr" event={"ID":"2673531b-aee1-4a69-b3bb-255c3e331724","Type":"ContainerStarted","Data":"3a9f2419b329d1001425ea0a9edca3b704abdd02718cf6adaacbbf7e45a43e28"} Jan 27 14:49:34 crc kubenswrapper[4698]: I0127 14:49:34.134119 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-c51b-account-create-update-ddjfr" event={"ID":"2673531b-aee1-4a69-b3bb-255c3e331724","Type":"ContainerStarted","Data":"8454cf4d1bf7706460cf6179b1c4d6c1d6260b6b8c2f11e148dbb777e27ad531"} Jan 27 14:49:34 crc kubenswrapper[4698]: I0127 14:49:34.138284 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-db-create-qk9bt" event={"ID":"f4433013-b1ca-47c7-9b70-155cb05605a3","Type":"ContainerStarted","Data":"1f0e8ef153b5b161a368fa003258538bce40cc2b7013bb9505aeeffcb9ed9b41"} Jan 27 14:49:34 crc kubenswrapper[4698]: I0127 14:49:34.138336 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-db-create-qk9bt" event={"ID":"f4433013-b1ca-47c7-9b70-155cb05605a3","Type":"ContainerStarted","Data":"77feaffbb2d0ccb65c468150d9f6f8058c615f496f3d38cd9d7f68d4808e39e1"} Jan 27 14:49:34 crc kubenswrapper[4698]: I0127 14:49:34.169736 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/watcher-c51b-account-create-update-ddjfr" podStartSLOduration=2.169716497 podStartE2EDuration="2.169716497s" podCreationTimestamp="2026-01-27 14:49:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:49:34.162899488 +0000 UTC m=+1229.839676963" watchObservedRunningTime="2026-01-27 14:49:34.169716497 +0000 UTC m=+1229.846493962" Jan 27 14:49:34 crc kubenswrapper[4698]: I0127 14:49:34.360549 4698 scope.go:117] "RemoveContainer" containerID="a4a5d9281454ae8ae15e0feaac19012c33d46a4ea020d7409d2224527c3ca884" Jan 27 14:49:34 crc kubenswrapper[4698]: I0127 14:49:34.371015 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-64886f97d9-f2p4s"] Jan 27 14:49:34 crc kubenswrapper[4698]: I0127 14:49:34.378016 4698 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-64886f97d9-f2p4s"] Jan 27 14:49:34 crc kubenswrapper[4698]: I0127 14:49:34.624334 4698 scope.go:117] "RemoveContainer" containerID="a409253d7a8678bb92600c42e76bfb9793d40a0e39b84add598025ec292bf018" Jan 27 14:49:34 crc kubenswrapper[4698]: E0127 14:49:34.624861 4698 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a409253d7a8678bb92600c42e76bfb9793d40a0e39b84add598025ec292bf018\": container with ID starting with a409253d7a8678bb92600c42e76bfb9793d40a0e39b84add598025ec292bf018 not found: ID does not exist" containerID="a409253d7a8678bb92600c42e76bfb9793d40a0e39b84add598025ec292bf018" Jan 27 14:49:34 crc kubenswrapper[4698]: I0127 14:49:34.624897 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a409253d7a8678bb92600c42e76bfb9793d40a0e39b84add598025ec292bf018"} err="failed to get container status \"a409253d7a8678bb92600c42e76bfb9793d40a0e39b84add598025ec292bf018\": rpc error: code = NotFound desc = could not find container \"a409253d7a8678bb92600c42e76bfb9793d40a0e39b84add598025ec292bf018\": container 
with ID starting with a409253d7a8678bb92600c42e76bfb9793d40a0e39b84add598025ec292bf018 not found: ID does not exist" Jan 27 14:49:34 crc kubenswrapper[4698]: I0127 14:49:34.624918 4698 scope.go:117] "RemoveContainer" containerID="a4a5d9281454ae8ae15e0feaac19012c33d46a4ea020d7409d2224527c3ca884" Jan 27 14:49:34 crc kubenswrapper[4698]: E0127 14:49:34.625219 4698 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a4a5d9281454ae8ae15e0feaac19012c33d46a4ea020d7409d2224527c3ca884\": container with ID starting with a4a5d9281454ae8ae15e0feaac19012c33d46a4ea020d7409d2224527c3ca884 not found: ID does not exist" containerID="a4a5d9281454ae8ae15e0feaac19012c33d46a4ea020d7409d2224527c3ca884" Jan 27 14:49:34 crc kubenswrapper[4698]: I0127 14:49:34.625245 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a4a5d9281454ae8ae15e0feaac19012c33d46a4ea020d7409d2224527c3ca884"} err="failed to get container status \"a4a5d9281454ae8ae15e0feaac19012c33d46a4ea020d7409d2224527c3ca884\": rpc error: code = NotFound desc = could not find container \"a4a5d9281454ae8ae15e0feaac19012c33d46a4ea020d7409d2224527c3ca884\": container with ID starting with a4a5d9281454ae8ae15e0feaac19012c33d46a4ea020d7409d2224527c3ca884 not found: ID does not exist" Jan 27 14:49:35 crc kubenswrapper[4698]: I0127 14:49:35.005127 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="64738940-4d7e-484c-92f9-d6a686fd2696" path="/var/lib/kubelet/pods/64738940-4d7e-484c-92f9-d6a686fd2696/volumes" Jan 27 14:49:35 crc kubenswrapper[4698]: I0127 14:49:35.150128 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"03d06c18-e82f-417e-b3bd-6365030bee53","Type":"ContainerStarted","Data":"00617f05f323c99e0351309fe951b81fc7b3e739e068218aaab0bfa4a9a56533"} Jan 27 14:49:35 crc kubenswrapper[4698]: I0127 14:49:35.152494 4698 generic.go:334] "Generic (PLEG): container finished" podID="f4433013-b1ca-47c7-9b70-155cb05605a3" containerID="1f0e8ef153b5b161a368fa003258538bce40cc2b7013bb9505aeeffcb9ed9b41" exitCode=0 Jan 27 14:49:35 crc kubenswrapper[4698]: I0127 14:49:35.152561 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-db-create-qk9bt" event={"ID":"f4433013-b1ca-47c7-9b70-155cb05605a3","Type":"ContainerDied","Data":"1f0e8ef153b5b161a368fa003258538bce40cc2b7013bb9505aeeffcb9ed9b41"} Jan 27 14:49:35 crc kubenswrapper[4698]: I0127 14:49:35.154721 4698 generic.go:334] "Generic (PLEG): container finished" podID="2673531b-aee1-4a69-b3bb-255c3e331724" containerID="3a9f2419b329d1001425ea0a9edca3b704abdd02718cf6adaacbbf7e45a43e28" exitCode=0 Jan 27 14:49:35 crc kubenswrapper[4698]: I0127 14:49:35.154799 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-c51b-account-create-update-ddjfr" event={"ID":"2673531b-aee1-4a69-b3bb-255c3e331724","Type":"ContainerDied","Data":"3a9f2419b329d1001425ea0a9edca3b704abdd02718cf6adaacbbf7e45a43e28"} Jan 27 14:49:35 crc kubenswrapper[4698]: I0127 14:49:35.512334 4698 util.go:48] "No ready sandbox for pod can be found. 
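
The two "ContainerStatus from runtime service failed ... NotFound" errors above are a benign race rather than a real failure: RemoveContainer had already deleted both containers from CRI-O, so the follow-up status queries for the same IDs return gRPC NotFound, which the kubelet surfaces as "DeleteContainer returned error" and then moves on. A sketch of recognizing that case, assuming a CRI-style client that returns standard gRPC status errors:

```go
package main

import (
	"fmt"

	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/status"
)

// alreadyGone reports whether a runtime error just means the container was
// removed before we asked about it, i.e. the benign race in the log above.
func alreadyGone(err error) bool {
	return status.Code(err) == codes.NotFound
}

func main() {
	// Hypothetical error shaped like the one the kubelet logged.
	err := status.Error(codes.NotFound, `could not find container "a409253d..."`)
	fmt.Println(alreadyGone(err)) // true: safe to ignore during cleanup
}
```
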
Need to start a new one" pod="openstack/watcher-db-create-qk9bt" Jan 27 14:49:35 crc kubenswrapper[4698]: I0127 14:49:35.613181 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f4433013-b1ca-47c7-9b70-155cb05605a3-operator-scripts\") pod \"f4433013-b1ca-47c7-9b70-155cb05605a3\" (UID: \"f4433013-b1ca-47c7-9b70-155cb05605a3\") " Jan 27 14:49:35 crc kubenswrapper[4698]: I0127 14:49:35.613431 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9grjl\" (UniqueName: \"kubernetes.io/projected/f4433013-b1ca-47c7-9b70-155cb05605a3-kube-api-access-9grjl\") pod \"f4433013-b1ca-47c7-9b70-155cb05605a3\" (UID: \"f4433013-b1ca-47c7-9b70-155cb05605a3\") " Jan 27 14:49:35 crc kubenswrapper[4698]: I0127 14:49:35.614283 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f4433013-b1ca-47c7-9b70-155cb05605a3-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "f4433013-b1ca-47c7-9b70-155cb05605a3" (UID: "f4433013-b1ca-47c7-9b70-155cb05605a3"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:49:35 crc kubenswrapper[4698]: I0127 14:49:35.620403 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f4433013-b1ca-47c7-9b70-155cb05605a3-kube-api-access-9grjl" (OuterVolumeSpecName: "kube-api-access-9grjl") pod "f4433013-b1ca-47c7-9b70-155cb05605a3" (UID: "f4433013-b1ca-47c7-9b70-155cb05605a3"). InnerVolumeSpecName "kube-api-access-9grjl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:49:35 crc kubenswrapper[4698]: I0127 14:49:35.715302 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9grjl\" (UniqueName: \"kubernetes.io/projected/f4433013-b1ca-47c7-9b70-155cb05605a3-kube-api-access-9grjl\") on node \"crc\" DevicePath \"\"" Jan 27 14:49:35 crc kubenswrapper[4698]: I0127 14:49:35.715351 4698 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f4433013-b1ca-47c7-9b70-155cb05605a3-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 14:49:36 crc kubenswrapper[4698]: I0127 14:49:36.163705 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-db-create-qk9bt" Jan 27 14:49:36 crc kubenswrapper[4698]: I0127 14:49:36.163719 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-db-create-qk9bt" event={"ID":"f4433013-b1ca-47c7-9b70-155cb05605a3","Type":"ContainerDied","Data":"77feaffbb2d0ccb65c468150d9f6f8058c615f496f3d38cd9d7f68d4808e39e1"} Jan 27 14:49:36 crc kubenswrapper[4698]: I0127 14:49:36.163846 4698 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="77feaffbb2d0ccb65c468150d9f6f8058c615f496f3d38cd9d7f68d4808e39e1" Jan 27 14:49:36 crc kubenswrapper[4698]: I0127 14:49:36.483798 4698 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-c51b-account-create-update-ddjfr" Jan 27 14:49:36 crc kubenswrapper[4698]: I0127 14:49:36.539154 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2673531b-aee1-4a69-b3bb-255c3e331724-operator-scripts\") pod \"2673531b-aee1-4a69-b3bb-255c3e331724\" (UID: \"2673531b-aee1-4a69-b3bb-255c3e331724\") " Jan 27 14:49:36 crc kubenswrapper[4698]: I0127 14:49:36.539292 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l2cb4\" (UniqueName: \"kubernetes.io/projected/2673531b-aee1-4a69-b3bb-255c3e331724-kube-api-access-l2cb4\") pod \"2673531b-aee1-4a69-b3bb-255c3e331724\" (UID: \"2673531b-aee1-4a69-b3bb-255c3e331724\") " Jan 27 14:49:36 crc kubenswrapper[4698]: I0127 14:49:36.542506 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2673531b-aee1-4a69-b3bb-255c3e331724-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "2673531b-aee1-4a69-b3bb-255c3e331724" (UID: "2673531b-aee1-4a69-b3bb-255c3e331724"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:49:36 crc kubenswrapper[4698]: I0127 14:49:36.568496 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2673531b-aee1-4a69-b3bb-255c3e331724-kube-api-access-l2cb4" (OuterVolumeSpecName: "kube-api-access-l2cb4") pod "2673531b-aee1-4a69-b3bb-255c3e331724" (UID: "2673531b-aee1-4a69-b3bb-255c3e331724"). InnerVolumeSpecName "kube-api-access-l2cb4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:49:36 crc kubenswrapper[4698]: I0127 14:49:36.641356 4698 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2673531b-aee1-4a69-b3bb-255c3e331724-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 14:49:36 crc kubenswrapper[4698]: I0127 14:49:36.641389 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l2cb4\" (UniqueName: \"kubernetes.io/projected/2673531b-aee1-4a69-b3bb-255c3e331724-kube-api-access-l2cb4\") on node \"crc\" DevicePath \"\"" Jan 27 14:49:37 crc kubenswrapper[4698]: I0127 14:49:37.172970 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-c51b-account-create-update-ddjfr" event={"ID":"2673531b-aee1-4a69-b3bb-255c3e331724","Type":"ContainerDied","Data":"8454cf4d1bf7706460cf6179b1c4d6c1d6260b6b8c2f11e148dbb777e27ad531"} Jan 27 14:49:37 crc kubenswrapper[4698]: I0127 14:49:37.173010 4698 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8454cf4d1bf7706460cf6179b1c4d6c1d6260b6b8c2f11e148dbb777e27ad531" Jan 27 14:49:37 crc kubenswrapper[4698]: I0127 14:49:37.173034 4698 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-c51b-account-create-update-ddjfr" Jan 27 14:49:37 crc kubenswrapper[4698]: I0127 14:49:37.833491 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-xdg6c"] Jan 27 14:49:37 crc kubenswrapper[4698]: E0127 14:49:37.834174 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="64738940-4d7e-484c-92f9-d6a686fd2696" containerName="init" Jan 27 14:49:37 crc kubenswrapper[4698]: I0127 14:49:37.834200 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="64738940-4d7e-484c-92f9-d6a686fd2696" containerName="init" Jan 27 14:49:37 crc kubenswrapper[4698]: E0127 14:49:37.834216 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="64738940-4d7e-484c-92f9-d6a686fd2696" containerName="dnsmasq-dns" Jan 27 14:49:37 crc kubenswrapper[4698]: I0127 14:49:37.834224 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="64738940-4d7e-484c-92f9-d6a686fd2696" containerName="dnsmasq-dns" Jan 27 14:49:37 crc kubenswrapper[4698]: E0127 14:49:37.834238 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2673531b-aee1-4a69-b3bb-255c3e331724" containerName="mariadb-account-create-update" Jan 27 14:49:37 crc kubenswrapper[4698]: I0127 14:49:37.834247 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="2673531b-aee1-4a69-b3bb-255c3e331724" containerName="mariadb-account-create-update" Jan 27 14:49:37 crc kubenswrapper[4698]: E0127 14:49:37.834265 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4433013-b1ca-47c7-9b70-155cb05605a3" containerName="mariadb-database-create" Jan 27 14:49:37 crc kubenswrapper[4698]: I0127 14:49:37.834272 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4433013-b1ca-47c7-9b70-155cb05605a3" containerName="mariadb-database-create" Jan 27 14:49:37 crc kubenswrapper[4698]: I0127 14:49:37.834464 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4433013-b1ca-47c7-9b70-155cb05605a3" containerName="mariadb-database-create" Jan 27 14:49:37 crc kubenswrapper[4698]: I0127 14:49:37.834482 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="64738940-4d7e-484c-92f9-d6a686fd2696" containerName="dnsmasq-dns" Jan 27 14:49:37 crc kubenswrapper[4698]: I0127 14:49:37.834502 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="2673531b-aee1-4a69-b3bb-255c3e331724" containerName="mariadb-account-create-update" Jan 27 14:49:37 crc kubenswrapper[4698]: I0127 14:49:37.838286 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-xdg6c" Jan 27 14:49:37 crc kubenswrapper[4698]: I0127 14:49:37.842740 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-mariadb-root-db-secret" Jan 27 14:49:37 crc kubenswrapper[4698]: I0127 14:49:37.854513 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-xdg6c"] Jan 27 14:49:37 crc kubenswrapper[4698]: I0127 14:49:37.964340 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z7mfx\" (UniqueName: \"kubernetes.io/projected/07cca8aa-7a06-43e6-8499-a9a1372a01fa-kube-api-access-z7mfx\") pod \"root-account-create-update-xdg6c\" (UID: \"07cca8aa-7a06-43e6-8499-a9a1372a01fa\") " pod="openstack/root-account-create-update-xdg6c" Jan 27 14:49:37 crc kubenswrapper[4698]: I0127 14:49:37.964724 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/07cca8aa-7a06-43e6-8499-a9a1372a01fa-operator-scripts\") pod \"root-account-create-update-xdg6c\" (UID: \"07cca8aa-7a06-43e6-8499-a9a1372a01fa\") " pod="openstack/root-account-create-update-xdg6c" Jan 27 14:49:38 crc kubenswrapper[4698]: I0127 14:49:38.066863 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/07cca8aa-7a06-43e6-8499-a9a1372a01fa-operator-scripts\") pod \"root-account-create-update-xdg6c\" (UID: \"07cca8aa-7a06-43e6-8499-a9a1372a01fa\") " pod="openstack/root-account-create-update-xdg6c" Jan 27 14:49:38 crc kubenswrapper[4698]: I0127 14:49:38.068102 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z7mfx\" (UniqueName: \"kubernetes.io/projected/07cca8aa-7a06-43e6-8499-a9a1372a01fa-kube-api-access-z7mfx\") pod \"root-account-create-update-xdg6c\" (UID: \"07cca8aa-7a06-43e6-8499-a9a1372a01fa\") " pod="openstack/root-account-create-update-xdg6c" Jan 27 14:49:38 crc kubenswrapper[4698]: I0127 14:49:38.068215 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/07cca8aa-7a06-43e6-8499-a9a1372a01fa-operator-scripts\") pod \"root-account-create-update-xdg6c\" (UID: \"07cca8aa-7a06-43e6-8499-a9a1372a01fa\") " pod="openstack/root-account-create-update-xdg6c" Jan 27 14:49:38 crc kubenswrapper[4698]: I0127 14:49:38.100409 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z7mfx\" (UniqueName: \"kubernetes.io/projected/07cca8aa-7a06-43e6-8499-a9a1372a01fa-kube-api-access-z7mfx\") pod \"root-account-create-update-xdg6c\" (UID: \"07cca8aa-7a06-43e6-8499-a9a1372a01fa\") " pod="openstack/root-account-create-update-xdg6c" Jan 27 14:49:38 crc kubenswrapper[4698]: I0127 14:49:38.202519 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-xdg6c" Jan 27 14:49:39 crc kubenswrapper[4698]: I0127 14:49:39.477908 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-xdg6c"] Jan 27 14:49:39 crc kubenswrapper[4698]: I0127 14:49:39.598337 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/f15487de-4580-4abf-a96c-3c5d364fe2d5-etc-swift\") pod \"swift-storage-0\" (UID: \"f15487de-4580-4abf-a96c-3c5d364fe2d5\") " pod="openstack/swift-storage-0" Jan 27 14:49:39 crc kubenswrapper[4698]: E0127 14:49:39.598596 4698 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 27 14:49:39 crc kubenswrapper[4698]: E0127 14:49:39.598652 4698 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 27 14:49:39 crc kubenswrapper[4698]: E0127 14:49:39.598715 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f15487de-4580-4abf-a96c-3c5d364fe2d5-etc-swift podName:f15487de-4580-4abf-a96c-3c5d364fe2d5 nodeName:}" failed. No retries permitted until 2026-01-27 14:49:55.598695409 +0000 UTC m=+1251.275472874 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/f15487de-4580-4abf-a96c-3c5d364fe2d5-etc-swift") pod "swift-storage-0" (UID: "f15487de-4580-4abf-a96c-3c5d364fe2d5") : configmap "swift-ring-files" not found Jan 27 14:49:40 crc kubenswrapper[4698]: I0127 14:49:40.171289 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-create-nmrs6"] Jan 27 14:49:40 crc kubenswrapper[4698]: I0127 14:49:40.172853 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-nmrs6" Jan 27 14:49:40 crc kubenswrapper[4698]: I0127 14:49:40.189002 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-nmrs6"] Jan 27 14:49:40 crc kubenswrapper[4698]: I0127 14:49:40.206205 4698 generic.go:334] "Generic (PLEG): container finished" podID="07cca8aa-7a06-43e6-8499-a9a1372a01fa" containerID="47d9c3d9bdeab2ba5b42e4b677022bb02a931cf2718ac80f3cfaa2fccf5c292b" exitCode=0 Jan 27 14:49:40 crc kubenswrapper[4698]: I0127 14:49:40.206349 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-xdg6c" event={"ID":"07cca8aa-7a06-43e6-8499-a9a1372a01fa","Type":"ContainerDied","Data":"47d9c3d9bdeab2ba5b42e4b677022bb02a931cf2718ac80f3cfaa2fccf5c292b"} Jan 27 14:49:40 crc kubenswrapper[4698]: I0127 14:49:40.206405 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-xdg6c" event={"ID":"07cca8aa-7a06-43e6-8499-a9a1372a01fa","Type":"ContainerStarted","Data":"1431c3d714024f015e50b9b158fa98788292f6678dcb0764ae8fd2bf52122ee2"} Jan 27 14:49:40 crc kubenswrapper[4698]: I0127 14:49:40.209546 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"03d06c18-e82f-417e-b3bd-6365030bee53","Type":"ContainerStarted","Data":"70d70241edc485517e2accdceba323eb62510f58fdec600ff06ffddd74f3231c"} Jan 27 14:49:40 crc kubenswrapper[4698]: I0127 14:49:40.262130 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/prometheus-metric-storage-0" podStartSLOduration=20.392714236 podStartE2EDuration="58.262114249s" podCreationTimestamp="2026-01-27 14:48:42 +0000 UTC" firstStartedPulling="2026-01-27 14:49:01.238285055 +0000 UTC m=+1196.915062520" lastFinishedPulling="2026-01-27 14:49:39.107685058 +0000 UTC m=+1234.784462533" observedRunningTime="2026-01-27 14:49:40.258829703 +0000 UTC m=+1235.935607198" watchObservedRunningTime="2026-01-27 14:49:40.262114249 +0000 UTC m=+1235.938891714" Jan 27 14:49:40 crc kubenswrapper[4698]: I0127 14:49:40.285213 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-9b23-account-create-update-jqrhb"] Jan 27 14:49:40 crc kubenswrapper[4698]: I0127 14:49:40.286898 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-9b23-account-create-update-jqrhb" Jan 27 14:49:40 crc kubenswrapper[4698]: I0127 14:49:40.293896 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-db-secret" Jan 27 14:49:40 crc kubenswrapper[4698]: I0127 14:49:40.305525 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-9b23-account-create-update-jqrhb"] Jan 27 14:49:40 crc kubenswrapper[4698]: I0127 14:49:40.316199 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nnp5f\" (UniqueName: \"kubernetes.io/projected/7350c55d-9b7b-4bdb-a901-998578b4eea9-kube-api-access-nnp5f\") pod \"keystone-db-create-nmrs6\" (UID: \"7350c55d-9b7b-4bdb-a901-998578b4eea9\") " pod="openstack/keystone-db-create-nmrs6" Jan 27 14:49:40 crc kubenswrapper[4698]: I0127 14:49:40.316273 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7350c55d-9b7b-4bdb-a901-998578b4eea9-operator-scripts\") pod \"keystone-db-create-nmrs6\" (UID: \"7350c55d-9b7b-4bdb-a901-998578b4eea9\") " pod="openstack/keystone-db-create-nmrs6" Jan 27 14:49:40 crc kubenswrapper[4698]: I0127 14:49:40.417495 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nnp5f\" (UniqueName: \"kubernetes.io/projected/7350c55d-9b7b-4bdb-a901-998578b4eea9-kube-api-access-nnp5f\") pod \"keystone-db-create-nmrs6\" (UID: \"7350c55d-9b7b-4bdb-a901-998578b4eea9\") " pod="openstack/keystone-db-create-nmrs6" Jan 27 14:49:40 crc kubenswrapper[4698]: I0127 14:49:40.417542 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7350c55d-9b7b-4bdb-a901-998578b4eea9-operator-scripts\") pod \"keystone-db-create-nmrs6\" (UID: \"7350c55d-9b7b-4bdb-a901-998578b4eea9\") " pod="openstack/keystone-db-create-nmrs6" Jan 27 14:49:40 crc kubenswrapper[4698]: I0127 14:49:40.417577 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zrpx5\" (UniqueName: \"kubernetes.io/projected/e58bbce4-4fb7-438d-bc96-1daafb04c867-kube-api-access-zrpx5\") pod \"keystone-9b23-account-create-update-jqrhb\" (UID: \"e58bbce4-4fb7-438d-bc96-1daafb04c867\") " pod="openstack/keystone-9b23-account-create-update-jqrhb" Jan 27 14:49:40 crc kubenswrapper[4698]: I0127 14:49:40.417662 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e58bbce4-4fb7-438d-bc96-1daafb04c867-operator-scripts\") pod \"keystone-9b23-account-create-update-jqrhb\" (UID: \"e58bbce4-4fb7-438d-bc96-1daafb04c867\") " pod="openstack/keystone-9b23-account-create-update-jqrhb" Jan 27 14:49:40 crc kubenswrapper[4698]: I0127 14:49:40.418484 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7350c55d-9b7b-4bdb-a901-998578b4eea9-operator-scripts\") pod \"keystone-db-create-nmrs6\" (UID: \"7350c55d-9b7b-4bdb-a901-998578b4eea9\") " pod="openstack/keystone-db-create-nmrs6" Jan 27 14:49:40 crc kubenswrapper[4698]: I0127 14:49:40.442032 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nnp5f\" (UniqueName: \"kubernetes.io/projected/7350c55d-9b7b-4bdb-a901-998578b4eea9-kube-api-access-nnp5f\") pod 
\"keystone-db-create-nmrs6\" (UID: \"7350c55d-9b7b-4bdb-a901-998578b4eea9\") " pod="openstack/keystone-db-create-nmrs6" Jan 27 14:49:40 crc kubenswrapper[4698]: I0127 14:49:40.473433 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-create-l7xhq"] Jan 27 14:49:40 crc kubenswrapper[4698]: I0127 14:49:40.474766 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-l7xhq" Jan 27 14:49:40 crc kubenswrapper[4698]: I0127 14:49:40.483084 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-l7xhq"] Jan 27 14:49:40 crc kubenswrapper[4698]: I0127 14:49:40.503783 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-nmrs6" Jan 27 14:49:40 crc kubenswrapper[4698]: I0127 14:49:40.521805 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zrpx5\" (UniqueName: \"kubernetes.io/projected/e58bbce4-4fb7-438d-bc96-1daafb04c867-kube-api-access-zrpx5\") pod \"keystone-9b23-account-create-update-jqrhb\" (UID: \"e58bbce4-4fb7-438d-bc96-1daafb04c867\") " pod="openstack/keystone-9b23-account-create-update-jqrhb" Jan 27 14:49:40 crc kubenswrapper[4698]: I0127 14:49:40.521937 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e58bbce4-4fb7-438d-bc96-1daafb04c867-operator-scripts\") pod \"keystone-9b23-account-create-update-jqrhb\" (UID: \"e58bbce4-4fb7-438d-bc96-1daafb04c867\") " pod="openstack/keystone-9b23-account-create-update-jqrhb" Jan 27 14:49:40 crc kubenswrapper[4698]: I0127 14:49:40.521975 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8qc2c\" (UniqueName: \"kubernetes.io/projected/92d7277e-5ed9-480d-a115-1c3568be25a1-kube-api-access-8qc2c\") pod \"placement-db-create-l7xhq\" (UID: \"92d7277e-5ed9-480d-a115-1c3568be25a1\") " pod="openstack/placement-db-create-l7xhq" Jan 27 14:49:40 crc kubenswrapper[4698]: I0127 14:49:40.522058 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/92d7277e-5ed9-480d-a115-1c3568be25a1-operator-scripts\") pod \"placement-db-create-l7xhq\" (UID: \"92d7277e-5ed9-480d-a115-1c3568be25a1\") " pod="openstack/placement-db-create-l7xhq" Jan 27 14:49:40 crc kubenswrapper[4698]: I0127 14:49:40.523470 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e58bbce4-4fb7-438d-bc96-1daafb04c867-operator-scripts\") pod \"keystone-9b23-account-create-update-jqrhb\" (UID: \"e58bbce4-4fb7-438d-bc96-1daafb04c867\") " pod="openstack/keystone-9b23-account-create-update-jqrhb" Jan 27 14:49:40 crc kubenswrapper[4698]: I0127 14:49:40.544064 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zrpx5\" (UniqueName: \"kubernetes.io/projected/e58bbce4-4fb7-438d-bc96-1daafb04c867-kube-api-access-zrpx5\") pod \"keystone-9b23-account-create-update-jqrhb\" (UID: \"e58bbce4-4fb7-438d-bc96-1daafb04c867\") " pod="openstack/keystone-9b23-account-create-update-jqrhb" Jan 27 14:49:40 crc kubenswrapper[4698]: I0127 14:49:40.587006 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-25a3-account-create-update-hrqhk"] Jan 27 14:49:40 crc kubenswrapper[4698]: I0127 14:49:40.598719 4698 
kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-25a3-account-create-update-hrqhk"] Jan 27 14:49:40 crc kubenswrapper[4698]: I0127 14:49:40.598812 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-25a3-account-create-update-hrqhk" Jan 27 14:49:40 crc kubenswrapper[4698]: I0127 14:49:40.601134 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-db-secret" Jan 27 14:49:40 crc kubenswrapper[4698]: I0127 14:49:40.607025 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-9b23-account-create-update-jqrhb" Jan 27 14:49:40 crc kubenswrapper[4698]: I0127 14:49:40.627040 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8qc2c\" (UniqueName: \"kubernetes.io/projected/92d7277e-5ed9-480d-a115-1c3568be25a1-kube-api-access-8qc2c\") pod \"placement-db-create-l7xhq\" (UID: \"92d7277e-5ed9-480d-a115-1c3568be25a1\") " pod="openstack/placement-db-create-l7xhq" Jan 27 14:49:40 crc kubenswrapper[4698]: I0127 14:49:40.627225 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/92d7277e-5ed9-480d-a115-1c3568be25a1-operator-scripts\") pod \"placement-db-create-l7xhq\" (UID: \"92d7277e-5ed9-480d-a115-1c3568be25a1\") " pod="openstack/placement-db-create-l7xhq" Jan 27 14:49:40 crc kubenswrapper[4698]: I0127 14:49:40.630607 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/92d7277e-5ed9-480d-a115-1c3568be25a1-operator-scripts\") pod \"placement-db-create-l7xhq\" (UID: \"92d7277e-5ed9-480d-a115-1c3568be25a1\") " pod="openstack/placement-db-create-l7xhq" Jan 27 14:49:40 crc kubenswrapper[4698]: I0127 14:49:40.658106 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8qc2c\" (UniqueName: \"kubernetes.io/projected/92d7277e-5ed9-480d-a115-1c3568be25a1-kube-api-access-8qc2c\") pod \"placement-db-create-l7xhq\" (UID: \"92d7277e-5ed9-480d-a115-1c3568be25a1\") " pod="openstack/placement-db-create-l7xhq" Jan 27 14:49:40 crc kubenswrapper[4698]: I0127 14:49:40.737322 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-586tt\" (UniqueName: \"kubernetes.io/projected/9b7832af-cd21-4035-ae87-b11268ea2564-kube-api-access-586tt\") pod \"placement-25a3-account-create-update-hrqhk\" (UID: \"9b7832af-cd21-4035-ae87-b11268ea2564\") " pod="openstack/placement-25a3-account-create-update-hrqhk" Jan 27 14:49:40 crc kubenswrapper[4698]: I0127 14:49:40.737530 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9b7832af-cd21-4035-ae87-b11268ea2564-operator-scripts\") pod \"placement-25a3-account-create-update-hrqhk\" (UID: \"9b7832af-cd21-4035-ae87-b11268ea2564\") " pod="openstack/placement-25a3-account-create-update-hrqhk" Jan 27 14:49:40 crc kubenswrapper[4698]: I0127 14:49:40.797009 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-create-l7xhq" Jan 27 14:49:40 crc kubenswrapper[4698]: I0127 14:49:40.839009 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9b7832af-cd21-4035-ae87-b11268ea2564-operator-scripts\") pod \"placement-25a3-account-create-update-hrqhk\" (UID: \"9b7832af-cd21-4035-ae87-b11268ea2564\") " pod="openstack/placement-25a3-account-create-update-hrqhk" Jan 27 14:49:40 crc kubenswrapper[4698]: I0127 14:49:40.839281 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-586tt\" (UniqueName: \"kubernetes.io/projected/9b7832af-cd21-4035-ae87-b11268ea2564-kube-api-access-586tt\") pod \"placement-25a3-account-create-update-hrqhk\" (UID: \"9b7832af-cd21-4035-ae87-b11268ea2564\") " pod="openstack/placement-25a3-account-create-update-hrqhk" Jan 27 14:49:40 crc kubenswrapper[4698]: I0127 14:49:40.840562 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9b7832af-cd21-4035-ae87-b11268ea2564-operator-scripts\") pod \"placement-25a3-account-create-update-hrqhk\" (UID: \"9b7832af-cd21-4035-ae87-b11268ea2564\") " pod="openstack/placement-25a3-account-create-update-hrqhk" Jan 27 14:49:40 crc kubenswrapper[4698]: I0127 14:49:40.862476 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-586tt\" (UniqueName: \"kubernetes.io/projected/9b7832af-cd21-4035-ae87-b11268ea2564-kube-api-access-586tt\") pod \"placement-25a3-account-create-update-hrqhk\" (UID: \"9b7832af-cd21-4035-ae87-b11268ea2564\") " pod="openstack/placement-25a3-account-create-update-hrqhk" Jan 27 14:49:41 crc kubenswrapper[4698]: I0127 14:49:41.044406 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-25a3-account-create-update-hrqhk" Jan 27 14:49:41 crc kubenswrapper[4698]: W0127 14:49:41.082767 4698 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7350c55d_9b7b_4bdb_a901_998578b4eea9.slice/crio-ce645dc59f8ef73aef53fd6f5051ed702155863baed56300534f9808b3ce5e2f WatchSource:0}: Error finding container ce645dc59f8ef73aef53fd6f5051ed702155863baed56300534f9808b3ce5e2f: Status 404 returned error can't find the container with id ce645dc59f8ef73aef53fd6f5051ed702155863baed56300534f9808b3ce5e2f Jan 27 14:49:41 crc kubenswrapper[4698]: I0127 14:49:41.086718 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-nmrs6"] Jan 27 14:49:41 crc kubenswrapper[4698]: I0127 14:49:41.164210 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-9b23-account-create-update-jqrhb"] Jan 27 14:49:41 crc kubenswrapper[4698]: I0127 14:49:41.233252 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-nmrs6" event={"ID":"7350c55d-9b7b-4bdb-a901-998578b4eea9","Type":"ContainerStarted","Data":"ce645dc59f8ef73aef53fd6f5051ed702155863baed56300534f9808b3ce5e2f"} Jan 27 14:49:41 crc kubenswrapper[4698]: I0127 14:49:41.235434 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-9b23-account-create-update-jqrhb" event={"ID":"e58bbce4-4fb7-438d-bc96-1daafb04c867","Type":"ContainerStarted","Data":"3c17fb4e826469ba4e8d01689aaccd442b749073021644d0a9cb7ecea234c2d2"} Jan 27 14:49:41 crc kubenswrapper[4698]: I0127 14:49:41.366862 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-l7xhq"] Jan 27 14:49:41 crc kubenswrapper[4698]: I0127 14:49:41.430118 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/prometheus-metric-storage-0" Jan 27 14:49:41 crc kubenswrapper[4698]: I0127 14:49:41.601161 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-25a3-account-create-update-hrqhk"] Jan 27 14:49:41 crc kubenswrapper[4698]: W0127 14:49:41.631245 4698 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9b7832af_cd21_4035_ae87_b11268ea2564.slice/crio-b8425271c3a1e64e1525cb6a6f8f5b140d1e4bb76a434581ec6196dbae01ef91 WatchSource:0}: Error finding container b8425271c3a1e64e1525cb6a6f8f5b140d1e4bb76a434581ec6196dbae01ef91: Status 404 returned error can't find the container with id b8425271c3a1e64e1525cb6a6f8f5b140d1e4bb76a434581ec6196dbae01ef91 Jan 27 14:49:41 crc kubenswrapper[4698]: I0127 14:49:41.724015 4698 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-xdg6c" Jan 27 14:49:41 crc kubenswrapper[4698]: I0127 14:49:41.865562 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/07cca8aa-7a06-43e6-8499-a9a1372a01fa-operator-scripts\") pod \"07cca8aa-7a06-43e6-8499-a9a1372a01fa\" (UID: \"07cca8aa-7a06-43e6-8499-a9a1372a01fa\") " Jan 27 14:49:41 crc kubenswrapper[4698]: I0127 14:49:41.865945 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z7mfx\" (UniqueName: \"kubernetes.io/projected/07cca8aa-7a06-43e6-8499-a9a1372a01fa-kube-api-access-z7mfx\") pod \"07cca8aa-7a06-43e6-8499-a9a1372a01fa\" (UID: \"07cca8aa-7a06-43e6-8499-a9a1372a01fa\") " Jan 27 14:49:41 crc kubenswrapper[4698]: I0127 14:49:41.866606 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/07cca8aa-7a06-43e6-8499-a9a1372a01fa-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "07cca8aa-7a06-43e6-8499-a9a1372a01fa" (UID: "07cca8aa-7a06-43e6-8499-a9a1372a01fa"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:49:41 crc kubenswrapper[4698]: I0127 14:49:41.887894 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/07cca8aa-7a06-43e6-8499-a9a1372a01fa-kube-api-access-z7mfx" (OuterVolumeSpecName: "kube-api-access-z7mfx") pod "07cca8aa-7a06-43e6-8499-a9a1372a01fa" (UID: "07cca8aa-7a06-43e6-8499-a9a1372a01fa"). InnerVolumeSpecName "kube-api-access-z7mfx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:49:41 crc kubenswrapper[4698]: I0127 14:49:41.968705 4698 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/07cca8aa-7a06-43e6-8499-a9a1372a01fa-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 14:49:41 crc kubenswrapper[4698]: I0127 14:49:41.968757 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z7mfx\" (UniqueName: \"kubernetes.io/projected/07cca8aa-7a06-43e6-8499-a9a1372a01fa-kube-api-access-z7mfx\") on node \"crc\" DevicePath \"\"" Jan 27 14:49:42 crc kubenswrapper[4698]: I0127 14:49:42.245332 4698 generic.go:334] "Generic (PLEG): container finished" podID="e58bbce4-4fb7-438d-bc96-1daafb04c867" containerID="088e635e53119e47c2552b5b864918508f7d9725eed1c99cc7e8ec28ed1ac78a" exitCode=0 Jan 27 14:49:42 crc kubenswrapper[4698]: I0127 14:49:42.245442 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-9b23-account-create-update-jqrhb" event={"ID":"e58bbce4-4fb7-438d-bc96-1daafb04c867","Type":"ContainerDied","Data":"088e635e53119e47c2552b5b864918508f7d9725eed1c99cc7e8ec28ed1ac78a"} Jan 27 14:49:42 crc kubenswrapper[4698]: I0127 14:49:42.254302 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-xdg6c" event={"ID":"07cca8aa-7a06-43e6-8499-a9a1372a01fa","Type":"ContainerDied","Data":"1431c3d714024f015e50b9b158fa98788292f6678dcb0764ae8fd2bf52122ee2"} Jan 27 14:49:42 crc kubenswrapper[4698]: I0127 14:49:42.254345 4698 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1431c3d714024f015e50b9b158fa98788292f6678dcb0764ae8fd2bf52122ee2" Jan 27 14:49:42 crc kubenswrapper[4698]: I0127 14:49:42.254401 4698 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-xdg6c" Jan 27 14:49:42 crc kubenswrapper[4698]: I0127 14:49:42.265361 4698 generic.go:334] "Generic (PLEG): container finished" podID="92d7277e-5ed9-480d-a115-1c3568be25a1" containerID="d35e5d128cdca7e773308680b82dae70256a03a4cc3b640007ac1162c0a6ea89" exitCode=0 Jan 27 14:49:42 crc kubenswrapper[4698]: I0127 14:49:42.265443 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-l7xhq" event={"ID":"92d7277e-5ed9-480d-a115-1c3568be25a1","Type":"ContainerDied","Data":"d35e5d128cdca7e773308680b82dae70256a03a4cc3b640007ac1162c0a6ea89"} Jan 27 14:49:42 crc kubenswrapper[4698]: I0127 14:49:42.265476 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-l7xhq" event={"ID":"92d7277e-5ed9-480d-a115-1c3568be25a1","Type":"ContainerStarted","Data":"00758fd6aee1cd7c06a0e291b53b197d616918b2629a6fe4d9be8376c9598b8f"} Jan 27 14:49:42 crc kubenswrapper[4698]: I0127 14:49:42.267173 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-25a3-account-create-update-hrqhk" event={"ID":"9b7832af-cd21-4035-ae87-b11268ea2564","Type":"ContainerStarted","Data":"d93332ca3041888e3eb6628b14ee06875369af7adb0c6305757a955d3deeaf6d"} Jan 27 14:49:42 crc kubenswrapper[4698]: I0127 14:49:42.267205 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-25a3-account-create-update-hrqhk" event={"ID":"9b7832af-cd21-4035-ae87-b11268ea2564","Type":"ContainerStarted","Data":"b8425271c3a1e64e1525cb6a6f8f5b140d1e4bb76a434581ec6196dbae01ef91"} Jan 27 14:49:42 crc kubenswrapper[4698]: I0127 14:49:42.283983 4698 generic.go:334] "Generic (PLEG): container finished" podID="7350c55d-9b7b-4bdb-a901-998578b4eea9" containerID="57de376067bd1c4d86bb352eade6fd5cdcb68597ace43c9e4c16e8207992a1c7" exitCode=0 Jan 27 14:49:42 crc kubenswrapper[4698]: I0127 14:49:42.284170 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-nmrs6" event={"ID":"7350c55d-9b7b-4bdb-a901-998578b4eea9","Type":"ContainerDied","Data":"57de376067bd1c4d86bb352eade6fd5cdcb68597ace43c9e4c16e8207992a1c7"} Jan 27 14:49:42 crc kubenswrapper[4698]: I0127 14:49:42.313041 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-25a3-account-create-update-hrqhk" podStartSLOduration=2.31301678 podStartE2EDuration="2.31301678s" podCreationTimestamp="2026-01-27 14:49:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:49:42.306599021 +0000 UTC m=+1237.983376486" watchObservedRunningTime="2026-01-27 14:49:42.31301678 +0000 UTC m=+1237.989794245" Jan 27 14:49:43 crc kubenswrapper[4698]: I0127 14:49:43.295319 4698 generic.go:334] "Generic (PLEG): container finished" podID="9b7832af-cd21-4035-ae87-b11268ea2564" containerID="d93332ca3041888e3eb6628b14ee06875369af7adb0c6305757a955d3deeaf6d" exitCode=0 Jan 27 14:49:43 crc kubenswrapper[4698]: I0127 14:49:43.295430 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-25a3-account-create-update-hrqhk" event={"ID":"9b7832af-cd21-4035-ae87-b11268ea2564","Type":"ContainerDied","Data":"d93332ca3041888e3eb6628b14ee06875369af7adb0c6305757a955d3deeaf6d"} Jan 27 14:49:43 crc kubenswrapper[4698]: I0127 14:49:43.297501 4698 generic.go:334] "Generic (PLEG): container finished" podID="fd96ec0a-9e80-4f7e-b009-b83aaa6e726e" 
containerID="3642ed493f2d430111ae937632c6a80c6c88eaa5a2b01f48c47ac7ee49d3f248" exitCode=0 Jan 27 14:49:43 crc kubenswrapper[4698]: I0127 14:49:43.297629 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-52ttj" event={"ID":"fd96ec0a-9e80-4f7e-b009-b83aaa6e726e","Type":"ContainerDied","Data":"3642ed493f2d430111ae937632c6a80c6c88eaa5a2b01f48c47ac7ee49d3f248"} Jan 27 14:49:43 crc kubenswrapper[4698]: I0127 14:49:43.777948 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-9b23-account-create-update-jqrhb" Jan 27 14:49:43 crc kubenswrapper[4698]: I0127 14:49:43.789086 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-nmrs6" Jan 27 14:49:43 crc kubenswrapper[4698]: I0127 14:49:43.800244 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-l7xhq" Jan 27 14:49:43 crc kubenswrapper[4698]: I0127 14:49:43.897080 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-xdg6c"] Jan 27 14:49:43 crc kubenswrapper[4698]: I0127 14:49:43.906101 4698 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-xdg6c"] Jan 27 14:49:43 crc kubenswrapper[4698]: I0127 14:49:43.918053 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7350c55d-9b7b-4bdb-a901-998578b4eea9-operator-scripts\") pod \"7350c55d-9b7b-4bdb-a901-998578b4eea9\" (UID: \"7350c55d-9b7b-4bdb-a901-998578b4eea9\") " Jan 27 14:49:43 crc kubenswrapper[4698]: I0127 14:49:43.918106 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zrpx5\" (UniqueName: \"kubernetes.io/projected/e58bbce4-4fb7-438d-bc96-1daafb04c867-kube-api-access-zrpx5\") pod \"e58bbce4-4fb7-438d-bc96-1daafb04c867\" (UID: \"e58bbce4-4fb7-438d-bc96-1daafb04c867\") " Jan 27 14:49:43 crc kubenswrapper[4698]: I0127 14:49:43.918168 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e58bbce4-4fb7-438d-bc96-1daafb04c867-operator-scripts\") pod \"e58bbce4-4fb7-438d-bc96-1daafb04c867\" (UID: \"e58bbce4-4fb7-438d-bc96-1daafb04c867\") " Jan 27 14:49:43 crc kubenswrapper[4698]: I0127 14:49:43.918197 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nnp5f\" (UniqueName: \"kubernetes.io/projected/7350c55d-9b7b-4bdb-a901-998578b4eea9-kube-api-access-nnp5f\") pod \"7350c55d-9b7b-4bdb-a901-998578b4eea9\" (UID: \"7350c55d-9b7b-4bdb-a901-998578b4eea9\") " Jan 27 14:49:43 crc kubenswrapper[4698]: I0127 14:49:43.918254 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8qc2c\" (UniqueName: \"kubernetes.io/projected/92d7277e-5ed9-480d-a115-1c3568be25a1-kube-api-access-8qc2c\") pod \"92d7277e-5ed9-480d-a115-1c3568be25a1\" (UID: \"92d7277e-5ed9-480d-a115-1c3568be25a1\") " Jan 27 14:49:43 crc kubenswrapper[4698]: I0127 14:49:43.918336 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/92d7277e-5ed9-480d-a115-1c3568be25a1-operator-scripts\") pod \"92d7277e-5ed9-480d-a115-1c3568be25a1\" (UID: \"92d7277e-5ed9-480d-a115-1c3568be25a1\") " Jan 27 14:49:43 crc kubenswrapper[4698]: I0127 14:49:43.919595 
4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e58bbce4-4fb7-438d-bc96-1daafb04c867-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "e58bbce4-4fb7-438d-bc96-1daafb04c867" (UID: "e58bbce4-4fb7-438d-bc96-1daafb04c867"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:49:43 crc kubenswrapper[4698]: I0127 14:49:43.919865 4698 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e58bbce4-4fb7-438d-bc96-1daafb04c867-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 14:49:43 crc kubenswrapper[4698]: I0127 14:49:43.920204 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7350c55d-9b7b-4bdb-a901-998578b4eea9-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "7350c55d-9b7b-4bdb-a901-998578b4eea9" (UID: "7350c55d-9b7b-4bdb-a901-998578b4eea9"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:49:43 crc kubenswrapper[4698]: I0127 14:49:43.920711 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/92d7277e-5ed9-480d-a115-1c3568be25a1-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "92d7277e-5ed9-480d-a115-1c3568be25a1" (UID: "92d7277e-5ed9-480d-a115-1c3568be25a1"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:49:43 crc kubenswrapper[4698]: I0127 14:49:43.925679 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/92d7277e-5ed9-480d-a115-1c3568be25a1-kube-api-access-8qc2c" (OuterVolumeSpecName: "kube-api-access-8qc2c") pod "92d7277e-5ed9-480d-a115-1c3568be25a1" (UID: "92d7277e-5ed9-480d-a115-1c3568be25a1"). InnerVolumeSpecName "kube-api-access-8qc2c". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:49:43 crc kubenswrapper[4698]: I0127 14:49:43.930398 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e58bbce4-4fb7-438d-bc96-1daafb04c867-kube-api-access-zrpx5" (OuterVolumeSpecName: "kube-api-access-zrpx5") pod "e58bbce4-4fb7-438d-bc96-1daafb04c867" (UID: "e58bbce4-4fb7-438d-bc96-1daafb04c867"). InnerVolumeSpecName "kube-api-access-zrpx5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:49:43 crc kubenswrapper[4698]: I0127 14:49:43.935519 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7350c55d-9b7b-4bdb-a901-998578b4eea9-kube-api-access-nnp5f" (OuterVolumeSpecName: "kube-api-access-nnp5f") pod "7350c55d-9b7b-4bdb-a901-998578b4eea9" (UID: "7350c55d-9b7b-4bdb-a901-998578b4eea9"). InnerVolumeSpecName "kube-api-access-nnp5f". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:49:43 crc kubenswrapper[4698]: I0127 14:49:43.962443 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-northd-0" Jan 27 14:49:44 crc kubenswrapper[4698]: I0127 14:49:44.021746 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8qc2c\" (UniqueName: \"kubernetes.io/projected/92d7277e-5ed9-480d-a115-1c3568be25a1-kube-api-access-8qc2c\") on node \"crc\" DevicePath \"\"" Jan 27 14:49:44 crc kubenswrapper[4698]: I0127 14:49:44.021782 4698 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/92d7277e-5ed9-480d-a115-1c3568be25a1-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 14:49:44 crc kubenswrapper[4698]: I0127 14:49:44.021796 4698 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7350c55d-9b7b-4bdb-a901-998578b4eea9-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 14:49:44 crc kubenswrapper[4698]: I0127 14:49:44.021808 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zrpx5\" (UniqueName: \"kubernetes.io/projected/e58bbce4-4fb7-438d-bc96-1daafb04c867-kube-api-access-zrpx5\") on node \"crc\" DevicePath \"\"" Jan 27 14:49:44 crc kubenswrapper[4698]: I0127 14:49:44.021979 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nnp5f\" (UniqueName: \"kubernetes.io/projected/7350c55d-9b7b-4bdb-a901-998578b4eea9-kube-api-access-nnp5f\") on node \"crc\" DevicePath \"\"" Jan 27 14:49:44 crc kubenswrapper[4698]: I0127 14:49:44.308092 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-nmrs6" Jan 27 14:49:44 crc kubenswrapper[4698]: I0127 14:49:44.308082 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-nmrs6" event={"ID":"7350c55d-9b7b-4bdb-a901-998578b4eea9","Type":"ContainerDied","Data":"ce645dc59f8ef73aef53fd6f5051ed702155863baed56300534f9808b3ce5e2f"} Jan 27 14:49:44 crc kubenswrapper[4698]: I0127 14:49:44.308232 4698 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ce645dc59f8ef73aef53fd6f5051ed702155863baed56300534f9808b3ce5e2f" Jan 27 14:49:44 crc kubenswrapper[4698]: I0127 14:49:44.309809 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-9b23-account-create-update-jqrhb" event={"ID":"e58bbce4-4fb7-438d-bc96-1daafb04c867","Type":"ContainerDied","Data":"3c17fb4e826469ba4e8d01689aaccd442b749073021644d0a9cb7ecea234c2d2"} Jan 27 14:49:44 crc kubenswrapper[4698]: I0127 14:49:44.309839 4698 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3c17fb4e826469ba4e8d01689aaccd442b749073021644d0a9cb7ecea234c2d2" Jan 27 14:49:44 crc kubenswrapper[4698]: I0127 14:49:44.309894 4698 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-9b23-account-create-update-jqrhb" Jan 27 14:49:44 crc kubenswrapper[4698]: I0127 14:49:44.321382 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-l7xhq" event={"ID":"92d7277e-5ed9-480d-a115-1c3568be25a1","Type":"ContainerDied","Data":"00758fd6aee1cd7c06a0e291b53b197d616918b2629a6fe4d9be8376c9598b8f"} Jan 27 14:49:44 crc kubenswrapper[4698]: I0127 14:49:44.321441 4698 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="00758fd6aee1cd7c06a0e291b53b197d616918b2629a6fe4d9be8376c9598b8f" Jan 27 14:49:44 crc kubenswrapper[4698]: I0127 14:49:44.321526 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-l7xhq" Jan 27 14:49:44 crc kubenswrapper[4698]: I0127 14:49:44.597186 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-52ttj" Jan 27 14:49:44 crc kubenswrapper[4698]: I0127 14:49:44.732463 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/fd96ec0a-9e80-4f7e-b009-b83aaa6e726e-scripts\") pod \"fd96ec0a-9e80-4f7e-b009-b83aaa6e726e\" (UID: \"fd96ec0a-9e80-4f7e-b009-b83aaa6e726e\") " Jan 27 14:49:44 crc kubenswrapper[4698]: I0127 14:49:44.732869 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/fd96ec0a-9e80-4f7e-b009-b83aaa6e726e-ring-data-devices\") pod \"fd96ec0a-9e80-4f7e-b009-b83aaa6e726e\" (UID: \"fd96ec0a-9e80-4f7e-b009-b83aaa6e726e\") " Jan 27 14:49:44 crc kubenswrapper[4698]: I0127 14:49:44.732914 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/fd96ec0a-9e80-4f7e-b009-b83aaa6e726e-etc-swift\") pod \"fd96ec0a-9e80-4f7e-b009-b83aaa6e726e\" (UID: \"fd96ec0a-9e80-4f7e-b009-b83aaa6e726e\") " Jan 27 14:49:44 crc kubenswrapper[4698]: I0127 14:49:44.732947 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-849lx\" (UniqueName: \"kubernetes.io/projected/fd96ec0a-9e80-4f7e-b009-b83aaa6e726e-kube-api-access-849lx\") pod \"fd96ec0a-9e80-4f7e-b009-b83aaa6e726e\" (UID: \"fd96ec0a-9e80-4f7e-b009-b83aaa6e726e\") " Jan 27 14:49:44 crc kubenswrapper[4698]: I0127 14:49:44.733000 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/fd96ec0a-9e80-4f7e-b009-b83aaa6e726e-dispersionconf\") pod \"fd96ec0a-9e80-4f7e-b009-b83aaa6e726e\" (UID: \"fd96ec0a-9e80-4f7e-b009-b83aaa6e726e\") " Jan 27 14:49:44 crc kubenswrapper[4698]: I0127 14:49:44.733029 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd96ec0a-9e80-4f7e-b009-b83aaa6e726e-combined-ca-bundle\") pod \"fd96ec0a-9e80-4f7e-b009-b83aaa6e726e\" (UID: \"fd96ec0a-9e80-4f7e-b009-b83aaa6e726e\") " Jan 27 14:49:44 crc kubenswrapper[4698]: I0127 14:49:44.733096 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/fd96ec0a-9e80-4f7e-b009-b83aaa6e726e-swiftconf\") pod \"fd96ec0a-9e80-4f7e-b009-b83aaa6e726e\" (UID: \"fd96ec0a-9e80-4f7e-b009-b83aaa6e726e\") " Jan 27 14:49:44 crc kubenswrapper[4698]: I0127 14:49:44.733465 4698 operation_generator.go:803] 
Jan 27 14:49:44 crc kubenswrapper[4698]: I0127 14:49:44.734684 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fd96ec0a-9e80-4f7e-b009-b83aaa6e726e-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "fd96ec0a-9e80-4f7e-b009-b83aaa6e726e" (UID: "fd96ec0a-9e80-4f7e-b009-b83aaa6e726e"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 27 14:49:44 crc kubenswrapper[4698]: I0127 14:49:44.738912 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fd96ec0a-9e80-4f7e-b009-b83aaa6e726e-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "fd96ec0a-9e80-4f7e-b009-b83aaa6e726e" (UID: "fd96ec0a-9e80-4f7e-b009-b83aaa6e726e"). InnerVolumeSpecName "dispersionconf". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 14:49:44 crc kubenswrapper[4698]: I0127 14:49:44.741720 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fd96ec0a-9e80-4f7e-b009-b83aaa6e726e-kube-api-access-849lx" (OuterVolumeSpecName: "kube-api-access-849lx") pod "fd96ec0a-9e80-4f7e-b009-b83aaa6e726e" (UID: "fd96ec0a-9e80-4f7e-b009-b83aaa6e726e"). InnerVolumeSpecName "kube-api-access-849lx". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 14:49:44 crc kubenswrapper[4698]: I0127 14:49:44.759817 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fd96ec0a-9e80-4f7e-b009-b83aaa6e726e-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "fd96ec0a-9e80-4f7e-b009-b83aaa6e726e" (UID: "fd96ec0a-9e80-4f7e-b009-b83aaa6e726e"). InnerVolumeSpecName "swiftconf". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 14:49:44 crc kubenswrapper[4698]: I0127 14:49:44.760421 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fd96ec0a-9e80-4f7e-b009-b83aaa6e726e-scripts" (OuterVolumeSpecName: "scripts") pod "fd96ec0a-9e80-4f7e-b009-b83aaa6e726e" (UID: "fd96ec0a-9e80-4f7e-b009-b83aaa6e726e"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 14:49:44 crc kubenswrapper[4698]: I0127 14:49:44.762143 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fd96ec0a-9e80-4f7e-b009-b83aaa6e726e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "fd96ec0a-9e80-4f7e-b009-b83aaa6e726e" (UID: "fd96ec0a-9e80-4f7e-b009-b83aaa6e726e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 14:49:44 crc kubenswrapper[4698]: I0127 14:49:44.835051 4698 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd96ec0a-9e80-4f7e-b009-b83aaa6e726e-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 27 14:49:44 crc kubenswrapper[4698]: I0127 14:49:44.835119 4698 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/fd96ec0a-9e80-4f7e-b009-b83aaa6e726e-swiftconf\") on node \"crc\" DevicePath \"\""
Jan 27 14:49:44 crc kubenswrapper[4698]: I0127 14:49:44.835130 4698 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/fd96ec0a-9e80-4f7e-b009-b83aaa6e726e-scripts\") on node \"crc\" DevicePath \"\""
Jan 27 14:49:44 crc kubenswrapper[4698]: I0127 14:49:44.835171 4698 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/fd96ec0a-9e80-4f7e-b009-b83aaa6e726e-ring-data-devices\") on node \"crc\" DevicePath \"\""
Jan 27 14:49:44 crc kubenswrapper[4698]: I0127 14:49:44.835185 4698 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/fd96ec0a-9e80-4f7e-b009-b83aaa6e726e-etc-swift\") on node \"crc\" DevicePath \"\""
Jan 27 14:49:44 crc kubenswrapper[4698]: I0127 14:49:44.835195 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-849lx\" (UniqueName: \"kubernetes.io/projected/fd96ec0a-9e80-4f7e-b009-b83aaa6e726e-kube-api-access-849lx\") on node \"crc\" DevicePath \"\""
Jan 27 14:49:44 crc kubenswrapper[4698]: I0127 14:49:44.835205 4698 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/fd96ec0a-9e80-4f7e-b009-b83aaa6e726e-dispersionconf\") on node \"crc\" DevicePath \"\""
Jan 27 14:49:44 crc kubenswrapper[4698]: I0127 14:49:44.836243 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-25a3-account-create-update-hrqhk"
Jan 27 14:49:44 crc kubenswrapper[4698]: I0127 14:49:44.936382 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-586tt\" (UniqueName: \"kubernetes.io/projected/9b7832af-cd21-4035-ae87-b11268ea2564-kube-api-access-586tt\") pod \"9b7832af-cd21-4035-ae87-b11268ea2564\" (UID: \"9b7832af-cd21-4035-ae87-b11268ea2564\") "
Jan 27 14:49:44 crc kubenswrapper[4698]: I0127 14:49:44.936477 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9b7832af-cd21-4035-ae87-b11268ea2564-operator-scripts\") pod \"9b7832af-cd21-4035-ae87-b11268ea2564\" (UID: \"9b7832af-cd21-4035-ae87-b11268ea2564\") "
Jan 27 14:49:44 crc kubenswrapper[4698]: I0127 14:49:44.936951 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9b7832af-cd21-4035-ae87-b11268ea2564-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "9b7832af-cd21-4035-ae87-b11268ea2564" (UID: "9b7832af-cd21-4035-ae87-b11268ea2564"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 14:49:44 crc kubenswrapper[4698]: I0127 14:49:44.937259 4698 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9b7832af-cd21-4035-ae87-b11268ea2564-operator-scripts\") on node \"crc\" DevicePath \"\""
Jan 27 14:49:44 crc kubenswrapper[4698]: I0127 14:49:44.941441 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9b7832af-cd21-4035-ae87-b11268ea2564-kube-api-access-586tt" (OuterVolumeSpecName: "kube-api-access-586tt") pod "9b7832af-cd21-4035-ae87-b11268ea2564" (UID: "9b7832af-cd21-4035-ae87-b11268ea2564"). InnerVolumeSpecName "kube-api-access-586tt". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 14:49:45 crc kubenswrapper[4698]: I0127 14:49:45.004231 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="07cca8aa-7a06-43e6-8499-a9a1372a01fa" path="/var/lib/kubelet/pods/07cca8aa-7a06-43e6-8499-a9a1372a01fa/volumes"
Jan 27 14:49:45 crc kubenswrapper[4698]: I0127 14:49:45.038319 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-586tt\" (UniqueName: \"kubernetes.io/projected/9b7832af-cd21-4035-ae87-b11268ea2564-kube-api-access-586tt\") on node \"crc\" DevicePath \"\""
Jan 27 14:49:45 crc kubenswrapper[4698]: I0127 14:49:45.330452 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-25a3-account-create-update-hrqhk" event={"ID":"9b7832af-cd21-4035-ae87-b11268ea2564","Type":"ContainerDied","Data":"b8425271c3a1e64e1525cb6a6f8f5b140d1e4bb76a434581ec6196dbae01ef91"}
Jan 27 14:49:45 crc kubenswrapper[4698]: I0127 14:49:45.330498 4698 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b8425271c3a1e64e1525cb6a6f8f5b140d1e4bb76a434581ec6196dbae01ef91"
Jan 27 14:49:45 crc kubenswrapper[4698]: I0127 14:49:45.330565 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-25a3-account-create-update-hrqhk"
Jan 27 14:49:45 crc kubenswrapper[4698]: I0127 14:49:45.333777 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-52ttj" event={"ID":"fd96ec0a-9e80-4f7e-b009-b83aaa6e726e","Type":"ContainerDied","Data":"5c01e2ae8e3600e61e7c043602778616645fe8bc104c6708bb0a26cb2855dfc1"}
Jan 27 14:49:45 crc kubenswrapper[4698]: I0127 14:49:45.333817 4698 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5c01e2ae8e3600e61e7c043602778616645fe8bc104c6708bb0a26cb2855dfc1"
Jan 27 14:49:45 crc kubenswrapper[4698]: I0127 14:49:45.333875 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-52ttj"
Jan 27 14:49:46 crc kubenswrapper[4698]: I0127 14:49:46.430056 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/prometheus-metric-storage-0"
Jan 27 14:49:46 crc kubenswrapper[4698]: I0127 14:49:46.436569 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/prometheus-metric-storage-0"
Jan 27 14:49:46 crc kubenswrapper[4698]: I0127 14:49:46.640860 4698 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-7swbz" podUID="f9e1f6c1-39e5-4d7a-ac6a-da53c6fe143b" containerName="ovn-controller" probeResult="failure" output=<
Jan 27 14:49:46 crc kubenswrapper[4698]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status
Jan 27 14:49:46 crc kubenswrapper[4698]: >
Jan 27 14:49:47 crc kubenswrapper[4698]: I0127 14:49:47.349484 4698 generic.go:334] "Generic (PLEG): container finished" podID="764b6b7b-3664-40e6-a24b-dc0f9db827db" containerID="581af515f0476829ce603fe1c8555dd8bb4e19b489fc159e1a8ed2f59811c5e5" exitCode=0
Jan 27 14:49:47 crc kubenswrapper[4698]: I0127 14:49:47.349524 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"764b6b7b-3664-40e6-a24b-dc0f9db827db","Type":"ContainerDied","Data":"581af515f0476829ce603fe1c8555dd8bb4e19b489fc159e1a8ed2f59811c5e5"}
Jan 27 14:49:47 crc kubenswrapper[4698]: I0127 14:49:47.351047 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/prometheus-metric-storage-0"
Jan 27 14:49:48 crc kubenswrapper[4698]: I0127 14:49:48.360410 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"764b6b7b-3664-40e6-a24b-dc0f9db827db","Type":"ContainerStarted","Data":"d2bcb22d0192415ed09b45d866e561fba5e07a49116ee0c33e1230c55a0bd2f5"}
Jan 27 14:49:48 crc kubenswrapper[4698]: I0127 14:49:48.361167 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0"
Jan 27 14:49:48 crc kubenswrapper[4698]: I0127 14:49:48.363620 4698 generic.go:334] "Generic (PLEG): container finished" podID="5d6f607c-3a31-4135-9eb4-3193e722d112" containerID="8b10145eea493ec749905b9ddc64f9a97a043a0f6550bbe1b6c6bdd5fd7bfd58" exitCode=0
Jan 27 14:49:48 crc kubenswrapper[4698]: I0127 14:49:48.363666 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-notifications-server-0" event={"ID":"5d6f607c-3a31-4135-9eb4-3193e722d112","Type":"ContainerDied","Data":"8b10145eea493ec749905b9ddc64f9a97a043a0f6550bbe1b6c6bdd5fd7bfd58"}
Jan 27 14:49:48 crc kubenswrapper[4698]: I0127 14:49:48.455868 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=62.769560719 podStartE2EDuration="1m13.45584244s" podCreationTimestamp="2026-01-27 14:48:35 +0000 UTC" firstStartedPulling="2026-01-27 14:49:01.687019622 +0000 UTC m=+1197.363797087" lastFinishedPulling="2026-01-27 14:49:12.373301343 +0000 UTC m=+1208.050078808" observedRunningTime="2026-01-27 14:49:48.438988106 +0000 UTC m=+1244.115765581" watchObservedRunningTime="2026-01-27 14:49:48.45584244 +0000 UTC m=+1244.132619905"
Jan 27 14:49:48 crc kubenswrapper[4698]: I0127 14:49:48.945931 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-r8hzm"]
Jan 27 14:49:48 crc kubenswrapper[4698]: E0127 14:49:48.946338 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="07cca8aa-7a06-43e6-8499-a9a1372a01fa" containerName="mariadb-account-create-update"
"RemoveStaleState: removing container" podUID="07cca8aa-7a06-43e6-8499-a9a1372a01fa" containerName="mariadb-account-create-update" Jan 27 14:49:48 crc kubenswrapper[4698]: I0127 14:49:48.946351 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="07cca8aa-7a06-43e6-8499-a9a1372a01fa" containerName="mariadb-account-create-update" Jan 27 14:49:48 crc kubenswrapper[4698]: E0127 14:49:48.946364 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9b7832af-cd21-4035-ae87-b11268ea2564" containerName="mariadb-account-create-update" Jan 27 14:49:48 crc kubenswrapper[4698]: I0127 14:49:48.946370 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="9b7832af-cd21-4035-ae87-b11268ea2564" containerName="mariadb-account-create-update" Jan 27 14:49:48 crc kubenswrapper[4698]: E0127 14:49:48.946389 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="92d7277e-5ed9-480d-a115-1c3568be25a1" containerName="mariadb-database-create" Jan 27 14:49:48 crc kubenswrapper[4698]: I0127 14:49:48.946397 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="92d7277e-5ed9-480d-a115-1c3568be25a1" containerName="mariadb-database-create" Jan 27 14:49:48 crc kubenswrapper[4698]: E0127 14:49:48.946406 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fd96ec0a-9e80-4f7e-b009-b83aaa6e726e" containerName="swift-ring-rebalance" Jan 27 14:49:48 crc kubenswrapper[4698]: I0127 14:49:48.946413 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="fd96ec0a-9e80-4f7e-b009-b83aaa6e726e" containerName="swift-ring-rebalance" Jan 27 14:49:48 crc kubenswrapper[4698]: E0127 14:49:48.946427 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e58bbce4-4fb7-438d-bc96-1daafb04c867" containerName="mariadb-account-create-update" Jan 27 14:49:48 crc kubenswrapper[4698]: I0127 14:49:48.946432 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="e58bbce4-4fb7-438d-bc96-1daafb04c867" containerName="mariadb-account-create-update" Jan 27 14:49:48 crc kubenswrapper[4698]: E0127 14:49:48.946447 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7350c55d-9b7b-4bdb-a901-998578b4eea9" containerName="mariadb-database-create" Jan 27 14:49:48 crc kubenswrapper[4698]: I0127 14:49:48.946453 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="7350c55d-9b7b-4bdb-a901-998578b4eea9" containerName="mariadb-database-create" Jan 27 14:49:48 crc kubenswrapper[4698]: I0127 14:49:48.946628 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="7350c55d-9b7b-4bdb-a901-998578b4eea9" containerName="mariadb-database-create" Jan 27 14:49:48 crc kubenswrapper[4698]: I0127 14:49:48.946659 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="07cca8aa-7a06-43e6-8499-a9a1372a01fa" containerName="mariadb-account-create-update" Jan 27 14:49:48 crc kubenswrapper[4698]: I0127 14:49:48.946668 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="9b7832af-cd21-4035-ae87-b11268ea2564" containerName="mariadb-account-create-update" Jan 27 14:49:48 crc kubenswrapper[4698]: I0127 14:49:48.946682 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="e58bbce4-4fb7-438d-bc96-1daafb04c867" containerName="mariadb-account-create-update" Jan 27 14:49:48 crc kubenswrapper[4698]: I0127 14:49:48.946692 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="fd96ec0a-9e80-4f7e-b009-b83aaa6e726e" containerName="swift-ring-rebalance" Jan 27 14:49:48 crc kubenswrapper[4698]: I0127 14:49:48.946702 
4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="92d7277e-5ed9-480d-a115-1c3568be25a1" containerName="mariadb-database-create" Jan 27 14:49:48 crc kubenswrapper[4698]: I0127 14:49:48.947398 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-r8hzm" Jan 27 14:49:48 crc kubenswrapper[4698]: I0127 14:49:48.949085 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-mariadb-root-db-secret" Jan 27 14:49:48 crc kubenswrapper[4698]: I0127 14:49:48.955253 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-r8hzm"] Jan 27 14:49:49 crc kubenswrapper[4698]: I0127 14:49:49.104818 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/835737cf-874a-4c2d-9f03-62cd9cd42d23-operator-scripts\") pod \"root-account-create-update-r8hzm\" (UID: \"835737cf-874a-4c2d-9f03-62cd9cd42d23\") " pod="openstack/root-account-create-update-r8hzm" Jan 27 14:49:49 crc kubenswrapper[4698]: I0127 14:49:49.104873 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zp4gl\" (UniqueName: \"kubernetes.io/projected/835737cf-874a-4c2d-9f03-62cd9cd42d23-kube-api-access-zp4gl\") pod \"root-account-create-update-r8hzm\" (UID: \"835737cf-874a-4c2d-9f03-62cd9cd42d23\") " pod="openstack/root-account-create-update-r8hzm" Jan 27 14:49:49 crc kubenswrapper[4698]: I0127 14:49:49.206337 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/835737cf-874a-4c2d-9f03-62cd9cd42d23-operator-scripts\") pod \"root-account-create-update-r8hzm\" (UID: \"835737cf-874a-4c2d-9f03-62cd9cd42d23\") " pod="openstack/root-account-create-update-r8hzm" Jan 27 14:49:49 crc kubenswrapper[4698]: I0127 14:49:49.206397 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zp4gl\" (UniqueName: \"kubernetes.io/projected/835737cf-874a-4c2d-9f03-62cd9cd42d23-kube-api-access-zp4gl\") pod \"root-account-create-update-r8hzm\" (UID: \"835737cf-874a-4c2d-9f03-62cd9cd42d23\") " pod="openstack/root-account-create-update-r8hzm" Jan 27 14:49:49 crc kubenswrapper[4698]: I0127 14:49:49.207183 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/835737cf-874a-4c2d-9f03-62cd9cd42d23-operator-scripts\") pod \"root-account-create-update-r8hzm\" (UID: \"835737cf-874a-4c2d-9f03-62cd9cd42d23\") " pod="openstack/root-account-create-update-r8hzm" Jan 27 14:49:49 crc kubenswrapper[4698]: I0127 14:49:49.224395 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zp4gl\" (UniqueName: \"kubernetes.io/projected/835737cf-874a-4c2d-9f03-62cd9cd42d23-kube-api-access-zp4gl\") pod \"root-account-create-update-r8hzm\" (UID: \"835737cf-874a-4c2d-9f03-62cd9cd42d23\") " pod="openstack/root-account-create-update-r8hzm" Jan 27 14:49:49 crc kubenswrapper[4698]: I0127 14:49:49.269555 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-r8hzm" Jan 27 14:49:49 crc kubenswrapper[4698]: I0127 14:49:49.374061 4698 generic.go:334] "Generic (PLEG): container finished" podID="c686e168-f607-4b7f-a81d-f33ac8bdf513" containerID="bddaebda662a6d871fff02dfad71a498259b13e0f29d6a737909aa266958ebc4" exitCode=0 Jan 27 14:49:49 crc kubenswrapper[4698]: I0127 14:49:49.374121 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"c686e168-f607-4b7f-a81d-f33ac8bdf513","Type":"ContainerDied","Data":"bddaebda662a6d871fff02dfad71a498259b13e0f29d6a737909aa266958ebc4"} Jan 27 14:49:49 crc kubenswrapper[4698]: I0127 14:49:49.380352 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-notifications-server-0" event={"ID":"5d6f607c-3a31-4135-9eb4-3193e722d112","Type":"ContainerStarted","Data":"48ff2cfa2e2905123e8640fd906916eff0334ec2f7ea51184373394303e4eaac"} Jan 27 14:49:49 crc kubenswrapper[4698]: I0127 14:49:49.381053 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-notifications-server-0" Jan 27 14:49:49 crc kubenswrapper[4698]: I0127 14:49:49.502815 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-notifications-server-0" podStartSLOduration=61.217641354 podStartE2EDuration="1m13.502611026s" podCreationTimestamp="2026-01-27 14:48:36 +0000 UTC" firstStartedPulling="2026-01-27 14:49:00.751556746 +0000 UTC m=+1196.428334211" lastFinishedPulling="2026-01-27 14:49:13.036526408 +0000 UTC m=+1208.713303883" observedRunningTime="2026-01-27 14:49:49.439294959 +0000 UTC m=+1245.116072444" watchObservedRunningTime="2026-01-27 14:49:49.502611026 +0000 UTC m=+1245.179388491" Jan 27 14:49:49 crc kubenswrapper[4698]: I0127 14:49:49.812112 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-r8hzm"] Jan 27 14:49:49 crc kubenswrapper[4698]: W0127 14:49:49.812814 4698 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod835737cf_874a_4c2d_9f03_62cd9cd42d23.slice/crio-417f254f0bb29d9e307d8d0cb4ca7d8936f5d1050d6349962e3ec11be1f753c8 WatchSource:0}: Error finding container 417f254f0bb29d9e307d8d0cb4ca7d8936f5d1050d6349962e3ec11be1f753c8: Status 404 returned error can't find the container with id 417f254f0bb29d9e307d8d0cb4ca7d8936f5d1050d6349962e3ec11be1f753c8 Jan 27 14:49:50 crc kubenswrapper[4698]: I0127 14:49:50.389511 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-r8hzm" event={"ID":"835737cf-874a-4c2d-9f03-62cd9cd42d23","Type":"ContainerStarted","Data":"8567306808b79cf9391cb3a91d67cdc3d03165cc15c094566351ad57bea18fe5"} Jan 27 14:49:50 crc kubenswrapper[4698]: I0127 14:49:50.389936 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-r8hzm" event={"ID":"835737cf-874a-4c2d-9f03-62cd9cd42d23","Type":"ContainerStarted","Data":"417f254f0bb29d9e307d8d0cb4ca7d8936f5d1050d6349962e3ec11be1f753c8"} Jan 27 14:49:50 crc kubenswrapper[4698]: I0127 14:49:50.391563 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"c686e168-f607-4b7f-a81d-f33ac8bdf513","Type":"ContainerStarted","Data":"ffc759bc3e8336095230c0ea6835219b84461aafc06b8b8ea8d6476c00dd23bc"} Jan 27 14:49:50 crc kubenswrapper[4698]: I0127 14:49:50.391921 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack/rabbitmq-server-0" Jan 27 14:49:50 crc kubenswrapper[4698]: I0127 14:49:50.415511 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/root-account-create-update-r8hzm" podStartSLOduration=2.415486607 podStartE2EDuration="2.415486607s" podCreationTimestamp="2026-01-27 14:49:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:49:50.411865201 +0000 UTC m=+1246.088642666" watchObservedRunningTime="2026-01-27 14:49:50.415486607 +0000 UTC m=+1246.092264072" Jan 27 14:49:50 crc kubenswrapper[4698]: I0127 14:49:50.460201 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=64.079960848 podStartE2EDuration="1m15.460177394s" podCreationTimestamp="2026-01-27 14:48:35 +0000 UTC" firstStartedPulling="2026-01-27 14:49:01.661160921 +0000 UTC m=+1197.337938386" lastFinishedPulling="2026-01-27 14:49:13.041377457 +0000 UTC m=+1208.718154932" observedRunningTime="2026-01-27 14:49:50.457230176 +0000 UTC m=+1246.134007641" watchObservedRunningTime="2026-01-27 14:49:50.460177394 +0000 UTC m=+1246.136954859" Jan 27 14:49:50 crc kubenswrapper[4698]: I0127 14:49:50.486336 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/prometheus-metric-storage-0"] Jan 27 14:49:50 crc kubenswrapper[4698]: I0127 14:49:50.486658 4698 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="03d06c18-e82f-417e-b3bd-6365030bee53" containerName="prometheus" containerID="cri-o://5b5a465ed67389e53b2512cb71e47ddda2e7ad448de9a0e9c62f170c48b36281" gracePeriod=600 Jan 27 14:49:50 crc kubenswrapper[4698]: I0127 14:49:50.486695 4698 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="03d06c18-e82f-417e-b3bd-6365030bee53" containerName="thanos-sidecar" containerID="cri-o://70d70241edc485517e2accdceba323eb62510f58fdec600ff06ffddd74f3231c" gracePeriod=600 Jan 27 14:49:50 crc kubenswrapper[4698]: I0127 14:49:50.486706 4698 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="03d06c18-e82f-417e-b3bd-6365030bee53" containerName="config-reloader" containerID="cri-o://00617f05f323c99e0351309fe951b81fc7b3e739e068218aaab0bfa4a9a56533" gracePeriod=600 Jan 27 14:49:51 crc kubenswrapper[4698]: I0127 14:49:51.125924 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-cn5z6" Jan 27 14:49:51 crc kubenswrapper[4698]: I0127 14:49:51.179905 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-cn5z6" Jan 27 14:49:51 crc kubenswrapper[4698]: I0127 14:49:51.291205 4698 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/prometheus-metric-storage-0" Jan 27 14:49:51 crc kubenswrapper[4698]: I0127 14:49:51.413402 4698 generic.go:334] "Generic (PLEG): container finished" podID="835737cf-874a-4c2d-9f03-62cd9cd42d23" containerID="8567306808b79cf9391cb3a91d67cdc3d03165cc15c094566351ad57bea18fe5" exitCode=0 Jan 27 14:49:51 crc kubenswrapper[4698]: I0127 14:49:51.413467 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-r8hzm" event={"ID":"835737cf-874a-4c2d-9f03-62cd9cd42d23","Type":"ContainerDied","Data":"8567306808b79cf9391cb3a91d67cdc3d03165cc15c094566351ad57bea18fe5"} Jan 27 14:49:51 crc kubenswrapper[4698]: I0127 14:49:51.429188 4698 generic.go:334] "Generic (PLEG): container finished" podID="03d06c18-e82f-417e-b3bd-6365030bee53" containerID="70d70241edc485517e2accdceba323eb62510f58fdec600ff06ffddd74f3231c" exitCode=0 Jan 27 14:49:51 crc kubenswrapper[4698]: I0127 14:49:51.430567 4698 generic.go:334] "Generic (PLEG): container finished" podID="03d06c18-e82f-417e-b3bd-6365030bee53" containerID="00617f05f323c99e0351309fe951b81fc7b3e739e068218aaab0bfa4a9a56533" exitCode=0 Jan 27 14:49:51 crc kubenswrapper[4698]: I0127 14:49:51.430645 4698 generic.go:334] "Generic (PLEG): container finished" podID="03d06c18-e82f-417e-b3bd-6365030bee53" containerID="5b5a465ed67389e53b2512cb71e47ddda2e7ad448de9a0e9c62f170c48b36281" exitCode=0 Jan 27 14:49:51 crc kubenswrapper[4698]: I0127 14:49:51.431941 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Jan 27 14:49:51 crc kubenswrapper[4698]: I0127 14:49:51.432029 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"03d06c18-e82f-417e-b3bd-6365030bee53","Type":"ContainerDied","Data":"70d70241edc485517e2accdceba323eb62510f58fdec600ff06ffddd74f3231c"} Jan 27 14:49:51 crc kubenswrapper[4698]: I0127 14:49:51.432114 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"03d06c18-e82f-417e-b3bd-6365030bee53","Type":"ContainerDied","Data":"00617f05f323c99e0351309fe951b81fc7b3e739e068218aaab0bfa4a9a56533"} Jan 27 14:49:51 crc kubenswrapper[4698]: I0127 14:49:51.432130 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"03d06c18-e82f-417e-b3bd-6365030bee53","Type":"ContainerDied","Data":"5b5a465ed67389e53b2512cb71e47ddda2e7ad448de9a0e9c62f170c48b36281"} Jan 27 14:49:51 crc kubenswrapper[4698]: I0127 14:49:51.432149 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"03d06c18-e82f-417e-b3bd-6365030bee53","Type":"ContainerDied","Data":"4d2d4a587db2562d575f706211d8aad861ccf9d6087e984140b9ab1f919654f8"} Jan 27 14:49:51 crc kubenswrapper[4698]: I0127 14:49:51.432173 4698 scope.go:117] "RemoveContainer" containerID="70d70241edc485517e2accdceba323eb62510f58fdec600ff06ffddd74f3231c" Jan 27 14:49:51 crc kubenswrapper[4698]: I0127 14:49:51.466113 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/03d06c18-e82f-417e-b3bd-6365030bee53-web-config\") pod \"03d06c18-e82f-417e-b3bd-6365030bee53\" (UID: \"03d06c18-e82f-417e-b3bd-6365030bee53\") " Jan 27 14:49:51 crc kubenswrapper[4698]: I0127 14:49:51.466690 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/secret/03d06c18-e82f-417e-b3bd-6365030bee53-config\") pod \"03d06c18-e82f-417e-b3bd-6365030bee53\" (UID: \"03d06c18-e82f-417e-b3bd-6365030bee53\") " Jan 27 14:49:51 crc kubenswrapper[4698]: I0127 14:49:51.466974 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-db\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-7c238611-e381-428d-ba0c-da485ec04e87\") pod \"03d06c18-e82f-417e-b3bd-6365030bee53\" (UID: \"03d06c18-e82f-417e-b3bd-6365030bee53\") " Jan 27 14:49:51 crc kubenswrapper[4698]: I0127 14:49:51.467088 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/03d06c18-e82f-417e-b3bd-6365030bee53-thanos-prometheus-http-client-file\") pod \"03d06c18-e82f-417e-b3bd-6365030bee53\" (UID: \"03d06c18-e82f-417e-b3bd-6365030bee53\") " Jan 27 14:49:51 crc kubenswrapper[4698]: I0127 14:49:51.467203 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/03d06c18-e82f-417e-b3bd-6365030bee53-prometheus-metric-storage-rulefiles-2\") pod \"03d06c18-e82f-417e-b3bd-6365030bee53\" (UID: \"03d06c18-e82f-417e-b3bd-6365030bee53\") " Jan 27 14:49:51 crc kubenswrapper[4698]: I0127 14:49:51.467305 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/03d06c18-e82f-417e-b3bd-6365030bee53-tls-assets\") pod \"03d06c18-e82f-417e-b3bd-6365030bee53\" (UID: \"03d06c18-e82f-417e-b3bd-6365030bee53\") " Jan 27 14:49:51 crc kubenswrapper[4698]: I0127 14:49:51.467439 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/03d06c18-e82f-417e-b3bd-6365030bee53-config-out\") pod \"03d06c18-e82f-417e-b3bd-6365030bee53\" (UID: \"03d06c18-e82f-417e-b3bd-6365030bee53\") " Jan 27 14:49:51 crc kubenswrapper[4698]: I0127 14:49:51.467713 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/03d06c18-e82f-417e-b3bd-6365030bee53-prometheus-metric-storage-rulefiles-1\") pod \"03d06c18-e82f-417e-b3bd-6365030bee53\" (UID: \"03d06c18-e82f-417e-b3bd-6365030bee53\") " Jan 27 14:49:51 crc kubenswrapper[4698]: I0127 14:49:51.467821 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vhxgk\" (UniqueName: \"kubernetes.io/projected/03d06c18-e82f-417e-b3bd-6365030bee53-kube-api-access-vhxgk\") pod \"03d06c18-e82f-417e-b3bd-6365030bee53\" (UID: \"03d06c18-e82f-417e-b3bd-6365030bee53\") " Jan 27 14:49:51 crc kubenswrapper[4698]: I0127 14:49:51.467942 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/03d06c18-e82f-417e-b3bd-6365030bee53-prometheus-metric-storage-rulefiles-0\") pod \"03d06c18-e82f-417e-b3bd-6365030bee53\" (UID: \"03d06c18-e82f-417e-b3bd-6365030bee53\") " Jan 27 14:49:51 crc kubenswrapper[4698]: I0127 14:49:51.469732 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/03d06c18-e82f-417e-b3bd-6365030bee53-prometheus-metric-storage-rulefiles-1" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-1") pod "03d06c18-e82f-417e-b3bd-6365030bee53" (UID: 
"03d06c18-e82f-417e-b3bd-6365030bee53"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-1". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:49:51 crc kubenswrapper[4698]: I0127 14:49:51.471077 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/03d06c18-e82f-417e-b3bd-6365030bee53-prometheus-metric-storage-rulefiles-2" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-2") pod "03d06c18-e82f-417e-b3bd-6365030bee53" (UID: "03d06c18-e82f-417e-b3bd-6365030bee53"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-2". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:49:51 crc kubenswrapper[4698]: I0127 14:49:51.475302 4698 scope.go:117] "RemoveContainer" containerID="00617f05f323c99e0351309fe951b81fc7b3e739e068218aaab0bfa4a9a56533" Jan 27 14:49:51 crc kubenswrapper[4698]: I0127 14:49:51.478510 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/03d06c18-e82f-417e-b3bd-6365030bee53-tls-assets" (OuterVolumeSpecName: "tls-assets") pod "03d06c18-e82f-417e-b3bd-6365030bee53" (UID: "03d06c18-e82f-417e-b3bd-6365030bee53"). InnerVolumeSpecName "tls-assets". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:49:51 crc kubenswrapper[4698]: I0127 14:49:51.480476 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/03d06c18-e82f-417e-b3bd-6365030bee53-prometheus-metric-storage-rulefiles-0" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-0") pod "03d06c18-e82f-417e-b3bd-6365030bee53" (UID: "03d06c18-e82f-417e-b3bd-6365030bee53"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:49:51 crc kubenswrapper[4698]: I0127 14:49:51.481096 4698 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/03d06c18-e82f-417e-b3bd-6365030bee53-prometheus-metric-storage-rulefiles-2\") on node \"crc\" DevicePath \"\"" Jan 27 14:49:51 crc kubenswrapper[4698]: I0127 14:49:51.481196 4698 reconciler_common.go:293] "Volume detached for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/03d06c18-e82f-417e-b3bd-6365030bee53-tls-assets\") on node \"crc\" DevicePath \"\"" Jan 27 14:49:51 crc kubenswrapper[4698]: I0127 14:49:51.481292 4698 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/03d06c18-e82f-417e-b3bd-6365030bee53-prometheus-metric-storage-rulefiles-1\") on node \"crc\" DevicePath \"\"" Jan 27 14:49:51 crc kubenswrapper[4698]: I0127 14:49:51.481373 4698 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/03d06c18-e82f-417e-b3bd-6365030bee53-prometheus-metric-storage-rulefiles-0\") on node \"crc\" DevicePath \"\"" Jan 27 14:49:51 crc kubenswrapper[4698]: I0127 14:49:51.495830 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/03d06c18-e82f-417e-b3bd-6365030bee53-kube-api-access-vhxgk" (OuterVolumeSpecName: "kube-api-access-vhxgk") pod "03d06c18-e82f-417e-b3bd-6365030bee53" (UID: "03d06c18-e82f-417e-b3bd-6365030bee53"). InnerVolumeSpecName "kube-api-access-vhxgk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:49:51 crc kubenswrapper[4698]: I0127 14:49:51.496393 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/03d06c18-e82f-417e-b3bd-6365030bee53-thanos-prometheus-http-client-file" (OuterVolumeSpecName: "thanos-prometheus-http-client-file") pod "03d06c18-e82f-417e-b3bd-6365030bee53" (UID: "03d06c18-e82f-417e-b3bd-6365030bee53"). InnerVolumeSpecName "thanos-prometheus-http-client-file". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:49:51 crc kubenswrapper[4698]: I0127 14:49:51.502783 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/03d06c18-e82f-417e-b3bd-6365030bee53-config-out" (OuterVolumeSpecName: "config-out") pod "03d06c18-e82f-417e-b3bd-6365030bee53" (UID: "03d06c18-e82f-417e-b3bd-6365030bee53"). InnerVolumeSpecName "config-out". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 14:49:51 crc kubenswrapper[4698]: I0127 14:49:51.519394 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/03d06c18-e82f-417e-b3bd-6365030bee53-config" (OuterVolumeSpecName: "config") pod "03d06c18-e82f-417e-b3bd-6365030bee53" (UID: "03d06c18-e82f-417e-b3bd-6365030bee53"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:49:51 crc kubenswrapper[4698]: I0127 14:49:51.543813 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/03d06c18-e82f-417e-b3bd-6365030bee53-web-config" (OuterVolumeSpecName: "web-config") pod "03d06c18-e82f-417e-b3bd-6365030bee53" (UID: "03d06c18-e82f-417e-b3bd-6365030bee53"). InnerVolumeSpecName "web-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:49:51 crc kubenswrapper[4698]: I0127 14:49:51.556161 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-7c238611-e381-428d-ba0c-da485ec04e87" (OuterVolumeSpecName: "prometheus-metric-storage-db") pod "03d06c18-e82f-417e-b3bd-6365030bee53" (UID: "03d06c18-e82f-417e-b3bd-6365030bee53"). InnerVolumeSpecName "pvc-7c238611-e381-428d-ba0c-da485ec04e87". 
PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 27 14:49:51 crc kubenswrapper[4698]: I0127 14:49:51.583111 4698 reconciler_common.go:293] "Volume detached for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/03d06c18-e82f-417e-b3bd-6365030bee53-web-config\") on node \"crc\" DevicePath \"\"" Jan 27 14:49:51 crc kubenswrapper[4698]: I0127 14:49:51.583157 4698 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/03d06c18-e82f-417e-b3bd-6365030bee53-config\") on node \"crc\" DevicePath \"\"" Jan 27 14:49:51 crc kubenswrapper[4698]: I0127 14:49:51.583189 4698 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-7c238611-e381-428d-ba0c-da485ec04e87\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-7c238611-e381-428d-ba0c-da485ec04e87\") on node \"crc\" " Jan 27 14:49:51 crc kubenswrapper[4698]: I0127 14:49:51.583206 4698 reconciler_common.go:293] "Volume detached for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/03d06c18-e82f-417e-b3bd-6365030bee53-thanos-prometheus-http-client-file\") on node \"crc\" DevicePath \"\"" Jan 27 14:49:51 crc kubenswrapper[4698]: I0127 14:49:51.583222 4698 reconciler_common.go:293] "Volume detached for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/03d06c18-e82f-417e-b3bd-6365030bee53-config-out\") on node \"crc\" DevicePath \"\"" Jan 27 14:49:51 crc kubenswrapper[4698]: I0127 14:49:51.583239 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vhxgk\" (UniqueName: \"kubernetes.io/projected/03d06c18-e82f-417e-b3bd-6365030bee53-kube-api-access-vhxgk\") on node \"crc\" DevicePath \"\"" Jan 27 14:49:51 crc kubenswrapper[4698]: I0127 14:49:51.623991 4698 scope.go:117] "RemoveContainer" containerID="5b5a465ed67389e53b2512cb71e47ddda2e7ad448de9a0e9c62f170c48b36281" Jan 27 14:49:51 crc kubenswrapper[4698]: I0127 14:49:51.634837 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-7swbz-config-x7d8f"] Jan 27 14:49:51 crc kubenswrapper[4698]: E0127 14:49:51.635252 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="03d06c18-e82f-417e-b3bd-6365030bee53" containerName="prometheus" Jan 27 14:49:51 crc kubenswrapper[4698]: I0127 14:49:51.635264 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="03d06c18-e82f-417e-b3bd-6365030bee53" containerName="prometheus" Jan 27 14:49:51 crc kubenswrapper[4698]: E0127 14:49:51.635279 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="03d06c18-e82f-417e-b3bd-6365030bee53" containerName="thanos-sidecar" Jan 27 14:49:51 crc kubenswrapper[4698]: I0127 14:49:51.635285 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="03d06c18-e82f-417e-b3bd-6365030bee53" containerName="thanos-sidecar" Jan 27 14:49:51 crc kubenswrapper[4698]: E0127 14:49:51.635293 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="03d06c18-e82f-417e-b3bd-6365030bee53" containerName="init-config-reloader" Jan 27 14:49:51 crc kubenswrapper[4698]: I0127 14:49:51.635299 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="03d06c18-e82f-417e-b3bd-6365030bee53" containerName="init-config-reloader" Jan 27 14:49:51 crc kubenswrapper[4698]: E0127 14:49:51.635316 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="03d06c18-e82f-417e-b3bd-6365030bee53" containerName="config-reloader" Jan 27 14:49:51 crc kubenswrapper[4698]: I0127 14:49:51.635322 4698 state_mem.go:107] 
"Deleted CPUSet assignment" podUID="03d06c18-e82f-417e-b3bd-6365030bee53" containerName="config-reloader" Jan 27 14:49:51 crc kubenswrapper[4698]: I0127 14:49:51.635526 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="03d06c18-e82f-417e-b3bd-6365030bee53" containerName="config-reloader" Jan 27 14:49:51 crc kubenswrapper[4698]: I0127 14:49:51.635548 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="03d06c18-e82f-417e-b3bd-6365030bee53" containerName="prometheus" Jan 27 14:49:51 crc kubenswrapper[4698]: I0127 14:49:51.635558 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="03d06c18-e82f-417e-b3bd-6365030bee53" containerName="thanos-sidecar" Jan 27 14:49:51 crc kubenswrapper[4698]: I0127 14:49:51.636108 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-7swbz-config-x7d8f" Jan 27 14:49:51 crc kubenswrapper[4698]: I0127 14:49:51.658595 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Jan 27 14:49:51 crc kubenswrapper[4698]: I0127 14:49:51.677136 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-7swbz-config-x7d8f"] Jan 27 14:49:51 crc kubenswrapper[4698]: I0127 14:49:51.694123 4698 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... Jan 27 14:49:51 crc kubenswrapper[4698]: I0127 14:49:51.694539 4698 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-7c238611-e381-428d-ba0c-da485ec04e87" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-7c238611-e381-428d-ba0c-da485ec04e87") on node "crc" Jan 27 14:49:51 crc kubenswrapper[4698]: I0127 14:49:51.697742 4698 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-7swbz" podUID="f9e1f6c1-39e5-4d7a-ac6a-da53c6fe143b" containerName="ovn-controller" probeResult="failure" output=< Jan 27 14:49:51 crc kubenswrapper[4698]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Jan 27 14:49:51 crc kubenswrapper[4698]: > Jan 27 14:49:51 crc kubenswrapper[4698]: I0127 14:49:51.751954 4698 scope.go:117] "RemoveContainer" containerID="31d868c1e1393239086f38c63c9950a69ffbfbc96fdfdc18edfdc5dc6bab0a58" Jan 27 14:49:51 crc kubenswrapper[4698]: I0127 14:49:51.773773 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/prometheus-metric-storage-0"] Jan 27 14:49:51 crc kubenswrapper[4698]: I0127 14:49:51.792111 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l9f7x\" (UniqueName: \"kubernetes.io/projected/2e8bfbf5-6777-4cde-b7b5-f227d88dbbc0-kube-api-access-l9f7x\") pod \"ovn-controller-7swbz-config-x7d8f\" (UID: \"2e8bfbf5-6777-4cde-b7b5-f227d88dbbc0\") " pod="openstack/ovn-controller-7swbz-config-x7d8f" Jan 27 14:49:51 crc kubenswrapper[4698]: I0127 14:49:51.792163 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2e8bfbf5-6777-4cde-b7b5-f227d88dbbc0-scripts\") pod \"ovn-controller-7swbz-config-x7d8f\" (UID: \"2e8bfbf5-6777-4cde-b7b5-f227d88dbbc0\") " pod="openstack/ovn-controller-7swbz-config-x7d8f" Jan 27 14:49:51 crc kubenswrapper[4698]: I0127 14:49:51.792193 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: 
\"kubernetes.io/host-path/2e8bfbf5-6777-4cde-b7b5-f227d88dbbc0-var-run-ovn\") pod \"ovn-controller-7swbz-config-x7d8f\" (UID: \"2e8bfbf5-6777-4cde-b7b5-f227d88dbbc0\") " pod="openstack/ovn-controller-7swbz-config-x7d8f" Jan 27 14:49:51 crc kubenswrapper[4698]: I0127 14:49:51.792231 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/2e8bfbf5-6777-4cde-b7b5-f227d88dbbc0-var-log-ovn\") pod \"ovn-controller-7swbz-config-x7d8f\" (UID: \"2e8bfbf5-6777-4cde-b7b5-f227d88dbbc0\") " pod="openstack/ovn-controller-7swbz-config-x7d8f" Jan 27 14:49:51 crc kubenswrapper[4698]: I0127 14:49:51.792320 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/2e8bfbf5-6777-4cde-b7b5-f227d88dbbc0-additional-scripts\") pod \"ovn-controller-7swbz-config-x7d8f\" (UID: \"2e8bfbf5-6777-4cde-b7b5-f227d88dbbc0\") " pod="openstack/ovn-controller-7swbz-config-x7d8f" Jan 27 14:49:51 crc kubenswrapper[4698]: I0127 14:49:51.792403 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/2e8bfbf5-6777-4cde-b7b5-f227d88dbbc0-var-run\") pod \"ovn-controller-7swbz-config-x7d8f\" (UID: \"2e8bfbf5-6777-4cde-b7b5-f227d88dbbc0\") " pod="openstack/ovn-controller-7swbz-config-x7d8f" Jan 27 14:49:51 crc kubenswrapper[4698]: I0127 14:49:51.792488 4698 reconciler_common.go:293] "Volume detached for volume \"pvc-7c238611-e381-428d-ba0c-da485ec04e87\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-7c238611-e381-428d-ba0c-da485ec04e87\") on node \"crc\" DevicePath \"\"" Jan 27 14:49:51 crc kubenswrapper[4698]: I0127 14:49:51.795231 4698 scope.go:117] "RemoveContainer" containerID="70d70241edc485517e2accdceba323eb62510f58fdec600ff06ffddd74f3231c" Jan 27 14:49:51 crc kubenswrapper[4698]: E0127 14:49:51.795987 4698 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"70d70241edc485517e2accdceba323eb62510f58fdec600ff06ffddd74f3231c\": container with ID starting with 70d70241edc485517e2accdceba323eb62510f58fdec600ff06ffddd74f3231c not found: ID does not exist" containerID="70d70241edc485517e2accdceba323eb62510f58fdec600ff06ffddd74f3231c" Jan 27 14:49:51 crc kubenswrapper[4698]: I0127 14:49:51.796024 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"70d70241edc485517e2accdceba323eb62510f58fdec600ff06ffddd74f3231c"} err="failed to get container status \"70d70241edc485517e2accdceba323eb62510f58fdec600ff06ffddd74f3231c\": rpc error: code = NotFound desc = could not find container \"70d70241edc485517e2accdceba323eb62510f58fdec600ff06ffddd74f3231c\": container with ID starting with 70d70241edc485517e2accdceba323eb62510f58fdec600ff06ffddd74f3231c not found: ID does not exist" Jan 27 14:49:51 crc kubenswrapper[4698]: I0127 14:49:51.796050 4698 scope.go:117] "RemoveContainer" containerID="00617f05f323c99e0351309fe951b81fc7b3e739e068218aaab0bfa4a9a56533" Jan 27 14:49:51 crc kubenswrapper[4698]: E0127 14:49:51.796437 4698 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"00617f05f323c99e0351309fe951b81fc7b3e739e068218aaab0bfa4a9a56533\": container with ID starting with 00617f05f323c99e0351309fe951b81fc7b3e739e068218aaab0bfa4a9a56533 not found: 
ID does not exist" containerID="00617f05f323c99e0351309fe951b81fc7b3e739e068218aaab0bfa4a9a56533" Jan 27 14:49:51 crc kubenswrapper[4698]: I0127 14:49:51.796459 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"00617f05f323c99e0351309fe951b81fc7b3e739e068218aaab0bfa4a9a56533"} err="failed to get container status \"00617f05f323c99e0351309fe951b81fc7b3e739e068218aaab0bfa4a9a56533\": rpc error: code = NotFound desc = could not find container \"00617f05f323c99e0351309fe951b81fc7b3e739e068218aaab0bfa4a9a56533\": container with ID starting with 00617f05f323c99e0351309fe951b81fc7b3e739e068218aaab0bfa4a9a56533 not found: ID does not exist" Jan 27 14:49:51 crc kubenswrapper[4698]: I0127 14:49:51.796484 4698 scope.go:117] "RemoveContainer" containerID="5b5a465ed67389e53b2512cb71e47ddda2e7ad448de9a0e9c62f170c48b36281" Jan 27 14:49:51 crc kubenswrapper[4698]: E0127 14:49:51.796698 4698 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5b5a465ed67389e53b2512cb71e47ddda2e7ad448de9a0e9c62f170c48b36281\": container with ID starting with 5b5a465ed67389e53b2512cb71e47ddda2e7ad448de9a0e9c62f170c48b36281 not found: ID does not exist" containerID="5b5a465ed67389e53b2512cb71e47ddda2e7ad448de9a0e9c62f170c48b36281" Jan 27 14:49:51 crc kubenswrapper[4698]: I0127 14:49:51.796719 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5b5a465ed67389e53b2512cb71e47ddda2e7ad448de9a0e9c62f170c48b36281"} err="failed to get container status \"5b5a465ed67389e53b2512cb71e47ddda2e7ad448de9a0e9c62f170c48b36281\": rpc error: code = NotFound desc = could not find container \"5b5a465ed67389e53b2512cb71e47ddda2e7ad448de9a0e9c62f170c48b36281\": container with ID starting with 5b5a465ed67389e53b2512cb71e47ddda2e7ad448de9a0e9c62f170c48b36281 not found: ID does not exist" Jan 27 14:49:51 crc kubenswrapper[4698]: I0127 14:49:51.796743 4698 scope.go:117] "RemoveContainer" containerID="31d868c1e1393239086f38c63c9950a69ffbfbc96fdfdc18edfdc5dc6bab0a58" Jan 27 14:49:51 crc kubenswrapper[4698]: E0127 14:49:51.796928 4698 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"31d868c1e1393239086f38c63c9950a69ffbfbc96fdfdc18edfdc5dc6bab0a58\": container with ID starting with 31d868c1e1393239086f38c63c9950a69ffbfbc96fdfdc18edfdc5dc6bab0a58 not found: ID does not exist" containerID="31d868c1e1393239086f38c63c9950a69ffbfbc96fdfdc18edfdc5dc6bab0a58" Jan 27 14:49:51 crc kubenswrapper[4698]: I0127 14:49:51.796948 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"31d868c1e1393239086f38c63c9950a69ffbfbc96fdfdc18edfdc5dc6bab0a58"} err="failed to get container status \"31d868c1e1393239086f38c63c9950a69ffbfbc96fdfdc18edfdc5dc6bab0a58\": rpc error: code = NotFound desc = could not find container \"31d868c1e1393239086f38c63c9950a69ffbfbc96fdfdc18edfdc5dc6bab0a58\": container with ID starting with 31d868c1e1393239086f38c63c9950a69ffbfbc96fdfdc18edfdc5dc6bab0a58 not found: ID does not exist" Jan 27 14:49:51 crc kubenswrapper[4698]: I0127 14:49:51.796975 4698 scope.go:117] "RemoveContainer" containerID="70d70241edc485517e2accdceba323eb62510f58fdec600ff06ffddd74f3231c" Jan 27 14:49:51 crc kubenswrapper[4698]: I0127 14:49:51.797164 4698 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"70d70241edc485517e2accdceba323eb62510f58fdec600ff06ffddd74f3231c"} err="failed to get container status \"70d70241edc485517e2accdceba323eb62510f58fdec600ff06ffddd74f3231c\": rpc error: code = NotFound desc = could not find container \"70d70241edc485517e2accdceba323eb62510f58fdec600ff06ffddd74f3231c\": container with ID starting with 70d70241edc485517e2accdceba323eb62510f58fdec600ff06ffddd74f3231c not found: ID does not exist" Jan 27 14:49:51 crc kubenswrapper[4698]: I0127 14:49:51.797183 4698 scope.go:117] "RemoveContainer" containerID="00617f05f323c99e0351309fe951b81fc7b3e739e068218aaab0bfa4a9a56533" Jan 27 14:49:51 crc kubenswrapper[4698]: I0127 14:49:51.797386 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"00617f05f323c99e0351309fe951b81fc7b3e739e068218aaab0bfa4a9a56533"} err="failed to get container status \"00617f05f323c99e0351309fe951b81fc7b3e739e068218aaab0bfa4a9a56533\": rpc error: code = NotFound desc = could not find container \"00617f05f323c99e0351309fe951b81fc7b3e739e068218aaab0bfa4a9a56533\": container with ID starting with 00617f05f323c99e0351309fe951b81fc7b3e739e068218aaab0bfa4a9a56533 not found: ID does not exist" Jan 27 14:49:51 crc kubenswrapper[4698]: I0127 14:49:51.797407 4698 scope.go:117] "RemoveContainer" containerID="5b5a465ed67389e53b2512cb71e47ddda2e7ad448de9a0e9c62f170c48b36281" Jan 27 14:49:51 crc kubenswrapper[4698]: I0127 14:49:51.797602 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5b5a465ed67389e53b2512cb71e47ddda2e7ad448de9a0e9c62f170c48b36281"} err="failed to get container status \"5b5a465ed67389e53b2512cb71e47ddda2e7ad448de9a0e9c62f170c48b36281\": rpc error: code = NotFound desc = could not find container \"5b5a465ed67389e53b2512cb71e47ddda2e7ad448de9a0e9c62f170c48b36281\": container with ID starting with 5b5a465ed67389e53b2512cb71e47ddda2e7ad448de9a0e9c62f170c48b36281 not found: ID does not exist" Jan 27 14:49:51 crc kubenswrapper[4698]: I0127 14:49:51.797620 4698 scope.go:117] "RemoveContainer" containerID="31d868c1e1393239086f38c63c9950a69ffbfbc96fdfdc18edfdc5dc6bab0a58" Jan 27 14:49:51 crc kubenswrapper[4698]: I0127 14:49:51.797834 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"31d868c1e1393239086f38c63c9950a69ffbfbc96fdfdc18edfdc5dc6bab0a58"} err="failed to get container status \"31d868c1e1393239086f38c63c9950a69ffbfbc96fdfdc18edfdc5dc6bab0a58\": rpc error: code = NotFound desc = could not find container \"31d868c1e1393239086f38c63c9950a69ffbfbc96fdfdc18edfdc5dc6bab0a58\": container with ID starting with 31d868c1e1393239086f38c63c9950a69ffbfbc96fdfdc18edfdc5dc6bab0a58 not found: ID does not exist" Jan 27 14:49:51 crc kubenswrapper[4698]: I0127 14:49:51.797874 4698 scope.go:117] "RemoveContainer" containerID="70d70241edc485517e2accdceba323eb62510f58fdec600ff06ffddd74f3231c" Jan 27 14:49:51 crc kubenswrapper[4698]: I0127 14:49:51.798058 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"70d70241edc485517e2accdceba323eb62510f58fdec600ff06ffddd74f3231c"} err="failed to get container status \"70d70241edc485517e2accdceba323eb62510f58fdec600ff06ffddd74f3231c\": rpc error: code = NotFound desc = could not find container \"70d70241edc485517e2accdceba323eb62510f58fdec600ff06ffddd74f3231c\": container with ID starting with 70d70241edc485517e2accdceba323eb62510f58fdec600ff06ffddd74f3231c not found: ID does not exist" Jan 
27 14:49:51 crc kubenswrapper[4698]: I0127 14:49:51.798072 4698 scope.go:117] "RemoveContainer" containerID="00617f05f323c99e0351309fe951b81fc7b3e739e068218aaab0bfa4a9a56533" Jan 27 14:49:51 crc kubenswrapper[4698]: I0127 14:49:51.798272 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"00617f05f323c99e0351309fe951b81fc7b3e739e068218aaab0bfa4a9a56533"} err="failed to get container status \"00617f05f323c99e0351309fe951b81fc7b3e739e068218aaab0bfa4a9a56533\": rpc error: code = NotFound desc = could not find container \"00617f05f323c99e0351309fe951b81fc7b3e739e068218aaab0bfa4a9a56533\": container with ID starting with 00617f05f323c99e0351309fe951b81fc7b3e739e068218aaab0bfa4a9a56533 not found: ID does not exist" Jan 27 14:49:51 crc kubenswrapper[4698]: I0127 14:49:51.798289 4698 scope.go:117] "RemoveContainer" containerID="5b5a465ed67389e53b2512cb71e47ddda2e7ad448de9a0e9c62f170c48b36281" Jan 27 14:49:51 crc kubenswrapper[4698]: I0127 14:49:51.798504 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5b5a465ed67389e53b2512cb71e47ddda2e7ad448de9a0e9c62f170c48b36281"} err="failed to get container status \"5b5a465ed67389e53b2512cb71e47ddda2e7ad448de9a0e9c62f170c48b36281\": rpc error: code = NotFound desc = could not find container \"5b5a465ed67389e53b2512cb71e47ddda2e7ad448de9a0e9c62f170c48b36281\": container with ID starting with 5b5a465ed67389e53b2512cb71e47ddda2e7ad448de9a0e9c62f170c48b36281 not found: ID does not exist" Jan 27 14:49:51 crc kubenswrapper[4698]: I0127 14:49:51.798518 4698 scope.go:117] "RemoveContainer" containerID="31d868c1e1393239086f38c63c9950a69ffbfbc96fdfdc18edfdc5dc6bab0a58" Jan 27 14:49:51 crc kubenswrapper[4698]: I0127 14:49:51.798734 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"31d868c1e1393239086f38c63c9950a69ffbfbc96fdfdc18edfdc5dc6bab0a58"} err="failed to get container status \"31d868c1e1393239086f38c63c9950a69ffbfbc96fdfdc18edfdc5dc6bab0a58\": rpc error: code = NotFound desc = could not find container \"31d868c1e1393239086f38c63c9950a69ffbfbc96fdfdc18edfdc5dc6bab0a58\": container with ID starting with 31d868c1e1393239086f38c63c9950a69ffbfbc96fdfdc18edfdc5dc6bab0a58 not found: ID does not exist" Jan 27 14:49:51 crc kubenswrapper[4698]: I0127 14:49:51.800380 4698 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/prometheus-metric-storage-0"] Jan 27 14:49:51 crc kubenswrapper[4698]: I0127 14:49:51.818574 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/prometheus-metric-storage-0"] Jan 27 14:49:51 crc kubenswrapper[4698]: I0127 14:49:51.821106 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/prometheus-metric-storage-0" Jan 27 14:49:51 crc kubenswrapper[4698]: I0127 14:49:51.826019 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-web-config" Jan 27 14:49:51 crc kubenswrapper[4698]: I0127 14:49:51.826213 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage" Jan 27 14:49:51 crc kubenswrapper[4698]: I0127 14:49:51.826316 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-2" Jan 27 14:49:51 crc kubenswrapper[4698]: I0127 14:49:51.826437 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-1" Jan 27 14:49:51 crc kubenswrapper[4698]: I0127 14:49:51.832274 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-thanos-prometheus-http-client-file" Jan 27 14:49:51 crc kubenswrapper[4698]: I0127 14:49:51.832419 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"metric-storage-prometheus-dockercfg-tg7cd" Jan 27 14:49:51 crc kubenswrapper[4698]: I0127 14:49:51.832518 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-metric-storage-prometheus-svc" Jan 27 14:49:51 crc kubenswrapper[4698]: I0127 14:49:51.834047 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-0" Jan 27 14:49:51 crc kubenswrapper[4698]: I0127 14:49:51.838120 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-tls-assets-0" Jan 27 14:49:51 crc kubenswrapper[4698]: I0127 14:49:51.855626 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Jan 27 14:49:51 crc kubenswrapper[4698]: I0127 14:49:51.894500 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/19218e14-04c7-40a9-b2e7-2873e9cdbe82-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"19218e14-04c7-40a9-b2e7-2873e9cdbe82\") " pod="openstack/prometheus-metric-storage-0" Jan 27 14:49:51 crc kubenswrapper[4698]: I0127 14:49:51.894934 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/19218e14-04c7-40a9-b2e7-2873e9cdbe82-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"19218e14-04c7-40a9-b2e7-2873e9cdbe82\") " pod="openstack/prometheus-metric-storage-0" Jan 27 14:49:51 crc kubenswrapper[4698]: I0127 14:49:51.895307 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-7c238611-e381-428d-ba0c-da485ec04e87\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-7c238611-e381-428d-ba0c-da485ec04e87\") pod \"prometheus-metric-storage-0\" (UID: \"19218e14-04c7-40a9-b2e7-2873e9cdbe82\") " pod="openstack/prometheus-metric-storage-0" Jan 27 14:49:51 crc kubenswrapper[4698]: I0127 14:49:51.896781 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/2e8bfbf5-6777-4cde-b7b5-f227d88dbbc0-additional-scripts\") pod \"ovn-controller-7swbz-config-x7d8f\" 
(UID: \"2e8bfbf5-6777-4cde-b7b5-f227d88dbbc0\") " pod="openstack/ovn-controller-7swbz-config-x7d8f" Jan 27 14:49:51 crc kubenswrapper[4698]: I0127 14:49:51.897186 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/19218e14-04c7-40a9-b2e7-2873e9cdbe82-config\") pod \"prometheus-metric-storage-0\" (UID: \"19218e14-04c7-40a9-b2e7-2873e9cdbe82\") " pod="openstack/prometheus-metric-storage-0" Jan 27 14:49:51 crc kubenswrapper[4698]: I0127 14:49:51.897416 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/19218e14-04c7-40a9-b2e7-2873e9cdbe82-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"19218e14-04c7-40a9-b2e7-2873e9cdbe82\") " pod="openstack/prometheus-metric-storage-0" Jan 27 14:49:51 crc kubenswrapper[4698]: I0127 14:49:51.897606 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/19218e14-04c7-40a9-b2e7-2873e9cdbe82-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"19218e14-04c7-40a9-b2e7-2873e9cdbe82\") " pod="openstack/prometheus-metric-storage-0" Jan 27 14:49:51 crc kubenswrapper[4698]: I0127 14:49:51.897735 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/2e8bfbf5-6777-4cde-b7b5-f227d88dbbc0-var-run\") pod \"ovn-controller-7swbz-config-x7d8f\" (UID: \"2e8bfbf5-6777-4cde-b7b5-f227d88dbbc0\") " pod="openstack/ovn-controller-7swbz-config-x7d8f" Jan 27 14:49:51 crc kubenswrapper[4698]: I0127 14:49:51.897958 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/19218e14-04c7-40a9-b2e7-2873e9cdbe82-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"19218e14-04c7-40a9-b2e7-2873e9cdbe82\") " pod="openstack/prometheus-metric-storage-0" Jan 27 14:49:51 crc kubenswrapper[4698]: I0127 14:49:51.898074 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/19218e14-04c7-40a9-b2e7-2873e9cdbe82-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"19218e14-04c7-40a9-b2e7-2873e9cdbe82\") " pod="openstack/prometheus-metric-storage-0" Jan 27 14:49:51 crc kubenswrapper[4698]: I0127 14:49:51.898196 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/19218e14-04c7-40a9-b2e7-2873e9cdbe82-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"19218e14-04c7-40a9-b2e7-2873e9cdbe82\") " pod="openstack/prometheus-metric-storage-0" Jan 27 14:49:51 crc kubenswrapper[4698]: I0127 14:49:51.898299 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mhlmt\" (UniqueName: \"kubernetes.io/projected/19218e14-04c7-40a9-b2e7-2873e9cdbe82-kube-api-access-mhlmt\") pod \"prometheus-metric-storage-0\" (UID: \"19218e14-04c7-40a9-b2e7-2873e9cdbe82\") " pod="openstack/prometheus-metric-storage-0" Jan 27 14:49:51 crc kubenswrapper[4698]: I0127 14:49:51.898537 4698 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l9f7x\" (UniqueName: \"kubernetes.io/projected/2e8bfbf5-6777-4cde-b7b5-f227d88dbbc0-kube-api-access-l9f7x\") pod \"ovn-controller-7swbz-config-x7d8f\" (UID: \"2e8bfbf5-6777-4cde-b7b5-f227d88dbbc0\") " pod="openstack/ovn-controller-7swbz-config-x7d8f" Jan 27 14:49:51 crc kubenswrapper[4698]: I0127 14:49:51.898789 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2e8bfbf5-6777-4cde-b7b5-f227d88dbbc0-scripts\") pod \"ovn-controller-7swbz-config-x7d8f\" (UID: \"2e8bfbf5-6777-4cde-b7b5-f227d88dbbc0\") " pod="openstack/ovn-controller-7swbz-config-x7d8f" Jan 27 14:49:51 crc kubenswrapper[4698]: I0127 14:49:51.898893 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/2e8bfbf5-6777-4cde-b7b5-f227d88dbbc0-var-run-ovn\") pod \"ovn-controller-7swbz-config-x7d8f\" (UID: \"2e8bfbf5-6777-4cde-b7b5-f227d88dbbc0\") " pod="openstack/ovn-controller-7swbz-config-x7d8f" Jan 27 14:49:51 crc kubenswrapper[4698]: I0127 14:49:51.898992 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/19218e14-04c7-40a9-b2e7-2873e9cdbe82-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"19218e14-04c7-40a9-b2e7-2873e9cdbe82\") " pod="openstack/prometheus-metric-storage-0" Jan 27 14:49:51 crc kubenswrapper[4698]: I0127 14:49:51.899106 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/19218e14-04c7-40a9-b2e7-2873e9cdbe82-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"19218e14-04c7-40a9-b2e7-2873e9cdbe82\") " pod="openstack/prometheus-metric-storage-0" Jan 27 14:49:51 crc kubenswrapper[4698]: I0127 14:49:51.899316 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/2e8bfbf5-6777-4cde-b7b5-f227d88dbbc0-var-log-ovn\") pod \"ovn-controller-7swbz-config-x7d8f\" (UID: \"2e8bfbf5-6777-4cde-b7b5-f227d88dbbc0\") " pod="openstack/ovn-controller-7swbz-config-x7d8f" Jan 27 14:49:51 crc kubenswrapper[4698]: I0127 14:49:51.899409 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/19218e14-04c7-40a9-b2e7-2873e9cdbe82-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"19218e14-04c7-40a9-b2e7-2873e9cdbe82\") " pod="openstack/prometheus-metric-storage-0" Jan 27 14:49:51 crc kubenswrapper[4698]: I0127 14:49:51.898014 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/2e8bfbf5-6777-4cde-b7b5-f227d88dbbc0-var-run\") pod \"ovn-controller-7swbz-config-x7d8f\" (UID: \"2e8bfbf5-6777-4cde-b7b5-f227d88dbbc0\") " pod="openstack/ovn-controller-7swbz-config-x7d8f" Jan 27 14:49:51 crc kubenswrapper[4698]: I0127 14:49:51.897770 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/2e8bfbf5-6777-4cde-b7b5-f227d88dbbc0-additional-scripts\") pod \"ovn-controller-7swbz-config-x7d8f\" (UID: 
\"2e8bfbf5-6777-4cde-b7b5-f227d88dbbc0\") " pod="openstack/ovn-controller-7swbz-config-x7d8f" Jan 27 14:49:51 crc kubenswrapper[4698]: I0127 14:49:51.900521 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/2e8bfbf5-6777-4cde-b7b5-f227d88dbbc0-var-log-ovn\") pod \"ovn-controller-7swbz-config-x7d8f\" (UID: \"2e8bfbf5-6777-4cde-b7b5-f227d88dbbc0\") " pod="openstack/ovn-controller-7swbz-config-x7d8f" Jan 27 14:49:51 crc kubenswrapper[4698]: I0127 14:49:51.900705 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/2e8bfbf5-6777-4cde-b7b5-f227d88dbbc0-var-run-ovn\") pod \"ovn-controller-7swbz-config-x7d8f\" (UID: \"2e8bfbf5-6777-4cde-b7b5-f227d88dbbc0\") " pod="openstack/ovn-controller-7swbz-config-x7d8f" Jan 27 14:49:51 crc kubenswrapper[4698]: I0127 14:49:51.902201 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2e8bfbf5-6777-4cde-b7b5-f227d88dbbc0-scripts\") pod \"ovn-controller-7swbz-config-x7d8f\" (UID: \"2e8bfbf5-6777-4cde-b7b5-f227d88dbbc0\") " pod="openstack/ovn-controller-7swbz-config-x7d8f" Jan 27 14:49:51 crc kubenswrapper[4698]: I0127 14:49:51.952663 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l9f7x\" (UniqueName: \"kubernetes.io/projected/2e8bfbf5-6777-4cde-b7b5-f227d88dbbc0-kube-api-access-l9f7x\") pod \"ovn-controller-7swbz-config-x7d8f\" (UID: \"2e8bfbf5-6777-4cde-b7b5-f227d88dbbc0\") " pod="openstack/ovn-controller-7swbz-config-x7d8f" Jan 27 14:49:52 crc kubenswrapper[4698]: I0127 14:49:52.001188 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-7c238611-e381-428d-ba0c-da485ec04e87\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-7c238611-e381-428d-ba0c-da485ec04e87\") pod \"prometheus-metric-storage-0\" (UID: \"19218e14-04c7-40a9-b2e7-2873e9cdbe82\") " pod="openstack/prometheus-metric-storage-0" Jan 27 14:49:52 crc kubenswrapper[4698]: I0127 14:49:52.001263 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/19218e14-04c7-40a9-b2e7-2873e9cdbe82-config\") pod \"prometheus-metric-storage-0\" (UID: \"19218e14-04c7-40a9-b2e7-2873e9cdbe82\") " pod="openstack/prometheus-metric-storage-0" Jan 27 14:49:52 crc kubenswrapper[4698]: I0127 14:49:52.001308 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/19218e14-04c7-40a9-b2e7-2873e9cdbe82-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"19218e14-04c7-40a9-b2e7-2873e9cdbe82\") " pod="openstack/prometheus-metric-storage-0" Jan 27 14:49:52 crc kubenswrapper[4698]: I0127 14:49:52.001330 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/19218e14-04c7-40a9-b2e7-2873e9cdbe82-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"19218e14-04c7-40a9-b2e7-2873e9cdbe82\") " pod="openstack/prometheus-metric-storage-0" Jan 27 14:49:52 crc kubenswrapper[4698]: I0127 14:49:52.001364 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: 
\"kubernetes.io/projected/19218e14-04c7-40a9-b2e7-2873e9cdbe82-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"19218e14-04c7-40a9-b2e7-2873e9cdbe82\") " pod="openstack/prometheus-metric-storage-0" Jan 27 14:49:52 crc kubenswrapper[4698]: I0127 14:49:52.001380 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/19218e14-04c7-40a9-b2e7-2873e9cdbe82-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"19218e14-04c7-40a9-b2e7-2873e9cdbe82\") " pod="openstack/prometheus-metric-storage-0" Jan 27 14:49:52 crc kubenswrapper[4698]: I0127 14:49:52.001396 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/19218e14-04c7-40a9-b2e7-2873e9cdbe82-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"19218e14-04c7-40a9-b2e7-2873e9cdbe82\") " pod="openstack/prometheus-metric-storage-0" Jan 27 14:49:52 crc kubenswrapper[4698]: I0127 14:49:52.001415 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mhlmt\" (UniqueName: \"kubernetes.io/projected/19218e14-04c7-40a9-b2e7-2873e9cdbe82-kube-api-access-mhlmt\") pod \"prometheus-metric-storage-0\" (UID: \"19218e14-04c7-40a9-b2e7-2873e9cdbe82\") " pod="openstack/prometheus-metric-storage-0" Jan 27 14:49:52 crc kubenswrapper[4698]: I0127 14:49:52.001458 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/19218e14-04c7-40a9-b2e7-2873e9cdbe82-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"19218e14-04c7-40a9-b2e7-2873e9cdbe82\") " pod="openstack/prometheus-metric-storage-0" Jan 27 14:49:52 crc kubenswrapper[4698]: I0127 14:49:52.001477 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/19218e14-04c7-40a9-b2e7-2873e9cdbe82-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"19218e14-04c7-40a9-b2e7-2873e9cdbe82\") " pod="openstack/prometheus-metric-storage-0" Jan 27 14:49:52 crc kubenswrapper[4698]: I0127 14:49:52.001504 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/19218e14-04c7-40a9-b2e7-2873e9cdbe82-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"19218e14-04c7-40a9-b2e7-2873e9cdbe82\") " pod="openstack/prometheus-metric-storage-0" Jan 27 14:49:52 crc kubenswrapper[4698]: I0127 14:49:52.001529 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/19218e14-04c7-40a9-b2e7-2873e9cdbe82-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"19218e14-04c7-40a9-b2e7-2873e9cdbe82\") " pod="openstack/prometheus-metric-storage-0" Jan 27 14:49:52 crc kubenswrapper[4698]: I0127 14:49:52.001545 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/19218e14-04c7-40a9-b2e7-2873e9cdbe82-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"19218e14-04c7-40a9-b2e7-2873e9cdbe82\") " 
pod="openstack/prometheus-metric-storage-0" Jan 27 14:49:52 crc kubenswrapper[4698]: I0127 14:49:52.002271 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/19218e14-04c7-40a9-b2e7-2873e9cdbe82-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"19218e14-04c7-40a9-b2e7-2873e9cdbe82\") " pod="openstack/prometheus-metric-storage-0" Jan 27 14:49:52 crc kubenswrapper[4698]: I0127 14:49:52.005120 4698 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Jan 27 14:49:52 crc kubenswrapper[4698]: I0127 14:49:52.005160 4698 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-7c238611-e381-428d-ba0c-da485ec04e87\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-7c238611-e381-428d-ba0c-da485ec04e87\") pod \"prometheus-metric-storage-0\" (UID: \"19218e14-04c7-40a9-b2e7-2873e9cdbe82\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/416585bac8fd590967dd124189b0e9e15cec9d1b1d071795cce77f5a8944215e/globalmount\"" pod="openstack/prometheus-metric-storage-0" Jan 27 14:49:52 crc kubenswrapper[4698]: I0127 14:49:52.005954 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/19218e14-04c7-40a9-b2e7-2873e9cdbe82-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"19218e14-04c7-40a9-b2e7-2873e9cdbe82\") " pod="openstack/prometheus-metric-storage-0" Jan 27 14:49:52 crc kubenswrapper[4698]: I0127 14:49:52.006111 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/19218e14-04c7-40a9-b2e7-2873e9cdbe82-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"19218e14-04c7-40a9-b2e7-2873e9cdbe82\") " pod="openstack/prometheus-metric-storage-0" Jan 27 14:49:52 crc kubenswrapper[4698]: I0127 14:49:52.006689 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/19218e14-04c7-40a9-b2e7-2873e9cdbe82-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"19218e14-04c7-40a9-b2e7-2873e9cdbe82\") " pod="openstack/prometheus-metric-storage-0" Jan 27 14:49:52 crc kubenswrapper[4698]: I0127 14:49:52.006816 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/19218e14-04c7-40a9-b2e7-2873e9cdbe82-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"19218e14-04c7-40a9-b2e7-2873e9cdbe82\") " pod="openstack/prometheus-metric-storage-0" Jan 27 14:49:52 crc kubenswrapper[4698]: I0127 14:49:52.007772 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/19218e14-04c7-40a9-b2e7-2873e9cdbe82-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"19218e14-04c7-40a9-b2e7-2873e9cdbe82\") " pod="openstack/prometheus-metric-storage-0" Jan 27 14:49:52 crc kubenswrapper[4698]: I0127 14:49:52.007912 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: 
\"kubernetes.io/secret/19218e14-04c7-40a9-b2e7-2873e9cdbe82-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"19218e14-04c7-40a9-b2e7-2873e9cdbe82\") " pod="openstack/prometheus-metric-storage-0" Jan 27 14:49:52 crc kubenswrapper[4698]: I0127 14:49:52.009012 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/19218e14-04c7-40a9-b2e7-2873e9cdbe82-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"19218e14-04c7-40a9-b2e7-2873e9cdbe82\") " pod="openstack/prometheus-metric-storage-0" Jan 27 14:49:52 crc kubenswrapper[4698]: I0127 14:49:52.009115 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/19218e14-04c7-40a9-b2e7-2873e9cdbe82-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"19218e14-04c7-40a9-b2e7-2873e9cdbe82\") " pod="openstack/prometheus-metric-storage-0" Jan 27 14:49:52 crc kubenswrapper[4698]: I0127 14:49:52.009507 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/19218e14-04c7-40a9-b2e7-2873e9cdbe82-config\") pod \"prometheus-metric-storage-0\" (UID: \"19218e14-04c7-40a9-b2e7-2873e9cdbe82\") " pod="openstack/prometheus-metric-storage-0" Jan 27 14:49:52 crc kubenswrapper[4698]: I0127 14:49:52.010555 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/19218e14-04c7-40a9-b2e7-2873e9cdbe82-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"19218e14-04c7-40a9-b2e7-2873e9cdbe82\") " pod="openstack/prometheus-metric-storage-0" Jan 27 14:49:52 crc kubenswrapper[4698]: I0127 14:49:52.021251 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-7swbz-config-x7d8f" Jan 27 14:49:52 crc kubenswrapper[4698]: I0127 14:49:52.026791 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mhlmt\" (UniqueName: \"kubernetes.io/projected/19218e14-04c7-40a9-b2e7-2873e9cdbe82-kube-api-access-mhlmt\") pod \"prometheus-metric-storage-0\" (UID: \"19218e14-04c7-40a9-b2e7-2873e9cdbe82\") " pod="openstack/prometheus-metric-storage-0" Jan 27 14:49:52 crc kubenswrapper[4698]: I0127 14:49:52.069057 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-7c238611-e381-428d-ba0c-da485ec04e87\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-7c238611-e381-428d-ba0c-da485ec04e87\") pod \"prometheus-metric-storage-0\" (UID: \"19218e14-04c7-40a9-b2e7-2873e9cdbe82\") " pod="openstack/prometheus-metric-storage-0" Jan 27 14:49:52 crc kubenswrapper[4698]: I0127 14:49:52.151449 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/prometheus-metric-storage-0" Jan 27 14:49:52 crc kubenswrapper[4698]: I0127 14:49:52.597698 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-7swbz-config-x7d8f"] Jan 27 14:49:52 crc kubenswrapper[4698]: I0127 14:49:52.746866 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Jan 27 14:49:52 crc kubenswrapper[4698]: W0127 14:49:52.750446 4698 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod19218e14_04c7_40a9_b2e7_2873e9cdbe82.slice/crio-089da02770e4dfac274b9a7b26f8f2852574eda0887a3d6dbe11e82c66f8bae2 WatchSource:0}: Error finding container 089da02770e4dfac274b9a7b26f8f2852574eda0887a3d6dbe11e82c66f8bae2: Status 404 returned error can't find the container with id 089da02770e4dfac274b9a7b26f8f2852574eda0887a3d6dbe11e82c66f8bae2 Jan 27 14:49:52 crc kubenswrapper[4698]: I0127 14:49:52.822511 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-r8hzm" Jan 27 14:49:52 crc kubenswrapper[4698]: I0127 14:49:52.918322 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/835737cf-874a-4c2d-9f03-62cd9cd42d23-operator-scripts\") pod \"835737cf-874a-4c2d-9f03-62cd9cd42d23\" (UID: \"835737cf-874a-4c2d-9f03-62cd9cd42d23\") " Jan 27 14:49:52 crc kubenswrapper[4698]: I0127 14:49:52.918765 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zp4gl\" (UniqueName: \"kubernetes.io/projected/835737cf-874a-4c2d-9f03-62cd9cd42d23-kube-api-access-zp4gl\") pod \"835737cf-874a-4c2d-9f03-62cd9cd42d23\" (UID: \"835737cf-874a-4c2d-9f03-62cd9cd42d23\") " Jan 27 14:49:52 crc kubenswrapper[4698]: I0127 14:49:52.919174 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/835737cf-874a-4c2d-9f03-62cd9cd42d23-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "835737cf-874a-4c2d-9f03-62cd9cd42d23" (UID: "835737cf-874a-4c2d-9f03-62cd9cd42d23"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:49:52 crc kubenswrapper[4698]: I0127 14:49:52.925502 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/835737cf-874a-4c2d-9f03-62cd9cd42d23-kube-api-access-zp4gl" (OuterVolumeSpecName: "kube-api-access-zp4gl") pod "835737cf-874a-4c2d-9f03-62cd9cd42d23" (UID: "835737cf-874a-4c2d-9f03-62cd9cd42d23"). InnerVolumeSpecName "kube-api-access-zp4gl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:49:53 crc kubenswrapper[4698]: I0127 14:49:53.002408 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="03d06c18-e82f-417e-b3bd-6365030bee53" path="/var/lib/kubelet/pods/03d06c18-e82f-417e-b3bd-6365030bee53/volumes" Jan 27 14:49:53 crc kubenswrapper[4698]: I0127 14:49:53.020495 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zp4gl\" (UniqueName: \"kubernetes.io/projected/835737cf-874a-4c2d-9f03-62cd9cd42d23-kube-api-access-zp4gl\") on node \"crc\" DevicePath \"\"" Jan 27 14:49:53 crc kubenswrapper[4698]: I0127 14:49:53.020538 4698 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/835737cf-874a-4c2d-9f03-62cd9cd42d23-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 14:49:53 crc kubenswrapper[4698]: I0127 14:49:53.448846 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-r8hzm" event={"ID":"835737cf-874a-4c2d-9f03-62cd9cd42d23","Type":"ContainerDied","Data":"417f254f0bb29d9e307d8d0cb4ca7d8936f5d1050d6349962e3ec11be1f753c8"} Jan 27 14:49:53 crc kubenswrapper[4698]: I0127 14:49:53.448897 4698 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="417f254f0bb29d9e307d8d0cb4ca7d8936f5d1050d6349962e3ec11be1f753c8" Jan 27 14:49:53 crc kubenswrapper[4698]: I0127 14:49:53.448903 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-r8hzm" Jan 27 14:49:53 crc kubenswrapper[4698]: I0127 14:49:53.450514 4698 generic.go:334] "Generic (PLEG): container finished" podID="2e8bfbf5-6777-4cde-b7b5-f227d88dbbc0" containerID="7e9758bb21455290a6b98f870e0f067c6ce3ef0bbfee7760cd9266179366446f" exitCode=0 Jan 27 14:49:53 crc kubenswrapper[4698]: I0127 14:49:53.450572 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-7swbz-config-x7d8f" event={"ID":"2e8bfbf5-6777-4cde-b7b5-f227d88dbbc0","Type":"ContainerDied","Data":"7e9758bb21455290a6b98f870e0f067c6ce3ef0bbfee7760cd9266179366446f"} Jan 27 14:49:53 crc kubenswrapper[4698]: I0127 14:49:53.450590 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-7swbz-config-x7d8f" event={"ID":"2e8bfbf5-6777-4cde-b7b5-f227d88dbbc0","Type":"ContainerStarted","Data":"39054b19c8307e8f9dd7ed436b6d1e362b9c0460047890e84ac38a641953fd21"} Jan 27 14:49:53 crc kubenswrapper[4698]: I0127 14:49:53.452735 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"19218e14-04c7-40a9-b2e7-2873e9cdbe82","Type":"ContainerStarted","Data":"089da02770e4dfac274b9a7b26f8f2852574eda0887a3d6dbe11e82c66f8bae2"} Jan 27 14:49:54 crc kubenswrapper[4698]: I0127 14:49:54.832236 4698 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-7swbz-config-x7d8f" Jan 27 14:49:54 crc kubenswrapper[4698]: I0127 14:49:54.953504 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/2e8bfbf5-6777-4cde-b7b5-f227d88dbbc0-var-run-ovn\") pod \"2e8bfbf5-6777-4cde-b7b5-f227d88dbbc0\" (UID: \"2e8bfbf5-6777-4cde-b7b5-f227d88dbbc0\") " Jan 27 14:49:54 crc kubenswrapper[4698]: I0127 14:49:54.953691 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/2e8bfbf5-6777-4cde-b7b5-f227d88dbbc0-var-run\") pod \"2e8bfbf5-6777-4cde-b7b5-f227d88dbbc0\" (UID: \"2e8bfbf5-6777-4cde-b7b5-f227d88dbbc0\") " Jan 27 14:49:54 crc kubenswrapper[4698]: I0127 14:49:54.953768 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/2e8bfbf5-6777-4cde-b7b5-f227d88dbbc0-additional-scripts\") pod \"2e8bfbf5-6777-4cde-b7b5-f227d88dbbc0\" (UID: \"2e8bfbf5-6777-4cde-b7b5-f227d88dbbc0\") " Jan 27 14:49:54 crc kubenswrapper[4698]: I0127 14:49:54.953758 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2e8bfbf5-6777-4cde-b7b5-f227d88dbbc0-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "2e8bfbf5-6777-4cde-b7b5-f227d88dbbc0" (UID: "2e8bfbf5-6777-4cde-b7b5-f227d88dbbc0"). InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 14:49:54 crc kubenswrapper[4698]: I0127 14:49:54.953819 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2e8bfbf5-6777-4cde-b7b5-f227d88dbbc0-var-run" (OuterVolumeSpecName: "var-run") pod "2e8bfbf5-6777-4cde-b7b5-f227d88dbbc0" (UID: "2e8bfbf5-6777-4cde-b7b5-f227d88dbbc0"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 14:49:54 crc kubenswrapper[4698]: I0127 14:49:54.953926 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l9f7x\" (UniqueName: \"kubernetes.io/projected/2e8bfbf5-6777-4cde-b7b5-f227d88dbbc0-kube-api-access-l9f7x\") pod \"2e8bfbf5-6777-4cde-b7b5-f227d88dbbc0\" (UID: \"2e8bfbf5-6777-4cde-b7b5-f227d88dbbc0\") " Jan 27 14:49:54 crc kubenswrapper[4698]: I0127 14:49:54.953969 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/2e8bfbf5-6777-4cde-b7b5-f227d88dbbc0-var-log-ovn\") pod \"2e8bfbf5-6777-4cde-b7b5-f227d88dbbc0\" (UID: \"2e8bfbf5-6777-4cde-b7b5-f227d88dbbc0\") " Jan 27 14:49:54 crc kubenswrapper[4698]: I0127 14:49:54.953995 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2e8bfbf5-6777-4cde-b7b5-f227d88dbbc0-scripts\") pod \"2e8bfbf5-6777-4cde-b7b5-f227d88dbbc0\" (UID: \"2e8bfbf5-6777-4cde-b7b5-f227d88dbbc0\") " Jan 27 14:49:54 crc kubenswrapper[4698]: I0127 14:49:54.954052 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2e8bfbf5-6777-4cde-b7b5-f227d88dbbc0-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "2e8bfbf5-6777-4cde-b7b5-f227d88dbbc0" (UID: "2e8bfbf5-6777-4cde-b7b5-f227d88dbbc0"). InnerVolumeSpecName "var-log-ovn". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 14:49:54 crc kubenswrapper[4698]: I0127 14:49:54.954328 4698 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/2e8bfbf5-6777-4cde-b7b5-f227d88dbbc0-var-run\") on node \"crc\" DevicePath \"\"" Jan 27 14:49:54 crc kubenswrapper[4698]: I0127 14:49:54.954354 4698 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/2e8bfbf5-6777-4cde-b7b5-f227d88dbbc0-var-log-ovn\") on node \"crc\" DevicePath \"\"" Jan 27 14:49:54 crc kubenswrapper[4698]: I0127 14:49:54.954372 4698 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/2e8bfbf5-6777-4cde-b7b5-f227d88dbbc0-var-run-ovn\") on node \"crc\" DevicePath \"\"" Jan 27 14:49:54 crc kubenswrapper[4698]: I0127 14:49:54.954513 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2e8bfbf5-6777-4cde-b7b5-f227d88dbbc0-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "2e8bfbf5-6777-4cde-b7b5-f227d88dbbc0" (UID: "2e8bfbf5-6777-4cde-b7b5-f227d88dbbc0"). InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:49:54 crc kubenswrapper[4698]: I0127 14:49:54.955119 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2e8bfbf5-6777-4cde-b7b5-f227d88dbbc0-scripts" (OuterVolumeSpecName: "scripts") pod "2e8bfbf5-6777-4cde-b7b5-f227d88dbbc0" (UID: "2e8bfbf5-6777-4cde-b7b5-f227d88dbbc0"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:49:55 crc kubenswrapper[4698]: I0127 14:49:55.013841 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2e8bfbf5-6777-4cde-b7b5-f227d88dbbc0-kube-api-access-l9f7x" (OuterVolumeSpecName: "kube-api-access-l9f7x") pod "2e8bfbf5-6777-4cde-b7b5-f227d88dbbc0" (UID: "2e8bfbf5-6777-4cde-b7b5-f227d88dbbc0"). InnerVolumeSpecName "kube-api-access-l9f7x". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:49:55 crc kubenswrapper[4698]: I0127 14:49:55.056158 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l9f7x\" (UniqueName: \"kubernetes.io/projected/2e8bfbf5-6777-4cde-b7b5-f227d88dbbc0-kube-api-access-l9f7x\") on node \"crc\" DevicePath \"\"" Jan 27 14:49:55 crc kubenswrapper[4698]: I0127 14:49:55.056196 4698 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2e8bfbf5-6777-4cde-b7b5-f227d88dbbc0-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 14:49:55 crc kubenswrapper[4698]: I0127 14:49:55.056211 4698 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/2e8bfbf5-6777-4cde-b7b5-f227d88dbbc0-additional-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 14:49:55 crc kubenswrapper[4698]: I0127 14:49:55.473407 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"19218e14-04c7-40a9-b2e7-2873e9cdbe82","Type":"ContainerStarted","Data":"caf5ebae0535fb44d3c50f1103c2c8432de9459a1fe60bec692c297979d10bab"} Jan 27 14:49:55 crc kubenswrapper[4698]: I0127 14:49:55.475484 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-7swbz-config-x7d8f" event={"ID":"2e8bfbf5-6777-4cde-b7b5-f227d88dbbc0","Type":"ContainerDied","Data":"39054b19c8307e8f9dd7ed436b6d1e362b9c0460047890e84ac38a641953fd21"} Jan 27 14:49:55 crc kubenswrapper[4698]: I0127 14:49:55.475535 4698 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="39054b19c8307e8f9dd7ed436b6d1e362b9c0460047890e84ac38a641953fd21" Jan 27 14:49:55 crc kubenswrapper[4698]: I0127 14:49:55.475539 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-7swbz-config-x7d8f" Jan 27 14:49:55 crc kubenswrapper[4698]: I0127 14:49:55.667613 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/f15487de-4580-4abf-a96c-3c5d364fe2d5-etc-swift\") pod \"swift-storage-0\" (UID: \"f15487de-4580-4abf-a96c-3c5d364fe2d5\") " pod="openstack/swift-storage-0" Jan 27 14:49:55 crc kubenswrapper[4698]: I0127 14:49:55.688572 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/f15487de-4580-4abf-a96c-3c5d364fe2d5-etc-swift\") pod \"swift-storage-0\" (UID: \"f15487de-4580-4abf-a96c-3c5d364fe2d5\") " pod="openstack/swift-storage-0" Jan 27 14:49:55 crc kubenswrapper[4698]: I0127 14:49:55.814236 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-storage-0" Jan 27 14:49:55 crc kubenswrapper[4698]: I0127 14:49:55.966808 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-7swbz-config-x7d8f"] Jan 27 14:49:55 crc kubenswrapper[4698]: I0127 14:49:55.973677 4698 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-7swbz-config-x7d8f"] Jan 27 14:49:56 crc kubenswrapper[4698]: I0127 14:49:56.156393 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-7swbz-config-g8jhw"] Jan 27 14:49:56 crc kubenswrapper[4698]: E0127 14:49:56.157975 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2e8bfbf5-6777-4cde-b7b5-f227d88dbbc0" containerName="ovn-config" Jan 27 14:49:56 crc kubenswrapper[4698]: I0127 14:49:56.158003 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="2e8bfbf5-6777-4cde-b7b5-f227d88dbbc0" containerName="ovn-config" Jan 27 14:49:56 crc kubenswrapper[4698]: E0127 14:49:56.158021 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="835737cf-874a-4c2d-9f03-62cd9cd42d23" containerName="mariadb-account-create-update" Jan 27 14:49:56 crc kubenswrapper[4698]: I0127 14:49:56.158030 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="835737cf-874a-4c2d-9f03-62cd9cd42d23" containerName="mariadb-account-create-update" Jan 27 14:49:56 crc kubenswrapper[4698]: I0127 14:49:56.158256 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="2e8bfbf5-6777-4cde-b7b5-f227d88dbbc0" containerName="ovn-config" Jan 27 14:49:56 crc kubenswrapper[4698]: I0127 14:49:56.158281 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="835737cf-874a-4c2d-9f03-62cd9cd42d23" containerName="mariadb-account-create-update" Jan 27 14:49:56 crc kubenswrapper[4698]: I0127 14:49:56.159043 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-7swbz-config-g8jhw" Jan 27 14:49:56 crc kubenswrapper[4698]: I0127 14:49:56.166704 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-7swbz-config-g8jhw"] Jan 27 14:49:56 crc kubenswrapper[4698]: I0127 14:49:56.175991 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Jan 27 14:49:56 crc kubenswrapper[4698]: I0127 14:49:56.277045 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/845aeebd-a55d-44df-80e4-641c7c55b1fc-var-run-ovn\") pod \"ovn-controller-7swbz-config-g8jhw\" (UID: \"845aeebd-a55d-44df-80e4-641c7c55b1fc\") " pod="openstack/ovn-controller-7swbz-config-g8jhw" Jan 27 14:49:56 crc kubenswrapper[4698]: I0127 14:49:56.277102 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/845aeebd-a55d-44df-80e4-641c7c55b1fc-additional-scripts\") pod \"ovn-controller-7swbz-config-g8jhw\" (UID: \"845aeebd-a55d-44df-80e4-641c7c55b1fc\") " pod="openstack/ovn-controller-7swbz-config-g8jhw" Jan 27 14:49:56 crc kubenswrapper[4698]: I0127 14:49:56.277151 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/845aeebd-a55d-44df-80e4-641c7c55b1fc-scripts\") pod \"ovn-controller-7swbz-config-g8jhw\" (UID: \"845aeebd-a55d-44df-80e4-641c7c55b1fc\") " pod="openstack/ovn-controller-7swbz-config-g8jhw" Jan 27 14:49:56 crc kubenswrapper[4698]: I0127 14:49:56.277448 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/845aeebd-a55d-44df-80e4-641c7c55b1fc-var-log-ovn\") pod \"ovn-controller-7swbz-config-g8jhw\" (UID: \"845aeebd-a55d-44df-80e4-641c7c55b1fc\") " pod="openstack/ovn-controller-7swbz-config-g8jhw" Jan 27 14:49:56 crc kubenswrapper[4698]: I0127 14:49:56.277516 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mvckq\" (UniqueName: \"kubernetes.io/projected/845aeebd-a55d-44df-80e4-641c7c55b1fc-kube-api-access-mvckq\") pod \"ovn-controller-7swbz-config-g8jhw\" (UID: \"845aeebd-a55d-44df-80e4-641c7c55b1fc\") " pod="openstack/ovn-controller-7swbz-config-g8jhw" Jan 27 14:49:56 crc kubenswrapper[4698]: I0127 14:49:56.277582 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/845aeebd-a55d-44df-80e4-641c7c55b1fc-var-run\") pod \"ovn-controller-7swbz-config-g8jhw\" (UID: \"845aeebd-a55d-44df-80e4-641c7c55b1fc\") " pod="openstack/ovn-controller-7swbz-config-g8jhw" Jan 27 14:49:56 crc kubenswrapper[4698]: I0127 14:49:56.379795 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/845aeebd-a55d-44df-80e4-641c7c55b1fc-var-run-ovn\") pod \"ovn-controller-7swbz-config-g8jhw\" (UID: \"845aeebd-a55d-44df-80e4-641c7c55b1fc\") " pod="openstack/ovn-controller-7swbz-config-g8jhw" Jan 27 14:49:56 crc kubenswrapper[4698]: I0127 14:49:56.379908 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: 
\"kubernetes.io/configmap/845aeebd-a55d-44df-80e4-641c7c55b1fc-additional-scripts\") pod \"ovn-controller-7swbz-config-g8jhw\" (UID: \"845aeebd-a55d-44df-80e4-641c7c55b1fc\") " pod="openstack/ovn-controller-7swbz-config-g8jhw" Jan 27 14:49:56 crc kubenswrapper[4698]: I0127 14:49:56.379945 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/845aeebd-a55d-44df-80e4-641c7c55b1fc-scripts\") pod \"ovn-controller-7swbz-config-g8jhw\" (UID: \"845aeebd-a55d-44df-80e4-641c7c55b1fc\") " pod="openstack/ovn-controller-7swbz-config-g8jhw" Jan 27 14:49:56 crc kubenswrapper[4698]: I0127 14:49:56.380037 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/845aeebd-a55d-44df-80e4-641c7c55b1fc-var-log-ovn\") pod \"ovn-controller-7swbz-config-g8jhw\" (UID: \"845aeebd-a55d-44df-80e4-641c7c55b1fc\") " pod="openstack/ovn-controller-7swbz-config-g8jhw" Jan 27 14:49:56 crc kubenswrapper[4698]: I0127 14:49:56.380061 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mvckq\" (UniqueName: \"kubernetes.io/projected/845aeebd-a55d-44df-80e4-641c7c55b1fc-kube-api-access-mvckq\") pod \"ovn-controller-7swbz-config-g8jhw\" (UID: \"845aeebd-a55d-44df-80e4-641c7c55b1fc\") " pod="openstack/ovn-controller-7swbz-config-g8jhw" Jan 27 14:49:56 crc kubenswrapper[4698]: I0127 14:49:56.380088 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/845aeebd-a55d-44df-80e4-641c7c55b1fc-var-run\") pod \"ovn-controller-7swbz-config-g8jhw\" (UID: \"845aeebd-a55d-44df-80e4-641c7c55b1fc\") " pod="openstack/ovn-controller-7swbz-config-g8jhw" Jan 27 14:49:56 crc kubenswrapper[4698]: I0127 14:49:56.380194 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/845aeebd-a55d-44df-80e4-641c7c55b1fc-var-run-ovn\") pod \"ovn-controller-7swbz-config-g8jhw\" (UID: \"845aeebd-a55d-44df-80e4-641c7c55b1fc\") " pod="openstack/ovn-controller-7swbz-config-g8jhw" Jan 27 14:49:56 crc kubenswrapper[4698]: I0127 14:49:56.380226 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/845aeebd-a55d-44df-80e4-641c7c55b1fc-var-log-ovn\") pod \"ovn-controller-7swbz-config-g8jhw\" (UID: \"845aeebd-a55d-44df-80e4-641c7c55b1fc\") " pod="openstack/ovn-controller-7swbz-config-g8jhw" Jan 27 14:49:56 crc kubenswrapper[4698]: I0127 14:49:56.380336 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/845aeebd-a55d-44df-80e4-641c7c55b1fc-var-run\") pod \"ovn-controller-7swbz-config-g8jhw\" (UID: \"845aeebd-a55d-44df-80e4-641c7c55b1fc\") " pod="openstack/ovn-controller-7swbz-config-g8jhw" Jan 27 14:49:56 crc kubenswrapper[4698]: I0127 14:49:56.381110 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/845aeebd-a55d-44df-80e4-641c7c55b1fc-additional-scripts\") pod \"ovn-controller-7swbz-config-g8jhw\" (UID: \"845aeebd-a55d-44df-80e4-641c7c55b1fc\") " pod="openstack/ovn-controller-7swbz-config-g8jhw" Jan 27 14:49:56 crc kubenswrapper[4698]: I0127 14:49:56.382420 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/configmap/845aeebd-a55d-44df-80e4-641c7c55b1fc-scripts\") pod \"ovn-controller-7swbz-config-g8jhw\" (UID: \"845aeebd-a55d-44df-80e4-641c7c55b1fc\") " pod="openstack/ovn-controller-7swbz-config-g8jhw" Jan 27 14:49:56 crc kubenswrapper[4698]: I0127 14:49:56.400828 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mvckq\" (UniqueName: \"kubernetes.io/projected/845aeebd-a55d-44df-80e4-641c7c55b1fc-kube-api-access-mvckq\") pod \"ovn-controller-7swbz-config-g8jhw\" (UID: \"845aeebd-a55d-44df-80e4-641c7c55b1fc\") " pod="openstack/ovn-controller-7swbz-config-g8jhw" Jan 27 14:49:56 crc kubenswrapper[4698]: I0127 14:49:56.455445 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Jan 27 14:49:56 crc kubenswrapper[4698]: W0127 14:49:56.466740 4698 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf15487de_4580_4abf_a96c_3c5d364fe2d5.slice/crio-a59daca2ae9cc5ab48c3743cd60cb15d23fde603defb37d7fd2f755f9ff18e82 WatchSource:0}: Error finding container a59daca2ae9cc5ab48c3743cd60cb15d23fde603defb37d7fd2f755f9ff18e82: Status 404 returned error can't find the container with id a59daca2ae9cc5ab48c3743cd60cb15d23fde603defb37d7fd2f755f9ff18e82 Jan 27 14:49:56 crc kubenswrapper[4698]: I0127 14:49:56.479894 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-7swbz-config-g8jhw" Jan 27 14:49:56 crc kubenswrapper[4698]: I0127 14:49:56.485578 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"f15487de-4580-4abf-a96c-3c5d364fe2d5","Type":"ContainerStarted","Data":"a59daca2ae9cc5ab48c3743cd60cb15d23fde603defb37d7fd2f755f9ff18e82"} Jan 27 14:49:56 crc kubenswrapper[4698]: I0127 14:49:56.653988 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-7swbz" Jan 27 14:49:56 crc kubenswrapper[4698]: I0127 14:49:56.981434 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-7swbz-config-g8jhw"] Jan 27 14:49:57 crc kubenswrapper[4698]: I0127 14:49:57.018494 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2e8bfbf5-6777-4cde-b7b5-f227d88dbbc0" path="/var/lib/kubelet/pods/2e8bfbf5-6777-4cde-b7b5-f227d88dbbc0/volumes" Jan 27 14:49:57 crc kubenswrapper[4698]: I0127 14:49:57.296879 4698 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-cell1-server-0" podUID="764b6b7b-3664-40e6-a24b-dc0f9db827db" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.106:5671: connect: connection refused" Jan 27 14:49:57 crc kubenswrapper[4698]: I0127 14:49:57.495660 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-7swbz-config-g8jhw" event={"ID":"845aeebd-a55d-44df-80e4-641c7c55b1fc","Type":"ContainerStarted","Data":"89cea3b56d5b01b1837afccfbd5ce8d8695a14935aa466553ed2e273482adac3"} Jan 27 14:49:57 crc kubenswrapper[4698]: I0127 14:49:57.496161 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-7swbz-config-g8jhw" event={"ID":"845aeebd-a55d-44df-80e4-641c7c55b1fc","Type":"ContainerStarted","Data":"c08b37ab317b1feadc24034371d5a3da00035fb88d16850b1692ca0e62cc4041"} Jan 27 14:49:57 crc kubenswrapper[4698]: I0127 14:49:57.529314 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-7swbz-config-g8jhw" 
podStartSLOduration=1.529275497 podStartE2EDuration="1.529275497s" podCreationTimestamp="2026-01-27 14:49:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:49:57.527531702 +0000 UTC m=+1253.204309197" watchObservedRunningTime="2026-01-27 14:49:57.529275497 +0000 UTC m=+1253.206052962" Jan 27 14:49:58 crc kubenswrapper[4698]: I0127 14:49:58.506135 4698 generic.go:334] "Generic (PLEG): container finished" podID="845aeebd-a55d-44df-80e4-641c7c55b1fc" containerID="89cea3b56d5b01b1837afccfbd5ce8d8695a14935aa466553ed2e273482adac3" exitCode=0 Jan 27 14:49:58 crc kubenswrapper[4698]: I0127 14:49:58.506195 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-7swbz-config-g8jhw" event={"ID":"845aeebd-a55d-44df-80e4-641c7c55b1fc","Type":"ContainerDied","Data":"89cea3b56d5b01b1837afccfbd5ce8d8695a14935aa466553ed2e273482adac3"} Jan 27 14:49:58 crc kubenswrapper[4698]: I0127 14:49:58.508833 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"f15487de-4580-4abf-a96c-3c5d364fe2d5","Type":"ContainerStarted","Data":"8dbc190d578144d7160e2af53dcbe24f56abbecfab21bc1fdc742f4b57aee04b"} Jan 27 14:49:59 crc kubenswrapper[4698]: I0127 14:49:59.528632 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"f15487de-4580-4abf-a96c-3c5d364fe2d5","Type":"ContainerStarted","Data":"ddb6224fbec9214e122874ef42b9100d3c8cd0c75ebe017c4a2cdeffd9e9502c"} Jan 27 14:49:59 crc kubenswrapper[4698]: I0127 14:49:59.529234 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"f15487de-4580-4abf-a96c-3c5d364fe2d5","Type":"ContainerStarted","Data":"9fb645cf8b9169c2963c5bba13325f10de8482f3b2caa53112e3c140bfbe9890"} Jan 27 14:49:59 crc kubenswrapper[4698]: I0127 14:49:59.529255 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"f15487de-4580-4abf-a96c-3c5d364fe2d5","Type":"ContainerStarted","Data":"7e91bf4ef69517a102a1e9a02e84ccb660d4898b02e449dfd571a5fc14571f58"} Jan 27 14:49:59 crc kubenswrapper[4698]: I0127 14:49:59.830999 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-7swbz-config-g8jhw" Jan 27 14:49:59 crc kubenswrapper[4698]: I0127 14:49:59.948839 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/845aeebd-a55d-44df-80e4-641c7c55b1fc-var-log-ovn\") pod \"845aeebd-a55d-44df-80e4-641c7c55b1fc\" (UID: \"845aeebd-a55d-44df-80e4-641c7c55b1fc\") " Jan 27 14:49:59 crc kubenswrapper[4698]: I0127 14:49:59.948933 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/845aeebd-a55d-44df-80e4-641c7c55b1fc-scripts\") pod \"845aeebd-a55d-44df-80e4-641c7c55b1fc\" (UID: \"845aeebd-a55d-44df-80e4-641c7c55b1fc\") " Jan 27 14:49:59 crc kubenswrapper[4698]: I0127 14:49:59.948975 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/845aeebd-a55d-44df-80e4-641c7c55b1fc-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "845aeebd-a55d-44df-80e4-641c7c55b1fc" (UID: "845aeebd-a55d-44df-80e4-641c7c55b1fc"). InnerVolumeSpecName "var-log-ovn". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 14:49:59 crc kubenswrapper[4698]: I0127 14:49:59.949033 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/845aeebd-a55d-44df-80e4-641c7c55b1fc-var-run\") pod \"845aeebd-a55d-44df-80e4-641c7c55b1fc\" (UID: \"845aeebd-a55d-44df-80e4-641c7c55b1fc\") " Jan 27 14:49:59 crc kubenswrapper[4698]: I0127 14:49:59.949094 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mvckq\" (UniqueName: \"kubernetes.io/projected/845aeebd-a55d-44df-80e4-641c7c55b1fc-kube-api-access-mvckq\") pod \"845aeebd-a55d-44df-80e4-641c7c55b1fc\" (UID: \"845aeebd-a55d-44df-80e4-641c7c55b1fc\") " Jan 27 14:49:59 crc kubenswrapper[4698]: I0127 14:49:59.949116 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/845aeebd-a55d-44df-80e4-641c7c55b1fc-var-run" (OuterVolumeSpecName: "var-run") pod "845aeebd-a55d-44df-80e4-641c7c55b1fc" (UID: "845aeebd-a55d-44df-80e4-641c7c55b1fc"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 14:49:59 crc kubenswrapper[4698]: I0127 14:49:59.949163 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/845aeebd-a55d-44df-80e4-641c7c55b1fc-additional-scripts\") pod \"845aeebd-a55d-44df-80e4-641c7c55b1fc\" (UID: \"845aeebd-a55d-44df-80e4-641c7c55b1fc\") " Jan 27 14:49:59 crc kubenswrapper[4698]: I0127 14:49:59.949218 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/845aeebd-a55d-44df-80e4-641c7c55b1fc-var-run-ovn\") pod \"845aeebd-a55d-44df-80e4-641c7c55b1fc\" (UID: \"845aeebd-a55d-44df-80e4-641c7c55b1fc\") " Jan 27 14:49:59 crc kubenswrapper[4698]: I0127 14:49:59.949313 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/845aeebd-a55d-44df-80e4-641c7c55b1fc-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "845aeebd-a55d-44df-80e4-641c7c55b1fc" (UID: "845aeebd-a55d-44df-80e4-641c7c55b1fc"). InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 14:49:59 crc kubenswrapper[4698]: I0127 14:49:59.949802 4698 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/845aeebd-a55d-44df-80e4-641c7c55b1fc-var-run-ovn\") on node \"crc\" DevicePath \"\"" Jan 27 14:49:59 crc kubenswrapper[4698]: I0127 14:49:59.949828 4698 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/845aeebd-a55d-44df-80e4-641c7c55b1fc-var-log-ovn\") on node \"crc\" DevicePath \"\"" Jan 27 14:49:59 crc kubenswrapper[4698]: I0127 14:49:59.949837 4698 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/845aeebd-a55d-44df-80e4-641c7c55b1fc-var-run\") on node \"crc\" DevicePath \"\"" Jan 27 14:49:59 crc kubenswrapper[4698]: I0127 14:49:59.950023 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/845aeebd-a55d-44df-80e4-641c7c55b1fc-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "845aeebd-a55d-44df-80e4-641c7c55b1fc" (UID: "845aeebd-a55d-44df-80e4-641c7c55b1fc"). InnerVolumeSpecName "additional-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:49:59 crc kubenswrapper[4698]: I0127 14:49:59.950230 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/845aeebd-a55d-44df-80e4-641c7c55b1fc-scripts" (OuterVolumeSpecName: "scripts") pod "845aeebd-a55d-44df-80e4-641c7c55b1fc" (UID: "845aeebd-a55d-44df-80e4-641c7c55b1fc"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:49:59 crc kubenswrapper[4698]: I0127 14:49:59.954125 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/845aeebd-a55d-44df-80e4-641c7c55b1fc-kube-api-access-mvckq" (OuterVolumeSpecName: "kube-api-access-mvckq") pod "845aeebd-a55d-44df-80e4-641c7c55b1fc" (UID: "845aeebd-a55d-44df-80e4-641c7c55b1fc"). InnerVolumeSpecName "kube-api-access-mvckq". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:50:00 crc kubenswrapper[4698]: I0127 14:50:00.045493 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-7swbz-config-g8jhw"] Jan 27 14:50:00 crc kubenswrapper[4698]: I0127 14:50:00.051384 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mvckq\" (UniqueName: \"kubernetes.io/projected/845aeebd-a55d-44df-80e4-641c7c55b1fc-kube-api-access-mvckq\") on node \"crc\" DevicePath \"\"" Jan 27 14:50:00 crc kubenswrapper[4698]: I0127 14:50:00.051415 4698 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/845aeebd-a55d-44df-80e4-641c7c55b1fc-additional-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 14:50:00 crc kubenswrapper[4698]: I0127 14:50:00.051424 4698 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/845aeebd-a55d-44df-80e4-641c7c55b1fc-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 14:50:00 crc kubenswrapper[4698]: I0127 14:50:00.053233 4698 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-7swbz-config-g8jhw"] Jan 27 14:50:00 crc kubenswrapper[4698]: I0127 14:50:00.542367 4698 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c08b37ab317b1feadc24034371d5a3da00035fb88d16850b1692ca0e62cc4041" Jan 27 14:50:00 crc kubenswrapper[4698]: I0127 14:50:00.542454 4698 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-7swbz-config-g8jhw" Jan 27 14:50:01 crc kubenswrapper[4698]: I0127 14:50:01.001850 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="845aeebd-a55d-44df-80e4-641c7c55b1fc" path="/var/lib/kubelet/pods/845aeebd-a55d-44df-80e4-641c7c55b1fc/volumes" Jan 27 14:50:01 crc kubenswrapper[4698]: I0127 14:50:01.551577 4698 generic.go:334] "Generic (PLEG): container finished" podID="19218e14-04c7-40a9-b2e7-2873e9cdbe82" containerID="caf5ebae0535fb44d3c50f1103c2c8432de9459a1fe60bec692c297979d10bab" exitCode=0 Jan 27 14:50:01 crc kubenswrapper[4698]: I0127 14:50:01.551660 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"19218e14-04c7-40a9-b2e7-2873e9cdbe82","Type":"ContainerDied","Data":"caf5ebae0535fb44d3c50f1103c2c8432de9459a1fe60bec692c297979d10bab"} Jan 27 14:50:01 crc kubenswrapper[4698]: I0127 14:50:01.576387 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"f15487de-4580-4abf-a96c-3c5d364fe2d5","Type":"ContainerStarted","Data":"327deed45ad1859cef9928609ee2896e17589068f10dc9525a279d2137c4b19a"} Jan 27 14:50:01 crc kubenswrapper[4698]: I0127 14:50:01.576435 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"f15487de-4580-4abf-a96c-3c5d364fe2d5","Type":"ContainerStarted","Data":"7a5edf1128294962a3f6838b45a8948b937fffaab48a31c56afa3d9ea70b7cc7"} Jan 27 14:50:02 crc kubenswrapper[4698]: I0127 14:50:02.588401 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"f15487de-4580-4abf-a96c-3c5d364fe2d5","Type":"ContainerStarted","Data":"87d0ab60f4acbbd6cbc2be1c965fce1856a99ae09f85e8f725b53aa1ebed2c08"} Jan 27 14:50:02 crc kubenswrapper[4698]: I0127 14:50:02.588820 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"f15487de-4580-4abf-a96c-3c5d364fe2d5","Type":"ContainerStarted","Data":"236cf95251779f57c7782dc02065abc0fa2c86d601664e4b3841f70edf1073fd"} Jan 27 14:50:02 crc kubenswrapper[4698]: I0127 14:50:02.592047 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"19218e14-04c7-40a9-b2e7-2873e9cdbe82","Type":"ContainerStarted","Data":"97a2f67c7859b280280a4ad6fa3ba60fea09da916971a79837e4713f859d33fd"} Jan 27 14:50:03 crc kubenswrapper[4698]: I0127 14:50:03.608760 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"f15487de-4580-4abf-a96c-3c5d364fe2d5","Type":"ContainerStarted","Data":"7e21e0089a0e0f8600bd6701539031369027405ded8eb449e4c16a6089410f15"} Jan 27 14:50:03 crc kubenswrapper[4698]: I0127 14:50:03.609098 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"f15487de-4580-4abf-a96c-3c5d364fe2d5","Type":"ContainerStarted","Data":"76533c1dc05cb675f7843d9e16fb671b3fddbc7b22f356aa378e37c521b60c59"} Jan 27 14:50:03 crc kubenswrapper[4698]: I0127 14:50:03.609112 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"f15487de-4580-4abf-a96c-3c5d364fe2d5","Type":"ContainerStarted","Data":"fe076ade9f2fab55fbe3ba64bf7203496d6725a12cb7e4e0ceeb40bb555e7712"} Jan 27 14:50:04 crc kubenswrapper[4698]: I0127 14:50:04.624438 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" 
event={"ID":"f15487de-4580-4abf-a96c-3c5d364fe2d5","Type":"ContainerStarted","Data":"69ebbedbbf4f024a8915819ea3e062503e4388af041df9273aa03679cb892b58"} Jan 27 14:50:05 crc kubenswrapper[4698]: I0127 14:50:05.650429 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"f15487de-4580-4abf-a96c-3c5d364fe2d5","Type":"ContainerStarted","Data":"c18044b4e87d536bf273840ba94f2a026abeba28998047bae27d6c57dccebf7f"} Jan 27 14:50:05 crc kubenswrapper[4698]: I0127 14:50:05.651103 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"f15487de-4580-4abf-a96c-3c5d364fe2d5","Type":"ContainerStarted","Data":"58264e83f72ed071185cec51fe327e1ae333a4d3ee08a8b5f2cdb79df44697c3"} Jan 27 14:50:05 crc kubenswrapper[4698]: I0127 14:50:05.651120 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"f15487de-4580-4abf-a96c-3c5d364fe2d5","Type":"ContainerStarted","Data":"37f1d0eb5e4183810df13923d8002b928c8fbd8bec692b516d4dd41d123465c7"} Jan 27 14:50:05 crc kubenswrapper[4698]: I0127 14:50:05.656432 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"19218e14-04c7-40a9-b2e7-2873e9cdbe82","Type":"ContainerStarted","Data":"74d3cac75b26e6b8f7244249a9750d6f443bd53361ec19a1db28ece6563c0176"} Jan 27 14:50:05 crc kubenswrapper[4698]: I0127 14:50:05.656477 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"19218e14-04c7-40a9-b2e7-2873e9cdbe82","Type":"ContainerStarted","Data":"91b76844a958adc88e7f07432e48c84568282d279f7b4a317f8bda0e1a785ff1"} Jan 27 14:50:05 crc kubenswrapper[4698]: I0127 14:50:05.750900 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/prometheus-metric-storage-0" podStartSLOduration=14.750879862 podStartE2EDuration="14.750879862s" podCreationTimestamp="2026-01-27 14:49:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:50:05.747595284 +0000 UTC m=+1261.424372769" watchObservedRunningTime="2026-01-27 14:50:05.750879862 +0000 UTC m=+1261.427657327" Jan 27 14:50:05 crc kubenswrapper[4698]: I0127 14:50:05.753147 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-storage-0" podStartSLOduration=37.374255704 podStartE2EDuration="43.75313386s" podCreationTimestamp="2026-01-27 14:49:22 +0000 UTC" firstStartedPulling="2026-01-27 14:49:56.470300229 +0000 UTC m=+1252.147077694" lastFinishedPulling="2026-01-27 14:50:02.849178385 +0000 UTC m=+1258.525955850" observedRunningTime="2026-01-27 14:50:05.712924792 +0000 UTC m=+1261.389702257" watchObservedRunningTime="2026-01-27 14:50:05.75313386 +0000 UTC m=+1261.429911325" Jan 27 14:50:06 crc kubenswrapper[4698]: I0127 14:50:06.026730 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-544c49b98f-4lvts"] Jan 27 14:50:06 crc kubenswrapper[4698]: E0127 14:50:06.027548 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="845aeebd-a55d-44df-80e4-641c7c55b1fc" containerName="ovn-config" Jan 27 14:50:06 crc kubenswrapper[4698]: I0127 14:50:06.027682 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="845aeebd-a55d-44df-80e4-641c7c55b1fc" containerName="ovn-config" Jan 27 14:50:06 crc kubenswrapper[4698]: I0127 14:50:06.028005 4698 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="845aeebd-a55d-44df-80e4-641c7c55b1fc" containerName="ovn-config" Jan 27 14:50:06 crc kubenswrapper[4698]: I0127 14:50:06.030447 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-544c49b98f-4lvts" Jan 27 14:50:06 crc kubenswrapper[4698]: I0127 14:50:06.033826 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-swift-storage-0" Jan 27 14:50:06 crc kubenswrapper[4698]: I0127 14:50:06.066037 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-544c49b98f-4lvts"] Jan 27 14:50:06 crc kubenswrapper[4698]: I0127 14:50:06.180779 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ee0893f2-f99d-4923-a85e-c0d764abff34-dns-svc\") pod \"dnsmasq-dns-544c49b98f-4lvts\" (UID: \"ee0893f2-f99d-4923-a85e-c0d764abff34\") " pod="openstack/dnsmasq-dns-544c49b98f-4lvts" Jan 27 14:50:06 crc kubenswrapper[4698]: I0127 14:50:06.181057 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ee0893f2-f99d-4923-a85e-c0d764abff34-dns-swift-storage-0\") pod \"dnsmasq-dns-544c49b98f-4lvts\" (UID: \"ee0893f2-f99d-4923-a85e-c0d764abff34\") " pod="openstack/dnsmasq-dns-544c49b98f-4lvts" Jan 27 14:50:06 crc kubenswrapper[4698]: I0127 14:50:06.181190 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ee0893f2-f99d-4923-a85e-c0d764abff34-config\") pod \"dnsmasq-dns-544c49b98f-4lvts\" (UID: \"ee0893f2-f99d-4923-a85e-c0d764abff34\") " pod="openstack/dnsmasq-dns-544c49b98f-4lvts" Jan 27 14:50:06 crc kubenswrapper[4698]: I0127 14:50:06.181294 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hj5jq\" (UniqueName: \"kubernetes.io/projected/ee0893f2-f99d-4923-a85e-c0d764abff34-kube-api-access-hj5jq\") pod \"dnsmasq-dns-544c49b98f-4lvts\" (UID: \"ee0893f2-f99d-4923-a85e-c0d764abff34\") " pod="openstack/dnsmasq-dns-544c49b98f-4lvts" Jan 27 14:50:06 crc kubenswrapper[4698]: I0127 14:50:06.181447 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ee0893f2-f99d-4923-a85e-c0d764abff34-ovsdbserver-nb\") pod \"dnsmasq-dns-544c49b98f-4lvts\" (UID: \"ee0893f2-f99d-4923-a85e-c0d764abff34\") " pod="openstack/dnsmasq-dns-544c49b98f-4lvts" Jan 27 14:50:06 crc kubenswrapper[4698]: I0127 14:50:06.181550 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ee0893f2-f99d-4923-a85e-c0d764abff34-ovsdbserver-sb\") pod \"dnsmasq-dns-544c49b98f-4lvts\" (UID: \"ee0893f2-f99d-4923-a85e-c0d764abff34\") " pod="openstack/dnsmasq-dns-544c49b98f-4lvts" Jan 27 14:50:06 crc kubenswrapper[4698]: I0127 14:50:06.283170 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ee0893f2-f99d-4923-a85e-c0d764abff34-dns-svc\") pod \"dnsmasq-dns-544c49b98f-4lvts\" (UID: \"ee0893f2-f99d-4923-a85e-c0d764abff34\") " pod="openstack/dnsmasq-dns-544c49b98f-4lvts" Jan 27 14:50:06 crc kubenswrapper[4698]: I0127 14:50:06.283219 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ee0893f2-f99d-4923-a85e-c0d764abff34-dns-swift-storage-0\") pod \"dnsmasq-dns-544c49b98f-4lvts\" (UID: \"ee0893f2-f99d-4923-a85e-c0d764abff34\") " pod="openstack/dnsmasq-dns-544c49b98f-4lvts" Jan 27 14:50:06 crc kubenswrapper[4698]: I0127 14:50:06.283282 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ee0893f2-f99d-4923-a85e-c0d764abff34-config\") pod \"dnsmasq-dns-544c49b98f-4lvts\" (UID: \"ee0893f2-f99d-4923-a85e-c0d764abff34\") " pod="openstack/dnsmasq-dns-544c49b98f-4lvts" Jan 27 14:50:06 crc kubenswrapper[4698]: I0127 14:50:06.283310 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hj5jq\" (UniqueName: \"kubernetes.io/projected/ee0893f2-f99d-4923-a85e-c0d764abff34-kube-api-access-hj5jq\") pod \"dnsmasq-dns-544c49b98f-4lvts\" (UID: \"ee0893f2-f99d-4923-a85e-c0d764abff34\") " pod="openstack/dnsmasq-dns-544c49b98f-4lvts" Jan 27 14:50:06 crc kubenswrapper[4698]: I0127 14:50:06.283359 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ee0893f2-f99d-4923-a85e-c0d764abff34-ovsdbserver-nb\") pod \"dnsmasq-dns-544c49b98f-4lvts\" (UID: \"ee0893f2-f99d-4923-a85e-c0d764abff34\") " pod="openstack/dnsmasq-dns-544c49b98f-4lvts" Jan 27 14:50:06 crc kubenswrapper[4698]: I0127 14:50:06.283392 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ee0893f2-f99d-4923-a85e-c0d764abff34-ovsdbserver-sb\") pod \"dnsmasq-dns-544c49b98f-4lvts\" (UID: \"ee0893f2-f99d-4923-a85e-c0d764abff34\") " pod="openstack/dnsmasq-dns-544c49b98f-4lvts" Jan 27 14:50:06 crc kubenswrapper[4698]: I0127 14:50:06.284305 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ee0893f2-f99d-4923-a85e-c0d764abff34-ovsdbserver-sb\") pod \"dnsmasq-dns-544c49b98f-4lvts\" (UID: \"ee0893f2-f99d-4923-a85e-c0d764abff34\") " pod="openstack/dnsmasq-dns-544c49b98f-4lvts" Jan 27 14:50:06 crc kubenswrapper[4698]: I0127 14:50:06.284896 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ee0893f2-f99d-4923-a85e-c0d764abff34-dns-svc\") pod \"dnsmasq-dns-544c49b98f-4lvts\" (UID: \"ee0893f2-f99d-4923-a85e-c0d764abff34\") " pod="openstack/dnsmasq-dns-544c49b98f-4lvts" Jan 27 14:50:06 crc kubenswrapper[4698]: I0127 14:50:06.285935 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ee0893f2-f99d-4923-a85e-c0d764abff34-dns-swift-storage-0\") pod \"dnsmasq-dns-544c49b98f-4lvts\" (UID: \"ee0893f2-f99d-4923-a85e-c0d764abff34\") " pod="openstack/dnsmasq-dns-544c49b98f-4lvts" Jan 27 14:50:06 crc kubenswrapper[4698]: I0127 14:50:06.286631 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ee0893f2-f99d-4923-a85e-c0d764abff34-config\") pod \"dnsmasq-dns-544c49b98f-4lvts\" (UID: \"ee0893f2-f99d-4923-a85e-c0d764abff34\") " pod="openstack/dnsmasq-dns-544c49b98f-4lvts" Jan 27 14:50:06 crc kubenswrapper[4698]: I0127 14:50:06.287208 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: 
\"kubernetes.io/configmap/ee0893f2-f99d-4923-a85e-c0d764abff34-ovsdbserver-nb\") pod \"dnsmasq-dns-544c49b98f-4lvts\" (UID: \"ee0893f2-f99d-4923-a85e-c0d764abff34\") " pod="openstack/dnsmasq-dns-544c49b98f-4lvts" Jan 27 14:50:06 crc kubenswrapper[4698]: I0127 14:50:06.307219 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hj5jq\" (UniqueName: \"kubernetes.io/projected/ee0893f2-f99d-4923-a85e-c0d764abff34-kube-api-access-hj5jq\") pod \"dnsmasq-dns-544c49b98f-4lvts\" (UID: \"ee0893f2-f99d-4923-a85e-c0d764abff34\") " pod="openstack/dnsmasq-dns-544c49b98f-4lvts" Jan 27 14:50:06 crc kubenswrapper[4698]: I0127 14:50:06.351367 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-544c49b98f-4lvts" Jan 27 14:50:06 crc kubenswrapper[4698]: I0127 14:50:06.816387 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-544c49b98f-4lvts"] Jan 27 14:50:06 crc kubenswrapper[4698]: I0127 14:50:06.968384 4698 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-0" podUID="c686e168-f607-4b7f-a81d-f33ac8bdf513" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.105:5671: connect: connection refused" Jan 27 14:50:07 crc kubenswrapper[4698]: I0127 14:50:07.153397 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/prometheus-metric-storage-0" Jan 27 14:50:07 crc kubenswrapper[4698]: I0127 14:50:07.153783 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/prometheus-metric-storage-0" Jan 27 14:50:07 crc kubenswrapper[4698]: I0127 14:50:07.163137 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/prometheus-metric-storage-0" Jan 27 14:50:07 crc kubenswrapper[4698]: I0127 14:50:07.294383 4698 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-cell1-server-0" podUID="764b6b7b-3664-40e6-a24b-dc0f9db827db" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.106:5671: connect: connection refused" Jan 27 14:50:07 crc kubenswrapper[4698]: I0127 14:50:07.563309 4698 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-notifications-server-0" podUID="5d6f607c-3a31-4135-9eb4-3193e722d112" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.107:5671: connect: connection refused" Jan 27 14:50:07 crc kubenswrapper[4698]: I0127 14:50:07.676544 4698 generic.go:334] "Generic (PLEG): container finished" podID="ee0893f2-f99d-4923-a85e-c0d764abff34" containerID="fc17424f90b89f4fee6ff4e08da14ecfebdb310c748b905fb17be04a5b571a97" exitCode=0 Jan 27 14:50:07 crc kubenswrapper[4698]: I0127 14:50:07.676853 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-544c49b98f-4lvts" event={"ID":"ee0893f2-f99d-4923-a85e-c0d764abff34","Type":"ContainerDied","Data":"fc17424f90b89f4fee6ff4e08da14ecfebdb310c748b905fb17be04a5b571a97"} Jan 27 14:50:07 crc kubenswrapper[4698]: I0127 14:50:07.676930 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-544c49b98f-4lvts" event={"ID":"ee0893f2-f99d-4923-a85e-c0d764abff34","Type":"ContainerStarted","Data":"b286c807651340722eb50523551818a39bf96fc1b45b035fea24a1f566f6d49c"} Jan 27 14:50:07 crc kubenswrapper[4698]: I0127 14:50:07.689054 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/prometheus-metric-storage-0" Jan 27 14:50:08 crc 
kubenswrapper[4698]: I0127 14:50:08.697925 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-544c49b98f-4lvts" event={"ID":"ee0893f2-f99d-4923-a85e-c0d764abff34","Type":"ContainerStarted","Data":"843cbe300a2ca1e4f60afb2bcfdf54ac90525e46c383a5face8ae0f7e054c2fc"} Jan 27 14:50:08 crc kubenswrapper[4698]: I0127 14:50:08.727313 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-544c49b98f-4lvts" podStartSLOduration=2.727288765 podStartE2EDuration="2.727288765s" podCreationTimestamp="2026-01-27 14:50:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:50:08.720339642 +0000 UTC m=+1264.397117117" watchObservedRunningTime="2026-01-27 14:50:08.727288765 +0000 UTC m=+1264.404066240" Jan 27 14:50:09 crc kubenswrapper[4698]: I0127 14:50:09.705556 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-544c49b98f-4lvts" Jan 27 14:50:16 crc kubenswrapper[4698]: I0127 14:50:16.353689 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-544c49b98f-4lvts" Jan 27 14:50:16 crc kubenswrapper[4698]: I0127 14:50:16.409949 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-54f87864f5-svgk4"] Jan 27 14:50:16 crc kubenswrapper[4698]: I0127 14:50:16.410185 4698 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-54f87864f5-svgk4" podUID="711fbf65-f112-4da8-8475-534064efe051" containerName="dnsmasq-dns" containerID="cri-o://63ca572b4b74f0f96d4538afa46128810ee1b526e454d7a6bd7f0a30c91927bd" gracePeriod=10 Jan 27 14:50:16 crc kubenswrapper[4698]: I0127 14:50:16.764004 4698 generic.go:334] "Generic (PLEG): container finished" podID="711fbf65-f112-4da8-8475-534064efe051" containerID="63ca572b4b74f0f96d4538afa46128810ee1b526e454d7a6bd7f0a30c91927bd" exitCode=0 Jan 27 14:50:16 crc kubenswrapper[4698]: I0127 14:50:16.764272 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-54f87864f5-svgk4" event={"ID":"711fbf65-f112-4da8-8475-534064efe051","Type":"ContainerDied","Data":"63ca572b4b74f0f96d4538afa46128810ee1b526e454d7a6bd7f0a30c91927bd"} Jan 27 14:50:16 crc kubenswrapper[4698]: I0127 14:50:16.894240 4698 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-54f87864f5-svgk4" Jan 27 14:50:16 crc kubenswrapper[4698]: I0127 14:50:16.955724 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/711fbf65-f112-4da8-8475-534064efe051-ovsdbserver-nb\") pod \"711fbf65-f112-4da8-8475-534064efe051\" (UID: \"711fbf65-f112-4da8-8475-534064efe051\") " Jan 27 14:50:16 crc kubenswrapper[4698]: I0127 14:50:16.955810 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjhln\" (UniqueName: \"kubernetes.io/projected/711fbf65-f112-4da8-8475-534064efe051-kube-api-access-pjhln\") pod \"711fbf65-f112-4da8-8475-534064efe051\" (UID: \"711fbf65-f112-4da8-8475-534064efe051\") " Jan 27 14:50:16 crc kubenswrapper[4698]: I0127 14:50:16.955839 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/711fbf65-f112-4da8-8475-534064efe051-dns-svc\") pod \"711fbf65-f112-4da8-8475-534064efe051\" (UID: \"711fbf65-f112-4da8-8475-534064efe051\") " Jan 27 14:50:16 crc kubenswrapper[4698]: I0127 14:50:16.955866 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/711fbf65-f112-4da8-8475-534064efe051-ovsdbserver-sb\") pod \"711fbf65-f112-4da8-8475-534064efe051\" (UID: \"711fbf65-f112-4da8-8475-534064efe051\") " Jan 27 14:50:16 crc kubenswrapper[4698]: I0127 14:50:16.955888 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/711fbf65-f112-4da8-8475-534064efe051-config\") pod \"711fbf65-f112-4da8-8475-534064efe051\" (UID: \"711fbf65-f112-4da8-8475-534064efe051\") " Jan 27 14:50:16 crc kubenswrapper[4698]: I0127 14:50:16.962487 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/711fbf65-f112-4da8-8475-534064efe051-kube-api-access-pjhln" (OuterVolumeSpecName: "kube-api-access-pjhln") pod "711fbf65-f112-4da8-8475-534064efe051" (UID: "711fbf65-f112-4da8-8475-534064efe051"). InnerVolumeSpecName "kube-api-access-pjhln". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:50:16 crc kubenswrapper[4698]: I0127 14:50:16.968930 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Jan 27 14:50:17 crc kubenswrapper[4698]: I0127 14:50:17.032579 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/711fbf65-f112-4da8-8475-534064efe051-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "711fbf65-f112-4da8-8475-534064efe051" (UID: "711fbf65-f112-4da8-8475-534064efe051"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:50:17 crc kubenswrapper[4698]: I0127 14:50:17.041313 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/711fbf65-f112-4da8-8475-534064efe051-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "711fbf65-f112-4da8-8475-534064efe051" (UID: "711fbf65-f112-4da8-8475-534064efe051"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:50:17 crc kubenswrapper[4698]: I0127 14:50:17.044972 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/711fbf65-f112-4da8-8475-534064efe051-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "711fbf65-f112-4da8-8475-534064efe051" (UID: "711fbf65-f112-4da8-8475-534064efe051"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:50:17 crc kubenswrapper[4698]: I0127 14:50:17.057481 4698 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/711fbf65-f112-4da8-8475-534064efe051-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 27 14:50:17 crc kubenswrapper[4698]: I0127 14:50:17.057518 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjhln\" (UniqueName: \"kubernetes.io/projected/711fbf65-f112-4da8-8475-534064efe051-kube-api-access-pjhln\") on node \"crc\" DevicePath \"\"" Jan 27 14:50:17 crc kubenswrapper[4698]: I0127 14:50:17.057532 4698 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/711fbf65-f112-4da8-8475-534064efe051-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 27 14:50:17 crc kubenswrapper[4698]: I0127 14:50:17.057543 4698 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/711fbf65-f112-4da8-8475-534064efe051-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 27 14:50:17 crc kubenswrapper[4698]: I0127 14:50:17.071876 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/711fbf65-f112-4da8-8475-534064efe051-config" (OuterVolumeSpecName: "config") pod "711fbf65-f112-4da8-8475-534064efe051" (UID: "711fbf65-f112-4da8-8475-534064efe051"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:50:17 crc kubenswrapper[4698]: I0127 14:50:17.158955 4698 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/711fbf65-f112-4da8-8475-534064efe051-config\") on node \"crc\" DevicePath \"\"" Jan 27 14:50:17 crc kubenswrapper[4698]: I0127 14:50:17.295803 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Jan 27 14:50:17 crc kubenswrapper[4698]: I0127 14:50:17.478190 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-create-zw54k"] Jan 27 14:50:17 crc kubenswrapper[4698]: E0127 14:50:17.478605 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="711fbf65-f112-4da8-8475-534064efe051" containerName="init" Jan 27 14:50:17 crc kubenswrapper[4698]: I0127 14:50:17.478618 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="711fbf65-f112-4da8-8475-534064efe051" containerName="init" Jan 27 14:50:17 crc kubenswrapper[4698]: E0127 14:50:17.478647 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="711fbf65-f112-4da8-8475-534064efe051" containerName="dnsmasq-dns" Jan 27 14:50:17 crc kubenswrapper[4698]: I0127 14:50:17.478653 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="711fbf65-f112-4da8-8475-534064efe051" containerName="dnsmasq-dns" Jan 27 14:50:17 crc kubenswrapper[4698]: I0127 14:50:17.478801 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="711fbf65-f112-4da8-8475-534064efe051" containerName="dnsmasq-dns" Jan 27 14:50:17 crc kubenswrapper[4698]: I0127 14:50:17.479328 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-zw54k" Jan 27 14:50:17 crc kubenswrapper[4698]: I0127 14:50:17.490795 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-98a8-account-create-update-5nl5q"] Jan 27 14:50:17 crc kubenswrapper[4698]: I0127 14:50:17.492125 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-98a8-account-create-update-5nl5q" Jan 27 14:50:17 crc kubenswrapper[4698]: I0127 14:50:17.493824 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-db-secret" Jan 27 14:50:17 crc kubenswrapper[4698]: I0127 14:50:17.507165 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-zw54k"] Jan 27 14:50:17 crc kubenswrapper[4698]: I0127 14:50:17.515476 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-98a8-account-create-update-5nl5q"] Jan 27 14:50:17 crc kubenswrapper[4698]: I0127 14:50:17.564807 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-88rh6\" (UniqueName: \"kubernetes.io/projected/00b5af26-92b2-461a-ad12-15c050aae00e-kube-api-access-88rh6\") pod \"barbican-db-create-zw54k\" (UID: \"00b5af26-92b2-461a-ad12-15c050aae00e\") " pod="openstack/barbican-db-create-zw54k" Jan 27 14:50:17 crc kubenswrapper[4698]: I0127 14:50:17.564847 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/00b5af26-92b2-461a-ad12-15c050aae00e-operator-scripts\") pod \"barbican-db-create-zw54k\" (UID: \"00b5af26-92b2-461a-ad12-15c050aae00e\") " pod="openstack/barbican-db-create-zw54k" Jan 27 14:50:17 crc kubenswrapper[4698]: I0127 14:50:17.565035 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-notifications-server-0" Jan 27 14:50:17 crc kubenswrapper[4698]: I0127 14:50:17.604405 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-create-f2vqn"] Jan 27 14:50:17 crc kubenswrapper[4698]: I0127 14:50:17.605663 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-f2vqn" Jan 27 14:50:17 crc kubenswrapper[4698]: I0127 14:50:17.628810 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-f2vqn"] Jan 27 14:50:17 crc kubenswrapper[4698]: I0127 14:50:17.647988 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-f802-account-create-update-89d8r"] Jan 27 14:50:17 crc kubenswrapper[4698]: I0127 14:50:17.661576 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-f802-account-create-update-89d8r" Jan 27 14:50:17 crc kubenswrapper[4698]: I0127 14:50:17.670609 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/00b5af26-92b2-461a-ad12-15c050aae00e-operator-scripts\") pod \"barbican-db-create-zw54k\" (UID: \"00b5af26-92b2-461a-ad12-15c050aae00e\") " pod="openstack/barbican-db-create-zw54k" Jan 27 14:50:17 crc kubenswrapper[4698]: I0127 14:50:17.670896 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-88rh6\" (UniqueName: \"kubernetes.io/projected/00b5af26-92b2-461a-ad12-15c050aae00e-kube-api-access-88rh6\") pod \"barbican-db-create-zw54k\" (UID: \"00b5af26-92b2-461a-ad12-15c050aae00e\") " pod="openstack/barbican-db-create-zw54k" Jan 27 14:50:17 crc kubenswrapper[4698]: I0127 14:50:17.671060 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-96wjb\" (UniqueName: \"kubernetes.io/projected/7396dcad-4ef6-441e-bd4d-f04201b73baf-kube-api-access-96wjb\") pod \"barbican-98a8-account-create-update-5nl5q\" (UID: \"7396dcad-4ef6-441e-bd4d-f04201b73baf\") " pod="openstack/barbican-98a8-account-create-update-5nl5q" Jan 27 14:50:17 crc kubenswrapper[4698]: I0127 14:50:17.671187 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7396dcad-4ef6-441e-bd4d-f04201b73baf-operator-scripts\") pod \"barbican-98a8-account-create-update-5nl5q\" (UID: \"7396dcad-4ef6-441e-bd4d-f04201b73baf\") " pod="openstack/barbican-98a8-account-create-update-5nl5q" Jan 27 14:50:17 crc kubenswrapper[4698]: I0127 14:50:17.672886 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/00b5af26-92b2-461a-ad12-15c050aae00e-operator-scripts\") pod \"barbican-db-create-zw54k\" (UID: \"00b5af26-92b2-461a-ad12-15c050aae00e\") " pod="openstack/barbican-db-create-zw54k" Jan 27 14:50:17 crc kubenswrapper[4698]: I0127 14:50:17.682738 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-f802-account-create-update-89d8r"] Jan 27 14:50:17 crc kubenswrapper[4698]: I0127 14:50:17.697437 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-db-secret" Jan 27 14:50:17 crc kubenswrapper[4698]: I0127 14:50:17.736505 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-88rh6\" (UniqueName: \"kubernetes.io/projected/00b5af26-92b2-461a-ad12-15c050aae00e-kube-api-access-88rh6\") pod \"barbican-db-create-zw54k\" (UID: \"00b5af26-92b2-461a-ad12-15c050aae00e\") " pod="openstack/barbican-db-create-zw54k" Jan 27 14:50:17 crc kubenswrapper[4698]: I0127 14:50:17.781935 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cf5234a1-c705-4a80-8992-05e2ce515ff6-operator-scripts\") pod \"cinder-f802-account-create-update-89d8r\" (UID: \"cf5234a1-c705-4a80-8992-05e2ce515ff6\") " pod="openstack/cinder-f802-account-create-update-89d8r" Jan 27 14:50:17 crc kubenswrapper[4698]: I0127 14:50:17.782032 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-96wjb\" (UniqueName: \"kubernetes.io/projected/7396dcad-4ef6-441e-bd4d-f04201b73baf-kube-api-access-96wjb\") 
pod \"barbican-98a8-account-create-update-5nl5q\" (UID: \"7396dcad-4ef6-441e-bd4d-f04201b73baf\") " pod="openstack/barbican-98a8-account-create-update-5nl5q" Jan 27 14:50:17 crc kubenswrapper[4698]: I0127 14:50:17.782071 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-66kdn\" (UniqueName: \"kubernetes.io/projected/cf5234a1-c705-4a80-8992-05e2ce515ff6-kube-api-access-66kdn\") pod \"cinder-f802-account-create-update-89d8r\" (UID: \"cf5234a1-c705-4a80-8992-05e2ce515ff6\") " pod="openstack/cinder-f802-account-create-update-89d8r" Jan 27 14:50:17 crc kubenswrapper[4698]: I0127 14:50:17.782135 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1ac257c8-4aeb-4926-99c2-52ea6d3093f6-operator-scripts\") pod \"cinder-db-create-f2vqn\" (UID: \"1ac257c8-4aeb-4926-99c2-52ea6d3093f6\") " pod="openstack/cinder-db-create-f2vqn" Jan 27 14:50:17 crc kubenswrapper[4698]: I0127 14:50:17.782171 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7396dcad-4ef6-441e-bd4d-f04201b73baf-operator-scripts\") pod \"barbican-98a8-account-create-update-5nl5q\" (UID: \"7396dcad-4ef6-441e-bd4d-f04201b73baf\") " pod="openstack/barbican-98a8-account-create-update-5nl5q" Jan 27 14:50:17 crc kubenswrapper[4698]: I0127 14:50:17.782213 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wx52s\" (UniqueName: \"kubernetes.io/projected/1ac257c8-4aeb-4926-99c2-52ea6d3093f6-kube-api-access-wx52s\") pod \"cinder-db-create-f2vqn\" (UID: \"1ac257c8-4aeb-4926-99c2-52ea6d3093f6\") " pod="openstack/cinder-db-create-f2vqn" Jan 27 14:50:17 crc kubenswrapper[4698]: I0127 14:50:17.783078 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7396dcad-4ef6-441e-bd4d-f04201b73baf-operator-scripts\") pod \"barbican-98a8-account-create-update-5nl5q\" (UID: \"7396dcad-4ef6-441e-bd4d-f04201b73baf\") " pod="openstack/barbican-98a8-account-create-update-5nl5q" Jan 27 14:50:17 crc kubenswrapper[4698]: I0127 14:50:17.804522 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-zw54k" Jan 27 14:50:17 crc kubenswrapper[4698]: I0127 14:50:17.804939 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-54f87864f5-svgk4" event={"ID":"711fbf65-f112-4da8-8475-534064efe051","Type":"ContainerDied","Data":"b3d94900930c077d1d4e8b4241dc9ec66f7fe1d63093f4752c0400af89e49ea7"} Jan 27 14:50:17 crc kubenswrapper[4698]: I0127 14:50:17.805010 4698 scope.go:117] "RemoveContainer" containerID="63ca572b4b74f0f96d4538afa46128810ee1b526e454d7a6bd7f0a30c91927bd" Jan 27 14:50:17 crc kubenswrapper[4698]: I0127 14:50:17.805270 4698 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-54f87864f5-svgk4" Jan 27 14:50:17 crc kubenswrapper[4698]: I0127 14:50:17.854851 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-96wjb\" (UniqueName: \"kubernetes.io/projected/7396dcad-4ef6-441e-bd4d-f04201b73baf-kube-api-access-96wjb\") pod \"barbican-98a8-account-create-update-5nl5q\" (UID: \"7396dcad-4ef6-441e-bd4d-f04201b73baf\") " pod="openstack/barbican-98a8-account-create-update-5nl5q" Jan 27 14:50:17 crc kubenswrapper[4698]: I0127 14:50:17.887014 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cf5234a1-c705-4a80-8992-05e2ce515ff6-operator-scripts\") pod \"cinder-f802-account-create-update-89d8r\" (UID: \"cf5234a1-c705-4a80-8992-05e2ce515ff6\") " pod="openstack/cinder-f802-account-create-update-89d8r" Jan 27 14:50:17 crc kubenswrapper[4698]: I0127 14:50:17.887094 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-66kdn\" (UniqueName: \"kubernetes.io/projected/cf5234a1-c705-4a80-8992-05e2ce515ff6-kube-api-access-66kdn\") pod \"cinder-f802-account-create-update-89d8r\" (UID: \"cf5234a1-c705-4a80-8992-05e2ce515ff6\") " pod="openstack/cinder-f802-account-create-update-89d8r" Jan 27 14:50:17 crc kubenswrapper[4698]: I0127 14:50:17.887119 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1ac257c8-4aeb-4926-99c2-52ea6d3093f6-operator-scripts\") pod \"cinder-db-create-f2vqn\" (UID: \"1ac257c8-4aeb-4926-99c2-52ea6d3093f6\") " pod="openstack/cinder-db-create-f2vqn" Jan 27 14:50:17 crc kubenswrapper[4698]: I0127 14:50:17.887151 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wx52s\" (UniqueName: \"kubernetes.io/projected/1ac257c8-4aeb-4926-99c2-52ea6d3093f6-kube-api-access-wx52s\") pod \"cinder-db-create-f2vqn\" (UID: \"1ac257c8-4aeb-4926-99c2-52ea6d3093f6\") " pod="openstack/cinder-db-create-f2vqn" Jan 27 14:50:17 crc kubenswrapper[4698]: I0127 14:50:17.888118 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cf5234a1-c705-4a80-8992-05e2ce515ff6-operator-scripts\") pod \"cinder-f802-account-create-update-89d8r\" (UID: \"cf5234a1-c705-4a80-8992-05e2ce515ff6\") " pod="openstack/cinder-f802-account-create-update-89d8r" Jan 27 14:50:17 crc kubenswrapper[4698]: I0127 14:50:17.888582 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1ac257c8-4aeb-4926-99c2-52ea6d3093f6-operator-scripts\") pod \"cinder-db-create-f2vqn\" (UID: \"1ac257c8-4aeb-4926-99c2-52ea6d3093f6\") " pod="openstack/cinder-db-create-f2vqn" Jan 27 14:50:17 crc kubenswrapper[4698]: I0127 14:50:17.893984 4698 scope.go:117] "RemoveContainer" containerID="d638d8c72115155dcf1623d47e5baa258ebf8ca6c9c9cf4c0f14931d80eec58e" Jan 27 14:50:17 crc kubenswrapper[4698]: I0127 14:50:17.941277 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-66kdn\" (UniqueName: \"kubernetes.io/projected/cf5234a1-c705-4a80-8992-05e2ce515ff6-kube-api-access-66kdn\") pod \"cinder-f802-account-create-update-89d8r\" (UID: \"cf5234a1-c705-4a80-8992-05e2ce515ff6\") " pod="openstack/cinder-f802-account-create-update-89d8r" Jan 27 14:50:17 crc kubenswrapper[4698]: I0127 14:50:17.958334 4698 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wx52s\" (UniqueName: \"kubernetes.io/projected/1ac257c8-4aeb-4926-99c2-52ea6d3093f6-kube-api-access-wx52s\") pod \"cinder-db-create-f2vqn\" (UID: \"1ac257c8-4aeb-4926-99c2-52ea6d3093f6\") " pod="openstack/cinder-db-create-f2vqn" Jan 27 14:50:17 crc kubenswrapper[4698]: I0127 14:50:17.961770 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-sync-9xlkv"] Jan 27 14:50:17 crc kubenswrapper[4698]: I0127 14:50:17.975409 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-9xlkv" Jan 27 14:50:17 crc kubenswrapper[4698]: I0127 14:50:17.979264 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 27 14:50:17 crc kubenswrapper[4698]: I0127 14:50:17.990170 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-f802-account-create-update-89d8r" Jan 27 14:50:17 crc kubenswrapper[4698]: I0127 14:50:17.991119 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-nk52t" Jan 27 14:50:17 crc kubenswrapper[4698]: I0127 14:50:17.991674 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 27 14:50:18 crc kubenswrapper[4698]: I0127 14:50:18.000595 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 27 14:50:18 crc kubenswrapper[4698]: I0127 14:50:18.014580 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-9xlkv"] Jan 27 14:50:18 crc kubenswrapper[4698]: I0127 14:50:18.096975 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4fa2454d-726a-4585-950a-336d57316b69-combined-ca-bundle\") pod \"keystone-db-sync-9xlkv\" (UID: \"4fa2454d-726a-4585-950a-336d57316b69\") " pod="openstack/keystone-db-sync-9xlkv" Jan 27 14:50:18 crc kubenswrapper[4698]: I0127 14:50:18.097476 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4fa2454d-726a-4585-950a-336d57316b69-config-data\") pod \"keystone-db-sync-9xlkv\" (UID: \"4fa2454d-726a-4585-950a-336d57316b69\") " pod="openstack/keystone-db-sync-9xlkv" Jan 27 14:50:18 crc kubenswrapper[4698]: I0127 14:50:18.097507 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9t9v9\" (UniqueName: \"kubernetes.io/projected/4fa2454d-726a-4585-950a-336d57316b69-kube-api-access-9t9v9\") pod \"keystone-db-sync-9xlkv\" (UID: \"4fa2454d-726a-4585-950a-336d57316b69\") " pod="openstack/keystone-db-sync-9xlkv" Jan 27 14:50:18 crc kubenswrapper[4698]: I0127 14:50:18.098493 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-54f87864f5-svgk4"] Jan 27 14:50:18 crc kubenswrapper[4698]: I0127 14:50:18.120073 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-98a8-account-create-update-5nl5q" Jan 27 14:50:18 crc kubenswrapper[4698]: I0127 14:50:18.152058 4698 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-54f87864f5-svgk4"] Jan 27 14:50:18 crc kubenswrapper[4698]: I0127 14:50:18.203709 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4fa2454d-726a-4585-950a-336d57316b69-config-data\") pod \"keystone-db-sync-9xlkv\" (UID: \"4fa2454d-726a-4585-950a-336d57316b69\") " pod="openstack/keystone-db-sync-9xlkv" Jan 27 14:50:18 crc kubenswrapper[4698]: I0127 14:50:18.203760 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9t9v9\" (UniqueName: \"kubernetes.io/projected/4fa2454d-726a-4585-950a-336d57316b69-kube-api-access-9t9v9\") pod \"keystone-db-sync-9xlkv\" (UID: \"4fa2454d-726a-4585-950a-336d57316b69\") " pod="openstack/keystone-db-sync-9xlkv" Jan 27 14:50:18 crc kubenswrapper[4698]: I0127 14:50:18.203853 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4fa2454d-726a-4585-950a-336d57316b69-combined-ca-bundle\") pod \"keystone-db-sync-9xlkv\" (UID: \"4fa2454d-726a-4585-950a-336d57316b69\") " pod="openstack/keystone-db-sync-9xlkv" Jan 27 14:50:18 crc kubenswrapper[4698]: I0127 14:50:18.218444 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4fa2454d-726a-4585-950a-336d57316b69-combined-ca-bundle\") pod \"keystone-db-sync-9xlkv\" (UID: \"4fa2454d-726a-4585-950a-336d57316b69\") " pod="openstack/keystone-db-sync-9xlkv" Jan 27 14:50:18 crc kubenswrapper[4698]: I0127 14:50:18.236506 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4fa2454d-726a-4585-950a-336d57316b69-config-data\") pod \"keystone-db-sync-9xlkv\" (UID: \"4fa2454d-726a-4585-950a-336d57316b69\") " pod="openstack/keystone-db-sync-9xlkv" Jan 27 14:50:18 crc kubenswrapper[4698]: I0127 14:50:18.242021 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-f2vqn" Jan 27 14:50:18 crc kubenswrapper[4698]: I0127 14:50:18.249396 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9t9v9\" (UniqueName: \"kubernetes.io/projected/4fa2454d-726a-4585-950a-336d57316b69-kube-api-access-9t9v9\") pod \"keystone-db-sync-9xlkv\" (UID: \"4fa2454d-726a-4585-950a-336d57316b69\") " pod="openstack/keystone-db-sync-9xlkv" Jan 27 14:50:18 crc kubenswrapper[4698]: I0127 14:50:18.378708 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-9xlkv" Jan 27 14:50:18 crc kubenswrapper[4698]: I0127 14:50:18.701112 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-zw54k"] Jan 27 14:50:18 crc kubenswrapper[4698]: I0127 14:50:18.820709 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-zw54k" event={"ID":"00b5af26-92b2-461a-ad12-15c050aae00e","Type":"ContainerStarted","Data":"97b560a9a1584284e2d2d7a880d5ebaef9def6029c4b8a846d65fe5c692973ae"} Jan 27 14:50:18 crc kubenswrapper[4698]: I0127 14:50:18.891782 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-f2vqn"] Jan 27 14:50:18 crc kubenswrapper[4698]: I0127 14:50:18.913032 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-98a8-account-create-update-5nl5q"] Jan 27 14:50:18 crc kubenswrapper[4698]: W0127 14:50:18.920781 4698 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7396dcad_4ef6_441e_bd4d_f04201b73baf.slice/crio-d708987fcf852716039e11e3bcc413e6d881ba1bbc389e5a261cc5ee876f8ac7 WatchSource:0}: Error finding container d708987fcf852716039e11e3bcc413e6d881ba1bbc389e5a261cc5ee876f8ac7: Status 404 returned error can't find the container with id d708987fcf852716039e11e3bcc413e6d881ba1bbc389e5a261cc5ee876f8ac7 Jan 27 14:50:19 crc kubenswrapper[4698]: I0127 14:50:19.027269 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="711fbf65-f112-4da8-8475-534064efe051" path="/var/lib/kubelet/pods/711fbf65-f112-4da8-8475-534064efe051/volumes" Jan 27 14:50:19 crc kubenswrapper[4698]: I0127 14:50:19.028313 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-f802-account-create-update-89d8r"] Jan 27 14:50:19 crc kubenswrapper[4698]: I0127 14:50:19.087548 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-9xlkv"] Jan 27 14:50:19 crc kubenswrapper[4698]: I0127 14:50:19.830532 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-f802-account-create-update-89d8r" event={"ID":"cf5234a1-c705-4a80-8992-05e2ce515ff6","Type":"ContainerStarted","Data":"265727278fbf945de722537b1404e08bbc0b9305440983e2cd7d930476a4c6f7"} Jan 27 14:50:19 crc kubenswrapper[4698]: I0127 14:50:19.830572 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-f802-account-create-update-89d8r" event={"ID":"cf5234a1-c705-4a80-8992-05e2ce515ff6","Type":"ContainerStarted","Data":"bbfc768b8a04916d46d1641cb3d3abb3e0114525c152a182e7a7ac6ce6781edb"} Jan 27 14:50:19 crc kubenswrapper[4698]: I0127 14:50:19.832962 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-zw54k" event={"ID":"00b5af26-92b2-461a-ad12-15c050aae00e","Type":"ContainerStarted","Data":"b54878f3ebf22d757737dff657bec9044e8f4ca80e203a84ba7cf9857a970d5e"} Jan 27 14:50:19 crc kubenswrapper[4698]: I0127 14:50:19.835125 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-9xlkv" event={"ID":"4fa2454d-726a-4585-950a-336d57316b69","Type":"ContainerStarted","Data":"c541c87b3576a3fcec3cc6f175e6bd220cafe8467bec7b10ad376ffeb6262c90"} Jan 27 14:50:19 crc kubenswrapper[4698]: I0127 14:50:19.836979 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-f2vqn" 
event={"ID":"1ac257c8-4aeb-4926-99c2-52ea6d3093f6","Type":"ContainerStarted","Data":"9373c997f1b4ae436a2d2b8841f64218301be610c6cbf6e3b09e6dacba0a758f"} Jan 27 14:50:19 crc kubenswrapper[4698]: I0127 14:50:19.837020 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-f2vqn" event={"ID":"1ac257c8-4aeb-4926-99c2-52ea6d3093f6","Type":"ContainerStarted","Data":"f5814fbd179b6e92134e3b1adb19dd378e850dacaaf7bc83fd9e6092c854e5bc"} Jan 27 14:50:19 crc kubenswrapper[4698]: I0127 14:50:19.842123 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-98a8-account-create-update-5nl5q" event={"ID":"7396dcad-4ef6-441e-bd4d-f04201b73baf","Type":"ContainerStarted","Data":"d8f9cbeda0f6b5608059748e28d347736e53d9b5f18b6db6db9cfaeff43441e9"} Jan 27 14:50:19 crc kubenswrapper[4698]: I0127 14:50:19.842179 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-98a8-account-create-update-5nl5q" event={"ID":"7396dcad-4ef6-441e-bd4d-f04201b73baf","Type":"ContainerStarted","Data":"d708987fcf852716039e11e3bcc413e6d881ba1bbc389e5a261cc5ee876f8ac7"} Jan 27 14:50:19 crc kubenswrapper[4698]: I0127 14:50:19.852692 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-f802-account-create-update-89d8r" podStartSLOduration=2.852674429 podStartE2EDuration="2.852674429s" podCreationTimestamp="2026-01-27 14:50:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:50:19.850467211 +0000 UTC m=+1275.527244676" watchObservedRunningTime="2026-01-27 14:50:19.852674429 +0000 UTC m=+1275.529451894" Jan 27 14:50:19 crc kubenswrapper[4698]: I0127 14:50:19.871850 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-create-zw54k" podStartSLOduration=2.871827064 podStartE2EDuration="2.871827064s" podCreationTimestamp="2026-01-27 14:50:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:50:19.863777791 +0000 UTC m=+1275.540555286" watchObservedRunningTime="2026-01-27 14:50:19.871827064 +0000 UTC m=+1275.548604529" Jan 27 14:50:19 crc kubenswrapper[4698]: I0127 14:50:19.885184 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-create-f2vqn" podStartSLOduration=2.885158085 podStartE2EDuration="2.885158085s" podCreationTimestamp="2026-01-27 14:50:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:50:19.879120516 +0000 UTC m=+1275.555897991" watchObservedRunningTime="2026-01-27 14:50:19.885158085 +0000 UTC m=+1275.561935550" Jan 27 14:50:19 crc kubenswrapper[4698]: I0127 14:50:19.900034 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-98a8-account-create-update-5nl5q" podStartSLOduration=2.900013326 podStartE2EDuration="2.900013326s" podCreationTimestamp="2026-01-27 14:50:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:50:19.895731923 +0000 UTC m=+1275.572509408" watchObservedRunningTime="2026-01-27 14:50:19.900013326 +0000 UTC m=+1275.576790791" Jan 27 14:50:20 crc kubenswrapper[4698]: I0127 14:50:20.641284 4698 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack/glance-db-create-nnmcr"] Jan 27 14:50:20 crc kubenswrapper[4698]: I0127 14:50:20.643150 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-nnmcr" Jan 27 14:50:20 crc kubenswrapper[4698]: I0127 14:50:20.661914 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-nnmcr"] Jan 27 14:50:20 crc kubenswrapper[4698]: I0127 14:50:20.739799 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/watcher-db-sync-gdttx"] Jan 27 14:50:20 crc kubenswrapper[4698]: I0127 14:50:20.741466 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-db-sync-gdttx" Jan 27 14:50:20 crc kubenswrapper[4698]: I0127 14:50:20.748221 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-watcher-dockercfg-zb6wf" Jan 27 14:50:20 crc kubenswrapper[4698]: I0127 14:50:20.748461 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-config-data" Jan 27 14:50:20 crc kubenswrapper[4698]: I0127 14:50:20.752782 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-db-sync-gdttx"] Jan 27 14:50:20 crc kubenswrapper[4698]: I0127 14:50:20.807188 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d082b51b-36d1-4c62-ad12-024337e68479-operator-scripts\") pod \"glance-db-create-nnmcr\" (UID: \"d082b51b-36d1-4c62-ad12-024337e68479\") " pod="openstack/glance-db-create-nnmcr" Jan 27 14:50:20 crc kubenswrapper[4698]: I0127 14:50:20.807388 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x6r96\" (UniqueName: \"kubernetes.io/projected/d082b51b-36d1-4c62-ad12-024337e68479-kube-api-access-x6r96\") pod \"glance-db-create-nnmcr\" (UID: \"d082b51b-36d1-4c62-ad12-024337e68479\") " pod="openstack/glance-db-create-nnmcr" Jan 27 14:50:20 crc kubenswrapper[4698]: I0127 14:50:20.861273 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-create-zzlx8"] Jan 27 14:50:20 crc kubenswrapper[4698]: I0127 14:50:20.862790 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-create-zzlx8" Jan 27 14:50:20 crc kubenswrapper[4698]: I0127 14:50:20.873943 4698 generic.go:334] "Generic (PLEG): container finished" podID="00b5af26-92b2-461a-ad12-15c050aae00e" containerID="b54878f3ebf22d757737dff657bec9044e8f4ca80e203a84ba7cf9857a970d5e" exitCode=0 Jan 27 14:50:20 crc kubenswrapper[4698]: I0127 14:50:20.874046 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-zw54k" event={"ID":"00b5af26-92b2-461a-ad12-15c050aae00e","Type":"ContainerDied","Data":"b54878f3ebf22d757737dff657bec9044e8f4ca80e203a84ba7cf9857a970d5e"} Jan 27 14:50:20 crc kubenswrapper[4698]: I0127 14:50:20.894021 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-zzlx8"] Jan 27 14:50:20 crc kubenswrapper[4698]: I0127 14:50:20.909750 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x6r96\" (UniqueName: \"kubernetes.io/projected/d082b51b-36d1-4c62-ad12-024337e68479-kube-api-access-x6r96\") pod \"glance-db-create-nnmcr\" (UID: \"d082b51b-36d1-4c62-ad12-024337e68479\") " pod="openstack/glance-db-create-nnmcr" Jan 27 14:50:20 crc kubenswrapper[4698]: I0127 14:50:20.909818 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d5b86bc8-7f21-4d28-a94d-56ec54d13cb5-config-data\") pod \"watcher-db-sync-gdttx\" (UID: \"d5b86bc8-7f21-4d28-a94d-56ec54d13cb5\") " pod="openstack/watcher-db-sync-gdttx" Jan 27 14:50:20 crc kubenswrapper[4698]: I0127 14:50:20.909844 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d082b51b-36d1-4c62-ad12-024337e68479-operator-scripts\") pod \"glance-db-create-nnmcr\" (UID: \"d082b51b-36d1-4c62-ad12-024337e68479\") " pod="openstack/glance-db-create-nnmcr" Jan 27 14:50:20 crc kubenswrapper[4698]: I0127 14:50:20.909874 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zmsss\" (UniqueName: \"kubernetes.io/projected/d5b86bc8-7f21-4d28-a94d-56ec54d13cb5-kube-api-access-zmsss\") pod \"watcher-db-sync-gdttx\" (UID: \"d5b86bc8-7f21-4d28-a94d-56ec54d13cb5\") " pod="openstack/watcher-db-sync-gdttx" Jan 27 14:50:20 crc kubenswrapper[4698]: I0127 14:50:20.909897 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/d5b86bc8-7f21-4d28-a94d-56ec54d13cb5-db-sync-config-data\") pod \"watcher-db-sync-gdttx\" (UID: \"d5b86bc8-7f21-4d28-a94d-56ec54d13cb5\") " pod="openstack/watcher-db-sync-gdttx" Jan 27 14:50:20 crc kubenswrapper[4698]: I0127 14:50:20.909975 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d5b86bc8-7f21-4d28-a94d-56ec54d13cb5-combined-ca-bundle\") pod \"watcher-db-sync-gdttx\" (UID: \"d5b86bc8-7f21-4d28-a94d-56ec54d13cb5\") " pod="openstack/watcher-db-sync-gdttx" Jan 27 14:50:20 crc kubenswrapper[4698]: I0127 14:50:20.911064 4698 generic.go:334] "Generic (PLEG): container finished" podID="1ac257c8-4aeb-4926-99c2-52ea6d3093f6" containerID="9373c997f1b4ae436a2d2b8841f64218301be610c6cbf6e3b09e6dacba0a758f" exitCode=0 Jan 27 14:50:20 crc kubenswrapper[4698]: I0127 14:50:20.911112 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d082b51b-36d1-4c62-ad12-024337e68479-operator-scripts\") pod \"glance-db-create-nnmcr\" (UID: \"d082b51b-36d1-4c62-ad12-024337e68479\") " pod="openstack/glance-db-create-nnmcr" Jan 27 14:50:20 crc kubenswrapper[4698]: I0127 14:50:20.911153 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-f2vqn" event={"ID":"1ac257c8-4aeb-4926-99c2-52ea6d3093f6","Type":"ContainerDied","Data":"9373c997f1b4ae436a2d2b8841f64218301be610c6cbf6e3b09e6dacba0a758f"} Jan 27 14:50:20 crc kubenswrapper[4698]: I0127 14:50:20.917783 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-5711-account-create-update-th2wp"] Jan 27 14:50:20 crc kubenswrapper[4698]: I0127 14:50:20.919161 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-5711-account-create-update-th2wp" Jan 27 14:50:20 crc kubenswrapper[4698]: I0127 14:50:20.926728 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-5711-account-create-update-th2wp"] Jan 27 14:50:20 crc kubenswrapper[4698]: I0127 14:50:20.942049 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-db-secret" Jan 27 14:50:20 crc kubenswrapper[4698]: I0127 14:50:20.949900 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x6r96\" (UniqueName: \"kubernetes.io/projected/d082b51b-36d1-4c62-ad12-024337e68479-kube-api-access-x6r96\") pod \"glance-db-create-nnmcr\" (UID: \"d082b51b-36d1-4c62-ad12-024337e68479\") " pod="openstack/glance-db-create-nnmcr" Jan 27 14:50:20 crc kubenswrapper[4698]: I0127 14:50:20.996221 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-nnmcr" Jan 27 14:50:21 crc kubenswrapper[4698]: I0127 14:50:21.011673 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/d5b86bc8-7f21-4d28-a94d-56ec54d13cb5-db-sync-config-data\") pod \"watcher-db-sync-gdttx\" (UID: \"d5b86bc8-7f21-4d28-a94d-56ec54d13cb5\") " pod="openstack/watcher-db-sync-gdttx" Jan 27 14:50:21 crc kubenswrapper[4698]: I0127 14:50:21.011737 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7685ed13-4e06-4052-a2e4-310e64a49a53-operator-scripts\") pod \"neutron-db-create-zzlx8\" (UID: \"7685ed13-4e06-4052-a2e4-310e64a49a53\") " pod="openstack/neutron-db-create-zzlx8" Jan 27 14:50:21 crc kubenswrapper[4698]: I0127 14:50:21.011794 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r7z8d\" (UniqueName: \"kubernetes.io/projected/7685ed13-4e06-4052-a2e4-310e64a49a53-kube-api-access-r7z8d\") pod \"neutron-db-create-zzlx8\" (UID: \"7685ed13-4e06-4052-a2e4-310e64a49a53\") " pod="openstack/neutron-db-create-zzlx8" Jan 27 14:50:21 crc kubenswrapper[4698]: I0127 14:50:21.011860 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d5b86bc8-7f21-4d28-a94d-56ec54d13cb5-combined-ca-bundle\") pod \"watcher-db-sync-gdttx\" (UID: \"d5b86bc8-7f21-4d28-a94d-56ec54d13cb5\") " pod="openstack/watcher-db-sync-gdttx" Jan 27 14:50:21 crc kubenswrapper[4698]: I0127 14:50:21.011935 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/745bc9c9-c169-47c4-90aa-935671bc12f2-operator-scripts\") pod \"glance-5711-account-create-update-th2wp\" (UID: \"745bc9c9-c169-47c4-90aa-935671bc12f2\") " pod="openstack/glance-5711-account-create-update-th2wp" Jan 27 14:50:21 crc kubenswrapper[4698]: I0127 14:50:21.011959 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p4c26\" (UniqueName: \"kubernetes.io/projected/745bc9c9-c169-47c4-90aa-935671bc12f2-kube-api-access-p4c26\") pod \"glance-5711-account-create-update-th2wp\" (UID: \"745bc9c9-c169-47c4-90aa-935671bc12f2\") " pod="openstack/glance-5711-account-create-update-th2wp" Jan 27 14:50:21 crc kubenswrapper[4698]: I0127 14:50:21.012010 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d5b86bc8-7f21-4d28-a94d-56ec54d13cb5-config-data\") pod \"watcher-db-sync-gdttx\" (UID: \"d5b86bc8-7f21-4d28-a94d-56ec54d13cb5\") " pod="openstack/watcher-db-sync-gdttx" Jan 27 14:50:21 crc kubenswrapper[4698]: I0127 14:50:21.012077 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zmsss\" (UniqueName: \"kubernetes.io/projected/d5b86bc8-7f21-4d28-a94d-56ec54d13cb5-kube-api-access-zmsss\") pod \"watcher-db-sync-gdttx\" (UID: \"d5b86bc8-7f21-4d28-a94d-56ec54d13cb5\") " pod="openstack/watcher-db-sync-gdttx" Jan 27 14:50:21 crc kubenswrapper[4698]: I0127 14:50:21.034705 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/d5b86bc8-7f21-4d28-a94d-56ec54d13cb5-db-sync-config-data\") pod \"watcher-db-sync-gdttx\" (UID: \"d5b86bc8-7f21-4d28-a94d-56ec54d13cb5\") " pod="openstack/watcher-db-sync-gdttx" Jan 27 14:50:21 crc kubenswrapper[4698]: I0127 14:50:21.035503 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d5b86bc8-7f21-4d28-a94d-56ec54d13cb5-combined-ca-bundle\") pod \"watcher-db-sync-gdttx\" (UID: \"d5b86bc8-7f21-4d28-a94d-56ec54d13cb5\") " pod="openstack/watcher-db-sync-gdttx" Jan 27 14:50:21 crc kubenswrapper[4698]: I0127 14:50:21.050444 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zmsss\" (UniqueName: \"kubernetes.io/projected/d5b86bc8-7f21-4d28-a94d-56ec54d13cb5-kube-api-access-zmsss\") pod \"watcher-db-sync-gdttx\" (UID: \"d5b86bc8-7f21-4d28-a94d-56ec54d13cb5\") " pod="openstack/watcher-db-sync-gdttx" Jan 27 14:50:21 crc kubenswrapper[4698]: I0127 14:50:21.060836 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d5b86bc8-7f21-4d28-a94d-56ec54d13cb5-config-data\") pod \"watcher-db-sync-gdttx\" (UID: \"d5b86bc8-7f21-4d28-a94d-56ec54d13cb5\") " pod="openstack/watcher-db-sync-gdttx" Jan 27 14:50:21 crc kubenswrapper[4698]: I0127 14:50:21.064114 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-db-sync-gdttx" Jan 27 14:50:21 crc kubenswrapper[4698]: I0127 14:50:21.114723 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/745bc9c9-c169-47c4-90aa-935671bc12f2-operator-scripts\") pod \"glance-5711-account-create-update-th2wp\" (UID: \"745bc9c9-c169-47c4-90aa-935671bc12f2\") " pod="openstack/glance-5711-account-create-update-th2wp" Jan 27 14:50:21 crc kubenswrapper[4698]: I0127 14:50:21.116241 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p4c26\" (UniqueName: \"kubernetes.io/projected/745bc9c9-c169-47c4-90aa-935671bc12f2-kube-api-access-p4c26\") pod \"glance-5711-account-create-update-th2wp\" (UID: \"745bc9c9-c169-47c4-90aa-935671bc12f2\") " pod="openstack/glance-5711-account-create-update-th2wp" Jan 27 14:50:21 crc kubenswrapper[4698]: I0127 14:50:21.116326 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7685ed13-4e06-4052-a2e4-310e64a49a53-operator-scripts\") pod \"neutron-db-create-zzlx8\" (UID: \"7685ed13-4e06-4052-a2e4-310e64a49a53\") " pod="openstack/neutron-db-create-zzlx8" Jan 27 14:50:21 crc kubenswrapper[4698]: I0127 14:50:21.116380 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r7z8d\" (UniqueName: \"kubernetes.io/projected/7685ed13-4e06-4052-a2e4-310e64a49a53-kube-api-access-r7z8d\") pod \"neutron-db-create-zzlx8\" (UID: \"7685ed13-4e06-4052-a2e4-310e64a49a53\") " pod="openstack/neutron-db-create-zzlx8" Jan 27 14:50:21 crc kubenswrapper[4698]: I0127 14:50:21.117532 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7685ed13-4e06-4052-a2e4-310e64a49a53-operator-scripts\") pod \"neutron-db-create-zzlx8\" (UID: \"7685ed13-4e06-4052-a2e4-310e64a49a53\") " pod="openstack/neutron-db-create-zzlx8" Jan 27 14:50:21 crc kubenswrapper[4698]: I0127 14:50:21.124274 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/745bc9c9-c169-47c4-90aa-935671bc12f2-operator-scripts\") pod \"glance-5711-account-create-update-th2wp\" (UID: \"745bc9c9-c169-47c4-90aa-935671bc12f2\") " pod="openstack/glance-5711-account-create-update-th2wp" Jan 27 14:50:21 crc kubenswrapper[4698]: I0127 14:50:21.154693 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p4c26\" (UniqueName: \"kubernetes.io/projected/745bc9c9-c169-47c4-90aa-935671bc12f2-kube-api-access-p4c26\") pod \"glance-5711-account-create-update-th2wp\" (UID: \"745bc9c9-c169-47c4-90aa-935671bc12f2\") " pod="openstack/glance-5711-account-create-update-th2wp" Jan 27 14:50:21 crc kubenswrapper[4698]: I0127 14:50:21.176618 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r7z8d\" (UniqueName: \"kubernetes.io/projected/7685ed13-4e06-4052-a2e4-310e64a49a53-kube-api-access-r7z8d\") pod \"neutron-db-create-zzlx8\" (UID: \"7685ed13-4e06-4052-a2e4-310e64a49a53\") " pod="openstack/neutron-db-create-zzlx8" Jan 27 14:50:21 crc kubenswrapper[4698]: I0127 14:50:21.188702 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-ba53-account-create-update-k5pcr"] Jan 27 14:50:21 crc kubenswrapper[4698]: I0127 14:50:21.190711 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-ba53-account-create-update-k5pcr" Jan 27 14:50:21 crc kubenswrapper[4698]: I0127 14:50:21.199117 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-db-secret" Jan 27 14:50:21 crc kubenswrapper[4698]: I0127 14:50:21.212711 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-zzlx8" Jan 27 14:50:21 crc kubenswrapper[4698]: I0127 14:50:21.227101 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-ba53-account-create-update-k5pcr"] Jan 27 14:50:21 crc kubenswrapper[4698]: I0127 14:50:21.328116 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-5711-account-create-update-th2wp" Jan 27 14:50:21 crc kubenswrapper[4698]: I0127 14:50:21.329382 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cmh6j\" (UniqueName: \"kubernetes.io/projected/b8086d7c-021f-4bb7-892c-50f8b75d56a1-kube-api-access-cmh6j\") pod \"neutron-ba53-account-create-update-k5pcr\" (UID: \"b8086d7c-021f-4bb7-892c-50f8b75d56a1\") " pod="openstack/neutron-ba53-account-create-update-k5pcr" Jan 27 14:50:21 crc kubenswrapper[4698]: I0127 14:50:21.329517 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b8086d7c-021f-4bb7-892c-50f8b75d56a1-operator-scripts\") pod \"neutron-ba53-account-create-update-k5pcr\" (UID: \"b8086d7c-021f-4bb7-892c-50f8b75d56a1\") " pod="openstack/neutron-ba53-account-create-update-k5pcr" Jan 27 14:50:21 crc kubenswrapper[4698]: I0127 14:50:21.432215 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b8086d7c-021f-4bb7-892c-50f8b75d56a1-operator-scripts\") pod \"neutron-ba53-account-create-update-k5pcr\" (UID: \"b8086d7c-021f-4bb7-892c-50f8b75d56a1\") " pod="openstack/neutron-ba53-account-create-update-k5pcr" Jan 27 14:50:21 crc kubenswrapper[4698]: I0127 14:50:21.432768 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cmh6j\" (UniqueName: \"kubernetes.io/projected/b8086d7c-021f-4bb7-892c-50f8b75d56a1-kube-api-access-cmh6j\") pod \"neutron-ba53-account-create-update-k5pcr\" (UID: \"b8086d7c-021f-4bb7-892c-50f8b75d56a1\") " pod="openstack/neutron-ba53-account-create-update-k5pcr" Jan 27 14:50:21 crc kubenswrapper[4698]: I0127 14:50:21.434092 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b8086d7c-021f-4bb7-892c-50f8b75d56a1-operator-scripts\") pod \"neutron-ba53-account-create-update-k5pcr\" (UID: \"b8086d7c-021f-4bb7-892c-50f8b75d56a1\") " pod="openstack/neutron-ba53-account-create-update-k5pcr" Jan 27 14:50:21 crc kubenswrapper[4698]: I0127 14:50:21.543783 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cmh6j\" (UniqueName: \"kubernetes.io/projected/b8086d7c-021f-4bb7-892c-50f8b75d56a1-kube-api-access-cmh6j\") pod \"neutron-ba53-account-create-update-k5pcr\" (UID: \"b8086d7c-021f-4bb7-892c-50f8b75d56a1\") " pod="openstack/neutron-ba53-account-create-update-k5pcr" Jan 27 14:50:21 crc kubenswrapper[4698]: I0127 14:50:21.599443 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-ba53-account-create-update-k5pcr" Jan 27 14:50:21 crc kubenswrapper[4698]: I0127 14:50:21.868705 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-nnmcr"] Jan 27 14:50:21 crc kubenswrapper[4698]: I0127 14:50:21.914264 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-db-sync-gdttx"] Jan 27 14:50:21 crc kubenswrapper[4698]: I0127 14:50:21.937411 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-nnmcr" event={"ID":"d082b51b-36d1-4c62-ad12-024337e68479","Type":"ContainerStarted","Data":"87026cc82ddec6551ae07c2c96b0a8d1ef8cca05b2e5c4807acf5c6df59461d0"} Jan 27 14:50:21 crc kubenswrapper[4698]: I0127 14:50:21.974021 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-zzlx8"] Jan 27 14:50:22 crc kubenswrapper[4698]: I0127 14:50:22.103845 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-ba53-account-create-update-k5pcr"] Jan 27 14:50:22 crc kubenswrapper[4698]: I0127 14:50:22.124402 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-5711-account-create-update-th2wp"] Jan 27 14:50:22 crc kubenswrapper[4698]: W0127 14:50:22.131269 4698 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb8086d7c_021f_4bb7_892c_50f8b75d56a1.slice/crio-f91e67efab5a79d7653da4d3432beaac285188e5d404b2dac900ab2c0a4811ac WatchSource:0}: Error finding container f91e67efab5a79d7653da4d3432beaac285188e5d404b2dac900ab2c0a4811ac: Status 404 returned error can't find the container with id f91e67efab5a79d7653da4d3432beaac285188e5d404b2dac900ab2c0a4811ac Jan 27 14:50:22 crc kubenswrapper[4698]: W0127 14:50:22.196481 4698 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod745bc9c9_c169_47c4_90aa_935671bc12f2.slice/crio-fd9cf6a92222350a866a11864ef9dbd6a3e1cdf54320c64ba4e31aa1e24c1213 WatchSource:0}: Error finding container fd9cf6a92222350a866a11864ef9dbd6a3e1cdf54320c64ba4e31aa1e24c1213: Status 404 returned error can't find the container with id fd9cf6a92222350a866a11864ef9dbd6a3e1cdf54320c64ba4e31aa1e24c1213 Jan 27 14:50:22 crc kubenswrapper[4698]: I0127 14:50:22.435359 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-zw54k" Jan 27 14:50:22 crc kubenswrapper[4698]: I0127 14:50:22.460688 4698 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-create-f2vqn" Jan 27 14:50:22 crc kubenswrapper[4698]: I0127 14:50:22.563718 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-88rh6\" (UniqueName: \"kubernetes.io/projected/00b5af26-92b2-461a-ad12-15c050aae00e-kube-api-access-88rh6\") pod \"00b5af26-92b2-461a-ad12-15c050aae00e\" (UID: \"00b5af26-92b2-461a-ad12-15c050aae00e\") " Jan 27 14:50:22 crc kubenswrapper[4698]: I0127 14:50:22.563783 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wx52s\" (UniqueName: \"kubernetes.io/projected/1ac257c8-4aeb-4926-99c2-52ea6d3093f6-kube-api-access-wx52s\") pod \"1ac257c8-4aeb-4926-99c2-52ea6d3093f6\" (UID: \"1ac257c8-4aeb-4926-99c2-52ea6d3093f6\") " Jan 27 14:50:22 crc kubenswrapper[4698]: I0127 14:50:22.563927 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1ac257c8-4aeb-4926-99c2-52ea6d3093f6-operator-scripts\") pod \"1ac257c8-4aeb-4926-99c2-52ea6d3093f6\" (UID: \"1ac257c8-4aeb-4926-99c2-52ea6d3093f6\") " Jan 27 14:50:22 crc kubenswrapper[4698]: I0127 14:50:22.564016 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/00b5af26-92b2-461a-ad12-15c050aae00e-operator-scripts\") pod \"00b5af26-92b2-461a-ad12-15c050aae00e\" (UID: \"00b5af26-92b2-461a-ad12-15c050aae00e\") " Jan 27 14:50:22 crc kubenswrapper[4698]: I0127 14:50:22.565063 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/00b5af26-92b2-461a-ad12-15c050aae00e-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "00b5af26-92b2-461a-ad12-15c050aae00e" (UID: "00b5af26-92b2-461a-ad12-15c050aae00e"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:50:22 crc kubenswrapper[4698]: I0127 14:50:22.566110 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1ac257c8-4aeb-4926-99c2-52ea6d3093f6-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "1ac257c8-4aeb-4926-99c2-52ea6d3093f6" (UID: "1ac257c8-4aeb-4926-99c2-52ea6d3093f6"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:50:22 crc kubenswrapper[4698]: I0127 14:50:22.577326 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1ac257c8-4aeb-4926-99c2-52ea6d3093f6-kube-api-access-wx52s" (OuterVolumeSpecName: "kube-api-access-wx52s") pod "1ac257c8-4aeb-4926-99c2-52ea6d3093f6" (UID: "1ac257c8-4aeb-4926-99c2-52ea6d3093f6"). InnerVolumeSpecName "kube-api-access-wx52s". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:50:22 crc kubenswrapper[4698]: I0127 14:50:22.577847 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/00b5af26-92b2-461a-ad12-15c050aae00e-kube-api-access-88rh6" (OuterVolumeSpecName: "kube-api-access-88rh6") pod "00b5af26-92b2-461a-ad12-15c050aae00e" (UID: "00b5af26-92b2-461a-ad12-15c050aae00e"). InnerVolumeSpecName "kube-api-access-88rh6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:50:22 crc kubenswrapper[4698]: I0127 14:50:22.666163 4698 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1ac257c8-4aeb-4926-99c2-52ea6d3093f6-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 14:50:22 crc kubenswrapper[4698]: I0127 14:50:22.666209 4698 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/00b5af26-92b2-461a-ad12-15c050aae00e-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 14:50:22 crc kubenswrapper[4698]: I0127 14:50:22.666222 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-88rh6\" (UniqueName: \"kubernetes.io/projected/00b5af26-92b2-461a-ad12-15c050aae00e-kube-api-access-88rh6\") on node \"crc\" DevicePath \"\"" Jan 27 14:50:22 crc kubenswrapper[4698]: I0127 14:50:22.666235 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wx52s\" (UniqueName: \"kubernetes.io/projected/1ac257c8-4aeb-4926-99c2-52ea6d3093f6-kube-api-access-wx52s\") on node \"crc\" DevicePath \"\"" Jan 27 14:50:22 crc kubenswrapper[4698]: I0127 14:50:22.950120 4698 generic.go:334] "Generic (PLEG): container finished" podID="d082b51b-36d1-4c62-ad12-024337e68479" containerID="f0993185e4733528b46053fd3aec6cf75218885544c1503dcb59755d086be71c" exitCode=0 Jan 27 14:50:22 crc kubenswrapper[4698]: I0127 14:50:22.950499 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-nnmcr" event={"ID":"d082b51b-36d1-4c62-ad12-024337e68479","Type":"ContainerDied","Data":"f0993185e4733528b46053fd3aec6cf75218885544c1503dcb59755d086be71c"} Jan 27 14:50:22 crc kubenswrapper[4698]: I0127 14:50:22.954766 4698 generic.go:334] "Generic (PLEG): container finished" podID="cf5234a1-c705-4a80-8992-05e2ce515ff6" containerID="265727278fbf945de722537b1404e08bbc0b9305440983e2cd7d930476a4c6f7" exitCode=0 Jan 27 14:50:22 crc kubenswrapper[4698]: I0127 14:50:22.954870 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-f802-account-create-update-89d8r" event={"ID":"cf5234a1-c705-4a80-8992-05e2ce515ff6","Type":"ContainerDied","Data":"265727278fbf945de722537b1404e08bbc0b9305440983e2cd7d930476a4c6f7"} Jan 27 14:50:22 crc kubenswrapper[4698]: I0127 14:50:22.959160 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-zw54k" event={"ID":"00b5af26-92b2-461a-ad12-15c050aae00e","Type":"ContainerDied","Data":"97b560a9a1584284e2d2d7a880d5ebaef9def6029c4b8a846d65fe5c692973ae"} Jan 27 14:50:22 crc kubenswrapper[4698]: I0127 14:50:22.959215 4698 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="97b560a9a1584284e2d2d7a880d5ebaef9def6029c4b8a846d65fe5c692973ae" Jan 27 14:50:22 crc kubenswrapper[4698]: I0127 14:50:22.959265 4698 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-create-zw54k" Jan 27 14:50:22 crc kubenswrapper[4698]: I0127 14:50:22.973164 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-ba53-account-create-update-k5pcr" event={"ID":"b8086d7c-021f-4bb7-892c-50f8b75d56a1","Type":"ContainerStarted","Data":"8bb4e0815d0ac4a096bc17fef2ba3050a1185b2b2599b4ae4defbea15992e34b"} Jan 27 14:50:22 crc kubenswrapper[4698]: I0127 14:50:22.973218 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-ba53-account-create-update-k5pcr" event={"ID":"b8086d7c-021f-4bb7-892c-50f8b75d56a1","Type":"ContainerStarted","Data":"f91e67efab5a79d7653da4d3432beaac285188e5d404b2dac900ab2c0a4811ac"} Jan 27 14:50:22 crc kubenswrapper[4698]: I0127 14:50:22.986522 4698 generic.go:334] "Generic (PLEG): container finished" podID="7396dcad-4ef6-441e-bd4d-f04201b73baf" containerID="d8f9cbeda0f6b5608059748e28d347736e53d9b5f18b6db6db9cfaeff43441e9" exitCode=0 Jan 27 14:50:22 crc kubenswrapper[4698]: I0127 14:50:22.986588 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-98a8-account-create-update-5nl5q" event={"ID":"7396dcad-4ef6-441e-bd4d-f04201b73baf","Type":"ContainerDied","Data":"d8f9cbeda0f6b5608059748e28d347736e53d9b5f18b6db6db9cfaeff43441e9"} Jan 27 14:50:23 crc kubenswrapper[4698]: I0127 14:50:23.033691 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-f2vqn" Jan 27 14:50:23 crc kubenswrapper[4698]: I0127 14:50:23.046466 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-ba53-account-create-update-k5pcr" podStartSLOduration=2.046447636 podStartE2EDuration="2.046447636s" podCreationTimestamp="2026-01-27 14:50:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:50:23.044610018 +0000 UTC m=+1278.721387493" watchObservedRunningTime="2026-01-27 14:50:23.046447636 +0000 UTC m=+1278.723225101" Jan 27 14:50:23 crc kubenswrapper[4698]: I0127 14:50:23.066906 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-zzlx8" event={"ID":"7685ed13-4e06-4052-a2e4-310e64a49a53","Type":"ContainerStarted","Data":"95ad9e08131819ba2bed237d8545041ac5a605b3e245659c2f0fa015b2d18f0e"} Jan 27 14:50:23 crc kubenswrapper[4698]: I0127 14:50:23.066972 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-zzlx8" event={"ID":"7685ed13-4e06-4052-a2e4-310e64a49a53","Type":"ContainerStarted","Data":"0ad6646e6f7c709694a067ed8331a36076ae0369ed075be62bda86e0b047358f"} Jan 27 14:50:23 crc kubenswrapper[4698]: I0127 14:50:23.066986 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-db-sync-gdttx" event={"ID":"d5b86bc8-7f21-4d28-a94d-56ec54d13cb5","Type":"ContainerStarted","Data":"0d74669aa2c2139f80f79071f2b0f84e6216389ca51b3f6f09848db1159affdc"} Jan 27 14:50:23 crc kubenswrapper[4698]: I0127 14:50:23.066998 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-f2vqn" event={"ID":"1ac257c8-4aeb-4926-99c2-52ea6d3093f6","Type":"ContainerDied","Data":"f5814fbd179b6e92134e3b1adb19dd378e850dacaaf7bc83fd9e6092c854e5bc"} Jan 27 14:50:23 crc kubenswrapper[4698]: I0127 14:50:23.067009 4698 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f5814fbd179b6e92134e3b1adb19dd378e850dacaaf7bc83fd9e6092c854e5bc" Jan 27 14:50:23 crc 
kubenswrapper[4698]: I0127 14:50:23.067042 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-5711-account-create-update-th2wp" event={"ID":"745bc9c9-c169-47c4-90aa-935671bc12f2","Type":"ContainerStarted","Data":"62cfa17a57cb4f2cc837346fc63009220328873da5795e9c32d0c6c18f79942c"} Jan 27 14:50:23 crc kubenswrapper[4698]: I0127 14:50:23.067054 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-5711-account-create-update-th2wp" event={"ID":"745bc9c9-c169-47c4-90aa-935671bc12f2","Type":"ContainerStarted","Data":"fd9cf6a92222350a866a11864ef9dbd6a3e1cdf54320c64ba4e31aa1e24c1213"} Jan 27 14:50:23 crc kubenswrapper[4698]: I0127 14:50:23.102356 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-create-zzlx8" podStartSLOduration=3.102337868 podStartE2EDuration="3.102337868s" podCreationTimestamp="2026-01-27 14:50:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:50:23.064162423 +0000 UTC m=+1278.740939888" watchObservedRunningTime="2026-01-27 14:50:23.102337868 +0000 UTC m=+1278.779115333" Jan 27 14:50:23 crc kubenswrapper[4698]: I0127 14:50:23.129822 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-5711-account-create-update-th2wp" podStartSLOduration=3.129805112 podStartE2EDuration="3.129805112s" podCreationTimestamp="2026-01-27 14:50:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:50:23.102517993 +0000 UTC m=+1278.779295468" watchObservedRunningTime="2026-01-27 14:50:23.129805112 +0000 UTC m=+1278.806582577" Jan 27 14:50:24 crc kubenswrapper[4698]: I0127 14:50:24.060886 4698 generic.go:334] "Generic (PLEG): container finished" podID="745bc9c9-c169-47c4-90aa-935671bc12f2" containerID="62cfa17a57cb4f2cc837346fc63009220328873da5795e9c32d0c6c18f79942c" exitCode=0 Jan 27 14:50:24 crc kubenswrapper[4698]: I0127 14:50:24.061219 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-5711-account-create-update-th2wp" event={"ID":"745bc9c9-c169-47c4-90aa-935671bc12f2","Type":"ContainerDied","Data":"62cfa17a57cb4f2cc837346fc63009220328873da5795e9c32d0c6c18f79942c"} Jan 27 14:50:24 crc kubenswrapper[4698]: I0127 14:50:24.065504 4698 generic.go:334] "Generic (PLEG): container finished" podID="b8086d7c-021f-4bb7-892c-50f8b75d56a1" containerID="8bb4e0815d0ac4a096bc17fef2ba3050a1185b2b2599b4ae4defbea15992e34b" exitCode=0 Jan 27 14:50:24 crc kubenswrapper[4698]: I0127 14:50:24.065539 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-ba53-account-create-update-k5pcr" event={"ID":"b8086d7c-021f-4bb7-892c-50f8b75d56a1","Type":"ContainerDied","Data":"8bb4e0815d0ac4a096bc17fef2ba3050a1185b2b2599b4ae4defbea15992e34b"} Jan 27 14:50:24 crc kubenswrapper[4698]: I0127 14:50:24.067516 4698 generic.go:334] "Generic (PLEG): container finished" podID="7685ed13-4e06-4052-a2e4-310e64a49a53" containerID="95ad9e08131819ba2bed237d8545041ac5a605b3e245659c2f0fa015b2d18f0e" exitCode=0 Jan 27 14:50:24 crc kubenswrapper[4698]: I0127 14:50:24.067710 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-zzlx8" event={"ID":"7685ed13-4e06-4052-a2e4-310e64a49a53","Type":"ContainerDied","Data":"95ad9e08131819ba2bed237d8545041ac5a605b3e245659c2f0fa015b2d18f0e"} Jan 27 14:50:26 crc kubenswrapper[4698]: I0127 
14:50:26.880418 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-ba53-account-create-update-k5pcr" Jan 27 14:50:26 crc kubenswrapper[4698]: I0127 14:50:26.889987 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-5711-account-create-update-th2wp" Jan 27 14:50:26 crc kubenswrapper[4698]: I0127 14:50:26.905948 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-nnmcr" Jan 27 14:50:26 crc kubenswrapper[4698]: I0127 14:50:26.906975 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-zzlx8" Jan 27 14:50:26 crc kubenswrapper[4698]: I0127 14:50:26.931166 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-98a8-account-create-update-5nl5q" Jan 27 14:50:26 crc kubenswrapper[4698]: I0127 14:50:26.973471 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p4c26\" (UniqueName: \"kubernetes.io/projected/745bc9c9-c169-47c4-90aa-935671bc12f2-kube-api-access-p4c26\") pod \"745bc9c9-c169-47c4-90aa-935671bc12f2\" (UID: \"745bc9c9-c169-47c4-90aa-935671bc12f2\") " Jan 27 14:50:26 crc kubenswrapper[4698]: I0127 14:50:26.973612 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b8086d7c-021f-4bb7-892c-50f8b75d56a1-operator-scripts\") pod \"b8086d7c-021f-4bb7-892c-50f8b75d56a1\" (UID: \"b8086d7c-021f-4bb7-892c-50f8b75d56a1\") " Jan 27 14:50:26 crc kubenswrapper[4698]: I0127 14:50:26.973688 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cmh6j\" (UniqueName: \"kubernetes.io/projected/b8086d7c-021f-4bb7-892c-50f8b75d56a1-kube-api-access-cmh6j\") pod \"b8086d7c-021f-4bb7-892c-50f8b75d56a1\" (UID: \"b8086d7c-021f-4bb7-892c-50f8b75d56a1\") " Jan 27 14:50:26 crc kubenswrapper[4698]: I0127 14:50:26.973754 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/745bc9c9-c169-47c4-90aa-935671bc12f2-operator-scripts\") pod \"745bc9c9-c169-47c4-90aa-935671bc12f2\" (UID: \"745bc9c9-c169-47c4-90aa-935671bc12f2\") " Jan 27 14:50:26 crc kubenswrapper[4698]: I0127 14:50:26.974326 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b8086d7c-021f-4bb7-892c-50f8b75d56a1-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "b8086d7c-021f-4bb7-892c-50f8b75d56a1" (UID: "b8086d7c-021f-4bb7-892c-50f8b75d56a1"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:50:26 crc kubenswrapper[4698]: I0127 14:50:26.974841 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/745bc9c9-c169-47c4-90aa-935671bc12f2-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "745bc9c9-c169-47c4-90aa-935671bc12f2" (UID: "745bc9c9-c169-47c4-90aa-935671bc12f2"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:50:27 crc kubenswrapper[4698]: I0127 14:50:27.000119 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b8086d7c-021f-4bb7-892c-50f8b75d56a1-kube-api-access-cmh6j" (OuterVolumeSpecName: "kube-api-access-cmh6j") pod "b8086d7c-021f-4bb7-892c-50f8b75d56a1" (UID: "b8086d7c-021f-4bb7-892c-50f8b75d56a1"). InnerVolumeSpecName "kube-api-access-cmh6j". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:50:27 crc kubenswrapper[4698]: I0127 14:50:27.000240 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/745bc9c9-c169-47c4-90aa-935671bc12f2-kube-api-access-p4c26" (OuterVolumeSpecName: "kube-api-access-p4c26") pod "745bc9c9-c169-47c4-90aa-935671bc12f2" (UID: "745bc9c9-c169-47c4-90aa-935671bc12f2"). InnerVolumeSpecName "kube-api-access-p4c26". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:50:27 crc kubenswrapper[4698]: I0127 14:50:27.079192 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7685ed13-4e06-4052-a2e4-310e64a49a53-operator-scripts\") pod \"7685ed13-4e06-4052-a2e4-310e64a49a53\" (UID: \"7685ed13-4e06-4052-a2e4-310e64a49a53\") " Jan 27 14:50:27 crc kubenswrapper[4698]: I0127 14:50:27.079247 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-96wjb\" (UniqueName: \"kubernetes.io/projected/7396dcad-4ef6-441e-bd4d-f04201b73baf-kube-api-access-96wjb\") pod \"7396dcad-4ef6-441e-bd4d-f04201b73baf\" (UID: \"7396dcad-4ef6-441e-bd4d-f04201b73baf\") " Jan 27 14:50:27 crc kubenswrapper[4698]: I0127 14:50:27.079327 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7396dcad-4ef6-441e-bd4d-f04201b73baf-operator-scripts\") pod \"7396dcad-4ef6-441e-bd4d-f04201b73baf\" (UID: \"7396dcad-4ef6-441e-bd4d-f04201b73baf\") " Jan 27 14:50:27 crc kubenswrapper[4698]: I0127 14:50:27.079364 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r7z8d\" (UniqueName: \"kubernetes.io/projected/7685ed13-4e06-4052-a2e4-310e64a49a53-kube-api-access-r7z8d\") pod \"7685ed13-4e06-4052-a2e4-310e64a49a53\" (UID: \"7685ed13-4e06-4052-a2e4-310e64a49a53\") " Jan 27 14:50:27 crc kubenswrapper[4698]: I0127 14:50:27.079439 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x6r96\" (UniqueName: \"kubernetes.io/projected/d082b51b-36d1-4c62-ad12-024337e68479-kube-api-access-x6r96\") pod \"d082b51b-36d1-4c62-ad12-024337e68479\" (UID: \"d082b51b-36d1-4c62-ad12-024337e68479\") " Jan 27 14:50:27 crc kubenswrapper[4698]: I0127 14:50:27.079573 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d082b51b-36d1-4c62-ad12-024337e68479-operator-scripts\") pod \"d082b51b-36d1-4c62-ad12-024337e68479\" (UID: \"d082b51b-36d1-4c62-ad12-024337e68479\") " Jan 27 14:50:27 crc kubenswrapper[4698]: I0127 14:50:27.079937 4698 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/745bc9c9-c169-47c4-90aa-935671bc12f2-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 14:50:27 crc kubenswrapper[4698]: I0127 14:50:27.079956 4698 reconciler_common.go:293] "Volume detached for volume 
\"kube-api-access-p4c26\" (UniqueName: \"kubernetes.io/projected/745bc9c9-c169-47c4-90aa-935671bc12f2-kube-api-access-p4c26\") on node \"crc\" DevicePath \"\"" Jan 27 14:50:27 crc kubenswrapper[4698]: I0127 14:50:27.079967 4698 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b8086d7c-021f-4bb7-892c-50f8b75d56a1-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 14:50:27 crc kubenswrapper[4698]: I0127 14:50:27.079975 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cmh6j\" (UniqueName: \"kubernetes.io/projected/b8086d7c-021f-4bb7-892c-50f8b75d56a1-kube-api-access-cmh6j\") on node \"crc\" DevicePath \"\"" Jan 27 14:50:27 crc kubenswrapper[4698]: I0127 14:50:27.080822 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7685ed13-4e06-4052-a2e4-310e64a49a53-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "7685ed13-4e06-4052-a2e4-310e64a49a53" (UID: "7685ed13-4e06-4052-a2e4-310e64a49a53"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:50:27 crc kubenswrapper[4698]: I0127 14:50:27.082190 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7396dcad-4ef6-441e-bd4d-f04201b73baf-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "7396dcad-4ef6-441e-bd4d-f04201b73baf" (UID: "7396dcad-4ef6-441e-bd4d-f04201b73baf"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:50:27 crc kubenswrapper[4698]: I0127 14:50:27.082683 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d082b51b-36d1-4c62-ad12-024337e68479-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "d082b51b-36d1-4c62-ad12-024337e68479" (UID: "d082b51b-36d1-4c62-ad12-024337e68479"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:50:27 crc kubenswrapper[4698]: I0127 14:50:27.095017 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7396dcad-4ef6-441e-bd4d-f04201b73baf-kube-api-access-96wjb" (OuterVolumeSpecName: "kube-api-access-96wjb") pod "7396dcad-4ef6-441e-bd4d-f04201b73baf" (UID: "7396dcad-4ef6-441e-bd4d-f04201b73baf"). InnerVolumeSpecName "kube-api-access-96wjb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:50:27 crc kubenswrapper[4698]: I0127 14:50:27.097964 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d082b51b-36d1-4c62-ad12-024337e68479-kube-api-access-x6r96" (OuterVolumeSpecName: "kube-api-access-x6r96") pod "d082b51b-36d1-4c62-ad12-024337e68479" (UID: "d082b51b-36d1-4c62-ad12-024337e68479"). InnerVolumeSpecName "kube-api-access-x6r96". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:50:27 crc kubenswrapper[4698]: I0127 14:50:27.115053 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-98a8-account-create-update-5nl5q" Jan 27 14:50:27 crc kubenswrapper[4698]: I0127 14:50:27.123815 4698 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-ba53-account-create-update-k5pcr" Jan 27 14:50:27 crc kubenswrapper[4698]: I0127 14:50:27.140742 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7685ed13-4e06-4052-a2e4-310e64a49a53-kube-api-access-r7z8d" (OuterVolumeSpecName: "kube-api-access-r7z8d") pod "7685ed13-4e06-4052-a2e4-310e64a49a53" (UID: "7685ed13-4e06-4052-a2e4-310e64a49a53"). InnerVolumeSpecName "kube-api-access-r7z8d". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:50:27 crc kubenswrapper[4698]: I0127 14:50:27.147465 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-nnmcr" Jan 27 14:50:27 crc kubenswrapper[4698]: I0127 14:50:27.166780 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-zzlx8" Jan 27 14:50:27 crc kubenswrapper[4698]: I0127 14:50:27.179033 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-5711-account-create-update-th2wp" Jan 27 14:50:27 crc kubenswrapper[4698]: I0127 14:50:27.181581 4698 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7396dcad-4ef6-441e-bd4d-f04201b73baf-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 14:50:27 crc kubenswrapper[4698]: I0127 14:50:27.181602 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r7z8d\" (UniqueName: \"kubernetes.io/projected/7685ed13-4e06-4052-a2e4-310e64a49a53-kube-api-access-r7z8d\") on node \"crc\" DevicePath \"\"" Jan 27 14:50:27 crc kubenswrapper[4698]: I0127 14:50:27.181617 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x6r96\" (UniqueName: \"kubernetes.io/projected/d082b51b-36d1-4c62-ad12-024337e68479-kube-api-access-x6r96\") on node \"crc\" DevicePath \"\"" Jan 27 14:50:27 crc kubenswrapper[4698]: I0127 14:50:27.181630 4698 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d082b51b-36d1-4c62-ad12-024337e68479-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 14:50:27 crc kubenswrapper[4698]: I0127 14:50:27.181662 4698 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7685ed13-4e06-4052-a2e4-310e64a49a53-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 14:50:27 crc kubenswrapper[4698]: I0127 14:50:27.181677 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-96wjb\" (UniqueName: \"kubernetes.io/projected/7396dcad-4ef6-441e-bd4d-f04201b73baf-kube-api-access-96wjb\") on node \"crc\" DevicePath \"\"" Jan 27 14:50:27 crc kubenswrapper[4698]: I0127 14:50:27.190825 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-98a8-account-create-update-5nl5q" event={"ID":"7396dcad-4ef6-441e-bd4d-f04201b73baf","Type":"ContainerDied","Data":"d708987fcf852716039e11e3bcc413e6d881ba1bbc389e5a261cc5ee876f8ac7"} Jan 27 14:50:27 crc kubenswrapper[4698]: I0127 14:50:27.190874 4698 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d708987fcf852716039e11e3bcc413e6d881ba1bbc389e5a261cc5ee876f8ac7" Jan 27 14:50:27 crc kubenswrapper[4698]: I0127 14:50:27.190890 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-ba53-account-create-update-k5pcr" 
event={"ID":"b8086d7c-021f-4bb7-892c-50f8b75d56a1","Type":"ContainerDied","Data":"f91e67efab5a79d7653da4d3432beaac285188e5d404b2dac900ab2c0a4811ac"} Jan 27 14:50:27 crc kubenswrapper[4698]: I0127 14:50:27.190904 4698 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f91e67efab5a79d7653da4d3432beaac285188e5d404b2dac900ab2c0a4811ac" Jan 27 14:50:27 crc kubenswrapper[4698]: I0127 14:50:27.190918 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-nnmcr" event={"ID":"d082b51b-36d1-4c62-ad12-024337e68479","Type":"ContainerDied","Data":"87026cc82ddec6551ae07c2c96b0a8d1ef8cca05b2e5c4807acf5c6df59461d0"} Jan 27 14:50:27 crc kubenswrapper[4698]: I0127 14:50:27.190931 4698 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="87026cc82ddec6551ae07c2c96b0a8d1ef8cca05b2e5c4807acf5c6df59461d0" Jan 27 14:50:27 crc kubenswrapper[4698]: I0127 14:50:27.190944 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-zzlx8" event={"ID":"7685ed13-4e06-4052-a2e4-310e64a49a53","Type":"ContainerDied","Data":"0ad6646e6f7c709694a067ed8331a36076ae0369ed075be62bda86e0b047358f"} Jan 27 14:50:27 crc kubenswrapper[4698]: I0127 14:50:27.190957 4698 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0ad6646e6f7c709694a067ed8331a36076ae0369ed075be62bda86e0b047358f" Jan 27 14:50:27 crc kubenswrapper[4698]: I0127 14:50:27.190969 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-5711-account-create-update-th2wp" event={"ID":"745bc9c9-c169-47c4-90aa-935671bc12f2","Type":"ContainerDied","Data":"fd9cf6a92222350a866a11864ef9dbd6a3e1cdf54320c64ba4e31aa1e24c1213"} Jan 27 14:50:27 crc kubenswrapper[4698]: I0127 14:50:27.190982 4698 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fd9cf6a92222350a866a11864ef9dbd6a3e1cdf54320c64ba4e31aa1e24c1213" Jan 27 14:50:31 crc kubenswrapper[4698]: I0127 14:50:31.137394 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-sync-6z2gn"] Jan 27 14:50:31 crc kubenswrapper[4698]: E0127 14:50:31.138333 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d082b51b-36d1-4c62-ad12-024337e68479" containerName="mariadb-database-create" Jan 27 14:50:31 crc kubenswrapper[4698]: I0127 14:50:31.138355 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="d082b51b-36d1-4c62-ad12-024337e68479" containerName="mariadb-database-create" Jan 27 14:50:31 crc kubenswrapper[4698]: E0127 14:50:31.138378 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7396dcad-4ef6-441e-bd4d-f04201b73baf" containerName="mariadb-account-create-update" Jan 27 14:50:31 crc kubenswrapper[4698]: I0127 14:50:31.138387 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="7396dcad-4ef6-441e-bd4d-f04201b73baf" containerName="mariadb-account-create-update" Jan 27 14:50:31 crc kubenswrapper[4698]: E0127 14:50:31.138402 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b8086d7c-021f-4bb7-892c-50f8b75d56a1" containerName="mariadb-account-create-update" Jan 27 14:50:31 crc kubenswrapper[4698]: I0127 14:50:31.138411 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="b8086d7c-021f-4bb7-892c-50f8b75d56a1" containerName="mariadb-account-create-update" Jan 27 14:50:31 crc kubenswrapper[4698]: E0127 14:50:31.138422 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="745bc9c9-c169-47c4-90aa-935671bc12f2" 
containerName="mariadb-account-create-update" Jan 27 14:50:31 crc kubenswrapper[4698]: I0127 14:50:31.138429 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="745bc9c9-c169-47c4-90aa-935671bc12f2" containerName="mariadb-account-create-update" Jan 27 14:50:31 crc kubenswrapper[4698]: E0127 14:50:31.138447 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7685ed13-4e06-4052-a2e4-310e64a49a53" containerName="mariadb-database-create" Jan 27 14:50:31 crc kubenswrapper[4698]: I0127 14:50:31.138455 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="7685ed13-4e06-4052-a2e4-310e64a49a53" containerName="mariadb-database-create" Jan 27 14:50:31 crc kubenswrapper[4698]: E0127 14:50:31.138479 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1ac257c8-4aeb-4926-99c2-52ea6d3093f6" containerName="mariadb-database-create" Jan 27 14:50:31 crc kubenswrapper[4698]: I0127 14:50:31.138488 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="1ac257c8-4aeb-4926-99c2-52ea6d3093f6" containerName="mariadb-database-create" Jan 27 14:50:31 crc kubenswrapper[4698]: E0127 14:50:31.138504 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="00b5af26-92b2-461a-ad12-15c050aae00e" containerName="mariadb-database-create" Jan 27 14:50:31 crc kubenswrapper[4698]: I0127 14:50:31.138512 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="00b5af26-92b2-461a-ad12-15c050aae00e" containerName="mariadb-database-create" Jan 27 14:50:31 crc kubenswrapper[4698]: I0127 14:50:31.138731 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="00b5af26-92b2-461a-ad12-15c050aae00e" containerName="mariadb-database-create" Jan 27 14:50:31 crc kubenswrapper[4698]: I0127 14:50:31.138769 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="b8086d7c-021f-4bb7-892c-50f8b75d56a1" containerName="mariadb-account-create-update" Jan 27 14:50:31 crc kubenswrapper[4698]: I0127 14:50:31.138780 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="1ac257c8-4aeb-4926-99c2-52ea6d3093f6" containerName="mariadb-database-create" Jan 27 14:50:31 crc kubenswrapper[4698]: I0127 14:50:31.138796 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="d082b51b-36d1-4c62-ad12-024337e68479" containerName="mariadb-database-create" Jan 27 14:50:31 crc kubenswrapper[4698]: I0127 14:50:31.138820 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="7396dcad-4ef6-441e-bd4d-f04201b73baf" containerName="mariadb-account-create-update" Jan 27 14:50:31 crc kubenswrapper[4698]: I0127 14:50:31.138838 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="7685ed13-4e06-4052-a2e4-310e64a49a53" containerName="mariadb-database-create" Jan 27 14:50:31 crc kubenswrapper[4698]: I0127 14:50:31.138854 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="745bc9c9-c169-47c4-90aa-935671bc12f2" containerName="mariadb-account-create-update" Jan 27 14:50:31 crc kubenswrapper[4698]: I0127 14:50:31.139563 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-6z2gn" Jan 27 14:50:31 crc kubenswrapper[4698]: I0127 14:50:31.142583 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-qwq5p" Jan 27 14:50:31 crc kubenswrapper[4698]: I0127 14:50:31.145768 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-config-data" Jan 27 14:50:31 crc kubenswrapper[4698]: I0127 14:50:31.157806 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-6z2gn"] Jan 27 14:50:31 crc kubenswrapper[4698]: I0127 14:50:31.267110 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b202d484-189a-4722-93b1-f72348e74aa4-config-data\") pod \"glance-db-sync-6z2gn\" (UID: \"b202d484-189a-4722-93b1-f72348e74aa4\") " pod="openstack/glance-db-sync-6z2gn" Jan 27 14:50:31 crc kubenswrapper[4698]: I0127 14:50:31.267301 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/b202d484-189a-4722-93b1-f72348e74aa4-db-sync-config-data\") pod \"glance-db-sync-6z2gn\" (UID: \"b202d484-189a-4722-93b1-f72348e74aa4\") " pod="openstack/glance-db-sync-6z2gn" Jan 27 14:50:31 crc kubenswrapper[4698]: I0127 14:50:31.267345 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b202d484-189a-4722-93b1-f72348e74aa4-combined-ca-bundle\") pod \"glance-db-sync-6z2gn\" (UID: \"b202d484-189a-4722-93b1-f72348e74aa4\") " pod="openstack/glance-db-sync-6z2gn" Jan 27 14:50:31 crc kubenswrapper[4698]: I0127 14:50:31.267408 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hxfgp\" (UniqueName: \"kubernetes.io/projected/b202d484-189a-4722-93b1-f72348e74aa4-kube-api-access-hxfgp\") pod \"glance-db-sync-6z2gn\" (UID: \"b202d484-189a-4722-93b1-f72348e74aa4\") " pod="openstack/glance-db-sync-6z2gn" Jan 27 14:50:31 crc kubenswrapper[4698]: I0127 14:50:31.369480 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b202d484-189a-4722-93b1-f72348e74aa4-config-data\") pod \"glance-db-sync-6z2gn\" (UID: \"b202d484-189a-4722-93b1-f72348e74aa4\") " pod="openstack/glance-db-sync-6z2gn" Jan 27 14:50:31 crc kubenswrapper[4698]: I0127 14:50:31.369575 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/b202d484-189a-4722-93b1-f72348e74aa4-db-sync-config-data\") pod \"glance-db-sync-6z2gn\" (UID: \"b202d484-189a-4722-93b1-f72348e74aa4\") " pod="openstack/glance-db-sync-6z2gn" Jan 27 14:50:31 crc kubenswrapper[4698]: I0127 14:50:31.369600 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b202d484-189a-4722-93b1-f72348e74aa4-combined-ca-bundle\") pod \"glance-db-sync-6z2gn\" (UID: \"b202d484-189a-4722-93b1-f72348e74aa4\") " pod="openstack/glance-db-sync-6z2gn" Jan 27 14:50:31 crc kubenswrapper[4698]: I0127 14:50:31.369632 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hxfgp\" (UniqueName: \"kubernetes.io/projected/b202d484-189a-4722-93b1-f72348e74aa4-kube-api-access-hxfgp\") pod 
\"glance-db-sync-6z2gn\" (UID: \"b202d484-189a-4722-93b1-f72348e74aa4\") " pod="openstack/glance-db-sync-6z2gn" Jan 27 14:50:31 crc kubenswrapper[4698]: I0127 14:50:31.385932 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b202d484-189a-4722-93b1-f72348e74aa4-combined-ca-bundle\") pod \"glance-db-sync-6z2gn\" (UID: \"b202d484-189a-4722-93b1-f72348e74aa4\") " pod="openstack/glance-db-sync-6z2gn" Jan 27 14:50:31 crc kubenswrapper[4698]: I0127 14:50:31.386006 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hxfgp\" (UniqueName: \"kubernetes.io/projected/b202d484-189a-4722-93b1-f72348e74aa4-kube-api-access-hxfgp\") pod \"glance-db-sync-6z2gn\" (UID: \"b202d484-189a-4722-93b1-f72348e74aa4\") " pod="openstack/glance-db-sync-6z2gn" Jan 27 14:50:31 crc kubenswrapper[4698]: I0127 14:50:31.387883 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b202d484-189a-4722-93b1-f72348e74aa4-config-data\") pod \"glance-db-sync-6z2gn\" (UID: \"b202d484-189a-4722-93b1-f72348e74aa4\") " pod="openstack/glance-db-sync-6z2gn" Jan 27 14:50:31 crc kubenswrapper[4698]: I0127 14:50:31.402727 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/b202d484-189a-4722-93b1-f72348e74aa4-db-sync-config-data\") pod \"glance-db-sync-6z2gn\" (UID: \"b202d484-189a-4722-93b1-f72348e74aa4\") " pod="openstack/glance-db-sync-6z2gn" Jan 27 14:50:31 crc kubenswrapper[4698]: I0127 14:50:31.465184 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-6z2gn" Jan 27 14:50:35 crc kubenswrapper[4698]: I0127 14:50:35.989344 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-f802-account-create-update-89d8r" Jan 27 14:50:36 crc kubenswrapper[4698]: I0127 14:50:36.153655 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-66kdn\" (UniqueName: \"kubernetes.io/projected/cf5234a1-c705-4a80-8992-05e2ce515ff6-kube-api-access-66kdn\") pod \"cf5234a1-c705-4a80-8992-05e2ce515ff6\" (UID: \"cf5234a1-c705-4a80-8992-05e2ce515ff6\") " Jan 27 14:50:36 crc kubenswrapper[4698]: I0127 14:50:36.153873 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cf5234a1-c705-4a80-8992-05e2ce515ff6-operator-scripts\") pod \"cf5234a1-c705-4a80-8992-05e2ce515ff6\" (UID: \"cf5234a1-c705-4a80-8992-05e2ce515ff6\") " Jan 27 14:50:36 crc kubenswrapper[4698]: I0127 14:50:36.154769 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cf5234a1-c705-4a80-8992-05e2ce515ff6-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "cf5234a1-c705-4a80-8992-05e2ce515ff6" (UID: "cf5234a1-c705-4a80-8992-05e2ce515ff6"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:50:36 crc kubenswrapper[4698]: I0127 14:50:36.167252 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cf5234a1-c705-4a80-8992-05e2ce515ff6-kube-api-access-66kdn" (OuterVolumeSpecName: "kube-api-access-66kdn") pod "cf5234a1-c705-4a80-8992-05e2ce515ff6" (UID: "cf5234a1-c705-4a80-8992-05e2ce515ff6"). InnerVolumeSpecName "kube-api-access-66kdn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:50:36 crc kubenswrapper[4698]: I0127 14:50:36.255782 4698 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cf5234a1-c705-4a80-8992-05e2ce515ff6-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 14:50:36 crc kubenswrapper[4698]: I0127 14:50:36.255831 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-66kdn\" (UniqueName: \"kubernetes.io/projected/cf5234a1-c705-4a80-8992-05e2ce515ff6-kube-api-access-66kdn\") on node \"crc\" DevicePath \"\"" Jan 27 14:50:36 crc kubenswrapper[4698]: I0127 14:50:36.285228 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-f802-account-create-update-89d8r" event={"ID":"cf5234a1-c705-4a80-8992-05e2ce515ff6","Type":"ContainerDied","Data":"bbfc768b8a04916d46d1641cb3d3abb3e0114525c152a182e7a7ac6ce6781edb"} Jan 27 14:50:36 crc kubenswrapper[4698]: I0127 14:50:36.285281 4698 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bbfc768b8a04916d46d1641cb3d3abb3e0114525c152a182e7a7ac6ce6781edb" Jan 27 14:50:36 crc kubenswrapper[4698]: I0127 14:50:36.285284 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-f802-account-create-update-89d8r" Jan 27 14:50:36 crc kubenswrapper[4698]: E0127 14:50:36.511803 4698 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.111:5001/podified-master-centos10/openstack-watcher-api:watcher_latest" Jan 27 14:50:36 crc kubenswrapper[4698]: E0127 14:50:36.511876 4698 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.111:5001/podified-master-centos10/openstack-watcher-api:watcher_latest" Jan 27 14:50:36 crc kubenswrapper[4698]: E0127 14:50:36.512066 4698 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:watcher-db-sync,Image:38.102.83.111:5001/podified-master-centos10/openstack-watcher-api:watcher_latest,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_set_configs && 
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/config-data/default,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/watcher/watcher.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:watcher-dbsync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zmsss,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod watcher-db-sync-gdttx_openstack(d5b86bc8-7f21-4d28-a94d-56ec54d13cb5): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 27 14:50:36 crc kubenswrapper[4698]: E0127 14:50:36.513486 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/watcher-db-sync-gdttx" podUID="d5b86bc8-7f21-4d28-a94d-56ec54d13cb5" Jan 27 14:50:37 crc kubenswrapper[4698]: I0127 14:50:37.211888 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-6z2gn"] Jan 27 14:50:37 crc kubenswrapper[4698]: I0127 14:50:37.293881 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-6z2gn" event={"ID":"b202d484-189a-4722-93b1-f72348e74aa4","Type":"ContainerStarted","Data":"b77d301a13f41fbe10f129be11b4da596e99b9ae56edd1ecff780f46137fdb71"} Jan 27 14:50:37 crc kubenswrapper[4698]: I0127 14:50:37.295444 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-9xlkv" event={"ID":"4fa2454d-726a-4585-950a-336d57316b69","Type":"ContainerStarted","Data":"e55d3639a211f75e698d7b97ceebb2a57f072d956855cacea9fcc44682cc161f"} Jan 27 14:50:37 crc kubenswrapper[4698]: E0127 14:50:37.296777 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-db-sync\" with ImagePullBackOff: \"Back-off pulling image 
\\\"38.102.83.111:5001/podified-master-centos10/openstack-watcher-api:watcher_latest\\\"\"" pod="openstack/watcher-db-sync-gdttx" podUID="d5b86bc8-7f21-4d28-a94d-56ec54d13cb5" Jan 27 14:50:37 crc kubenswrapper[4698]: I0127 14:50:37.329373 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-sync-9xlkv" podStartSLOduration=2.913614085 podStartE2EDuration="20.329353124s" podCreationTimestamp="2026-01-27 14:50:17 +0000 UTC" firstStartedPulling="2026-01-27 14:50:19.1205602 +0000 UTC m=+1274.797337665" lastFinishedPulling="2026-01-27 14:50:36.536299239 +0000 UTC m=+1292.213076704" observedRunningTime="2026-01-27 14:50:37.326979412 +0000 UTC m=+1293.003756897" watchObservedRunningTime="2026-01-27 14:50:37.329353124 +0000 UTC m=+1293.006130589" Jan 27 14:50:54 crc kubenswrapper[4698]: E0127 14:50:54.315476 4698 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.111:5001/podified-master-centos10/openstack-glance-api:watcher_latest" Jan 27 14:50:54 crc kubenswrapper[4698]: E0127 14:50:54.316022 4698 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.111:5001/podified-master-centos10/openstack-glance-api:watcher_latest" Jan 27 14:50:54 crc kubenswrapper[4698]: E0127 14:50:54.316317 4698 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:glance-db-sync,Image:38.102.83.111:5001/podified-master-centos10/openstack-glance-api:watcher_latest,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/glance/glance.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hxfgp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42415,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42415,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
glance-db-sync-6z2gn_openstack(b202d484-189a-4722-93b1-f72348e74aa4): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 27 14:50:54 crc kubenswrapper[4698]: E0127 14:50:54.317572 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"glance-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/glance-db-sync-6z2gn" podUID="b202d484-189a-4722-93b1-f72348e74aa4" Jan 27 14:50:54 crc kubenswrapper[4698]: E0127 14:50:54.978207 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"glance-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.111:5001/podified-master-centos10/openstack-glance-api:watcher_latest\\\"\"" pod="openstack/glance-db-sync-6z2gn" podUID="b202d484-189a-4722-93b1-f72348e74aa4" Jan 27 14:50:55 crc kubenswrapper[4698]: I0127 14:50:55.469482 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-db-sync-gdttx" event={"ID":"d5b86bc8-7f21-4d28-a94d-56ec54d13cb5","Type":"ContainerStarted","Data":"7422e81a51ede7677b83e0c9fcb0181870c097b8526d9aba0e1d876ef6dc7e05"} Jan 27 14:50:55 crc kubenswrapper[4698]: I0127 14:50:55.488798 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/watcher-db-sync-gdttx" podStartSLOduration=2.428460442 podStartE2EDuration="35.488781149s" podCreationTimestamp="2026-01-27 14:50:20 +0000 UTC" firstStartedPulling="2026-01-27 14:50:22.001294843 +0000 UTC m=+1277.678072308" lastFinishedPulling="2026-01-27 14:50:55.06161555 +0000 UTC m=+1310.738393015" observedRunningTime="2026-01-27 14:50:55.485009749 +0000 UTC m=+1311.161787234" watchObservedRunningTime="2026-01-27 14:50:55.488781149 +0000 UTC m=+1311.165558614" Jan 27 14:51:07 crc kubenswrapper[4698]: I0127 14:51:07.561413 4698 generic.go:334] "Generic (PLEG): container finished" podID="d5b86bc8-7f21-4d28-a94d-56ec54d13cb5" containerID="7422e81a51ede7677b83e0c9fcb0181870c097b8526d9aba0e1d876ef6dc7e05" exitCode=0 Jan 27 14:51:07 crc kubenswrapper[4698]: I0127 14:51:07.561487 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-db-sync-gdttx" event={"ID":"d5b86bc8-7f21-4d28-a94d-56ec54d13cb5","Type":"ContainerDied","Data":"7422e81a51ede7677b83e0c9fcb0181870c097b8526d9aba0e1d876ef6dc7e05"} Jan 27 14:51:07 crc kubenswrapper[4698]: I0127 14:51:07.564522 4698 generic.go:334] "Generic (PLEG): container finished" podID="4fa2454d-726a-4585-950a-336d57316b69" containerID="e55d3639a211f75e698d7b97ceebb2a57f072d956855cacea9fcc44682cc161f" exitCode=0 Jan 27 14:51:07 crc kubenswrapper[4698]: I0127 14:51:07.564562 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-9xlkv" event={"ID":"4fa2454d-726a-4585-950a-336d57316b69","Type":"ContainerDied","Data":"e55d3639a211f75e698d7b97ceebb2a57f072d956855cacea9fcc44682cc161f"} Jan 27 14:51:08 crc kubenswrapper[4698]: I0127 14:51:08.984821 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-9xlkv" Jan 27 14:51:08 crc kubenswrapper[4698]: I0127 14:51:08.990824 4698 util.go:48] "No ready sandbox for pod can be found. 
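
Two threads interleave here. First, the pull of openstack-glance-api fails with rpc code = Canceled while copying the image config; the first sync reports ErrImagePull and later syncs report ImagePullBackOff while the kubelet waits out an exponential backoff (roughly 10s doubling toward a 5m cap by default; treat those figures as an assumption, not something this log states). Second, pod_startup_latency_tracker.go:104 reports two durations for watcher-db-sync-gdttx: podStartE2EDuration is observedRunningTime minus podCreationTimestamp, and podStartSLOduration matches that interval minus the image-pull window (lastFinishedPulling minus firstStartedPulling): 35.488781149s - 33.060320707s = 2.428460442s. A sketch checking the identity with the timestamps from the entry above:

    package main

    import (
        "fmt"
        "time"
    )

    // mustParse handles the "2026-01-27 14:50:20 +0000 UTC" style stamps
    // in the pod_startup_latency_tracker entry above.
    func mustParse(s string) time.Time {
        t, err := time.Parse("2006-01-02 15:04:05.999999999 -0700 MST", s)
        if err != nil {
            panic(err)
        }
        return t
    }

    func main() {
        created := mustParse("2026-01-27 14:50:20 +0000 UTC")
        firstPull := mustParse("2026-01-27 14:50:22.001294843 +0000 UTC")
        lastPull := mustParse("2026-01-27 14:50:55.06161555 +0000 UTC")
        running := mustParse("2026-01-27 14:50:55.488781149 +0000 UTC")

        e2e := running.Sub(created)          // podStartE2EDuration
        slo := e2e - lastPull.Sub(firstPull) // minus the image-pull window

        fmt.Println(e2e) // 35.488781149s
        fmt.Println(slo) // 2.428460442s, the logged podStartSLOduration
    }

The earlier keystone-db-sync-9xlkv entry obeys the same identity: 20.329353124s minus a 17.415739039s pull window leaves the logged 2.913614085s.
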
Need to start a new one" pod="openstack/watcher-db-sync-gdttx" Jan 27 14:51:09 crc kubenswrapper[4698]: I0127 14:51:09.154470 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9t9v9\" (UniqueName: \"kubernetes.io/projected/4fa2454d-726a-4585-950a-336d57316b69-kube-api-access-9t9v9\") pod \"4fa2454d-726a-4585-950a-336d57316b69\" (UID: \"4fa2454d-726a-4585-950a-336d57316b69\") " Jan 27 14:51:09 crc kubenswrapper[4698]: I0127 14:51:09.154516 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d5b86bc8-7f21-4d28-a94d-56ec54d13cb5-config-data\") pod \"d5b86bc8-7f21-4d28-a94d-56ec54d13cb5\" (UID: \"d5b86bc8-7f21-4d28-a94d-56ec54d13cb5\") " Jan 27 14:51:09 crc kubenswrapper[4698]: I0127 14:51:09.154576 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zmsss\" (UniqueName: \"kubernetes.io/projected/d5b86bc8-7f21-4d28-a94d-56ec54d13cb5-kube-api-access-zmsss\") pod \"d5b86bc8-7f21-4d28-a94d-56ec54d13cb5\" (UID: \"d5b86bc8-7f21-4d28-a94d-56ec54d13cb5\") " Jan 27 14:51:09 crc kubenswrapper[4698]: I0127 14:51:09.154706 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/d5b86bc8-7f21-4d28-a94d-56ec54d13cb5-db-sync-config-data\") pod \"d5b86bc8-7f21-4d28-a94d-56ec54d13cb5\" (UID: \"d5b86bc8-7f21-4d28-a94d-56ec54d13cb5\") " Jan 27 14:51:09 crc kubenswrapper[4698]: I0127 14:51:09.154735 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4fa2454d-726a-4585-950a-336d57316b69-combined-ca-bundle\") pod \"4fa2454d-726a-4585-950a-336d57316b69\" (UID: \"4fa2454d-726a-4585-950a-336d57316b69\") " Jan 27 14:51:09 crc kubenswrapper[4698]: I0127 14:51:09.154784 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4fa2454d-726a-4585-950a-336d57316b69-config-data\") pod \"4fa2454d-726a-4585-950a-336d57316b69\" (UID: \"4fa2454d-726a-4585-950a-336d57316b69\") " Jan 27 14:51:09 crc kubenswrapper[4698]: I0127 14:51:09.154890 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d5b86bc8-7f21-4d28-a94d-56ec54d13cb5-combined-ca-bundle\") pod \"d5b86bc8-7f21-4d28-a94d-56ec54d13cb5\" (UID: \"d5b86bc8-7f21-4d28-a94d-56ec54d13cb5\") " Jan 27 14:51:09 crc kubenswrapper[4698]: I0127 14:51:09.158540 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d5b86bc8-7f21-4d28-a94d-56ec54d13cb5-kube-api-access-zmsss" (OuterVolumeSpecName: "kube-api-access-zmsss") pod "d5b86bc8-7f21-4d28-a94d-56ec54d13cb5" (UID: "d5b86bc8-7f21-4d28-a94d-56ec54d13cb5"). InnerVolumeSpecName "kube-api-access-zmsss". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:51:09 crc kubenswrapper[4698]: I0127 14:51:09.160252 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4fa2454d-726a-4585-950a-336d57316b69-kube-api-access-9t9v9" (OuterVolumeSpecName: "kube-api-access-9t9v9") pod "4fa2454d-726a-4585-950a-336d57316b69" (UID: "4fa2454d-726a-4585-950a-336d57316b69"). InnerVolumeSpecName "kube-api-access-9t9v9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:51:09 crc kubenswrapper[4698]: I0127 14:51:09.160837 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d5b86bc8-7f21-4d28-a94d-56ec54d13cb5-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "d5b86bc8-7f21-4d28-a94d-56ec54d13cb5" (UID: "d5b86bc8-7f21-4d28-a94d-56ec54d13cb5"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:51:09 crc kubenswrapper[4698]: I0127 14:51:09.181005 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4fa2454d-726a-4585-950a-336d57316b69-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4fa2454d-726a-4585-950a-336d57316b69" (UID: "4fa2454d-726a-4585-950a-336d57316b69"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:51:09 crc kubenswrapper[4698]: I0127 14:51:09.184184 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d5b86bc8-7f21-4d28-a94d-56ec54d13cb5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d5b86bc8-7f21-4d28-a94d-56ec54d13cb5" (UID: "d5b86bc8-7f21-4d28-a94d-56ec54d13cb5"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:51:09 crc kubenswrapper[4698]: I0127 14:51:09.203103 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d5b86bc8-7f21-4d28-a94d-56ec54d13cb5-config-data" (OuterVolumeSpecName: "config-data") pod "d5b86bc8-7f21-4d28-a94d-56ec54d13cb5" (UID: "d5b86bc8-7f21-4d28-a94d-56ec54d13cb5"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:51:09 crc kubenswrapper[4698]: I0127 14:51:09.214440 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4fa2454d-726a-4585-950a-336d57316b69-config-data" (OuterVolumeSpecName: "config-data") pod "4fa2454d-726a-4585-950a-336d57316b69" (UID: "4fa2454d-726a-4585-950a-336d57316b69"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:51:09 crc kubenswrapper[4698]: I0127 14:51:09.256530 4698 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/d5b86bc8-7f21-4d28-a94d-56ec54d13cb5-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 14:51:09 crc kubenswrapper[4698]: I0127 14:51:09.256578 4698 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4fa2454d-726a-4585-950a-336d57316b69-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 14:51:09 crc kubenswrapper[4698]: I0127 14:51:09.256589 4698 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4fa2454d-726a-4585-950a-336d57316b69-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 14:51:09 crc kubenswrapper[4698]: I0127 14:51:09.256597 4698 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d5b86bc8-7f21-4d28-a94d-56ec54d13cb5-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 14:51:09 crc kubenswrapper[4698]: I0127 14:51:09.256607 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9t9v9\" (UniqueName: \"kubernetes.io/projected/4fa2454d-726a-4585-950a-336d57316b69-kube-api-access-9t9v9\") on node \"crc\" DevicePath \"\"" Jan 27 14:51:09 crc kubenswrapper[4698]: I0127 14:51:09.256617 4698 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d5b86bc8-7f21-4d28-a94d-56ec54d13cb5-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 14:51:09 crc kubenswrapper[4698]: I0127 14:51:09.256629 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zmsss\" (UniqueName: \"kubernetes.io/projected/d5b86bc8-7f21-4d28-a94d-56ec54d13cb5-kube-api-access-zmsss\") on node \"crc\" DevicePath \"\"" Jan 27 14:51:09 crc kubenswrapper[4698]: I0127 14:51:09.583490 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-db-sync-gdttx" event={"ID":"d5b86bc8-7f21-4d28-a94d-56ec54d13cb5","Type":"ContainerDied","Data":"0d74669aa2c2139f80f79071f2b0f84e6216389ca51b3f6f09848db1159affdc"} Jan 27 14:51:09 crc kubenswrapper[4698]: I0127 14:51:09.583544 4698 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0d74669aa2c2139f80f79071f2b0f84e6216389ca51b3f6f09848db1159affdc" Jan 27 14:51:09 crc kubenswrapper[4698]: I0127 14:51:09.583517 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-db-sync-gdttx" Jan 27 14:51:09 crc kubenswrapper[4698]: I0127 14:51:09.586164 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-9xlkv" event={"ID":"4fa2454d-726a-4585-950a-336d57316b69","Type":"ContainerDied","Data":"c541c87b3576a3fcec3cc6f175e6bd220cafe8467bec7b10ad376ffeb6262c90"} Jan 27 14:51:09 crc kubenswrapper[4698]: I0127 14:51:09.586236 4698 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-9xlkv" Jan 27 14:51:09 crc kubenswrapper[4698]: I0127 14:51:09.586249 4698 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c541c87b3576a3fcec3cc6f175e6bd220cafe8467bec7b10ad376ffeb6262c90" Jan 27 14:51:09 crc kubenswrapper[4698]: I0127 14:51:09.883991 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-nwldv"] Jan 27 14:51:09 crc kubenswrapper[4698]: E0127 14:51:09.884728 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d5b86bc8-7f21-4d28-a94d-56ec54d13cb5" containerName="watcher-db-sync" Jan 27 14:51:09 crc kubenswrapper[4698]: I0127 14:51:09.884751 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="d5b86bc8-7f21-4d28-a94d-56ec54d13cb5" containerName="watcher-db-sync" Jan 27 14:51:09 crc kubenswrapper[4698]: E0127 14:51:09.884772 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4fa2454d-726a-4585-950a-336d57316b69" containerName="keystone-db-sync" Jan 27 14:51:09 crc kubenswrapper[4698]: I0127 14:51:09.884780 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="4fa2454d-726a-4585-950a-336d57316b69" containerName="keystone-db-sync" Jan 27 14:51:09 crc kubenswrapper[4698]: E0127 14:51:09.884798 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cf5234a1-c705-4a80-8992-05e2ce515ff6" containerName="mariadb-account-create-update" Jan 27 14:51:09 crc kubenswrapper[4698]: I0127 14:51:09.884806 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="cf5234a1-c705-4a80-8992-05e2ce515ff6" containerName="mariadb-account-create-update" Jan 27 14:51:09 crc kubenswrapper[4698]: I0127 14:51:09.885037 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="d5b86bc8-7f21-4d28-a94d-56ec54d13cb5" containerName="watcher-db-sync" Jan 27 14:51:09 crc kubenswrapper[4698]: I0127 14:51:09.885065 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="cf5234a1-c705-4a80-8992-05e2ce515ff6" containerName="mariadb-account-create-update" Jan 27 14:51:09 crc kubenswrapper[4698]: I0127 14:51:09.885078 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="4fa2454d-726a-4585-950a-336d57316b69" containerName="keystone-db-sync" Jan 27 14:51:09 crc kubenswrapper[4698]: I0127 14:51:09.885720 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-nwldv" Jan 27 14:51:09 crc kubenswrapper[4698]: I0127 14:51:09.895058 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Jan 27 14:51:09 crc kubenswrapper[4698]: I0127 14:51:09.896616 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-nk52t" Jan 27 14:51:09 crc kubenswrapper[4698]: I0127 14:51:09.899758 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 27 14:51:09 crc kubenswrapper[4698]: I0127 14:51:09.899787 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 27 14:51:09 crc kubenswrapper[4698]: I0127 14:51:09.911763 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-96c5cb5f9-jgbs6"] Jan 27 14:51:09 crc kubenswrapper[4698]: I0127 14:51:09.913155 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-96c5cb5f9-jgbs6" Jan 27 14:51:09 crc kubenswrapper[4698]: I0127 14:51:09.922163 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 27 14:51:09 crc kubenswrapper[4698]: I0127 14:51:09.932394 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-nwldv"] Jan 27 14:51:09 crc kubenswrapper[4698]: I0127 14:51:09.946078 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-96c5cb5f9-jgbs6"] Jan 27 14:51:09 crc kubenswrapper[4698]: I0127 14:51:09.971654 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/69ce5b4f-ff07-4f6c-8c15-bef96f1be728-config-data\") pod \"keystone-bootstrap-nwldv\" (UID: \"69ce5b4f-ff07-4f6c-8c15-bef96f1be728\") " pod="openstack/keystone-bootstrap-nwldv" Jan 27 14:51:09 crc kubenswrapper[4698]: I0127 14:51:09.971753 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nmj5t\" (UniqueName: \"kubernetes.io/projected/69ce5b4f-ff07-4f6c-8c15-bef96f1be728-kube-api-access-nmj5t\") pod \"keystone-bootstrap-nwldv\" (UID: \"69ce5b4f-ff07-4f6c-8c15-bef96f1be728\") " pod="openstack/keystone-bootstrap-nwldv" Jan 27 14:51:09 crc kubenswrapper[4698]: I0127 14:51:09.971796 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/69ce5b4f-ff07-4f6c-8c15-bef96f1be728-combined-ca-bundle\") pod \"keystone-bootstrap-nwldv\" (UID: \"69ce5b4f-ff07-4f6c-8c15-bef96f1be728\") " pod="openstack/keystone-bootstrap-nwldv" Jan 27 14:51:09 crc kubenswrapper[4698]: I0127 14:51:09.971844 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/69ce5b4f-ff07-4f6c-8c15-bef96f1be728-fernet-keys\") pod \"keystone-bootstrap-nwldv\" (UID: \"69ce5b4f-ff07-4f6c-8c15-bef96f1be728\") " pod="openstack/keystone-bootstrap-nwldv" Jan 27 14:51:09 crc kubenswrapper[4698]: I0127 14:51:09.971869 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/69ce5b4f-ff07-4f6c-8c15-bef96f1be728-scripts\") pod \"keystone-bootstrap-nwldv\" (UID: \"69ce5b4f-ff07-4f6c-8c15-bef96f1be728\") " pod="openstack/keystone-bootstrap-nwldv" Jan 27 14:51:09 crc kubenswrapper[4698]: I0127 14:51:09.971891 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/69ce5b4f-ff07-4f6c-8c15-bef96f1be728-credential-keys\") pod \"keystone-bootstrap-nwldv\" (UID: \"69ce5b4f-ff07-4f6c-8c15-bef96f1be728\") " pod="openstack/keystone-bootstrap-nwldv" Jan 27 14:51:10 crc kubenswrapper[4698]: I0127 14:51:10.062539 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/watcher-api-0"] Jan 27 14:51:10 crc kubenswrapper[4698]: I0127 14:51:10.072362 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-api-0" Jan 27 14:51:10 crc kubenswrapper[4698]: I0127 14:51:10.075546 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nmj5t\" (UniqueName: \"kubernetes.io/projected/69ce5b4f-ff07-4f6c-8c15-bef96f1be728-kube-api-access-nmj5t\") pod \"keystone-bootstrap-nwldv\" (UID: \"69ce5b4f-ff07-4f6c-8c15-bef96f1be728\") " pod="openstack/keystone-bootstrap-nwldv" Jan 27 14:51:10 crc kubenswrapper[4698]: I0127 14:51:10.075605 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/69ce5b4f-ff07-4f6c-8c15-bef96f1be728-combined-ca-bundle\") pod \"keystone-bootstrap-nwldv\" (UID: \"69ce5b4f-ff07-4f6c-8c15-bef96f1be728\") " pod="openstack/keystone-bootstrap-nwldv" Jan 27 14:51:10 crc kubenswrapper[4698]: I0127 14:51:10.075692 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/69ce5b4f-ff07-4f6c-8c15-bef96f1be728-fernet-keys\") pod \"keystone-bootstrap-nwldv\" (UID: \"69ce5b4f-ff07-4f6c-8c15-bef96f1be728\") " pod="openstack/keystone-bootstrap-nwldv" Jan 27 14:51:10 crc kubenswrapper[4698]: I0127 14:51:10.075721 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1ae30c47-a75e-4b9d-a524-547ad93bd32d-ovsdbserver-nb\") pod \"dnsmasq-dns-96c5cb5f9-jgbs6\" (UID: \"1ae30c47-a75e-4b9d-a524-547ad93bd32d\") " pod="openstack/dnsmasq-dns-96c5cb5f9-jgbs6" Jan 27 14:51:10 crc kubenswrapper[4698]: I0127 14:51:10.075744 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n2p2p\" (UniqueName: \"kubernetes.io/projected/1ae30c47-a75e-4b9d-a524-547ad93bd32d-kube-api-access-n2p2p\") pod \"dnsmasq-dns-96c5cb5f9-jgbs6\" (UID: \"1ae30c47-a75e-4b9d-a524-547ad93bd32d\") " pod="openstack/dnsmasq-dns-96c5cb5f9-jgbs6" Jan 27 14:51:10 crc kubenswrapper[4698]: I0127 14:51:10.075770 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/69ce5b4f-ff07-4f6c-8c15-bef96f1be728-scripts\") pod \"keystone-bootstrap-nwldv\" (UID: \"69ce5b4f-ff07-4f6c-8c15-bef96f1be728\") " pod="openstack/keystone-bootstrap-nwldv" Jan 27 14:51:10 crc kubenswrapper[4698]: I0127 14:51:10.075792 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/69ce5b4f-ff07-4f6c-8c15-bef96f1be728-credential-keys\") pod \"keystone-bootstrap-nwldv\" (UID: \"69ce5b4f-ff07-4f6c-8c15-bef96f1be728\") " pod="openstack/keystone-bootstrap-nwldv" Jan 27 14:51:10 crc kubenswrapper[4698]: I0127 14:51:10.075844 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1ae30c47-a75e-4b9d-a524-547ad93bd32d-dns-svc\") pod \"dnsmasq-dns-96c5cb5f9-jgbs6\" (UID: \"1ae30c47-a75e-4b9d-a524-547ad93bd32d\") " pod="openstack/dnsmasq-dns-96c5cb5f9-jgbs6" Jan 27 14:51:10 crc kubenswrapper[4698]: I0127 14:51:10.075876 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1ae30c47-a75e-4b9d-a524-547ad93bd32d-dns-swift-storage-0\") pod \"dnsmasq-dns-96c5cb5f9-jgbs6\" (UID: \"1ae30c47-a75e-4b9d-a524-547ad93bd32d\") " 
pod="openstack/dnsmasq-dns-96c5cb5f9-jgbs6" Jan 27 14:51:10 crc kubenswrapper[4698]: I0127 14:51:10.075919 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1ae30c47-a75e-4b9d-a524-547ad93bd32d-ovsdbserver-sb\") pod \"dnsmasq-dns-96c5cb5f9-jgbs6\" (UID: \"1ae30c47-a75e-4b9d-a524-547ad93bd32d\") " pod="openstack/dnsmasq-dns-96c5cb5f9-jgbs6" Jan 27 14:51:10 crc kubenswrapper[4698]: I0127 14:51:10.075967 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/69ce5b4f-ff07-4f6c-8c15-bef96f1be728-config-data\") pod \"keystone-bootstrap-nwldv\" (UID: \"69ce5b4f-ff07-4f6c-8c15-bef96f1be728\") " pod="openstack/keystone-bootstrap-nwldv" Jan 27 14:51:10 crc kubenswrapper[4698]: I0127 14:51:10.076003 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1ae30c47-a75e-4b9d-a524-547ad93bd32d-config\") pod \"dnsmasq-dns-96c5cb5f9-jgbs6\" (UID: \"1ae30c47-a75e-4b9d-a524-547ad93bd32d\") " pod="openstack/dnsmasq-dns-96c5cb5f9-jgbs6" Jan 27 14:51:10 crc kubenswrapper[4698]: I0127 14:51:10.092930 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-api-0"] Jan 27 14:51:10 crc kubenswrapper[4698]: I0127 14:51:10.096656 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/69ce5b4f-ff07-4f6c-8c15-bef96f1be728-fernet-keys\") pod \"keystone-bootstrap-nwldv\" (UID: \"69ce5b4f-ff07-4f6c-8c15-bef96f1be728\") " pod="openstack/keystone-bootstrap-nwldv" Jan 27 14:51:10 crc kubenswrapper[4698]: I0127 14:51:10.096907 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/69ce5b4f-ff07-4f6c-8c15-bef96f1be728-scripts\") pod \"keystone-bootstrap-nwldv\" (UID: \"69ce5b4f-ff07-4f6c-8c15-bef96f1be728\") " pod="openstack/keystone-bootstrap-nwldv" Jan 27 14:51:10 crc kubenswrapper[4698]: I0127 14:51:10.101100 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-watcher-dockercfg-zb6wf" Jan 27 14:51:10 crc kubenswrapper[4698]: I0127 14:51:10.101298 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-api-config-data" Jan 27 14:51:10 crc kubenswrapper[4698]: I0127 14:51:10.101586 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/69ce5b4f-ff07-4f6c-8c15-bef96f1be728-config-data\") pod \"keystone-bootstrap-nwldv\" (UID: \"69ce5b4f-ff07-4f6c-8c15-bef96f1be728\") " pod="openstack/keystone-bootstrap-nwldv" Jan 27 14:51:10 crc kubenswrapper[4698]: I0127 14:51:10.104516 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/69ce5b4f-ff07-4f6c-8c15-bef96f1be728-combined-ca-bundle\") pod \"keystone-bootstrap-nwldv\" (UID: \"69ce5b4f-ff07-4f6c-8c15-bef96f1be728\") " pod="openstack/keystone-bootstrap-nwldv" Jan 27 14:51:10 crc kubenswrapper[4698]: I0127 14:51:10.120763 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/69ce5b4f-ff07-4f6c-8c15-bef96f1be728-credential-keys\") pod \"keystone-bootstrap-nwldv\" (UID: \"69ce5b4f-ff07-4f6c-8c15-bef96f1be728\") " pod="openstack/keystone-bootstrap-nwldv" Jan 27 14:51:10 
crc kubenswrapper[4698]: I0127 14:51:10.126834 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nmj5t\" (UniqueName: \"kubernetes.io/projected/69ce5b4f-ff07-4f6c-8c15-bef96f1be728-kube-api-access-nmj5t\") pod \"keystone-bootstrap-nwldv\" (UID: \"69ce5b4f-ff07-4f6c-8c15-bef96f1be728\") " pod="openstack/keystone-bootstrap-nwldv" Jan 27 14:51:10 crc kubenswrapper[4698]: I0127 14:51:10.131229 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/watcher-applier-0"] Jan 27 14:51:10 crc kubenswrapper[4698]: I0127 14:51:10.132333 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-applier-0" Jan 27 14:51:10 crc kubenswrapper[4698]: I0127 14:51:10.143706 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-applier-config-data" Jan 27 14:51:10 crc kubenswrapper[4698]: I0127 14:51:10.177322 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cb72f892-e99c-447a-aea6-9529b57b01ac-logs\") pod \"watcher-api-0\" (UID: \"cb72f892-e99c-447a-aea6-9529b57b01ac\") " pod="openstack/watcher-api-0" Jan 27 14:51:10 crc kubenswrapper[4698]: I0127 14:51:10.177362 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/cb72f892-e99c-447a-aea6-9529b57b01ac-custom-prometheus-ca\") pod \"watcher-api-0\" (UID: \"cb72f892-e99c-447a-aea6-9529b57b01ac\") " pod="openstack/watcher-api-0" Jan 27 14:51:10 crc kubenswrapper[4698]: I0127 14:51:10.177384 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1ae30c47-a75e-4b9d-a524-547ad93bd32d-ovsdbserver-sb\") pod \"dnsmasq-dns-96c5cb5f9-jgbs6\" (UID: \"1ae30c47-a75e-4b9d-a524-547ad93bd32d\") " pod="openstack/dnsmasq-dns-96c5cb5f9-jgbs6" Jan 27 14:51:10 crc kubenswrapper[4698]: I0127 14:51:10.177427 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1ae30c47-a75e-4b9d-a524-547ad93bd32d-config\") pod \"dnsmasq-dns-96c5cb5f9-jgbs6\" (UID: \"1ae30c47-a75e-4b9d-a524-547ad93bd32d\") " pod="openstack/dnsmasq-dns-96c5cb5f9-jgbs6" Jan 27 14:51:10 crc kubenswrapper[4698]: I0127 14:51:10.177513 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cb72f892-e99c-447a-aea6-9529b57b01ac-combined-ca-bundle\") pod \"watcher-api-0\" (UID: \"cb72f892-e99c-447a-aea6-9529b57b01ac\") " pod="openstack/watcher-api-0" Jan 27 14:51:10 crc kubenswrapper[4698]: I0127 14:51:10.177533 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1ae30c47-a75e-4b9d-a524-547ad93bd32d-ovsdbserver-nb\") pod \"dnsmasq-dns-96c5cb5f9-jgbs6\" (UID: \"1ae30c47-a75e-4b9d-a524-547ad93bd32d\") " pod="openstack/dnsmasq-dns-96c5cb5f9-jgbs6" Jan 27 14:51:10 crc kubenswrapper[4698]: I0127 14:51:10.177548 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n2p2p\" (UniqueName: \"kubernetes.io/projected/1ae30c47-a75e-4b9d-a524-547ad93bd32d-kube-api-access-n2p2p\") pod \"dnsmasq-dns-96c5cb5f9-jgbs6\" (UID: \"1ae30c47-a75e-4b9d-a524-547ad93bd32d\") " pod="openstack/dnsmasq-dns-96c5cb5f9-jgbs6" Jan 27 
14:51:10 crc kubenswrapper[4698]: I0127 14:51:10.177571 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p5m5t\" (UniqueName: \"kubernetes.io/projected/cb72f892-e99c-447a-aea6-9529b57b01ac-kube-api-access-p5m5t\") pod \"watcher-api-0\" (UID: \"cb72f892-e99c-447a-aea6-9529b57b01ac\") " pod="openstack/watcher-api-0" Jan 27 14:51:10 crc kubenswrapper[4698]: I0127 14:51:10.177606 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cb72f892-e99c-447a-aea6-9529b57b01ac-config-data\") pod \"watcher-api-0\" (UID: \"cb72f892-e99c-447a-aea6-9529b57b01ac\") " pod="openstack/watcher-api-0" Jan 27 14:51:10 crc kubenswrapper[4698]: I0127 14:51:10.177630 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1ae30c47-a75e-4b9d-a524-547ad93bd32d-dns-svc\") pod \"dnsmasq-dns-96c5cb5f9-jgbs6\" (UID: \"1ae30c47-a75e-4b9d-a524-547ad93bd32d\") " pod="openstack/dnsmasq-dns-96c5cb5f9-jgbs6" Jan 27 14:51:10 crc kubenswrapper[4698]: I0127 14:51:10.177673 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1ae30c47-a75e-4b9d-a524-547ad93bd32d-dns-swift-storage-0\") pod \"dnsmasq-dns-96c5cb5f9-jgbs6\" (UID: \"1ae30c47-a75e-4b9d-a524-547ad93bd32d\") " pod="openstack/dnsmasq-dns-96c5cb5f9-jgbs6" Jan 27 14:51:10 crc kubenswrapper[4698]: I0127 14:51:10.178756 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1ae30c47-a75e-4b9d-a524-547ad93bd32d-dns-swift-storage-0\") pod \"dnsmasq-dns-96c5cb5f9-jgbs6\" (UID: \"1ae30c47-a75e-4b9d-a524-547ad93bd32d\") " pod="openstack/dnsmasq-dns-96c5cb5f9-jgbs6" Jan 27 14:51:10 crc kubenswrapper[4698]: I0127 14:51:10.179471 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1ae30c47-a75e-4b9d-a524-547ad93bd32d-ovsdbserver-sb\") pod \"dnsmasq-dns-96c5cb5f9-jgbs6\" (UID: \"1ae30c47-a75e-4b9d-a524-547ad93bd32d\") " pod="openstack/dnsmasq-dns-96c5cb5f9-jgbs6" Jan 27 14:51:10 crc kubenswrapper[4698]: I0127 14:51:10.182883 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1ae30c47-a75e-4b9d-a524-547ad93bd32d-config\") pod \"dnsmasq-dns-96c5cb5f9-jgbs6\" (UID: \"1ae30c47-a75e-4b9d-a524-547ad93bd32d\") " pod="openstack/dnsmasq-dns-96c5cb5f9-jgbs6" Jan 27 14:51:10 crc kubenswrapper[4698]: I0127 14:51:10.183123 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1ae30c47-a75e-4b9d-a524-547ad93bd32d-ovsdbserver-nb\") pod \"dnsmasq-dns-96c5cb5f9-jgbs6\" (UID: \"1ae30c47-a75e-4b9d-a524-547ad93bd32d\") " pod="openstack/dnsmasq-dns-96c5cb5f9-jgbs6" Jan 27 14:51:10 crc kubenswrapper[4698]: I0127 14:51:10.183508 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1ae30c47-a75e-4b9d-a524-547ad93bd32d-dns-svc\") pod \"dnsmasq-dns-96c5cb5f9-jgbs6\" (UID: \"1ae30c47-a75e-4b9d-a524-547ad93bd32d\") " pod="openstack/dnsmasq-dns-96c5cb5f9-jgbs6" Jan 27 14:51:10 crc kubenswrapper[4698]: I0127 14:51:10.188487 4698 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack/horizon-75cdd6b9b5-7lj2z"] Jan 27 14:51:10 crc kubenswrapper[4698]: I0127 14:51:10.190434 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-75cdd6b9b5-7lj2z" Jan 27 14:51:10 crc kubenswrapper[4698]: I0127 14:51:10.198846 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"horizon" Jan 27 14:51:10 crc kubenswrapper[4698]: I0127 14:51:10.198903 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"horizon-horizon-dockercfg-tmjzf" Jan 27 14:51:10 crc kubenswrapper[4698]: I0127 14:51:10.199116 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"horizon-scripts" Jan 27 14:51:10 crc kubenswrapper[4698]: I0127 14:51:10.199179 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"horizon-config-data" Jan 27 14:51:10 crc kubenswrapper[4698]: I0127 14:51:10.210673 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-applier-0"] Jan 27 14:51:10 crc kubenswrapper[4698]: I0127 14:51:10.222469 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-nwldv" Jan 27 14:51:10 crc kubenswrapper[4698]: I0127 14:51:10.242009 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-75cdd6b9b5-7lj2z"] Jan 27 14:51:10 crc kubenswrapper[4698]: I0127 14:51:10.249587 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n2p2p\" (UniqueName: \"kubernetes.io/projected/1ae30c47-a75e-4b9d-a524-547ad93bd32d-kube-api-access-n2p2p\") pod \"dnsmasq-dns-96c5cb5f9-jgbs6\" (UID: \"1ae30c47-a75e-4b9d-a524-547ad93bd32d\") " pod="openstack/dnsmasq-dns-96c5cb5f9-jgbs6" Jan 27 14:51:10 crc kubenswrapper[4698]: I0127 14:51:10.254702 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-sync-vp87x"] Jan 27 14:51:10 crc kubenswrapper[4698]: I0127 14:51:10.267436 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-vp87x" Jan 27 14:51:10 crc kubenswrapper[4698]: I0127 14:51:10.270153 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-qqt82" Jan 27 14:51:10 crc kubenswrapper[4698]: I0127 14:51:10.279511 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/4e0c2380-35df-4719-8881-546b69c6225a-config-data\") pod \"horizon-75cdd6b9b5-7lj2z\" (UID: \"4e0c2380-35df-4719-8881-546b69c6225a\") " pod="openstack/horizon-75cdd6b9b5-7lj2z" Jan 27 14:51:10 crc kubenswrapper[4698]: I0127 14:51:10.279582 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cb72f892-e99c-447a-aea6-9529b57b01ac-combined-ca-bundle\") pod \"watcher-api-0\" (UID: \"cb72f892-e99c-447a-aea6-9529b57b01ac\") " pod="openstack/watcher-api-0" Jan 27 14:51:10 crc kubenswrapper[4698]: I0127 14:51:10.279610 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4e0c2380-35df-4719-8881-546b69c6225a-logs\") pod \"horizon-75cdd6b9b5-7lj2z\" (UID: \"4e0c2380-35df-4719-8881-546b69c6225a\") " pod="openstack/horizon-75cdd6b9b5-7lj2z" Jan 27 14:51:10 crc kubenswrapper[4698]: I0127 14:51:10.279657 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p5m5t\" (UniqueName: \"kubernetes.io/projected/cb72f892-e99c-447a-aea6-9529b57b01ac-kube-api-access-p5m5t\") pod \"watcher-api-0\" (UID: \"cb72f892-e99c-447a-aea6-9529b57b01ac\") " pod="openstack/watcher-api-0" Jan 27 14:51:10 crc kubenswrapper[4698]: I0127 14:51:10.279684 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/4e0c2380-35df-4719-8881-546b69c6225a-horizon-secret-key\") pod \"horizon-75cdd6b9b5-7lj2z\" (UID: \"4e0c2380-35df-4719-8881-546b69c6225a\") " pod="openstack/horizon-75cdd6b9b5-7lj2z" Jan 27 14:51:10 crc kubenswrapper[4698]: I0127 14:51:10.279726 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cb72f892-e99c-447a-aea6-9529b57b01ac-config-data\") pod \"watcher-api-0\" (UID: \"cb72f892-e99c-447a-aea6-9529b57b01ac\") " pod="openstack/watcher-api-0" Jan 27 14:51:10 crc kubenswrapper[4698]: I0127 14:51:10.279767 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/445b01d2-0375-432b-808d-4045eb66c5da-logs\") pod \"watcher-applier-0\" (UID: \"445b01d2-0375-432b-808d-4045eb66c5da\") " pod="openstack/watcher-applier-0" Jan 27 14:51:10 crc kubenswrapper[4698]: I0127 14:51:10.279794 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cb72f892-e99c-447a-aea6-9529b57b01ac-logs\") pod \"watcher-api-0\" (UID: \"cb72f892-e99c-447a-aea6-9529b57b01ac\") " pod="openstack/watcher-api-0" Jan 27 14:51:10 crc kubenswrapper[4698]: I0127 14:51:10.279817 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/cb72f892-e99c-447a-aea6-9529b57b01ac-custom-prometheus-ca\") pod \"watcher-api-0\" (UID: \"cb72f892-e99c-447a-aea6-9529b57b01ac\") " 
pod="openstack/watcher-api-0" Jan 27 14:51:10 crc kubenswrapper[4698]: I0127 14:51:10.279870 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9q25c\" (UniqueName: \"kubernetes.io/projected/445b01d2-0375-432b-808d-4045eb66c5da-kube-api-access-9q25c\") pod \"watcher-applier-0\" (UID: \"445b01d2-0375-432b-808d-4045eb66c5da\") " pod="openstack/watcher-applier-0" Jan 27 14:51:10 crc kubenswrapper[4698]: I0127 14:51:10.279912 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4e0c2380-35df-4719-8881-546b69c6225a-scripts\") pod \"horizon-75cdd6b9b5-7lj2z\" (UID: \"4e0c2380-35df-4719-8881-546b69c6225a\") " pod="openstack/horizon-75cdd6b9b5-7lj2z" Jan 27 14:51:10 crc kubenswrapper[4698]: I0127 14:51:10.279972 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/445b01d2-0375-432b-808d-4045eb66c5da-combined-ca-bundle\") pod \"watcher-applier-0\" (UID: \"445b01d2-0375-432b-808d-4045eb66c5da\") " pod="openstack/watcher-applier-0" Jan 27 14:51:10 crc kubenswrapper[4698]: I0127 14:51:10.280011 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vdvlk\" (UniqueName: \"kubernetes.io/projected/4e0c2380-35df-4719-8881-546b69c6225a-kube-api-access-vdvlk\") pod \"horizon-75cdd6b9b5-7lj2z\" (UID: \"4e0c2380-35df-4719-8881-546b69c6225a\") " pod="openstack/horizon-75cdd6b9b5-7lj2z" Jan 27 14:51:10 crc kubenswrapper[4698]: I0127 14:51:10.280078 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/445b01d2-0375-432b-808d-4045eb66c5da-config-data\") pod \"watcher-applier-0\" (UID: \"445b01d2-0375-432b-808d-4045eb66c5da\") " pod="openstack/watcher-applier-0" Jan 27 14:51:10 crc kubenswrapper[4698]: I0127 14:51:10.280579 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cb72f892-e99c-447a-aea6-9529b57b01ac-logs\") pod \"watcher-api-0\" (UID: \"cb72f892-e99c-447a-aea6-9529b57b01ac\") " pod="openstack/watcher-api-0" Jan 27 14:51:10 crc kubenswrapper[4698]: I0127 14:51:10.287700 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Jan 27 14:51:10 crc kubenswrapper[4698]: I0127 14:51:10.294118 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Jan 27 14:51:10 crc kubenswrapper[4698]: I0127 14:51:10.301707 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cb72f892-e99c-447a-aea6-9529b57b01ac-config-data\") pod \"watcher-api-0\" (UID: \"cb72f892-e99c-447a-aea6-9529b57b01ac\") " pod="openstack/watcher-api-0" Jan 27 14:51:10 crc kubenswrapper[4698]: I0127 14:51:10.302450 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cb72f892-e99c-447a-aea6-9529b57b01ac-combined-ca-bundle\") pod \"watcher-api-0\" (UID: \"cb72f892-e99c-447a-aea6-9529b57b01ac\") " pod="openstack/watcher-api-0" Jan 27 14:51:10 crc kubenswrapper[4698]: I0127 14:51:10.320099 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-sync-mcnmn"] Jan 27 14:51:10 crc kubenswrapper[4698]: I0127 
14:51:10.322511 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/cb72f892-e99c-447a-aea6-9529b57b01ac-custom-prometheus-ca\") pod \"watcher-api-0\" (UID: \"cb72f892-e99c-447a-aea6-9529b57b01ac\") " pod="openstack/watcher-api-0" Jan 27 14:51:10 crc kubenswrapper[4698]: I0127 14:51:10.325594 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-mcnmn" Jan 27 14:51:10 crc kubenswrapper[4698]: I0127 14:51:10.329423 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-zqtt2" Jan 27 14:51:10 crc kubenswrapper[4698]: I0127 14:51:10.338225 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Jan 27 14:51:10 crc kubenswrapper[4698]: I0127 14:51:10.342446 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Jan 27 14:51:10 crc kubenswrapper[4698]: I0127 14:51:10.363736 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p5m5t\" (UniqueName: \"kubernetes.io/projected/cb72f892-e99c-447a-aea6-9529b57b01ac-kube-api-access-p5m5t\") pod \"watcher-api-0\" (UID: \"cb72f892-e99c-447a-aea6-9529b57b01ac\") " pod="openstack/watcher-api-0" Jan 27 14:51:10 crc kubenswrapper[4698]: I0127 14:51:10.376502 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-vp87x"] Jan 27 14:51:10 crc kubenswrapper[4698]: I0127 14:51:10.381167 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/445b01d2-0375-432b-808d-4045eb66c5da-logs\") pod \"watcher-applier-0\" (UID: \"445b01d2-0375-432b-808d-4045eb66c5da\") " pod="openstack/watcher-applier-0" Jan 27 14:51:10 crc kubenswrapper[4698]: I0127 14:51:10.381257 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9q25c\" (UniqueName: \"kubernetes.io/projected/445b01d2-0375-432b-808d-4045eb66c5da-kube-api-access-9q25c\") pod \"watcher-applier-0\" (UID: \"445b01d2-0375-432b-808d-4045eb66c5da\") " pod="openstack/watcher-applier-0" Jan 27 14:51:10 crc kubenswrapper[4698]: I0127 14:51:10.381291 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/992034d3-1c4d-4e83-9641-12543dd3df24-combined-ca-bundle\") pod \"neutron-db-sync-vp87x\" (UID: \"992034d3-1c4d-4e83-9641-12543dd3df24\") " pod="openstack/neutron-db-sync-vp87x" Jan 27 14:51:10 crc kubenswrapper[4698]: I0127 14:51:10.381329 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4e0c2380-35df-4719-8881-546b69c6225a-scripts\") pod \"horizon-75cdd6b9b5-7lj2z\" (UID: \"4e0c2380-35df-4719-8881-546b69c6225a\") " pod="openstack/horizon-75cdd6b9b5-7lj2z" Jan 27 14:51:10 crc kubenswrapper[4698]: I0127 14:51:10.381377 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/445b01d2-0375-432b-808d-4045eb66c5da-combined-ca-bundle\") pod \"watcher-applier-0\" (UID: \"445b01d2-0375-432b-808d-4045eb66c5da\") " pod="openstack/watcher-applier-0" Jan 27 14:51:10 crc kubenswrapper[4698]: I0127 14:51:10.381408 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vdvlk\" 
(UniqueName: \"kubernetes.io/projected/4e0c2380-35df-4719-8881-546b69c6225a-kube-api-access-vdvlk\") pod \"horizon-75cdd6b9b5-7lj2z\" (UID: \"4e0c2380-35df-4719-8881-546b69c6225a\") " pod="openstack/horizon-75cdd6b9b5-7lj2z" Jan 27 14:51:10 crc kubenswrapper[4698]: I0127 14:51:10.381438 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/445b01d2-0375-432b-808d-4045eb66c5da-config-data\") pod \"watcher-applier-0\" (UID: \"445b01d2-0375-432b-808d-4045eb66c5da\") " pod="openstack/watcher-applier-0" Jan 27 14:51:10 crc kubenswrapper[4698]: I0127 14:51:10.381462 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g92x6\" (UniqueName: \"kubernetes.io/projected/992034d3-1c4d-4e83-9641-12543dd3df24-kube-api-access-g92x6\") pod \"neutron-db-sync-vp87x\" (UID: \"992034d3-1c4d-4e83-9641-12543dd3df24\") " pod="openstack/neutron-db-sync-vp87x" Jan 27 14:51:10 crc kubenswrapper[4698]: I0127 14:51:10.381498 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/4e0c2380-35df-4719-8881-546b69c6225a-config-data\") pod \"horizon-75cdd6b9b5-7lj2z\" (UID: \"4e0c2380-35df-4719-8881-546b69c6225a\") " pod="openstack/horizon-75cdd6b9b5-7lj2z" Jan 27 14:51:10 crc kubenswrapper[4698]: I0127 14:51:10.381528 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4e0c2380-35df-4719-8881-546b69c6225a-logs\") pod \"horizon-75cdd6b9b5-7lj2z\" (UID: \"4e0c2380-35df-4719-8881-546b69c6225a\") " pod="openstack/horizon-75cdd6b9b5-7lj2z" Jan 27 14:51:10 crc kubenswrapper[4698]: I0127 14:51:10.381554 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/992034d3-1c4d-4e83-9641-12543dd3df24-config\") pod \"neutron-db-sync-vp87x\" (UID: \"992034d3-1c4d-4e83-9641-12543dd3df24\") " pod="openstack/neutron-db-sync-vp87x" Jan 27 14:51:10 crc kubenswrapper[4698]: I0127 14:51:10.381599 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/4e0c2380-35df-4719-8881-546b69c6225a-horizon-secret-key\") pod \"horizon-75cdd6b9b5-7lj2z\" (UID: \"4e0c2380-35df-4719-8881-546b69c6225a\") " pod="openstack/horizon-75cdd6b9b5-7lj2z" Jan 27 14:51:10 crc kubenswrapper[4698]: I0127 14:51:10.385097 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/445b01d2-0375-432b-808d-4045eb66c5da-logs\") pod \"watcher-applier-0\" (UID: \"445b01d2-0375-432b-808d-4045eb66c5da\") " pod="openstack/watcher-applier-0" Jan 27 14:51:10 crc kubenswrapper[4698]: I0127 14:51:10.385935 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4e0c2380-35df-4719-8881-546b69c6225a-scripts\") pod \"horizon-75cdd6b9b5-7lj2z\" (UID: \"4e0c2380-35df-4719-8881-546b69c6225a\") " pod="openstack/horizon-75cdd6b9b5-7lj2z" Jan 27 14:51:10 crc kubenswrapper[4698]: I0127 14:51:10.387780 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4e0c2380-35df-4719-8881-546b69c6225a-logs\") pod \"horizon-75cdd6b9b5-7lj2z\" (UID: \"4e0c2380-35df-4719-8881-546b69c6225a\") " pod="openstack/horizon-75cdd6b9b5-7lj2z" Jan 27 14:51:10 
crc kubenswrapper[4698]: I0127 14:51:10.399665 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/4e0c2380-35df-4719-8881-546b69c6225a-config-data\") pod \"horizon-75cdd6b9b5-7lj2z\" (UID: \"4e0c2380-35df-4719-8881-546b69c6225a\") " pod="openstack/horizon-75cdd6b9b5-7lj2z" Jan 27 14:51:10 crc kubenswrapper[4698]: I0127 14:51:10.406630 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/445b01d2-0375-432b-808d-4045eb66c5da-combined-ca-bundle\") pod \"watcher-applier-0\" (UID: \"445b01d2-0375-432b-808d-4045eb66c5da\") " pod="openstack/watcher-applier-0" Jan 27 14:51:10 crc kubenswrapper[4698]: I0127 14:51:10.414033 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/445b01d2-0375-432b-808d-4045eb66c5da-config-data\") pod \"watcher-applier-0\" (UID: \"445b01d2-0375-432b-808d-4045eb66c5da\") " pod="openstack/watcher-applier-0" Jan 27 14:51:10 crc kubenswrapper[4698]: I0127 14:51:10.414392 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/4e0c2380-35df-4719-8881-546b69c6225a-horizon-secret-key\") pod \"horizon-75cdd6b9b5-7lj2z\" (UID: \"4e0c2380-35df-4719-8881-546b69c6225a\") " pod="openstack/horizon-75cdd6b9b5-7lj2z" Jan 27 14:51:10 crc kubenswrapper[4698]: I0127 14:51:10.434355 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9q25c\" (UniqueName: \"kubernetes.io/projected/445b01d2-0375-432b-808d-4045eb66c5da-kube-api-access-9q25c\") pod \"watcher-applier-0\" (UID: \"445b01d2-0375-432b-808d-4045eb66c5da\") " pod="openstack/watcher-applier-0" Jan 27 14:51:10 crc kubenswrapper[4698]: I0127 14:51:10.437124 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vdvlk\" (UniqueName: \"kubernetes.io/projected/4e0c2380-35df-4719-8881-546b69c6225a-kube-api-access-vdvlk\") pod \"horizon-75cdd6b9b5-7lj2z\" (UID: \"4e0c2380-35df-4719-8881-546b69c6225a\") " pod="openstack/horizon-75cdd6b9b5-7lj2z" Jan 27 14:51:10 crc kubenswrapper[4698]: I0127 14:51:10.459618 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-mcnmn"] Jan 27 14:51:10 crc kubenswrapper[4698]: I0127 14:51:10.482796 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jcpmd\" (UniqueName: \"kubernetes.io/projected/74946770-13e5-4777-a645-bb6bee73c277-kube-api-access-jcpmd\") pod \"cinder-db-sync-mcnmn\" (UID: \"74946770-13e5-4777-a645-bb6bee73c277\") " pod="openstack/cinder-db-sync-mcnmn" Jan 27 14:51:10 crc kubenswrapper[4698]: I0127 14:51:10.482844 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/992034d3-1c4d-4e83-9641-12543dd3df24-combined-ca-bundle\") pod \"neutron-db-sync-vp87x\" (UID: \"992034d3-1c4d-4e83-9641-12543dd3df24\") " pod="openstack/neutron-db-sync-vp87x" Jan 27 14:51:10 crc kubenswrapper[4698]: I0127 14:51:10.482885 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/74946770-13e5-4777-a645-bb6bee73c277-db-sync-config-data\") pod \"cinder-db-sync-mcnmn\" (UID: \"74946770-13e5-4777-a645-bb6bee73c277\") " pod="openstack/cinder-db-sync-mcnmn" Jan 27 
14:51:10 crc kubenswrapper[4698]: I0127 14:51:10.482941 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g92x6\" (UniqueName: \"kubernetes.io/projected/992034d3-1c4d-4e83-9641-12543dd3df24-kube-api-access-g92x6\") pod \"neutron-db-sync-vp87x\" (UID: \"992034d3-1c4d-4e83-9641-12543dd3df24\") " pod="openstack/neutron-db-sync-vp87x" Jan 27 14:51:10 crc kubenswrapper[4698]: I0127 14:51:10.482977 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/74946770-13e5-4777-a645-bb6bee73c277-scripts\") pod \"cinder-db-sync-mcnmn\" (UID: \"74946770-13e5-4777-a645-bb6bee73c277\") " pod="openstack/cinder-db-sync-mcnmn" Jan 27 14:51:10 crc kubenswrapper[4698]: I0127 14:51:10.482999 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/992034d3-1c4d-4e83-9641-12543dd3df24-config\") pod \"neutron-db-sync-vp87x\" (UID: \"992034d3-1c4d-4e83-9641-12543dd3df24\") " pod="openstack/neutron-db-sync-vp87x" Jan 27 14:51:10 crc kubenswrapper[4698]: I0127 14:51:10.483014 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/74946770-13e5-4777-a645-bb6bee73c277-combined-ca-bundle\") pod \"cinder-db-sync-mcnmn\" (UID: \"74946770-13e5-4777-a645-bb6bee73c277\") " pod="openstack/cinder-db-sync-mcnmn" Jan 27 14:51:10 crc kubenswrapper[4698]: I0127 14:51:10.483045 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/74946770-13e5-4777-a645-bb6bee73c277-etc-machine-id\") pod \"cinder-db-sync-mcnmn\" (UID: \"74946770-13e5-4777-a645-bb6bee73c277\") " pod="openstack/cinder-db-sync-mcnmn" Jan 27 14:51:10 crc kubenswrapper[4698]: I0127 14:51:10.483061 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/74946770-13e5-4777-a645-bb6bee73c277-config-data\") pod \"cinder-db-sync-mcnmn\" (UID: \"74946770-13e5-4777-a645-bb6bee73c277\") " pod="openstack/cinder-db-sync-mcnmn" Jan 27 14:51:10 crc kubenswrapper[4698]: I0127 14:51:10.493857 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/992034d3-1c4d-4e83-9641-12543dd3df24-config\") pod \"neutron-db-sync-vp87x\" (UID: \"992034d3-1c4d-4e83-9641-12543dd3df24\") " pod="openstack/neutron-db-sync-vp87x" Jan 27 14:51:10 crc kubenswrapper[4698]: I0127 14:51:10.494959 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-api-0" Jan 27 14:51:10 crc kubenswrapper[4698]: I0127 14:51:10.503965 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/992034d3-1c4d-4e83-9641-12543dd3df24-combined-ca-bundle\") pod \"neutron-db-sync-vp87x\" (UID: \"992034d3-1c4d-4e83-9641-12543dd3df24\") " pod="openstack/neutron-db-sync-vp87x" Jan 27 14:51:10 crc kubenswrapper[4698]: I0127 14:51:10.517312 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-applier-0" Jan 27 14:51:10 crc kubenswrapper[4698]: I0127 14:51:10.520294 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g92x6\" (UniqueName: \"kubernetes.io/projected/992034d3-1c4d-4e83-9641-12543dd3df24-kube-api-access-g92x6\") pod \"neutron-db-sync-vp87x\" (UID: \"992034d3-1c4d-4e83-9641-12543dd3df24\") " pod="openstack/neutron-db-sync-vp87x" Jan 27 14:51:10 crc kubenswrapper[4698]: I0127 14:51:10.536928 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 27 14:51:10 crc kubenswrapper[4698]: I0127 14:51:10.544369 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-96c5cb5f9-jgbs6" Jan 27 14:51:10 crc kubenswrapper[4698]: I0127 14:51:10.550577 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 27 14:51:10 crc kubenswrapper[4698]: I0127 14:51:10.569124 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 27 14:51:10 crc kubenswrapper[4698]: I0127 14:51:10.569486 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 27 14:51:10 crc kubenswrapper[4698]: I0127 14:51:10.584273 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0f9b9cd1-a9b3-4764-a897-44de30ff90ac-log-httpd\") pod \"ceilometer-0\" (UID: \"0f9b9cd1-a9b3-4764-a897-44de30ff90ac\") " pod="openstack/ceilometer-0" Jan 27 14:51:10 crc kubenswrapper[4698]: I0127 14:51:10.584335 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jcpmd\" (UniqueName: \"kubernetes.io/projected/74946770-13e5-4777-a645-bb6bee73c277-kube-api-access-jcpmd\") pod \"cinder-db-sync-mcnmn\" (UID: \"74946770-13e5-4777-a645-bb6bee73c277\") " pod="openstack/cinder-db-sync-mcnmn" Jan 27 14:51:10 crc kubenswrapper[4698]: I0127 14:51:10.584373 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0f9b9cd1-a9b3-4764-a897-44de30ff90ac-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"0f9b9cd1-a9b3-4764-a897-44de30ff90ac\") " pod="openstack/ceilometer-0" Jan 27 14:51:10 crc kubenswrapper[4698]: I0127 14:51:10.584401 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/74946770-13e5-4777-a645-bb6bee73c277-db-sync-config-data\") pod \"cinder-db-sync-mcnmn\" (UID: \"74946770-13e5-4777-a645-bb6bee73c277\") " pod="openstack/cinder-db-sync-mcnmn" Jan 27 14:51:10 crc kubenswrapper[4698]: I0127 14:51:10.625836 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0f9b9cd1-a9b3-4764-a897-44de30ff90ac-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"0f9b9cd1-a9b3-4764-a897-44de30ff90ac\") " pod="openstack/ceilometer-0" Jan 27 14:51:10 crc kubenswrapper[4698]: I0127 14:51:10.625967 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0f9b9cd1-a9b3-4764-a897-44de30ff90ac-scripts\") pod \"ceilometer-0\" (UID: \"0f9b9cd1-a9b3-4764-a897-44de30ff90ac\") " pod="openstack/ceilometer-0" Jan 27 14:51:10 crc 
kubenswrapper[4698]: I0127 14:51:10.626114 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/74946770-13e5-4777-a645-bb6bee73c277-scripts\") pod \"cinder-db-sync-mcnmn\" (UID: \"74946770-13e5-4777-a645-bb6bee73c277\") " pod="openstack/cinder-db-sync-mcnmn" Jan 27 14:51:10 crc kubenswrapper[4698]: I0127 14:51:10.626184 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/74946770-13e5-4777-a645-bb6bee73c277-combined-ca-bundle\") pod \"cinder-db-sync-mcnmn\" (UID: \"74946770-13e5-4777-a645-bb6bee73c277\") " pod="openstack/cinder-db-sync-mcnmn" Jan 27 14:51:10 crc kubenswrapper[4698]: I0127 14:51:10.626232 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/74946770-13e5-4777-a645-bb6bee73c277-etc-machine-id\") pod \"cinder-db-sync-mcnmn\" (UID: \"74946770-13e5-4777-a645-bb6bee73c277\") " pod="openstack/cinder-db-sync-mcnmn" Jan 27 14:51:10 crc kubenswrapper[4698]: I0127 14:51:10.626257 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/74946770-13e5-4777-a645-bb6bee73c277-config-data\") pod \"cinder-db-sync-mcnmn\" (UID: \"74946770-13e5-4777-a645-bb6bee73c277\") " pod="openstack/cinder-db-sync-mcnmn" Jan 27 14:51:10 crc kubenswrapper[4698]: I0127 14:51:10.626333 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0f9b9cd1-a9b3-4764-a897-44de30ff90ac-config-data\") pod \"ceilometer-0\" (UID: \"0f9b9cd1-a9b3-4764-a897-44de30ff90ac\") " pod="openstack/ceilometer-0" Jan 27 14:51:10 crc kubenswrapper[4698]: I0127 14:51:10.626359 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0f9b9cd1-a9b3-4764-a897-44de30ff90ac-run-httpd\") pod \"ceilometer-0\" (UID: \"0f9b9cd1-a9b3-4764-a897-44de30ff90ac\") " pod="openstack/ceilometer-0" Jan 27 14:51:10 crc kubenswrapper[4698]: I0127 14:51:10.626423 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r4t7x\" (UniqueName: \"kubernetes.io/projected/0f9b9cd1-a9b3-4764-a897-44de30ff90ac-kube-api-access-r4t7x\") pod \"ceilometer-0\" (UID: \"0f9b9cd1-a9b3-4764-a897-44de30ff90ac\") " pod="openstack/ceilometer-0" Jan 27 14:51:10 crc kubenswrapper[4698]: I0127 14:51:10.627769 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/watcher-decision-engine-0"] Jan 27 14:51:10 crc kubenswrapper[4698]: I0127 14:51:10.631200 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/74946770-13e5-4777-a645-bb6bee73c277-etc-machine-id\") pod \"cinder-db-sync-mcnmn\" (UID: \"74946770-13e5-4777-a645-bb6bee73c277\") " pod="openstack/cinder-db-sync-mcnmn" Jan 27 14:51:10 crc kubenswrapper[4698]: I0127 14:51:10.648980 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/74946770-13e5-4777-a645-bb6bee73c277-combined-ca-bundle\") pod \"cinder-db-sync-mcnmn\" (UID: \"74946770-13e5-4777-a645-bb6bee73c277\") " pod="openstack/cinder-db-sync-mcnmn" Jan 27 14:51:10 crc kubenswrapper[4698]: I0127 14:51:10.663364 4698 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-decision-engine-0" Jan 27 14:51:10 crc kubenswrapper[4698]: I0127 14:51:10.668195 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-decision-engine-config-data" Jan 27 14:51:10 crc kubenswrapper[4698]: I0127 14:51:10.701712 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-75cdd6b9b5-7lj2z" Jan 27 14:51:10 crc kubenswrapper[4698]: I0127 14:51:10.733371 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c198be7c-95a9-47ea-80fd-252e5d8d9ac9-combined-ca-bundle\") pod \"watcher-decision-engine-0\" (UID: \"c198be7c-95a9-47ea-80fd-252e5d8d9ac9\") " pod="openstack/watcher-decision-engine-0" Jan 27 14:51:10 crc kubenswrapper[4698]: I0127 14:51:10.733744 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/c198be7c-95a9-47ea-80fd-252e5d8d9ac9-custom-prometheus-ca\") pod \"watcher-decision-engine-0\" (UID: \"c198be7c-95a9-47ea-80fd-252e5d8d9ac9\") " pod="openstack/watcher-decision-engine-0" Jan 27 14:51:10 crc kubenswrapper[4698]: I0127 14:51:10.733812 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0f9b9cd1-a9b3-4764-a897-44de30ff90ac-config-data\") pod \"ceilometer-0\" (UID: \"0f9b9cd1-a9b3-4764-a897-44de30ff90ac\") " pod="openstack/ceilometer-0" Jan 27 14:51:10 crc kubenswrapper[4698]: I0127 14:51:10.733837 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0f9b9cd1-a9b3-4764-a897-44de30ff90ac-run-httpd\") pod \"ceilometer-0\" (UID: \"0f9b9cd1-a9b3-4764-a897-44de30ff90ac\") " pod="openstack/ceilometer-0" Jan 27 14:51:10 crc kubenswrapper[4698]: I0127 14:51:10.733897 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c198be7c-95a9-47ea-80fd-252e5d8d9ac9-logs\") pod \"watcher-decision-engine-0\" (UID: \"c198be7c-95a9-47ea-80fd-252e5d8d9ac9\") " pod="openstack/watcher-decision-engine-0" Jan 27 14:51:10 crc kubenswrapper[4698]: I0127 14:51:10.733949 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r4t7x\" (UniqueName: \"kubernetes.io/projected/0f9b9cd1-a9b3-4764-a897-44de30ff90ac-kube-api-access-r4t7x\") pod \"ceilometer-0\" (UID: \"0f9b9cd1-a9b3-4764-a897-44de30ff90ac\") " pod="openstack/ceilometer-0" Jan 27 14:51:10 crc kubenswrapper[4698]: I0127 14:51:10.737862 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0f9b9cd1-a9b3-4764-a897-44de30ff90ac-log-httpd\") pod \"ceilometer-0\" (UID: \"0f9b9cd1-a9b3-4764-a897-44de30ff90ac\") " pod="openstack/ceilometer-0" Jan 27 14:51:10 crc kubenswrapper[4698]: I0127 14:51:10.738028 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0f9b9cd1-a9b3-4764-a897-44de30ff90ac-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"0f9b9cd1-a9b3-4764-a897-44de30ff90ac\") " pod="openstack/ceilometer-0" Jan 27 14:51:10 crc kubenswrapper[4698]: I0127 14:51:10.738175 4698 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0f9b9cd1-a9b3-4764-a897-44de30ff90ac-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"0f9b9cd1-a9b3-4764-a897-44de30ff90ac\") " pod="openstack/ceilometer-0" Jan 27 14:51:10 crc kubenswrapper[4698]: I0127 14:51:10.738283 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jbvlv\" (UniqueName: \"kubernetes.io/projected/c198be7c-95a9-47ea-80fd-252e5d8d9ac9-kube-api-access-jbvlv\") pod \"watcher-decision-engine-0\" (UID: \"c198be7c-95a9-47ea-80fd-252e5d8d9ac9\") " pod="openstack/watcher-decision-engine-0" Jan 27 14:51:10 crc kubenswrapper[4698]: I0127 14:51:10.738346 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0f9b9cd1-a9b3-4764-a897-44de30ff90ac-scripts\") pod \"ceilometer-0\" (UID: \"0f9b9cd1-a9b3-4764-a897-44de30ff90ac\") " pod="openstack/ceilometer-0" Jan 27 14:51:10 crc kubenswrapper[4698]: I0127 14:51:10.738432 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c198be7c-95a9-47ea-80fd-252e5d8d9ac9-config-data\") pod \"watcher-decision-engine-0\" (UID: \"c198be7c-95a9-47ea-80fd-252e5d8d9ac9\") " pod="openstack/watcher-decision-engine-0" Jan 27 14:51:10 crc kubenswrapper[4698]: I0127 14:51:10.741330 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0f9b9cd1-a9b3-4764-a897-44de30ff90ac-log-httpd\") pod \"ceilometer-0\" (UID: \"0f9b9cd1-a9b3-4764-a897-44de30ff90ac\") " pod="openstack/ceilometer-0" Jan 27 14:51:10 crc kubenswrapper[4698]: I0127 14:51:10.742119 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0f9b9cd1-a9b3-4764-a897-44de30ff90ac-run-httpd\") pod \"ceilometer-0\" (UID: \"0f9b9cd1-a9b3-4764-a897-44de30ff90ac\") " pod="openstack/ceilometer-0" Jan 27 14:51:10 crc kubenswrapper[4698]: I0127 14:51:10.750077 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/74946770-13e5-4777-a645-bb6bee73c277-db-sync-config-data\") pod \"cinder-db-sync-mcnmn\" (UID: \"74946770-13e5-4777-a645-bb6bee73c277\") " pod="openstack/cinder-db-sync-mcnmn" Jan 27 14:51:10 crc kubenswrapper[4698]: I0127 14:51:10.754559 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0f9b9cd1-a9b3-4764-a897-44de30ff90ac-scripts\") pod \"ceilometer-0\" (UID: \"0f9b9cd1-a9b3-4764-a897-44de30ff90ac\") " pod="openstack/ceilometer-0" Jan 27 14:51:10 crc kubenswrapper[4698]: I0127 14:51:10.759771 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-sync-9jhwb"] Jan 27 14:51:10 crc kubenswrapper[4698]: I0127 14:51:10.762930 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-9jhwb" Jan 27 14:51:10 crc kubenswrapper[4698]: I0127 14:51:10.766878 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-vp87x" Jan 27 14:51:10 crc kubenswrapper[4698]: I0127 14:51:10.774336 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-xwvcj" Jan 27 14:51:10 crc kubenswrapper[4698]: I0127 14:51:10.775364 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Jan 27 14:51:10 crc kubenswrapper[4698]: I0127 14:51:10.778329 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r4t7x\" (UniqueName: \"kubernetes.io/projected/0f9b9cd1-a9b3-4764-a897-44de30ff90ac-kube-api-access-r4t7x\") pod \"ceilometer-0\" (UID: \"0f9b9cd1-a9b3-4764-a897-44de30ff90ac\") " pod="openstack/ceilometer-0" Jan 27 14:51:10 crc kubenswrapper[4698]: I0127 14:51:10.851356 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/74946770-13e5-4777-a645-bb6bee73c277-scripts\") pod \"cinder-db-sync-mcnmn\" (UID: \"74946770-13e5-4777-a645-bb6bee73c277\") " pod="openstack/cinder-db-sync-mcnmn" Jan 27 14:51:10 crc kubenswrapper[4698]: I0127 14:51:10.851429 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0f9b9cd1-a9b3-4764-a897-44de30ff90ac-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"0f9b9cd1-a9b3-4764-a897-44de30ff90ac\") " pod="openstack/ceilometer-0" Jan 27 14:51:10 crc kubenswrapper[4698]: I0127 14:51:10.851497 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jcpmd\" (UniqueName: \"kubernetes.io/projected/74946770-13e5-4777-a645-bb6bee73c277-kube-api-access-jcpmd\") pod \"cinder-db-sync-mcnmn\" (UID: \"74946770-13e5-4777-a645-bb6bee73c277\") " pod="openstack/cinder-db-sync-mcnmn" Jan 27 14:51:10 crc kubenswrapper[4698]: I0127 14:51:10.853562 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0f9b9cd1-a9b3-4764-a897-44de30ff90ac-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"0f9b9cd1-a9b3-4764-a897-44de30ff90ac\") " pod="openstack/ceilometer-0" Jan 27 14:51:10 crc kubenswrapper[4698]: I0127 14:51:10.888896 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0f9b9cd1-a9b3-4764-a897-44de30ff90ac-config-data\") pod \"ceilometer-0\" (UID: \"0f9b9cd1-a9b3-4764-a897-44de30ff90ac\") " pod="openstack/ceilometer-0" Jan 27 14:51:10 crc kubenswrapper[4698]: I0127 14:51:10.892761 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jbvlv\" (UniqueName: \"kubernetes.io/projected/c198be7c-95a9-47ea-80fd-252e5d8d9ac9-kube-api-access-jbvlv\") pod \"watcher-decision-engine-0\" (UID: \"c198be7c-95a9-47ea-80fd-252e5d8d9ac9\") " pod="openstack/watcher-decision-engine-0" Jan 27 14:51:10 crc kubenswrapper[4698]: I0127 14:51:10.892814 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hhskr\" (UniqueName: \"kubernetes.io/projected/51ba2ef6-17ab-4974-a2c6-7f995343e24b-kube-api-access-hhskr\") pod \"barbican-db-sync-9jhwb\" (UID: \"51ba2ef6-17ab-4974-a2c6-7f995343e24b\") " pod="openstack/barbican-db-sync-9jhwb" Jan 27 14:51:10 crc kubenswrapper[4698]: I0127 14:51:10.892883 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/c198be7c-95a9-47ea-80fd-252e5d8d9ac9-config-data\") pod \"watcher-decision-engine-0\" (UID: \"c198be7c-95a9-47ea-80fd-252e5d8d9ac9\") " pod="openstack/watcher-decision-engine-0" Jan 27 14:51:10 crc kubenswrapper[4698]: I0127 14:51:10.893009 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/51ba2ef6-17ab-4974-a2c6-7f995343e24b-combined-ca-bundle\") pod \"barbican-db-sync-9jhwb\" (UID: \"51ba2ef6-17ab-4974-a2c6-7f995343e24b\") " pod="openstack/barbican-db-sync-9jhwb" Jan 27 14:51:10 crc kubenswrapper[4698]: I0127 14:51:10.893043 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c198be7c-95a9-47ea-80fd-252e5d8d9ac9-combined-ca-bundle\") pod \"watcher-decision-engine-0\" (UID: \"c198be7c-95a9-47ea-80fd-252e5d8d9ac9\") " pod="openstack/watcher-decision-engine-0" Jan 27 14:51:10 crc kubenswrapper[4698]: I0127 14:51:10.893059 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/c198be7c-95a9-47ea-80fd-252e5d8d9ac9-custom-prometheus-ca\") pod \"watcher-decision-engine-0\" (UID: \"c198be7c-95a9-47ea-80fd-252e5d8d9ac9\") " pod="openstack/watcher-decision-engine-0" Jan 27 14:51:10 crc kubenswrapper[4698]: I0127 14:51:10.893086 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/51ba2ef6-17ab-4974-a2c6-7f995343e24b-db-sync-config-data\") pod \"barbican-db-sync-9jhwb\" (UID: \"51ba2ef6-17ab-4974-a2c6-7f995343e24b\") " pod="openstack/barbican-db-sync-9jhwb" Jan 27 14:51:10 crc kubenswrapper[4698]: I0127 14:51:10.893125 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c198be7c-95a9-47ea-80fd-252e5d8d9ac9-logs\") pod \"watcher-decision-engine-0\" (UID: \"c198be7c-95a9-47ea-80fd-252e5d8d9ac9\") " pod="openstack/watcher-decision-engine-0" Jan 27 14:51:10 crc kubenswrapper[4698]: I0127 14:51:10.899761 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 27 14:51:10 crc kubenswrapper[4698]: I0127 14:51:10.900431 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/74946770-13e5-4777-a645-bb6bee73c277-config-data\") pod \"cinder-db-sync-mcnmn\" (UID: \"74946770-13e5-4777-a645-bb6bee73c277\") " pod="openstack/cinder-db-sync-mcnmn" Jan 27 14:51:10 crc kubenswrapper[4698]: I0127 14:51:10.914052 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c198be7c-95a9-47ea-80fd-252e5d8d9ac9-logs\") pod \"watcher-decision-engine-0\" (UID: \"c198be7c-95a9-47ea-80fd-252e5d8d9ac9\") " pod="openstack/watcher-decision-engine-0" Jan 27 14:51:10 crc kubenswrapper[4698]: I0127 14:51:10.921433 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-6z2gn" event={"ID":"b202d484-189a-4722-93b1-f72348e74aa4","Type":"ContainerStarted","Data":"ff17803b8805dd7d8fe5951bb07bcb464516773fae7756b0f504bda3b2b5f3b0"} Jan 27 14:51:10 crc kubenswrapper[4698]: I0127 14:51:10.925489 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-decision-engine-0"] Jan 27 14:51:10 crc kubenswrapper[4698]: I0127 14:51:10.927942 4698 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/c198be7c-95a9-47ea-80fd-252e5d8d9ac9-custom-prometheus-ca\") pod \"watcher-decision-engine-0\" (UID: \"c198be7c-95a9-47ea-80fd-252e5d8d9ac9\") " pod="openstack/watcher-decision-engine-0" Jan 27 14:51:10 crc kubenswrapper[4698]: I0127 14:51:10.931414 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c198be7c-95a9-47ea-80fd-252e5d8d9ac9-combined-ca-bundle\") pod \"watcher-decision-engine-0\" (UID: \"c198be7c-95a9-47ea-80fd-252e5d8d9ac9\") " pod="openstack/watcher-decision-engine-0" Jan 27 14:51:10 crc kubenswrapper[4698]: I0127 14:51:10.945285 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 27 14:51:10 crc kubenswrapper[4698]: I0127 14:51:10.951134 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c198be7c-95a9-47ea-80fd-252e5d8d9ac9-config-data\") pod \"watcher-decision-engine-0\" (UID: \"c198be7c-95a9-47ea-80fd-252e5d8d9ac9\") " pod="openstack/watcher-decision-engine-0" Jan 27 14:51:10 crc kubenswrapper[4698]: I0127 14:51:10.965545 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jbvlv\" (UniqueName: \"kubernetes.io/projected/c198be7c-95a9-47ea-80fd-252e5d8d9ac9-kube-api-access-jbvlv\") pod \"watcher-decision-engine-0\" (UID: \"c198be7c-95a9-47ea-80fd-252e5d8d9ac9\") " pod="openstack/watcher-decision-engine-0" Jan 27 14:51:10 crc kubenswrapper[4698]: I0127 14:51:10.984313 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-9jhwb"] Jan 27 14:51:11 crc kubenswrapper[4698]: I0127 14:51:11.003243 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/51ba2ef6-17ab-4974-a2c6-7f995343e24b-db-sync-config-data\") pod \"barbican-db-sync-9jhwb\" (UID: \"51ba2ef6-17ab-4974-a2c6-7f995343e24b\") " pod="openstack/barbican-db-sync-9jhwb" Jan 27 14:51:11 crc kubenswrapper[4698]: I0127 14:51:11.003532 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hhskr\" (UniqueName: \"kubernetes.io/projected/51ba2ef6-17ab-4974-a2c6-7f995343e24b-kube-api-access-hhskr\") pod \"barbican-db-sync-9jhwb\" (UID: \"51ba2ef6-17ab-4974-a2c6-7f995343e24b\") " pod="openstack/barbican-db-sync-9jhwb" Jan 27 14:51:11 crc kubenswrapper[4698]: I0127 14:51:11.003685 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/51ba2ef6-17ab-4974-a2c6-7f995343e24b-combined-ca-bundle\") pod \"barbican-db-sync-9jhwb\" (UID: \"51ba2ef6-17ab-4974-a2c6-7f995343e24b\") " pod="openstack/barbican-db-sync-9jhwb" Jan 27 14:51:11 crc kubenswrapper[4698]: I0127 14:51:11.010302 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/51ba2ef6-17ab-4974-a2c6-7f995343e24b-db-sync-config-data\") pod \"barbican-db-sync-9jhwb\" (UID: \"51ba2ef6-17ab-4974-a2c6-7f995343e24b\") " pod="openstack/barbican-db-sync-9jhwb" Jan 27 14:51:11 crc kubenswrapper[4698]: I0127 14:51:11.027955 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/51ba2ef6-17ab-4974-a2c6-7f995343e24b-combined-ca-bundle\") pod \"barbican-db-sync-9jhwb\" (UID: \"51ba2ef6-17ab-4974-a2c6-7f995343e24b\") " pod="openstack/barbican-db-sync-9jhwb" Jan 27 14:51:11 crc kubenswrapper[4698]: I0127 14:51:11.065156 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-decision-engine-0" Jan 27 14:51:11 crc kubenswrapper[4698]: I0127 14:51:11.075283 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hhskr\" (UniqueName: \"kubernetes.io/projected/51ba2ef6-17ab-4974-a2c6-7f995343e24b-kube-api-access-hhskr\") pod \"barbican-db-sync-9jhwb\" (UID: \"51ba2ef6-17ab-4974-a2c6-7f995343e24b\") " pod="openstack/barbican-db-sync-9jhwb" Jan 27 14:51:11 crc kubenswrapper[4698]: I0127 14:51:11.086890 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-96c5cb5f9-jgbs6"] Jan 27 14:51:11 crc kubenswrapper[4698]: I0127 14:51:11.087129 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-sync-s4fks"] Jan 27 14:51:11 crc kubenswrapper[4698]: I0127 14:51:11.090448 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-s4fks" Jan 27 14:51:11 crc kubenswrapper[4698]: I0127 14:51:11.092214 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-vpn8j" Jan 27 14:51:11 crc kubenswrapper[4698]: I0127 14:51:11.092703 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Jan 27 14:51:11 crc kubenswrapper[4698]: I0127 14:51:11.093086 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Jan 27 14:51:11 crc kubenswrapper[4698]: I0127 14:51:11.096218 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-s4fks"] Jan 27 14:51:11 crc kubenswrapper[4698]: I0127 14:51:11.106601 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7b755cc99f-wsqq5"] Jan 27 14:51:11 crc kubenswrapper[4698]: I0127 14:51:11.106905 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-mcnmn" Jan 27 14:51:11 crc kubenswrapper[4698]: I0127 14:51:11.108934 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7b755cc99f-wsqq5" Jan 27 14:51:11 crc kubenswrapper[4698]: I0127 14:51:11.132327 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-56b6f76549-v2fjv"] Jan 27 14:51:11 crc kubenswrapper[4698]: I0127 14:51:11.133947 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-56b6f76549-v2fjv" Jan 27 14:51:11 crc kubenswrapper[4698]: I0127 14:51:11.147028 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7b755cc99f-wsqq5"] Jan 27 14:51:11 crc kubenswrapper[4698]: I0127 14:51:11.162813 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-56b6f76549-v2fjv"] Jan 27 14:51:11 crc kubenswrapper[4698]: I0127 14:51:11.197436 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-sync-6z2gn" podStartSLOduration=8.339915022 podStartE2EDuration="40.197414472s" podCreationTimestamp="2026-01-27 14:50:31 +0000 UTC" firstStartedPulling="2026-01-27 14:50:37.227473862 +0000 UTC m=+1292.904251327" lastFinishedPulling="2026-01-27 14:51:09.084973312 +0000 UTC m=+1324.761750777" observedRunningTime="2026-01-27 14:51:10.953036977 +0000 UTC m=+1326.629814462" watchObservedRunningTime="2026-01-27 14:51:11.197414472 +0000 UTC m=+1326.874191927" Jan 27 14:51:11 crc kubenswrapper[4698]: I0127 14:51:11.209739 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/705a412d-9e58-46df-8084-cb719c4c5b57-config\") pod \"dnsmasq-dns-7b755cc99f-wsqq5\" (UID: \"705a412d-9e58-46df-8084-cb719c4c5b57\") " pod="openstack/dnsmasq-dns-7b755cc99f-wsqq5" Jan 27 14:51:11 crc kubenswrapper[4698]: I0127 14:51:11.210042 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4e01edb9-1cd8-4c9a-a602-d35ff30d64fe-scripts\") pod \"placement-db-sync-s4fks\" (UID: \"4e01edb9-1cd8-4c9a-a602-d35ff30d64fe\") " pod="openstack/placement-db-sync-s4fks" Jan 27 14:51:11 crc kubenswrapper[4698]: I0127 14:51:11.210178 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4e01edb9-1cd8-4c9a-a602-d35ff30d64fe-logs\") pod \"placement-db-sync-s4fks\" (UID: \"4e01edb9-1cd8-4c9a-a602-d35ff30d64fe\") " pod="openstack/placement-db-sync-s4fks" Jan 27 14:51:11 crc kubenswrapper[4698]: I0127 14:51:11.210277 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ggjxq\" (UniqueName: \"kubernetes.io/projected/705a412d-9e58-46df-8084-cb719c4c5b57-kube-api-access-ggjxq\") pod \"dnsmasq-dns-7b755cc99f-wsqq5\" (UID: \"705a412d-9e58-46df-8084-cb719c4c5b57\") " pod="openstack/dnsmasq-dns-7b755cc99f-wsqq5" Jan 27 14:51:11 crc kubenswrapper[4698]: I0127 14:51:11.210430 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8pdpx\" (UniqueName: \"kubernetes.io/projected/4e01edb9-1cd8-4c9a-a602-d35ff30d64fe-kube-api-access-8pdpx\") pod \"placement-db-sync-s4fks\" (UID: \"4e01edb9-1cd8-4c9a-a602-d35ff30d64fe\") " pod="openstack/placement-db-sync-s4fks" Jan 27 14:51:11 crc kubenswrapper[4698]: I0127 14:51:11.210576 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/705a412d-9e58-46df-8084-cb719c4c5b57-dns-swift-storage-0\") pod \"dnsmasq-dns-7b755cc99f-wsqq5\" (UID: \"705a412d-9e58-46df-8084-cb719c4c5b57\") " pod="openstack/dnsmasq-dns-7b755cc99f-wsqq5" Jan 27 14:51:11 crc kubenswrapper[4698]: I0127 14:51:11.210767 4698 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4e01edb9-1cd8-4c9a-a602-d35ff30d64fe-combined-ca-bundle\") pod \"placement-db-sync-s4fks\" (UID: \"4e01edb9-1cd8-4c9a-a602-d35ff30d64fe\") " pod="openstack/placement-db-sync-s4fks" Jan 27 14:51:11 crc kubenswrapper[4698]: I0127 14:51:11.211312 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/705a412d-9e58-46df-8084-cb719c4c5b57-ovsdbserver-sb\") pod \"dnsmasq-dns-7b755cc99f-wsqq5\" (UID: \"705a412d-9e58-46df-8084-cb719c4c5b57\") " pod="openstack/dnsmasq-dns-7b755cc99f-wsqq5" Jan 27 14:51:11 crc kubenswrapper[4698]: I0127 14:51:11.211546 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/705a412d-9e58-46df-8084-cb719c4c5b57-ovsdbserver-nb\") pod \"dnsmasq-dns-7b755cc99f-wsqq5\" (UID: \"705a412d-9e58-46df-8084-cb719c4c5b57\") " pod="openstack/dnsmasq-dns-7b755cc99f-wsqq5" Jan 27 14:51:11 crc kubenswrapper[4698]: I0127 14:51:11.211698 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4e01edb9-1cd8-4c9a-a602-d35ff30d64fe-config-data\") pod \"placement-db-sync-s4fks\" (UID: \"4e01edb9-1cd8-4c9a-a602-d35ff30d64fe\") " pod="openstack/placement-db-sync-s4fks" Jan 27 14:51:11 crc kubenswrapper[4698]: I0127 14:51:11.211965 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/705a412d-9e58-46df-8084-cb719c4c5b57-dns-svc\") pod \"dnsmasq-dns-7b755cc99f-wsqq5\" (UID: \"705a412d-9e58-46df-8084-cb719c4c5b57\") " pod="openstack/dnsmasq-dns-7b755cc99f-wsqq5" Jan 27 14:51:11 crc kubenswrapper[4698]: I0127 14:51:11.284813 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-9jhwb" Jan 27 14:51:11 crc kubenswrapper[4698]: I0127 14:51:11.318163 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4e01edb9-1cd8-4c9a-a602-d35ff30d64fe-combined-ca-bundle\") pod \"placement-db-sync-s4fks\" (UID: \"4e01edb9-1cd8-4c9a-a602-d35ff30d64fe\") " pod="openstack/placement-db-sync-s4fks" Jan 27 14:51:11 crc kubenswrapper[4698]: I0127 14:51:11.318221 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/705a412d-9e58-46df-8084-cb719c4c5b57-ovsdbserver-sb\") pod \"dnsmasq-dns-7b755cc99f-wsqq5\" (UID: \"705a412d-9e58-46df-8084-cb719c4c5b57\") " pod="openstack/dnsmasq-dns-7b755cc99f-wsqq5" Jan 27 14:51:11 crc kubenswrapper[4698]: I0127 14:51:11.318259 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/705a412d-9e58-46df-8084-cb719c4c5b57-ovsdbserver-nb\") pod \"dnsmasq-dns-7b755cc99f-wsqq5\" (UID: \"705a412d-9e58-46df-8084-cb719c4c5b57\") " pod="openstack/dnsmasq-dns-7b755cc99f-wsqq5" Jan 27 14:51:11 crc kubenswrapper[4698]: I0127 14:51:11.318289 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/705a412d-9e58-46df-8084-cb719c4c5b57-dns-svc\") pod \"dnsmasq-dns-7b755cc99f-wsqq5\" (UID: \"705a412d-9e58-46df-8084-cb719c4c5b57\") " pod="openstack/dnsmasq-dns-7b755cc99f-wsqq5" Jan 27 14:51:11 crc kubenswrapper[4698]: I0127 14:51:11.318313 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4e01edb9-1cd8-4c9a-a602-d35ff30d64fe-config-data\") pod \"placement-db-sync-s4fks\" (UID: \"4e01edb9-1cd8-4c9a-a602-d35ff30d64fe\") " pod="openstack/placement-db-sync-s4fks" Jan 27 14:51:11 crc kubenswrapper[4698]: I0127 14:51:11.318351 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d531ab8d-5ac9-4a51-8044-c68a217e1843-scripts\") pod \"horizon-56b6f76549-v2fjv\" (UID: \"d531ab8d-5ac9-4a51-8044-c68a217e1843\") " pod="openstack/horizon-56b6f76549-v2fjv" Jan 27 14:51:11 crc kubenswrapper[4698]: I0127 14:51:11.318372 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d531ab8d-5ac9-4a51-8044-c68a217e1843-config-data\") pod \"horizon-56b6f76549-v2fjv\" (UID: \"d531ab8d-5ac9-4a51-8044-c68a217e1843\") " pod="openstack/horizon-56b6f76549-v2fjv" Jan 27 14:51:11 crc kubenswrapper[4698]: I0127 14:51:11.318418 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/705a412d-9e58-46df-8084-cb719c4c5b57-config\") pod \"dnsmasq-dns-7b755cc99f-wsqq5\" (UID: \"705a412d-9e58-46df-8084-cb719c4c5b57\") " pod="openstack/dnsmasq-dns-7b755cc99f-wsqq5" Jan 27 14:51:11 crc kubenswrapper[4698]: I0127 14:51:11.318437 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4e01edb9-1cd8-4c9a-a602-d35ff30d64fe-scripts\") pod \"placement-db-sync-s4fks\" (UID: \"4e01edb9-1cd8-4c9a-a602-d35ff30d64fe\") " pod="openstack/placement-db-sync-s4fks" Jan 27 14:51:11 crc kubenswrapper[4698]: I0127 14:51:11.318698 4698 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4e01edb9-1cd8-4c9a-a602-d35ff30d64fe-logs\") pod \"placement-db-sync-s4fks\" (UID: \"4e01edb9-1cd8-4c9a-a602-d35ff30d64fe\") " pod="openstack/placement-db-sync-s4fks" Jan 27 14:51:11 crc kubenswrapper[4698]: I0127 14:51:11.318750 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ggjxq\" (UniqueName: \"kubernetes.io/projected/705a412d-9e58-46df-8084-cb719c4c5b57-kube-api-access-ggjxq\") pod \"dnsmasq-dns-7b755cc99f-wsqq5\" (UID: \"705a412d-9e58-46df-8084-cb719c4c5b57\") " pod="openstack/dnsmasq-dns-7b755cc99f-wsqq5" Jan 27 14:51:11 crc kubenswrapper[4698]: I0127 14:51:11.318776 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/d531ab8d-5ac9-4a51-8044-c68a217e1843-horizon-secret-key\") pod \"horizon-56b6f76549-v2fjv\" (UID: \"d531ab8d-5ac9-4a51-8044-c68a217e1843\") " pod="openstack/horizon-56b6f76549-v2fjv" Jan 27 14:51:11 crc kubenswrapper[4698]: I0127 14:51:11.318792 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d531ab8d-5ac9-4a51-8044-c68a217e1843-logs\") pod \"horizon-56b6f76549-v2fjv\" (UID: \"d531ab8d-5ac9-4a51-8044-c68a217e1843\") " pod="openstack/horizon-56b6f76549-v2fjv" Jan 27 14:51:11 crc kubenswrapper[4698]: I0127 14:51:11.318839 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-spjkp\" (UniqueName: \"kubernetes.io/projected/d531ab8d-5ac9-4a51-8044-c68a217e1843-kube-api-access-spjkp\") pod \"horizon-56b6f76549-v2fjv\" (UID: \"d531ab8d-5ac9-4a51-8044-c68a217e1843\") " pod="openstack/horizon-56b6f76549-v2fjv" Jan 27 14:51:11 crc kubenswrapper[4698]: I0127 14:51:11.318859 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8pdpx\" (UniqueName: \"kubernetes.io/projected/4e01edb9-1cd8-4c9a-a602-d35ff30d64fe-kube-api-access-8pdpx\") pod \"placement-db-sync-s4fks\" (UID: \"4e01edb9-1cd8-4c9a-a602-d35ff30d64fe\") " pod="openstack/placement-db-sync-s4fks" Jan 27 14:51:11 crc kubenswrapper[4698]: I0127 14:51:11.318912 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/705a412d-9e58-46df-8084-cb719c4c5b57-dns-swift-storage-0\") pod \"dnsmasq-dns-7b755cc99f-wsqq5\" (UID: \"705a412d-9e58-46df-8084-cb719c4c5b57\") " pod="openstack/dnsmasq-dns-7b755cc99f-wsqq5" Jan 27 14:51:11 crc kubenswrapper[4698]: I0127 14:51:11.320772 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4e01edb9-1cd8-4c9a-a602-d35ff30d64fe-logs\") pod \"placement-db-sync-s4fks\" (UID: \"4e01edb9-1cd8-4c9a-a602-d35ff30d64fe\") " pod="openstack/placement-db-sync-s4fks" Jan 27 14:51:11 crc kubenswrapper[4698]: I0127 14:51:11.321243 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/705a412d-9e58-46df-8084-cb719c4c5b57-ovsdbserver-nb\") pod \"dnsmasq-dns-7b755cc99f-wsqq5\" (UID: \"705a412d-9e58-46df-8084-cb719c4c5b57\") " pod="openstack/dnsmasq-dns-7b755cc99f-wsqq5" Jan 27 14:51:11 crc kubenswrapper[4698]: I0127 14:51:11.321787 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"dns-svc\" (UniqueName: \"kubernetes.io/configmap/705a412d-9e58-46df-8084-cb719c4c5b57-dns-svc\") pod \"dnsmasq-dns-7b755cc99f-wsqq5\" (UID: \"705a412d-9e58-46df-8084-cb719c4c5b57\") " pod="openstack/dnsmasq-dns-7b755cc99f-wsqq5" Jan 27 14:51:11 crc kubenswrapper[4698]: I0127 14:51:11.321915 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/705a412d-9e58-46df-8084-cb719c4c5b57-config\") pod \"dnsmasq-dns-7b755cc99f-wsqq5\" (UID: \"705a412d-9e58-46df-8084-cb719c4c5b57\") " pod="openstack/dnsmasq-dns-7b755cc99f-wsqq5" Jan 27 14:51:11 crc kubenswrapper[4698]: I0127 14:51:11.322052 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/705a412d-9e58-46df-8084-cb719c4c5b57-dns-swift-storage-0\") pod \"dnsmasq-dns-7b755cc99f-wsqq5\" (UID: \"705a412d-9e58-46df-8084-cb719c4c5b57\") " pod="openstack/dnsmasq-dns-7b755cc99f-wsqq5" Jan 27 14:51:11 crc kubenswrapper[4698]: I0127 14:51:11.322204 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/705a412d-9e58-46df-8084-cb719c4c5b57-ovsdbserver-sb\") pod \"dnsmasq-dns-7b755cc99f-wsqq5\" (UID: \"705a412d-9e58-46df-8084-cb719c4c5b57\") " pod="openstack/dnsmasq-dns-7b755cc99f-wsqq5" Jan 27 14:51:11 crc kubenswrapper[4698]: I0127 14:51:11.325127 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4e01edb9-1cd8-4c9a-a602-d35ff30d64fe-combined-ca-bundle\") pod \"placement-db-sync-s4fks\" (UID: \"4e01edb9-1cd8-4c9a-a602-d35ff30d64fe\") " pod="openstack/placement-db-sync-s4fks" Jan 27 14:51:11 crc kubenswrapper[4698]: I0127 14:51:11.342722 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4e01edb9-1cd8-4c9a-a602-d35ff30d64fe-config-data\") pod \"placement-db-sync-s4fks\" (UID: \"4e01edb9-1cd8-4c9a-a602-d35ff30d64fe\") " pod="openstack/placement-db-sync-s4fks" Jan 27 14:51:11 crc kubenswrapper[4698]: I0127 14:51:11.359487 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8pdpx\" (UniqueName: \"kubernetes.io/projected/4e01edb9-1cd8-4c9a-a602-d35ff30d64fe-kube-api-access-8pdpx\") pod \"placement-db-sync-s4fks\" (UID: \"4e01edb9-1cd8-4c9a-a602-d35ff30d64fe\") " pod="openstack/placement-db-sync-s4fks" Jan 27 14:51:11 crc kubenswrapper[4698]: I0127 14:51:11.369101 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4e01edb9-1cd8-4c9a-a602-d35ff30d64fe-scripts\") pod \"placement-db-sync-s4fks\" (UID: \"4e01edb9-1cd8-4c9a-a602-d35ff30d64fe\") " pod="openstack/placement-db-sync-s4fks" Jan 27 14:51:11 crc kubenswrapper[4698]: I0127 14:51:11.379336 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ggjxq\" (UniqueName: \"kubernetes.io/projected/705a412d-9e58-46df-8084-cb719c4c5b57-kube-api-access-ggjxq\") pod \"dnsmasq-dns-7b755cc99f-wsqq5\" (UID: \"705a412d-9e58-46df-8084-cb719c4c5b57\") " pod="openstack/dnsmasq-dns-7b755cc99f-wsqq5" Jan 27 14:51:11 crc kubenswrapper[4698]: I0127 14:51:11.421046 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d531ab8d-5ac9-4a51-8044-c68a217e1843-scripts\") pod \"horizon-56b6f76549-v2fjv\" (UID: 
\"d531ab8d-5ac9-4a51-8044-c68a217e1843\") " pod="openstack/horizon-56b6f76549-v2fjv" Jan 27 14:51:11 crc kubenswrapper[4698]: I0127 14:51:11.421095 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d531ab8d-5ac9-4a51-8044-c68a217e1843-config-data\") pod \"horizon-56b6f76549-v2fjv\" (UID: \"d531ab8d-5ac9-4a51-8044-c68a217e1843\") " pod="openstack/horizon-56b6f76549-v2fjv" Jan 27 14:51:11 crc kubenswrapper[4698]: I0127 14:51:11.421192 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/d531ab8d-5ac9-4a51-8044-c68a217e1843-horizon-secret-key\") pod \"horizon-56b6f76549-v2fjv\" (UID: \"d531ab8d-5ac9-4a51-8044-c68a217e1843\") " pod="openstack/horizon-56b6f76549-v2fjv" Jan 27 14:51:11 crc kubenswrapper[4698]: I0127 14:51:11.421219 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d531ab8d-5ac9-4a51-8044-c68a217e1843-logs\") pod \"horizon-56b6f76549-v2fjv\" (UID: \"d531ab8d-5ac9-4a51-8044-c68a217e1843\") " pod="openstack/horizon-56b6f76549-v2fjv" Jan 27 14:51:11 crc kubenswrapper[4698]: I0127 14:51:11.421254 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-spjkp\" (UniqueName: \"kubernetes.io/projected/d531ab8d-5ac9-4a51-8044-c68a217e1843-kube-api-access-spjkp\") pod \"horizon-56b6f76549-v2fjv\" (UID: \"d531ab8d-5ac9-4a51-8044-c68a217e1843\") " pod="openstack/horizon-56b6f76549-v2fjv" Jan 27 14:51:11 crc kubenswrapper[4698]: I0127 14:51:11.422713 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d531ab8d-5ac9-4a51-8044-c68a217e1843-scripts\") pod \"horizon-56b6f76549-v2fjv\" (UID: \"d531ab8d-5ac9-4a51-8044-c68a217e1843\") " pod="openstack/horizon-56b6f76549-v2fjv" Jan 27 14:51:11 crc kubenswrapper[4698]: I0127 14:51:11.423374 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d531ab8d-5ac9-4a51-8044-c68a217e1843-logs\") pod \"horizon-56b6f76549-v2fjv\" (UID: \"d531ab8d-5ac9-4a51-8044-c68a217e1843\") " pod="openstack/horizon-56b6f76549-v2fjv" Jan 27 14:51:11 crc kubenswrapper[4698]: I0127 14:51:11.423700 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d531ab8d-5ac9-4a51-8044-c68a217e1843-config-data\") pod \"horizon-56b6f76549-v2fjv\" (UID: \"d531ab8d-5ac9-4a51-8044-c68a217e1843\") " pod="openstack/horizon-56b6f76549-v2fjv" Jan 27 14:51:11 crc kubenswrapper[4698]: I0127 14:51:11.432057 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/d531ab8d-5ac9-4a51-8044-c68a217e1843-horizon-secret-key\") pod \"horizon-56b6f76549-v2fjv\" (UID: \"d531ab8d-5ac9-4a51-8044-c68a217e1843\") " pod="openstack/horizon-56b6f76549-v2fjv" Jan 27 14:51:11 crc kubenswrapper[4698]: I0127 14:51:11.439601 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-s4fks" Jan 27 14:51:11 crc kubenswrapper[4698]: I0127 14:51:11.447409 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-spjkp\" (UniqueName: \"kubernetes.io/projected/d531ab8d-5ac9-4a51-8044-c68a217e1843-kube-api-access-spjkp\") pod \"horizon-56b6f76549-v2fjv\" (UID: \"d531ab8d-5ac9-4a51-8044-c68a217e1843\") " pod="openstack/horizon-56b6f76549-v2fjv" Jan 27 14:51:11 crc kubenswrapper[4698]: I0127 14:51:11.454489 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-56b6f76549-v2fjv" Jan 27 14:51:11 crc kubenswrapper[4698]: I0127 14:51:11.469164 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7b755cc99f-wsqq5" Jan 27 14:51:11 crc kubenswrapper[4698]: I0127 14:51:11.534251 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-nwldv"] Jan 27 14:51:11 crc kubenswrapper[4698]: I0127 14:51:11.918506 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-applier-0"] Jan 27 14:51:11 crc kubenswrapper[4698]: I0127 14:51:11.954898 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-api-0"] Jan 27 14:51:11 crc kubenswrapper[4698]: I0127 14:51:11.958800 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-nwldv" event={"ID":"69ce5b4f-ff07-4f6c-8c15-bef96f1be728","Type":"ContainerStarted","Data":"ec429d15ee79b280d5a252f7afc0d313ce8fed71e62794c89b2ed3a8680d3039"} Jan 27 14:51:12 crc kubenswrapper[4698]: I0127 14:51:12.098025 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-vp87x"] Jan 27 14:51:12 crc kubenswrapper[4698]: W0127 14:51:12.109841 4698 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod992034d3_1c4d_4e83_9641_12543dd3df24.slice/crio-55ca7234159003c7a98ff70dcb13f8d4efff74ff4ead693f0becdbea6f6bf53c WatchSource:0}: Error finding container 55ca7234159003c7a98ff70dcb13f8d4efff74ff4ead693f0becdbea6f6bf53c: Status 404 returned error can't find the container with id 55ca7234159003c7a98ff70dcb13f8d4efff74ff4ead693f0becdbea6f6bf53c Jan 27 14:51:12 crc kubenswrapper[4698]: I0127 14:51:12.143736 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-75cdd6b9b5-7lj2z"] Jan 27 14:51:12 crc kubenswrapper[4698]: W0127 14:51:12.144973 4698 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1ae30c47_a75e_4b9d_a524_547ad93bd32d.slice/crio-06fd88a7b58924837553bca4e19092abf493a2f7a87ffef3b711b15295d59467 WatchSource:0}: Error finding container 06fd88a7b58924837553bca4e19092abf493a2f7a87ffef3b711b15295d59467: Status 404 returned error can't find the container with id 06fd88a7b58924837553bca4e19092abf493a2f7a87ffef3b711b15295d59467 Jan 27 14:51:12 crc kubenswrapper[4698]: I0127 14:51:12.157065 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-96c5cb5f9-jgbs6"] Jan 27 14:51:12 crc kubenswrapper[4698]: I0127 14:51:12.534074 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-decision-engine-0"] Jan 27 14:51:12 crc kubenswrapper[4698]: I0127 14:51:12.545371 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-9jhwb"] Jan 27 14:51:12 crc kubenswrapper[4698]: I0127 14:51:12.551826 4698 kubelet.go:2428] 
"SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-s4fks"] Jan 27 14:51:12 crc kubenswrapper[4698]: I0127 14:51:12.580458 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-mcnmn"] Jan 27 14:51:12 crc kubenswrapper[4698]: I0127 14:51:12.594562 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 27 14:51:12 crc kubenswrapper[4698]: I0127 14:51:12.655702 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-56b6f76549-v2fjv"] Jan 27 14:51:12 crc kubenswrapper[4698]: I0127 14:51:12.667127 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7b755cc99f-wsqq5"] Jan 27 14:51:13 crc kubenswrapper[4698]: I0127 14:51:13.020484 4698 generic.go:334] "Generic (PLEG): container finished" podID="1ae30c47-a75e-4b9d-a524-547ad93bd32d" containerID="f31c079eeeb28ad036bdeafa4e2e5da90099e2c2c046f3b3583f57fc4532233f" exitCode=0 Jan 27 14:51:13 crc kubenswrapper[4698]: I0127 14:51:13.056090 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-96c5cb5f9-jgbs6" event={"ID":"1ae30c47-a75e-4b9d-a524-547ad93bd32d","Type":"ContainerDied","Data":"f31c079eeeb28ad036bdeafa4e2e5da90099e2c2c046f3b3583f57fc4532233f"} Jan 27 14:51:13 crc kubenswrapper[4698]: I0127 14:51:13.056151 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-96c5cb5f9-jgbs6" event={"ID":"1ae30c47-a75e-4b9d-a524-547ad93bd32d","Type":"ContainerStarted","Data":"06fd88a7b58924837553bca4e19092abf493a2f7a87ffef3b711b15295d59467"} Jan 27 14:51:13 crc kubenswrapper[4698]: I0127 14:51:13.056168 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/watcher-api-0"] Jan 27 14:51:13 crc kubenswrapper[4698]: I0127 14:51:13.064530 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-56b6f76549-v2fjv"] Jan 27 14:51:13 crc kubenswrapper[4698]: I0127 14:51:13.101786 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"cb72f892-e99c-447a-aea6-9529b57b01ac","Type":"ContainerStarted","Data":"19c4a3586c07df0a5794b89430cb466ba99d57ff7b1eaa9d97f480b134b64c03"} Jan 27 14:51:13 crc kubenswrapper[4698]: I0127 14:51:13.101840 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"cb72f892-e99c-447a-aea6-9529b57b01ac","Type":"ContainerStarted","Data":"ae0c1629d9f9b853b835305af0cd826781af79ee6d7c6e3c2ebdacecd2238a97"} Jan 27 14:51:13 crc kubenswrapper[4698]: I0127 14:51:13.101854 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"cb72f892-e99c-447a-aea6-9529b57b01ac","Type":"ContainerStarted","Data":"93044713432b1070cf4616ffcd8070c09c26bd59c950766b305510987b28fa59"} Jan 27 14:51:13 crc kubenswrapper[4698]: I0127 14:51:13.103774 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-api-0" Jan 27 14:51:13 crc kubenswrapper[4698]: I0127 14:51:13.107723 4698 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/watcher-api-0" podUID="cb72f892-e99c-447a-aea6-9529b57b01ac" containerName="watcher-api" probeResult="failure" output="Get \"http://10.217.0.149:9322/\": dial tcp 10.217.0.149:9322: connect: connection refused" Jan 27 14:51:13 crc kubenswrapper[4698]: I0127 14:51:13.115889 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" 
event={"ID":"c198be7c-95a9-47ea-80fd-252e5d8d9ac9","Type":"ContainerStarted","Data":"9949729f6f51087c1dae0d7a0e0a63a5f2f5f12d1834f8685a3963bdd9cff3ea"} Jan 27 14:51:13 crc kubenswrapper[4698]: I0127 14:51:13.123775 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 27 14:51:13 crc kubenswrapper[4698]: I0127 14:51:13.141058 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-76b57fc957-f9qxf"] Jan 27 14:51:13 crc kubenswrapper[4698]: I0127 14:51:13.142911 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-76b57fc957-f9qxf" Jan 27 14:51:13 crc kubenswrapper[4698]: I0127 14:51:13.145051 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-9jhwb" event={"ID":"51ba2ef6-17ab-4974-a2c6-7f995343e24b","Type":"ContainerStarted","Data":"b72dee7fa8d69e01713b8e64e9d435b102e9c2dbf569c400a4dd451d8420eb62"} Jan 27 14:51:13 crc kubenswrapper[4698]: I0127 14:51:13.153699 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-76b57fc957-f9qxf"] Jan 27 14:51:13 crc kubenswrapper[4698]: I0127 14:51:13.156575 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/watcher-api-0" podStartSLOduration=3.156561657 podStartE2EDuration="3.156561657s" podCreationTimestamp="2026-01-27 14:51:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:51:13.15515638 +0000 UTC m=+1328.831933865" watchObservedRunningTime="2026-01-27 14:51:13.156561657 +0000 UTC m=+1328.833339122" Jan 27 14:51:13 crc kubenswrapper[4698]: I0127 14:51:13.168312 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-75cdd6b9b5-7lj2z" event={"ID":"4e0c2380-35df-4719-8881-546b69c6225a","Type":"ContainerStarted","Data":"efbd7f9e56ed7135364acc98cd66cc050d95c697bf4ea877b79cec1304b08c10"} Jan 27 14:51:13 crc kubenswrapper[4698]: I0127 14:51:13.195201 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-nwldv" event={"ID":"69ce5b4f-ff07-4f6c-8c15-bef96f1be728","Type":"ContainerStarted","Data":"87187a5a2296882e19cec5a45ad68dc3000dfce0034be1365dd20f36574d0e1f"} Jan 27 14:51:13 crc kubenswrapper[4698]: I0127 14:51:13.211936 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0f9b9cd1-a9b3-4764-a897-44de30ff90ac","Type":"ContainerStarted","Data":"7b65f87dabef4b0e82ba6792ba6bad3f280136dd11fec634e34eb7578fac132c"} Jan 27 14:51:13 crc kubenswrapper[4698]: I0127 14:51:13.213923 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-applier-0" event={"ID":"445b01d2-0375-432b-808d-4045eb66c5da","Type":"ContainerStarted","Data":"841222290a6ca76eac4d640c3e716440677664559bb3ffaff83d6396bf871a3d"} Jan 27 14:51:13 crc kubenswrapper[4698]: I0127 14:51:13.224595 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-mcnmn" event={"ID":"74946770-13e5-4777-a645-bb6bee73c277","Type":"ContainerStarted","Data":"f9d55be61f3884eba26ac43ba06a8b6c8ce4de7a43ae7eb87d1b0e1850cc4feb"} Jan 27 14:51:13 crc kubenswrapper[4698]: I0127 14:51:13.234589 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-nwldv" podStartSLOduration=4.23456346 podStartE2EDuration="4.23456346s" podCreationTimestamp="2026-01-27 14:51:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:51:13.222816251 +0000 UTC m=+1328.899593746" watchObservedRunningTime="2026-01-27 14:51:13.23456346 +0000 UTC m=+1328.911340925" Jan 27 14:51:13 crc kubenswrapper[4698]: I0127 14:51:13.251564 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7b755cc99f-wsqq5" event={"ID":"705a412d-9e58-46df-8084-cb719c4c5b57","Type":"ContainerStarted","Data":"52f01d29426e07d377a81c4938b3e4f6ffb5d925af6e7135718691d4c10f5d1a"} Jan 27 14:51:13 crc kubenswrapper[4698]: I0127 14:51:13.269535 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-s4fks" event={"ID":"4e01edb9-1cd8-4c9a-a602-d35ff30d64fe","Type":"ContainerStarted","Data":"6345d311ff16ce6000a6bb77ba4981b402f2d944d3e6417f4b68c67386614771"} Jan 27 14:51:13 crc kubenswrapper[4698]: I0127 14:51:13.314041 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/732c4db7-cf20-4516-8ddf-40c801a8cf48-config-data\") pod \"horizon-76b57fc957-f9qxf\" (UID: \"732c4db7-cf20-4516-8ddf-40c801a8cf48\") " pod="openstack/horizon-76b57fc957-f9qxf" Jan 27 14:51:13 crc kubenswrapper[4698]: I0127 14:51:13.314222 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/732c4db7-cf20-4516-8ddf-40c801a8cf48-horizon-secret-key\") pod \"horizon-76b57fc957-f9qxf\" (UID: \"732c4db7-cf20-4516-8ddf-40c801a8cf48\") " pod="openstack/horizon-76b57fc957-f9qxf" Jan 27 14:51:13 crc kubenswrapper[4698]: I0127 14:51:13.314366 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nzx6x\" (UniqueName: \"kubernetes.io/projected/732c4db7-cf20-4516-8ddf-40c801a8cf48-kube-api-access-nzx6x\") pod \"horizon-76b57fc957-f9qxf\" (UID: \"732c4db7-cf20-4516-8ddf-40c801a8cf48\") " pod="openstack/horizon-76b57fc957-f9qxf" Jan 27 14:51:13 crc kubenswrapper[4698]: I0127 14:51:13.314402 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/732c4db7-cf20-4516-8ddf-40c801a8cf48-scripts\") pod \"horizon-76b57fc957-f9qxf\" (UID: \"732c4db7-cf20-4516-8ddf-40c801a8cf48\") " pod="openstack/horizon-76b57fc957-f9qxf" Jan 27 14:51:13 crc kubenswrapper[4698]: I0127 14:51:13.314495 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/732c4db7-cf20-4516-8ddf-40c801a8cf48-logs\") pod \"horizon-76b57fc957-f9qxf\" (UID: \"732c4db7-cf20-4516-8ddf-40c801a8cf48\") " pod="openstack/horizon-76b57fc957-f9qxf" Jan 27 14:51:13 crc kubenswrapper[4698]: I0127 14:51:13.321166 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-56b6f76549-v2fjv" event={"ID":"d531ab8d-5ac9-4a51-8044-c68a217e1843","Type":"ContainerStarted","Data":"dc57cdab5c42177ac31a2a0cf472e20708241594805be2beaeed904c484d1a44"} Jan 27 14:51:13 crc kubenswrapper[4698]: I0127 14:51:13.333368 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-vp87x" event={"ID":"992034d3-1c4d-4e83-9641-12543dd3df24","Type":"ContainerStarted","Data":"123d4d06a0f8addc043f78310758be0fb0de464dcf972f4437ef480c85eff7a4"} Jan 27 14:51:13 crc kubenswrapper[4698]: I0127 14:51:13.333918 4698 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openstack/neutron-db-sync-vp87x" event={"ID":"992034d3-1c4d-4e83-9641-12543dd3df24","Type":"ContainerStarted","Data":"55ca7234159003c7a98ff70dcb13f8d4efff74ff4ead693f0becdbea6f6bf53c"} Jan 27 14:51:13 crc kubenswrapper[4698]: I0127 14:51:13.434456 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nzx6x\" (UniqueName: \"kubernetes.io/projected/732c4db7-cf20-4516-8ddf-40c801a8cf48-kube-api-access-nzx6x\") pod \"horizon-76b57fc957-f9qxf\" (UID: \"732c4db7-cf20-4516-8ddf-40c801a8cf48\") " pod="openstack/horizon-76b57fc957-f9qxf" Jan 27 14:51:13 crc kubenswrapper[4698]: I0127 14:51:13.436701 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/732c4db7-cf20-4516-8ddf-40c801a8cf48-scripts\") pod \"horizon-76b57fc957-f9qxf\" (UID: \"732c4db7-cf20-4516-8ddf-40c801a8cf48\") " pod="openstack/horizon-76b57fc957-f9qxf" Jan 27 14:51:13 crc kubenswrapper[4698]: I0127 14:51:13.437062 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/732c4db7-cf20-4516-8ddf-40c801a8cf48-logs\") pod \"horizon-76b57fc957-f9qxf\" (UID: \"732c4db7-cf20-4516-8ddf-40c801a8cf48\") " pod="openstack/horizon-76b57fc957-f9qxf" Jan 27 14:51:13 crc kubenswrapper[4698]: I0127 14:51:13.437187 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/732c4db7-cf20-4516-8ddf-40c801a8cf48-config-data\") pod \"horizon-76b57fc957-f9qxf\" (UID: \"732c4db7-cf20-4516-8ddf-40c801a8cf48\") " pod="openstack/horizon-76b57fc957-f9qxf" Jan 27 14:51:13 crc kubenswrapper[4698]: I0127 14:51:13.437320 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/732c4db7-cf20-4516-8ddf-40c801a8cf48-horizon-secret-key\") pod \"horizon-76b57fc957-f9qxf\" (UID: \"732c4db7-cf20-4516-8ddf-40c801a8cf48\") " pod="openstack/horizon-76b57fc957-f9qxf" Jan 27 14:51:13 crc kubenswrapper[4698]: I0127 14:51:13.444208 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/732c4db7-cf20-4516-8ddf-40c801a8cf48-logs\") pod \"horizon-76b57fc957-f9qxf\" (UID: \"732c4db7-cf20-4516-8ddf-40c801a8cf48\") " pod="openstack/horizon-76b57fc957-f9qxf" Jan 27 14:51:13 crc kubenswrapper[4698]: I0127 14:51:13.444458 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/732c4db7-cf20-4516-8ddf-40c801a8cf48-scripts\") pod \"horizon-76b57fc957-f9qxf\" (UID: \"732c4db7-cf20-4516-8ddf-40c801a8cf48\") " pod="openstack/horizon-76b57fc957-f9qxf" Jan 27 14:51:13 crc kubenswrapper[4698]: I0127 14:51:13.445590 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/732c4db7-cf20-4516-8ddf-40c801a8cf48-config-data\") pod \"horizon-76b57fc957-f9qxf\" (UID: \"732c4db7-cf20-4516-8ddf-40c801a8cf48\") " pod="openstack/horizon-76b57fc957-f9qxf" Jan 27 14:51:13 crc kubenswrapper[4698]: I0127 14:51:13.460967 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/732c4db7-cf20-4516-8ddf-40c801a8cf48-horizon-secret-key\") pod \"horizon-76b57fc957-f9qxf\" (UID: \"732c4db7-cf20-4516-8ddf-40c801a8cf48\") " pod="openstack/horizon-76b57fc957-f9qxf" Jan 27 14:51:13 
crc kubenswrapper[4698]: I0127 14:51:13.465851 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nzx6x\" (UniqueName: \"kubernetes.io/projected/732c4db7-cf20-4516-8ddf-40c801a8cf48-kube-api-access-nzx6x\") pod \"horizon-76b57fc957-f9qxf\" (UID: \"732c4db7-cf20-4516-8ddf-40c801a8cf48\") " pod="openstack/horizon-76b57fc957-f9qxf" Jan 27 14:51:13 crc kubenswrapper[4698]: I0127 14:51:13.478939 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-sync-vp87x" podStartSLOduration=3.478915676 podStartE2EDuration="3.478915676s" podCreationTimestamp="2026-01-27 14:51:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:51:13.365052247 +0000 UTC m=+1329.041829732" watchObservedRunningTime="2026-01-27 14:51:13.478915676 +0000 UTC m=+1329.155693141" Jan 27 14:51:13 crc kubenswrapper[4698]: I0127 14:51:13.482974 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-76b57fc957-f9qxf" Jan 27 14:51:13 crc kubenswrapper[4698]: I0127 14:51:13.688203 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-96c5cb5f9-jgbs6" Jan 27 14:51:13 crc kubenswrapper[4698]: I0127 14:51:13.844263 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1ae30c47-a75e-4b9d-a524-547ad93bd32d-ovsdbserver-sb\") pod \"1ae30c47-a75e-4b9d-a524-547ad93bd32d\" (UID: \"1ae30c47-a75e-4b9d-a524-547ad93bd32d\") " Jan 27 14:51:13 crc kubenswrapper[4698]: I0127 14:51:13.844578 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1ae30c47-a75e-4b9d-a524-547ad93bd32d-dns-swift-storage-0\") pod \"1ae30c47-a75e-4b9d-a524-547ad93bd32d\" (UID: \"1ae30c47-a75e-4b9d-a524-547ad93bd32d\") " Jan 27 14:51:13 crc kubenswrapper[4698]: I0127 14:51:13.844750 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1ae30c47-a75e-4b9d-a524-547ad93bd32d-dns-svc\") pod \"1ae30c47-a75e-4b9d-a524-547ad93bd32d\" (UID: \"1ae30c47-a75e-4b9d-a524-547ad93bd32d\") " Jan 27 14:51:13 crc kubenswrapper[4698]: I0127 14:51:13.844791 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1ae30c47-a75e-4b9d-a524-547ad93bd32d-ovsdbserver-nb\") pod \"1ae30c47-a75e-4b9d-a524-547ad93bd32d\" (UID: \"1ae30c47-a75e-4b9d-a524-547ad93bd32d\") " Jan 27 14:51:13 crc kubenswrapper[4698]: I0127 14:51:13.845870 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n2p2p\" (UniqueName: \"kubernetes.io/projected/1ae30c47-a75e-4b9d-a524-547ad93bd32d-kube-api-access-n2p2p\") pod \"1ae30c47-a75e-4b9d-a524-547ad93bd32d\" (UID: \"1ae30c47-a75e-4b9d-a524-547ad93bd32d\") " Jan 27 14:51:13 crc kubenswrapper[4698]: I0127 14:51:13.846191 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1ae30c47-a75e-4b9d-a524-547ad93bd32d-config\") pod \"1ae30c47-a75e-4b9d-a524-547ad93bd32d\" (UID: \"1ae30c47-a75e-4b9d-a524-547ad93bd32d\") " Jan 27 14:51:13 crc kubenswrapper[4698]: I0127 14:51:13.856706 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for 
volume "kubernetes.io/projected/1ae30c47-a75e-4b9d-a524-547ad93bd32d-kube-api-access-n2p2p" (OuterVolumeSpecName: "kube-api-access-n2p2p") pod "1ae30c47-a75e-4b9d-a524-547ad93bd32d" (UID: "1ae30c47-a75e-4b9d-a524-547ad93bd32d"). InnerVolumeSpecName "kube-api-access-n2p2p". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:51:13 crc kubenswrapper[4698]: I0127 14:51:13.879949 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1ae30c47-a75e-4b9d-a524-547ad93bd32d-config" (OuterVolumeSpecName: "config") pod "1ae30c47-a75e-4b9d-a524-547ad93bd32d" (UID: "1ae30c47-a75e-4b9d-a524-547ad93bd32d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:51:13 crc kubenswrapper[4698]: I0127 14:51:13.910460 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1ae30c47-a75e-4b9d-a524-547ad93bd32d-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "1ae30c47-a75e-4b9d-a524-547ad93bd32d" (UID: "1ae30c47-a75e-4b9d-a524-547ad93bd32d"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:51:13 crc kubenswrapper[4698]: I0127 14:51:13.922568 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1ae30c47-a75e-4b9d-a524-547ad93bd32d-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "1ae30c47-a75e-4b9d-a524-547ad93bd32d" (UID: "1ae30c47-a75e-4b9d-a524-547ad93bd32d"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:51:13 crc kubenswrapper[4698]: I0127 14:51:13.928061 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1ae30c47-a75e-4b9d-a524-547ad93bd32d-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "1ae30c47-a75e-4b9d-a524-547ad93bd32d" (UID: "1ae30c47-a75e-4b9d-a524-547ad93bd32d"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:51:13 crc kubenswrapper[4698]: I0127 14:51:13.928393 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1ae30c47-a75e-4b9d-a524-547ad93bd32d-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "1ae30c47-a75e-4b9d-a524-547ad93bd32d" (UID: "1ae30c47-a75e-4b9d-a524-547ad93bd32d"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:51:13 crc kubenswrapper[4698]: I0127 14:51:13.949052 4698 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1ae30c47-a75e-4b9d-a524-547ad93bd32d-config\") on node \"crc\" DevicePath \"\"" Jan 27 14:51:13 crc kubenswrapper[4698]: I0127 14:51:13.949089 4698 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1ae30c47-a75e-4b9d-a524-547ad93bd32d-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 27 14:51:13 crc kubenswrapper[4698]: I0127 14:51:13.949101 4698 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1ae30c47-a75e-4b9d-a524-547ad93bd32d-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 27 14:51:13 crc kubenswrapper[4698]: I0127 14:51:13.949110 4698 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1ae30c47-a75e-4b9d-a524-547ad93bd32d-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 27 14:51:13 crc kubenswrapper[4698]: I0127 14:51:13.949119 4698 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1ae30c47-a75e-4b9d-a524-547ad93bd32d-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 27 14:51:13 crc kubenswrapper[4698]: I0127 14:51:13.949130 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n2p2p\" (UniqueName: \"kubernetes.io/projected/1ae30c47-a75e-4b9d-a524-547ad93bd32d-kube-api-access-n2p2p\") on node \"crc\" DevicePath \"\"" Jan 27 14:51:14 crc kubenswrapper[4698]: I0127 14:51:14.112007 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-76b57fc957-f9qxf"] Jan 27 14:51:14 crc kubenswrapper[4698]: I0127 14:51:14.360214 4698 generic.go:334] "Generic (PLEG): container finished" podID="705a412d-9e58-46df-8084-cb719c4c5b57" containerID="842f5fc1fa155d5f971e7ba4f5aea68d2480f9ad3ab31cc84bae9534373fabfe" exitCode=0 Jan 27 14:51:14 crc kubenswrapper[4698]: I0127 14:51:14.360289 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7b755cc99f-wsqq5" event={"ID":"705a412d-9e58-46df-8084-cb719c4c5b57","Type":"ContainerStarted","Data":"bfa7799b273e8188d1172c651671df1f5d51924a2d8d478434b2876fc6b90604"} Jan 27 14:51:14 crc kubenswrapper[4698]: I0127 14:51:14.360655 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7b755cc99f-wsqq5" event={"ID":"705a412d-9e58-46df-8084-cb719c4c5b57","Type":"ContainerDied","Data":"842f5fc1fa155d5f971e7ba4f5aea68d2480f9ad3ab31cc84bae9534373fabfe"} Jan 27 14:51:14 crc kubenswrapper[4698]: I0127 14:51:14.360879 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7b755cc99f-wsqq5" Jan 27 14:51:14 crc kubenswrapper[4698]: I0127 14:51:14.365605 4698 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-96c5cb5f9-jgbs6" Jan 27 14:51:14 crc kubenswrapper[4698]: I0127 14:51:14.365693 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-96c5cb5f9-jgbs6" event={"ID":"1ae30c47-a75e-4b9d-a524-547ad93bd32d","Type":"ContainerDied","Data":"06fd88a7b58924837553bca4e19092abf493a2f7a87ffef3b711b15295d59467"} Jan 27 14:51:14 crc kubenswrapper[4698]: I0127 14:51:14.365778 4698 scope.go:117] "RemoveContainer" containerID="f31c079eeeb28ad036bdeafa4e2e5da90099e2c2c046f3b3583f57fc4532233f" Jan 27 14:51:14 crc kubenswrapper[4698]: I0127 14:51:14.372258 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-76b57fc957-f9qxf" event={"ID":"732c4db7-cf20-4516-8ddf-40c801a8cf48","Type":"ContainerStarted","Data":"47a95f13c98199b6061c8d601ea1d7d4451e12d384aed676f385cec63d01e259"} Jan 27 14:51:14 crc kubenswrapper[4698]: I0127 14:51:14.372916 4698 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/watcher-api-0" podUID="cb72f892-e99c-447a-aea6-9529b57b01ac" containerName="watcher-api-log" containerID="cri-o://ae0c1629d9f9b853b835305af0cd826781af79ee6d7c6e3c2ebdacecd2238a97" gracePeriod=30 Jan 27 14:51:14 crc kubenswrapper[4698]: I0127 14:51:14.373035 4698 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/watcher-api-0" podUID="cb72f892-e99c-447a-aea6-9529b57b01ac" containerName="watcher-api" containerID="cri-o://19c4a3586c07df0a5794b89430cb466ba99d57ff7b1eaa9d97f480b134b64c03" gracePeriod=30 Jan 27 14:51:14 crc kubenswrapper[4698]: I0127 14:51:14.384611 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7b755cc99f-wsqq5" podStartSLOduration=4.384591907 podStartE2EDuration="4.384591907s" podCreationTimestamp="2026-01-27 14:51:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:51:14.379057471 +0000 UTC m=+1330.055834936" watchObservedRunningTime="2026-01-27 14:51:14.384591907 +0000 UTC m=+1330.061369372" Jan 27 14:51:14 crc kubenswrapper[4698]: I0127 14:51:14.496061 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-96c5cb5f9-jgbs6"] Jan 27 14:51:14 crc kubenswrapper[4698]: I0127 14:51:14.509497 4698 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-96c5cb5f9-jgbs6"] Jan 27 14:51:15 crc kubenswrapper[4698]: I0127 14:51:15.012295 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1ae30c47-a75e-4b9d-a524-547ad93bd32d" path="/var/lib/kubelet/pods/1ae30c47-a75e-4b9d-a524-547ad93bd32d/volumes" Jan 27 14:51:15 crc kubenswrapper[4698]: I0127 14:51:15.419956 4698 generic.go:334] "Generic (PLEG): container finished" podID="cb72f892-e99c-447a-aea6-9529b57b01ac" containerID="ae0c1629d9f9b853b835305af0cd826781af79ee6d7c6e3c2ebdacecd2238a97" exitCode=143 Jan 27 14:51:15 crc kubenswrapper[4698]: I0127 14:51:15.420064 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"cb72f892-e99c-447a-aea6-9529b57b01ac","Type":"ContainerDied","Data":"ae0c1629d9f9b853b835305af0cd826781af79ee6d7c6e3c2ebdacecd2238a97"} Jan 27 14:51:15 crc kubenswrapper[4698]: I0127 14:51:15.499780 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-api-0" Jan 27 14:51:17 crc kubenswrapper[4698]: I0127 14:51:17.842368 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="ready" pod="openstack/watcher-api-0" Jan 27 14:51:18 crc kubenswrapper[4698]: I0127 14:51:18.960442 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-75cdd6b9b5-7lj2z"] Jan 27 14:51:19 crc kubenswrapper[4698]: I0127 14:51:19.014852 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-58b77c584b-9tl65"] Jan 27 14:51:19 crc kubenswrapper[4698]: E0127 14:51:19.015353 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1ae30c47-a75e-4b9d-a524-547ad93bd32d" containerName="init" Jan 27 14:51:19 crc kubenswrapper[4698]: I0127 14:51:19.015376 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="1ae30c47-a75e-4b9d-a524-547ad93bd32d" containerName="init" Jan 27 14:51:19 crc kubenswrapper[4698]: I0127 14:51:19.015594 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="1ae30c47-a75e-4b9d-a524-547ad93bd32d" containerName="init" Jan 27 14:51:19 crc kubenswrapper[4698]: I0127 14:51:19.016879 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-58b77c584b-9tl65" Jan 27 14:51:19 crc kubenswrapper[4698]: I0127 14:51:19.021275 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-horizon-svc" Jan 27 14:51:19 crc kubenswrapper[4698]: I0127 14:51:19.023760 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-58b77c584b-9tl65"] Jan 27 14:51:19 crc kubenswrapper[4698]: I0127 14:51:19.047281 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-76b57fc957-f9qxf"] Jan 27 14:51:19 crc kubenswrapper[4698]: I0127 14:51:19.096371 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-54778bbf88-5qkzn"] Jan 27 14:51:19 crc kubenswrapper[4698]: I0127 14:51:19.120009 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-54778bbf88-5qkzn" Jan 27 14:51:19 crc kubenswrapper[4698]: I0127 14:51:19.138085 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-54778bbf88-5qkzn"] Jan 27 14:51:19 crc kubenswrapper[4698]: I0127 14:51:19.190445 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/2573c021-b642-4659-a97b-8c06bcf54afc-config-data\") pod \"horizon-58b77c584b-9tl65\" (UID: \"2573c021-b642-4659-a97b-8c06bcf54afc\") " pod="openstack/horizon-58b77c584b-9tl65" Jan 27 14:51:19 crc kubenswrapper[4698]: I0127 14:51:19.191599 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f9249911-c670-4cc4-895b-8c3a15d90d6f-config-data\") pod \"horizon-54778bbf88-5qkzn\" (UID: \"f9249911-c670-4cc4-895b-8c3a15d90d6f\") " pod="openstack/horizon-54778bbf88-5qkzn" Jan 27 14:51:19 crc kubenswrapper[4698]: I0127 14:51:19.191655 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/f9249911-c670-4cc4-895b-8c3a15d90d6f-horizon-tls-certs\") pod \"horizon-54778bbf88-5qkzn\" (UID: \"f9249911-c670-4cc4-895b-8c3a15d90d6f\") " pod="openstack/horizon-54778bbf88-5qkzn" Jan 27 14:51:19 crc kubenswrapper[4698]: I0127 14:51:19.191762 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f9249911-c670-4cc4-895b-8c3a15d90d6f-combined-ca-bundle\") pod \"horizon-54778bbf88-5qkzn\" (UID: \"f9249911-c670-4cc4-895b-8c3a15d90d6f\") " pod="openstack/horizon-54778bbf88-5qkzn" Jan 27 14:51:19 crc kubenswrapper[4698]: I0127 14:51:19.191852 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2573c021-b642-4659-a97b-8c06bcf54afc-combined-ca-bundle\") pod \"horizon-58b77c584b-9tl65\" (UID: \"2573c021-b642-4659-a97b-8c06bcf54afc\") " pod="openstack/horizon-58b77c584b-9tl65" Jan 27 14:51:19 crc kubenswrapper[4698]: I0127 14:51:19.191944 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2573c021-b642-4659-a97b-8c06bcf54afc-logs\") pod \"horizon-58b77c584b-9tl65\" (UID: \"2573c021-b642-4659-a97b-8c06bcf54afc\") " pod="openstack/horizon-58b77c584b-9tl65" Jan 27 14:51:19 crc kubenswrapper[4698]: I0127 14:51:19.192115 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/f9249911-c670-4cc4-895b-8c3a15d90d6f-horizon-secret-key\") pod \"horizon-54778bbf88-5qkzn\" (UID: \"f9249911-c670-4cc4-895b-8c3a15d90d6f\") " pod="openstack/horizon-54778bbf88-5qkzn" Jan 27 14:51:19 crc kubenswrapper[4698]: I0127 14:51:19.192148 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/2573c021-b642-4659-a97b-8c06bcf54afc-horizon-tls-certs\") pod \"horizon-58b77c584b-9tl65\" (UID: \"2573c021-b642-4659-a97b-8c06bcf54afc\") " pod="openstack/horizon-58b77c584b-9tl65" Jan 27 14:51:19 crc kubenswrapper[4698]: I0127 14:51:19.192179 4698 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2573c021-b642-4659-a97b-8c06bcf54afc-scripts\") pod \"horizon-58b77c584b-9tl65\" (UID: \"2573c021-b642-4659-a97b-8c06bcf54afc\") " pod="openstack/horizon-58b77c584b-9tl65" Jan 27 14:51:19 crc kubenswrapper[4698]: I0127 14:51:19.192208 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t2rts\" (UniqueName: \"kubernetes.io/projected/2573c021-b642-4659-a97b-8c06bcf54afc-kube-api-access-t2rts\") pod \"horizon-58b77c584b-9tl65\" (UID: \"2573c021-b642-4659-a97b-8c06bcf54afc\") " pod="openstack/horizon-58b77c584b-9tl65" Jan 27 14:51:19 crc kubenswrapper[4698]: I0127 14:51:19.192234 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/2573c021-b642-4659-a97b-8c06bcf54afc-horizon-secret-key\") pod \"horizon-58b77c584b-9tl65\" (UID: \"2573c021-b642-4659-a97b-8c06bcf54afc\") " pod="openstack/horizon-58b77c584b-9tl65" Jan 27 14:51:19 crc kubenswrapper[4698]: I0127 14:51:19.192259 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f9249911-c670-4cc4-895b-8c3a15d90d6f-scripts\") pod \"horizon-54778bbf88-5qkzn\" (UID: \"f9249911-c670-4cc4-895b-8c3a15d90d6f\") " pod="openstack/horizon-54778bbf88-5qkzn" Jan 27 14:51:19 crc kubenswrapper[4698]: I0127 14:51:19.192287 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v8mbp\" (UniqueName: \"kubernetes.io/projected/f9249911-c670-4cc4-895b-8c3a15d90d6f-kube-api-access-v8mbp\") pod \"horizon-54778bbf88-5qkzn\" (UID: \"f9249911-c670-4cc4-895b-8c3a15d90d6f\") " pod="openstack/horizon-54778bbf88-5qkzn" Jan 27 14:51:19 crc kubenswrapper[4698]: I0127 14:51:19.192326 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f9249911-c670-4cc4-895b-8c3a15d90d6f-logs\") pod \"horizon-54778bbf88-5qkzn\" (UID: \"f9249911-c670-4cc4-895b-8c3a15d90d6f\") " pod="openstack/horizon-54778bbf88-5qkzn" Jan 27 14:51:19 crc kubenswrapper[4698]: I0127 14:51:19.294373 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/f9249911-c670-4cc4-895b-8c3a15d90d6f-horizon-secret-key\") pod \"horizon-54778bbf88-5qkzn\" (UID: \"f9249911-c670-4cc4-895b-8c3a15d90d6f\") " pod="openstack/horizon-54778bbf88-5qkzn" Jan 27 14:51:19 crc kubenswrapper[4698]: I0127 14:51:19.294425 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/2573c021-b642-4659-a97b-8c06bcf54afc-horizon-tls-certs\") pod \"horizon-58b77c584b-9tl65\" (UID: \"2573c021-b642-4659-a97b-8c06bcf54afc\") " pod="openstack/horizon-58b77c584b-9tl65" Jan 27 14:51:19 crc kubenswrapper[4698]: I0127 14:51:19.294454 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2573c021-b642-4659-a97b-8c06bcf54afc-scripts\") pod \"horizon-58b77c584b-9tl65\" (UID: \"2573c021-b642-4659-a97b-8c06bcf54afc\") " pod="openstack/horizon-58b77c584b-9tl65" Jan 27 14:51:19 crc kubenswrapper[4698]: I0127 14:51:19.294481 4698 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"kube-api-access-t2rts\" (UniqueName: \"kubernetes.io/projected/2573c021-b642-4659-a97b-8c06bcf54afc-kube-api-access-t2rts\") pod \"horizon-58b77c584b-9tl65\" (UID: \"2573c021-b642-4659-a97b-8c06bcf54afc\") " pod="openstack/horizon-58b77c584b-9tl65" Jan 27 14:51:19 crc kubenswrapper[4698]: I0127 14:51:19.294507 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/2573c021-b642-4659-a97b-8c06bcf54afc-horizon-secret-key\") pod \"horizon-58b77c584b-9tl65\" (UID: \"2573c021-b642-4659-a97b-8c06bcf54afc\") " pod="openstack/horizon-58b77c584b-9tl65" Jan 27 14:51:19 crc kubenswrapper[4698]: I0127 14:51:19.294530 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f9249911-c670-4cc4-895b-8c3a15d90d6f-scripts\") pod \"horizon-54778bbf88-5qkzn\" (UID: \"f9249911-c670-4cc4-895b-8c3a15d90d6f\") " pod="openstack/horizon-54778bbf88-5qkzn" Jan 27 14:51:19 crc kubenswrapper[4698]: I0127 14:51:19.294552 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v8mbp\" (UniqueName: \"kubernetes.io/projected/f9249911-c670-4cc4-895b-8c3a15d90d6f-kube-api-access-v8mbp\") pod \"horizon-54778bbf88-5qkzn\" (UID: \"f9249911-c670-4cc4-895b-8c3a15d90d6f\") " pod="openstack/horizon-54778bbf88-5qkzn" Jan 27 14:51:19 crc kubenswrapper[4698]: I0127 14:51:19.294574 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f9249911-c670-4cc4-895b-8c3a15d90d6f-logs\") pod \"horizon-54778bbf88-5qkzn\" (UID: \"f9249911-c670-4cc4-895b-8c3a15d90d6f\") " pod="openstack/horizon-54778bbf88-5qkzn" Jan 27 14:51:19 crc kubenswrapper[4698]: I0127 14:51:19.294610 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/2573c021-b642-4659-a97b-8c06bcf54afc-config-data\") pod \"horizon-58b77c584b-9tl65\" (UID: \"2573c021-b642-4659-a97b-8c06bcf54afc\") " pod="openstack/horizon-58b77c584b-9tl65" Jan 27 14:51:19 crc kubenswrapper[4698]: I0127 14:51:19.294645 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f9249911-c670-4cc4-895b-8c3a15d90d6f-config-data\") pod \"horizon-54778bbf88-5qkzn\" (UID: \"f9249911-c670-4cc4-895b-8c3a15d90d6f\") " pod="openstack/horizon-54778bbf88-5qkzn" Jan 27 14:51:19 crc kubenswrapper[4698]: I0127 14:51:19.294690 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/f9249911-c670-4cc4-895b-8c3a15d90d6f-horizon-tls-certs\") pod \"horizon-54778bbf88-5qkzn\" (UID: \"f9249911-c670-4cc4-895b-8c3a15d90d6f\") " pod="openstack/horizon-54778bbf88-5qkzn" Jan 27 14:51:19 crc kubenswrapper[4698]: I0127 14:51:19.294722 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f9249911-c670-4cc4-895b-8c3a15d90d6f-combined-ca-bundle\") pod \"horizon-54778bbf88-5qkzn\" (UID: \"f9249911-c670-4cc4-895b-8c3a15d90d6f\") " pod="openstack/horizon-54778bbf88-5qkzn" Jan 27 14:51:19 crc kubenswrapper[4698]: I0127 14:51:19.294754 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/2573c021-b642-4659-a97b-8c06bcf54afc-combined-ca-bundle\") pod \"horizon-58b77c584b-9tl65\" (UID: \"2573c021-b642-4659-a97b-8c06bcf54afc\") " pod="openstack/horizon-58b77c584b-9tl65" Jan 27 14:51:19 crc kubenswrapper[4698]: I0127 14:51:19.294778 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2573c021-b642-4659-a97b-8c06bcf54afc-logs\") pod \"horizon-58b77c584b-9tl65\" (UID: \"2573c021-b642-4659-a97b-8c06bcf54afc\") " pod="openstack/horizon-58b77c584b-9tl65" Jan 27 14:51:19 crc kubenswrapper[4698]: I0127 14:51:19.296455 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f9249911-c670-4cc4-895b-8c3a15d90d6f-logs\") pod \"horizon-54778bbf88-5qkzn\" (UID: \"f9249911-c670-4cc4-895b-8c3a15d90d6f\") " pod="openstack/horizon-54778bbf88-5qkzn" Jan 27 14:51:19 crc kubenswrapper[4698]: I0127 14:51:19.297520 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f9249911-c670-4cc4-895b-8c3a15d90d6f-scripts\") pod \"horizon-54778bbf88-5qkzn\" (UID: \"f9249911-c670-4cc4-895b-8c3a15d90d6f\") " pod="openstack/horizon-54778bbf88-5qkzn" Jan 27 14:51:19 crc kubenswrapper[4698]: I0127 14:51:19.299281 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f9249911-c670-4cc4-895b-8c3a15d90d6f-config-data\") pod \"horizon-54778bbf88-5qkzn\" (UID: \"f9249911-c670-4cc4-895b-8c3a15d90d6f\") " pod="openstack/horizon-54778bbf88-5qkzn" Jan 27 14:51:19 crc kubenswrapper[4698]: I0127 14:51:19.302387 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/f9249911-c670-4cc4-895b-8c3a15d90d6f-horizon-secret-key\") pod \"horizon-54778bbf88-5qkzn\" (UID: \"f9249911-c670-4cc4-895b-8c3a15d90d6f\") " pod="openstack/horizon-54778bbf88-5qkzn" Jan 27 14:51:19 crc kubenswrapper[4698]: I0127 14:51:19.306491 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f9249911-c670-4cc4-895b-8c3a15d90d6f-combined-ca-bundle\") pod \"horizon-54778bbf88-5qkzn\" (UID: \"f9249911-c670-4cc4-895b-8c3a15d90d6f\") " pod="openstack/horizon-54778bbf88-5qkzn" Jan 27 14:51:19 crc kubenswrapper[4698]: I0127 14:51:19.309420 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/f9249911-c670-4cc4-895b-8c3a15d90d6f-horizon-tls-certs\") pod \"horizon-54778bbf88-5qkzn\" (UID: \"f9249911-c670-4cc4-895b-8c3a15d90d6f\") " pod="openstack/horizon-54778bbf88-5qkzn" Jan 27 14:51:19 crc kubenswrapper[4698]: I0127 14:51:19.314220 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v8mbp\" (UniqueName: \"kubernetes.io/projected/f9249911-c670-4cc4-895b-8c3a15d90d6f-kube-api-access-v8mbp\") pod \"horizon-54778bbf88-5qkzn\" (UID: \"f9249911-c670-4cc4-895b-8c3a15d90d6f\") " pod="openstack/horizon-54778bbf88-5qkzn" Jan 27 14:51:19 crc kubenswrapper[4698]: I0127 14:51:19.327055 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2573c021-b642-4659-a97b-8c06bcf54afc-logs\") pod \"horizon-58b77c584b-9tl65\" (UID: \"2573c021-b642-4659-a97b-8c06bcf54afc\") " pod="openstack/horizon-58b77c584b-9tl65" Jan 27 14:51:19 crc 
kubenswrapper[4698]: I0127 14:51:19.335817 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/2573c021-b642-4659-a97b-8c06bcf54afc-horizon-secret-key\") pod \"horizon-58b77c584b-9tl65\" (UID: \"2573c021-b642-4659-a97b-8c06bcf54afc\") " pod="openstack/horizon-58b77c584b-9tl65" Jan 27 14:51:19 crc kubenswrapper[4698]: I0127 14:51:19.336006 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2573c021-b642-4659-a97b-8c06bcf54afc-scripts\") pod \"horizon-58b77c584b-9tl65\" (UID: \"2573c021-b642-4659-a97b-8c06bcf54afc\") " pod="openstack/horizon-58b77c584b-9tl65" Jan 27 14:51:19 crc kubenswrapper[4698]: I0127 14:51:19.336172 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/2573c021-b642-4659-a97b-8c06bcf54afc-horizon-tls-certs\") pod \"horizon-58b77c584b-9tl65\" (UID: \"2573c021-b642-4659-a97b-8c06bcf54afc\") " pod="openstack/horizon-58b77c584b-9tl65" Jan 27 14:51:19 crc kubenswrapper[4698]: I0127 14:51:19.336282 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2573c021-b642-4659-a97b-8c06bcf54afc-combined-ca-bundle\") pod \"horizon-58b77c584b-9tl65\" (UID: \"2573c021-b642-4659-a97b-8c06bcf54afc\") " pod="openstack/horizon-58b77c584b-9tl65" Jan 27 14:51:19 crc kubenswrapper[4698]: I0127 14:51:19.336545 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/2573c021-b642-4659-a97b-8c06bcf54afc-config-data\") pod \"horizon-58b77c584b-9tl65\" (UID: \"2573c021-b642-4659-a97b-8c06bcf54afc\") " pod="openstack/horizon-58b77c584b-9tl65" Jan 27 14:51:19 crc kubenswrapper[4698]: I0127 14:51:19.449726 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-54778bbf88-5qkzn" Jan 27 14:51:19 crc kubenswrapper[4698]: I0127 14:51:19.507727 4698 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/watcher-api-0" podUID="cb72f892-e99c-447a-aea6-9529b57b01ac" containerName="watcher-api" probeResult="failure" output="Get \"http://10.217.0.149:9322/\": read tcp 10.217.0.2:33460->10.217.0.149:9322: read: connection reset by peer" Jan 27 14:51:19 crc kubenswrapper[4698]: I0127 14:51:19.632581 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t2rts\" (UniqueName: \"kubernetes.io/projected/2573c021-b642-4659-a97b-8c06bcf54afc-kube-api-access-t2rts\") pod \"horizon-58b77c584b-9tl65\" (UID: \"2573c021-b642-4659-a97b-8c06bcf54afc\") " pod="openstack/horizon-58b77c584b-9tl65" Jan 27 14:51:19 crc kubenswrapper[4698]: I0127 14:51:19.654361 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-58b77c584b-9tl65" Jan 27 14:51:20 crc kubenswrapper[4698]: I0127 14:51:20.495557 4698 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/watcher-api-0" podUID="cb72f892-e99c-447a-aea6-9529b57b01ac" containerName="watcher-api" probeResult="failure" output="Get \"http://10.217.0.149:9322/\": dial tcp 10.217.0.149:9322: connect: connection refused" Jan 27 14:51:20 crc kubenswrapper[4698]: I0127 14:51:20.500181 4698 generic.go:334] "Generic (PLEG): container finished" podID="cb72f892-e99c-447a-aea6-9529b57b01ac" containerID="19c4a3586c07df0a5794b89430cb466ba99d57ff7b1eaa9d97f480b134b64c03" exitCode=0 Jan 27 14:51:20 crc kubenswrapper[4698]: I0127 14:51:20.500223 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"cb72f892-e99c-447a-aea6-9529b57b01ac","Type":"ContainerDied","Data":"19c4a3586c07df0a5794b89430cb466ba99d57ff7b1eaa9d97f480b134b64c03"} Jan 27 14:51:21 crc kubenswrapper[4698]: I0127 14:51:21.471662 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-7b755cc99f-wsqq5" Jan 27 14:51:21 crc kubenswrapper[4698]: I0127 14:51:21.554964 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-544c49b98f-4lvts"] Jan 27 14:51:21 crc kubenswrapper[4698]: I0127 14:51:21.555634 4698 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-544c49b98f-4lvts" podUID="ee0893f2-f99d-4923-a85e-c0d764abff34" containerName="dnsmasq-dns" containerID="cri-o://843cbe300a2ca1e4f60afb2bcfdf54ac90525e46c383a5face8ae0f7e054c2fc" gracePeriod=10 Jan 27 14:51:22 crc kubenswrapper[4698]: I0127 14:51:22.521975 4698 generic.go:334] "Generic (PLEG): container finished" podID="ee0893f2-f99d-4923-a85e-c0d764abff34" containerID="843cbe300a2ca1e4f60afb2bcfdf54ac90525e46c383a5face8ae0f7e054c2fc" exitCode=0 Jan 27 14:51:22 crc kubenswrapper[4698]: I0127 14:51:22.522119 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-544c49b98f-4lvts" event={"ID":"ee0893f2-f99d-4923-a85e-c0d764abff34","Type":"ContainerDied","Data":"843cbe300a2ca1e4f60afb2bcfdf54ac90525e46c383a5face8ae0f7e054c2fc"} Jan 27 14:51:23 crc kubenswrapper[4698]: I0127 14:51:23.535031 4698 generic.go:334] "Generic (PLEG): container finished" podID="69ce5b4f-ff07-4f6c-8c15-bef96f1be728" containerID="87187a5a2296882e19cec5a45ad68dc3000dfce0034be1365dd20f36574d0e1f" exitCode=0 Jan 27 14:51:23 crc kubenswrapper[4698]: I0127 14:51:23.535091 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-nwldv" event={"ID":"69ce5b4f-ff07-4f6c-8c15-bef96f1be728","Type":"ContainerDied","Data":"87187a5a2296882e19cec5a45ad68dc3000dfce0034be1365dd20f36574d0e1f"} Jan 27 14:51:26 crc kubenswrapper[4698]: I0127 14:51:26.351986 4698 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-544c49b98f-4lvts" podUID="ee0893f2-f99d-4923-a85e-c0d764abff34" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.135:5353: connect: connection refused" Jan 27 14:51:27 crc kubenswrapper[4698]: I0127 14:51:27.452471 4698 patch_prober.go:28] interesting pod/machine-config-daemon-ndrd6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 14:51:27 crc kubenswrapper[4698]: I0127 14:51:27.452750 4698 
prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ndrd6" podUID="3e403fc5-7005-474c-8c75-b7906b481677" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 14:51:30 crc kubenswrapper[4698]: I0127 14:51:30.496386 4698 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/watcher-api-0" podUID="cb72f892-e99c-447a-aea6-9529b57b01ac" containerName="watcher-api" probeResult="failure" output="Get \"http://10.217.0.149:9322/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 14:51:30 crc kubenswrapper[4698]: I0127 14:51:30.497035 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-api-0" Jan 27 14:51:31 crc kubenswrapper[4698]: I0127 14:51:31.351934 4698 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-544c49b98f-4lvts" podUID="ee0893f2-f99d-4923-a85e-c0d764abff34" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.135:5353: connect: connection refused" Jan 27 14:51:35 crc kubenswrapper[4698]: I0127 14:51:35.497748 4698 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/watcher-api-0" podUID="cb72f892-e99c-447a-aea6-9529b57b01ac" containerName="watcher-api" probeResult="failure" output="Get \"http://10.217.0.149:9322/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 14:51:36 crc kubenswrapper[4698]: I0127 14:51:36.352428 4698 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-544c49b98f-4lvts" podUID="ee0893f2-f99d-4923-a85e-c0d764abff34" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.135:5353: connect: connection refused" Jan 27 14:51:36 crc kubenswrapper[4698]: I0127 14:51:36.352817 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-544c49b98f-4lvts" Jan 27 14:51:40 crc kubenswrapper[4698]: I0127 14:51:40.500662 4698 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/watcher-api-0" podUID="cb72f892-e99c-447a-aea6-9529b57b01ac" containerName="watcher-api" probeResult="failure" output="Get \"http://10.217.0.149:9322/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 14:51:41 crc kubenswrapper[4698]: I0127 14:51:41.352366 4698 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-544c49b98f-4lvts" podUID="ee0893f2-f99d-4923-a85e-c0d764abff34" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.135:5353: connect: connection refused" Jan 27 14:51:42 crc kubenswrapper[4698]: E0127 14:51:42.158149 4698 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.111:5001/podified-master-centos10/openstack-horizon:watcher_latest" Jan 27 14:51:42 crc kubenswrapper[4698]: E0127 14:51:42.158204 4698 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.111:5001/podified-master-centos10/openstack-horizon:watcher_latest" Jan 27 14:51:42 crc kubenswrapper[4698]: E0127 14:51:42.158323 4698 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:horizon-log,Image:38.102.83.111:5001/podified-master-centos10/openstack-horizon:watcher_latest,Command:[/bin/bash],Args:[-c tail -n+1 -F /var/log/horizon/horizon.log],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n5f4h5fch5dchbdh99hfdh68bh5dbh68h664h7h68ch9fh544h655h687h7hc6h67bh696hcfh675hbch694h684h89h549h67fh5b8h5b7h5b6h675q,ValueFrom:nil,},EnvVar{Name:ENABLE_DESIGNATE,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_HEAT,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_IRONIC,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_MANILA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_OCTAVIA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_WATCHER,Value:yes,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:UNPACK_THEME,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/horizon,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vdvlk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*48,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*42400,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-75cdd6b9b5-7lj2z_openstack(4e0c2380-35df-4719-8881-546b69c6225a): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 27 14:51:42 crc kubenswrapper[4698]: E0127 14:51:42.168037 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"horizon-log\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"horizon\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.111:5001/podified-master-centos10/openstack-horizon:watcher_latest\\\"\"]" pod="openstack/horizon-75cdd6b9b5-7lj2z" podUID="4e0c2380-35df-4719-8881-546b69c6225a" Jan 27 14:51:42 crc kubenswrapper[4698]: E0127 14:51:42.174539 4698 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.111:5001/podified-master-centos10/openstack-horizon:watcher_latest" Jan 27 14:51:42 crc kubenswrapper[4698]: E0127 14:51:42.174590 4698 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.111:5001/podified-master-centos10/openstack-horizon:watcher_latest" Jan 27 14:51:42 crc kubenswrapper[4698]: E0127 14:51:42.174815 4698 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:horizon-log,Image:38.102.83.111:5001/podified-master-centos10/openstack-horizon:watcher_latest,Command:[/bin/bash],Args:[-c tail -n+1 -F 
/var/log/horizon/horizon.log],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n55h547hfh5d8h75h569h6dh555h5dh5ddh66ch5b9h698h8ch698h6dh58hc9h645h54bhb8h694h5cch595h56dh56ch7dhc5h64dhfbh5dfhd6q,ValueFrom:nil,},EnvVar{Name:ENABLE_DESIGNATE,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_HEAT,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_IRONIC,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_MANILA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_OCTAVIA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_WATCHER,Value:yes,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:UNPACK_THEME,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/horizon,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-nzx6x,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*48,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*42400,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-76b57fc957-f9qxf_openstack(732c4db7-cf20-4516-8ddf-40c801a8cf48): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 27 14:51:42 crc kubenswrapper[4698]: E0127 14:51:42.180590 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"horizon-log\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"horizon\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.111:5001/podified-master-centos10/openstack-horizon:watcher_latest\\\"\"]" pod="openstack/horizon-76b57fc957-f9qxf" podUID="732c4db7-cf20-4516-8ddf-40c801a8cf48" Jan 27 14:51:42 crc kubenswrapper[4698]: E0127 14:51:42.196009 4698 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.111:5001/podified-master-centos10/openstack-horizon:watcher_latest" Jan 27 14:51:42 crc kubenswrapper[4698]: E0127 14:51:42.196065 4698 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.111:5001/podified-master-centos10/openstack-horizon:watcher_latest" Jan 27 14:51:42 crc kubenswrapper[4698]: E0127 14:51:42.196181 4698 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:horizon-log,Image:38.102.83.111:5001/podified-master-centos10/openstack-horizon:watcher_latest,Command:[/bin/bash],Args:[-c tail -n+1 -F 
/var/log/horizon/horizon.log],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n675h5d9hf6h5cch5c4h5ddh95h7fhbch5f8hc6h557h54fh7fh5f5h7ch58fhb9hfbh594h56h598h586h646hb4hbfh5b9hb4h5bbh5dh79h5c6q,ValueFrom:nil,},EnvVar{Name:ENABLE_DESIGNATE,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_HEAT,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_IRONIC,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_MANILA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_OCTAVIA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_WATCHER,Value:yes,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:UNPACK_THEME,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/horizon,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-spjkp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*48,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*42400,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-56b6f76549-v2fjv_openstack(d531ab8d-5ac9-4a51-8044-c68a217e1843): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 27 14:51:42 crc kubenswrapper[4698]: E0127 14:51:42.200575 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"horizon-log\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"horizon\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.111:5001/podified-master-centos10/openstack-horizon:watcher_latest\\\"\"]" pod="openstack/horizon-56b6f76549-v2fjv" podUID="d531ab8d-5ac9-4a51-8044-c68a217e1843" Jan 27 14:51:42 crc kubenswrapper[4698]: I0127 14:51:42.263692 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-api-0" Jan 27 14:51:42 crc kubenswrapper[4698]: I0127 14:51:42.272914 4698 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-nwldv" Jan 27 14:51:42 crc kubenswrapper[4698]: I0127 14:51:42.361076 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/69ce5b4f-ff07-4f6c-8c15-bef96f1be728-combined-ca-bundle\") pod \"69ce5b4f-ff07-4f6c-8c15-bef96f1be728\" (UID: \"69ce5b4f-ff07-4f6c-8c15-bef96f1be728\") " Jan 27 14:51:42 crc kubenswrapper[4698]: I0127 14:51:42.361448 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/69ce5b4f-ff07-4f6c-8c15-bef96f1be728-credential-keys\") pod \"69ce5b4f-ff07-4f6c-8c15-bef96f1be728\" (UID: \"69ce5b4f-ff07-4f6c-8c15-bef96f1be728\") " Jan 27 14:51:42 crc kubenswrapper[4698]: I0127 14:51:42.361494 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cb72f892-e99c-447a-aea6-9529b57b01ac-combined-ca-bundle\") pod \"cb72f892-e99c-447a-aea6-9529b57b01ac\" (UID: \"cb72f892-e99c-447a-aea6-9529b57b01ac\") " Jan 27 14:51:42 crc kubenswrapper[4698]: I0127 14:51:42.361559 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cb72f892-e99c-447a-aea6-9529b57b01ac-config-data\") pod \"cb72f892-e99c-447a-aea6-9529b57b01ac\" (UID: \"cb72f892-e99c-447a-aea6-9529b57b01ac\") " Jan 27 14:51:42 crc kubenswrapper[4698]: I0127 14:51:42.361732 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/69ce5b4f-ff07-4f6c-8c15-bef96f1be728-fernet-keys\") pod \"69ce5b4f-ff07-4f6c-8c15-bef96f1be728\" (UID: \"69ce5b4f-ff07-4f6c-8c15-bef96f1be728\") " Jan 27 14:51:42 crc kubenswrapper[4698]: I0127 14:51:42.361765 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cb72f892-e99c-447a-aea6-9529b57b01ac-logs\") pod \"cb72f892-e99c-447a-aea6-9529b57b01ac\" (UID: \"cb72f892-e99c-447a-aea6-9529b57b01ac\") " Jan 27 14:51:42 crc kubenswrapper[4698]: I0127 14:51:42.361819 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/69ce5b4f-ff07-4f6c-8c15-bef96f1be728-config-data\") pod \"69ce5b4f-ff07-4f6c-8c15-bef96f1be728\" (UID: \"69ce5b4f-ff07-4f6c-8c15-bef96f1be728\") " Jan 27 14:51:42 crc kubenswrapper[4698]: I0127 14:51:42.361854 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/69ce5b4f-ff07-4f6c-8c15-bef96f1be728-scripts\") pod \"69ce5b4f-ff07-4f6c-8c15-bef96f1be728\" (UID: \"69ce5b4f-ff07-4f6c-8c15-bef96f1be728\") " Jan 27 14:51:42 crc kubenswrapper[4698]: I0127 14:51:42.361890 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nmj5t\" (UniqueName: \"kubernetes.io/projected/69ce5b4f-ff07-4f6c-8c15-bef96f1be728-kube-api-access-nmj5t\") pod \"69ce5b4f-ff07-4f6c-8c15-bef96f1be728\" (UID: \"69ce5b4f-ff07-4f6c-8c15-bef96f1be728\") " Jan 27 14:51:42 crc kubenswrapper[4698]: I0127 14:51:42.361968 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/cb72f892-e99c-447a-aea6-9529b57b01ac-custom-prometheus-ca\") pod \"cb72f892-e99c-447a-aea6-9529b57b01ac\" (UID: 
\"cb72f892-e99c-447a-aea6-9529b57b01ac\") " Jan 27 14:51:42 crc kubenswrapper[4698]: I0127 14:51:42.362001 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p5m5t\" (UniqueName: \"kubernetes.io/projected/cb72f892-e99c-447a-aea6-9529b57b01ac-kube-api-access-p5m5t\") pod \"cb72f892-e99c-447a-aea6-9529b57b01ac\" (UID: \"cb72f892-e99c-447a-aea6-9529b57b01ac\") " Jan 27 14:51:42 crc kubenswrapper[4698]: I0127 14:51:42.362306 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cb72f892-e99c-447a-aea6-9529b57b01ac-logs" (OuterVolumeSpecName: "logs") pod "cb72f892-e99c-447a-aea6-9529b57b01ac" (UID: "cb72f892-e99c-447a-aea6-9529b57b01ac"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 14:51:42 crc kubenswrapper[4698]: I0127 14:51:42.362535 4698 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cb72f892-e99c-447a-aea6-9529b57b01ac-logs\") on node \"crc\" DevicePath \"\"" Jan 27 14:51:42 crc kubenswrapper[4698]: I0127 14:51:42.372285 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/69ce5b4f-ff07-4f6c-8c15-bef96f1be728-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "69ce5b4f-ff07-4f6c-8c15-bef96f1be728" (UID: "69ce5b4f-ff07-4f6c-8c15-bef96f1be728"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:51:42 crc kubenswrapper[4698]: I0127 14:51:42.375723 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cb72f892-e99c-447a-aea6-9529b57b01ac-kube-api-access-p5m5t" (OuterVolumeSpecName: "kube-api-access-p5m5t") pod "cb72f892-e99c-447a-aea6-9529b57b01ac" (UID: "cb72f892-e99c-447a-aea6-9529b57b01ac"). InnerVolumeSpecName "kube-api-access-p5m5t". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:51:42 crc kubenswrapper[4698]: I0127 14:51:42.377006 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/69ce5b4f-ff07-4f6c-8c15-bef96f1be728-scripts" (OuterVolumeSpecName: "scripts") pod "69ce5b4f-ff07-4f6c-8c15-bef96f1be728" (UID: "69ce5b4f-ff07-4f6c-8c15-bef96f1be728"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:51:42 crc kubenswrapper[4698]: I0127 14:51:42.378979 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/69ce5b4f-ff07-4f6c-8c15-bef96f1be728-kube-api-access-nmj5t" (OuterVolumeSpecName: "kube-api-access-nmj5t") pod "69ce5b4f-ff07-4f6c-8c15-bef96f1be728" (UID: "69ce5b4f-ff07-4f6c-8c15-bef96f1be728"). InnerVolumeSpecName "kube-api-access-nmj5t". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:51:42 crc kubenswrapper[4698]: I0127 14:51:42.395760 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/69ce5b4f-ff07-4f6c-8c15-bef96f1be728-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "69ce5b4f-ff07-4f6c-8c15-bef96f1be728" (UID: "69ce5b4f-ff07-4f6c-8c15-bef96f1be728"). InnerVolumeSpecName "fernet-keys". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:51:42 crc kubenswrapper[4698]: I0127 14:51:42.396903 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cb72f892-e99c-447a-aea6-9529b57b01ac-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "cb72f892-e99c-447a-aea6-9529b57b01ac" (UID: "cb72f892-e99c-447a-aea6-9529b57b01ac"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:51:42 crc kubenswrapper[4698]: I0127 14:51:42.398929 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/69ce5b4f-ff07-4f6c-8c15-bef96f1be728-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "69ce5b4f-ff07-4f6c-8c15-bef96f1be728" (UID: "69ce5b4f-ff07-4f6c-8c15-bef96f1be728"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:51:42 crc kubenswrapper[4698]: I0127 14:51:42.416054 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cb72f892-e99c-447a-aea6-9529b57b01ac-custom-prometheus-ca" (OuterVolumeSpecName: "custom-prometheus-ca") pod "cb72f892-e99c-447a-aea6-9529b57b01ac" (UID: "cb72f892-e99c-447a-aea6-9529b57b01ac"). InnerVolumeSpecName "custom-prometheus-ca". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:51:42 crc kubenswrapper[4698]: I0127 14:51:42.420735 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/69ce5b4f-ff07-4f6c-8c15-bef96f1be728-config-data" (OuterVolumeSpecName: "config-data") pod "69ce5b4f-ff07-4f6c-8c15-bef96f1be728" (UID: "69ce5b4f-ff07-4f6c-8c15-bef96f1be728"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:51:42 crc kubenswrapper[4698]: I0127 14:51:42.427148 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cb72f892-e99c-447a-aea6-9529b57b01ac-config-data" (OuterVolumeSpecName: "config-data") pod "cb72f892-e99c-447a-aea6-9529b57b01ac" (UID: "cb72f892-e99c-447a-aea6-9529b57b01ac"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:51:42 crc kubenswrapper[4698]: I0127 14:51:42.464134 4698 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/69ce5b4f-ff07-4f6c-8c15-bef96f1be728-credential-keys\") on node \"crc\" DevicePath \"\"" Jan 27 14:51:42 crc kubenswrapper[4698]: I0127 14:51:42.464176 4698 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cb72f892-e99c-447a-aea6-9529b57b01ac-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 14:51:42 crc kubenswrapper[4698]: I0127 14:51:42.464189 4698 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cb72f892-e99c-447a-aea6-9529b57b01ac-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 14:51:42 crc kubenswrapper[4698]: I0127 14:51:42.464200 4698 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/69ce5b4f-ff07-4f6c-8c15-bef96f1be728-fernet-keys\") on node \"crc\" DevicePath \"\"" Jan 27 14:51:42 crc kubenswrapper[4698]: I0127 14:51:42.464213 4698 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/69ce5b4f-ff07-4f6c-8c15-bef96f1be728-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 14:51:42 crc kubenswrapper[4698]: I0127 14:51:42.464224 4698 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/69ce5b4f-ff07-4f6c-8c15-bef96f1be728-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 14:51:42 crc kubenswrapper[4698]: I0127 14:51:42.464235 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nmj5t\" (UniqueName: \"kubernetes.io/projected/69ce5b4f-ff07-4f6c-8c15-bef96f1be728-kube-api-access-nmj5t\") on node \"crc\" DevicePath \"\"" Jan 27 14:51:42 crc kubenswrapper[4698]: I0127 14:51:42.464248 4698 reconciler_common.go:293] "Volume detached for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/cb72f892-e99c-447a-aea6-9529b57b01ac-custom-prometheus-ca\") on node \"crc\" DevicePath \"\"" Jan 27 14:51:42 crc kubenswrapper[4698]: I0127 14:51:42.464259 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p5m5t\" (UniqueName: \"kubernetes.io/projected/cb72f892-e99c-447a-aea6-9529b57b01ac-kube-api-access-p5m5t\") on node \"crc\" DevicePath \"\"" Jan 27 14:51:42 crc kubenswrapper[4698]: I0127 14:51:42.464269 4698 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/69ce5b4f-ff07-4f6c-8c15-bef96f1be728-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 14:51:42 crc kubenswrapper[4698]: E0127 14:51:42.700009 4698 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.111:5001/podified-master-centos10/openstack-ceilometer-central:watcher_latest" Jan 27 14:51:42 crc kubenswrapper[4698]: E0127 14:51:42.700167 4698 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.111:5001/podified-master-centos10/openstack-ceilometer-central:watcher_latest" Jan 27 14:51:42 crc kubenswrapper[4698]: E0127 14:51:42.700487 4698 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:ceilometer-central-agent,Image:38.102.83.111:5001/podified-master-centos10/openstack-ceilometer-central:watcher_latest,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n5dbh68ch58h5hc8h5c7h5b6hd4h684h565h548hc8h85h557h5ffh64fh578hb8h599h596h546h68dh75h5c7h57chcdhf6h5cbh87h694h76h648q,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-r4t7x,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 /var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(0f9b9cd1-a9b3-4764-a897-44de30ff90ac): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 27 14:51:42 crc kubenswrapper[4698]: I0127 14:51:42.701197 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"cb72f892-e99c-447a-aea6-9529b57b01ac","Type":"ContainerDied","Data":"93044713432b1070cf4616ffcd8070c09c26bd59c950766b305510987b28fa59"} Jan 27 14:51:42 crc kubenswrapper[4698]: I0127 14:51:42.701283 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-api-0" Jan 27 14:51:42 crc kubenswrapper[4698]: I0127 14:51:42.701424 4698 scope.go:117] "RemoveContainer" containerID="19c4a3586c07df0a5794b89430cb466ba99d57ff7b1eaa9d97f480b134b64c03" Jan 27 14:51:42 crc kubenswrapper[4698]: I0127 14:51:42.707304 4698 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-nwldv" Jan 27 14:51:42 crc kubenswrapper[4698]: I0127 14:51:42.707976 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-nwldv" event={"ID":"69ce5b4f-ff07-4f6c-8c15-bef96f1be728","Type":"ContainerDied","Data":"ec429d15ee79b280d5a252f7afc0d313ce8fed71e62794c89b2ed3a8680d3039"} Jan 27 14:51:42 crc kubenswrapper[4698]: I0127 14:51:42.712396 4698 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ec429d15ee79b280d5a252f7afc0d313ce8fed71e62794c89b2ed3a8680d3039" Jan 27 14:51:42 crc kubenswrapper[4698]: I0127 14:51:42.839717 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/watcher-api-0"] Jan 27 14:51:42 crc kubenswrapper[4698]: I0127 14:51:42.852928 4698 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/watcher-api-0"] Jan 27 14:51:42 crc kubenswrapper[4698]: I0127 14:51:42.874570 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/watcher-api-0"] Jan 27 14:51:42 crc kubenswrapper[4698]: E0127 14:51:42.875156 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cb72f892-e99c-447a-aea6-9529b57b01ac" containerName="watcher-api" Jan 27 14:51:42 crc kubenswrapper[4698]: I0127 14:51:42.875187 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="cb72f892-e99c-447a-aea6-9529b57b01ac" containerName="watcher-api" Jan 27 14:51:42 crc kubenswrapper[4698]: E0127 14:51:42.875223 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="69ce5b4f-ff07-4f6c-8c15-bef96f1be728" containerName="keystone-bootstrap" Jan 27 14:51:42 crc kubenswrapper[4698]: I0127 14:51:42.875232 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="69ce5b4f-ff07-4f6c-8c15-bef96f1be728" containerName="keystone-bootstrap" Jan 27 14:51:42 crc kubenswrapper[4698]: E0127 14:51:42.875258 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cb72f892-e99c-447a-aea6-9529b57b01ac" containerName="watcher-api-log" Jan 27 14:51:42 crc kubenswrapper[4698]: I0127 14:51:42.875266 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="cb72f892-e99c-447a-aea6-9529b57b01ac" containerName="watcher-api-log" Jan 27 14:51:42 crc kubenswrapper[4698]: I0127 14:51:42.875496 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="69ce5b4f-ff07-4f6c-8c15-bef96f1be728" containerName="keystone-bootstrap" Jan 27 14:51:42 crc kubenswrapper[4698]: I0127 14:51:42.875523 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="cb72f892-e99c-447a-aea6-9529b57b01ac" containerName="watcher-api-log" Jan 27 14:51:42 crc kubenswrapper[4698]: I0127 14:51:42.875539 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="cb72f892-e99c-447a-aea6-9529b57b01ac" containerName="watcher-api" Jan 27 14:51:42 crc kubenswrapper[4698]: I0127 14:51:42.876794 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-api-0" Jan 27 14:51:42 crc kubenswrapper[4698]: I0127 14:51:42.883041 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-api-config-data" Jan 27 14:51:42 crc kubenswrapper[4698]: I0127 14:51:42.888360 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-api-0"] Jan 27 14:51:42 crc kubenswrapper[4698]: I0127 14:51:42.978722 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t58tn\" (UniqueName: \"kubernetes.io/projected/1cff3663-449e-40a4-890d-7eaff54a5973-kube-api-access-t58tn\") pod \"watcher-api-0\" (UID: \"1cff3663-449e-40a4-890d-7eaff54a5973\") " pod="openstack/watcher-api-0" Jan 27 14:51:42 crc kubenswrapper[4698]: I0127 14:51:42.978806 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1cff3663-449e-40a4-890d-7eaff54a5973-config-data\") pod \"watcher-api-0\" (UID: \"1cff3663-449e-40a4-890d-7eaff54a5973\") " pod="openstack/watcher-api-0" Jan 27 14:51:42 crc kubenswrapper[4698]: I0127 14:51:42.979028 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1cff3663-449e-40a4-890d-7eaff54a5973-combined-ca-bundle\") pod \"watcher-api-0\" (UID: \"1cff3663-449e-40a4-890d-7eaff54a5973\") " pod="openstack/watcher-api-0" Jan 27 14:51:42 crc kubenswrapper[4698]: I0127 14:51:42.979212 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1cff3663-449e-40a4-890d-7eaff54a5973-logs\") pod \"watcher-api-0\" (UID: \"1cff3663-449e-40a4-890d-7eaff54a5973\") " pod="openstack/watcher-api-0" Jan 27 14:51:42 crc kubenswrapper[4698]: I0127 14:51:42.979257 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/1cff3663-449e-40a4-890d-7eaff54a5973-custom-prometheus-ca\") pod \"watcher-api-0\" (UID: \"1cff3663-449e-40a4-890d-7eaff54a5973\") " pod="openstack/watcher-api-0" Jan 27 14:51:43 crc kubenswrapper[4698]: I0127 14:51:43.003883 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cb72f892-e99c-447a-aea6-9529b57b01ac" path="/var/lib/kubelet/pods/cb72f892-e99c-447a-aea6-9529b57b01ac/volumes" Jan 27 14:51:43 crc kubenswrapper[4698]: I0127 14:51:43.080921 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1cff3663-449e-40a4-890d-7eaff54a5973-logs\") pod \"watcher-api-0\" (UID: \"1cff3663-449e-40a4-890d-7eaff54a5973\") " pod="openstack/watcher-api-0" Jan 27 14:51:43 crc kubenswrapper[4698]: I0127 14:51:43.081001 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/1cff3663-449e-40a4-890d-7eaff54a5973-custom-prometheus-ca\") pod \"watcher-api-0\" (UID: \"1cff3663-449e-40a4-890d-7eaff54a5973\") " pod="openstack/watcher-api-0" Jan 27 14:51:43 crc kubenswrapper[4698]: I0127 14:51:43.081039 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t58tn\" (UniqueName: \"kubernetes.io/projected/1cff3663-449e-40a4-890d-7eaff54a5973-kube-api-access-t58tn\") pod \"watcher-api-0\" (UID: 
\"1cff3663-449e-40a4-890d-7eaff54a5973\") " pod="openstack/watcher-api-0" Jan 27 14:51:43 crc kubenswrapper[4698]: I0127 14:51:43.081086 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1cff3663-449e-40a4-890d-7eaff54a5973-config-data\") pod \"watcher-api-0\" (UID: \"1cff3663-449e-40a4-890d-7eaff54a5973\") " pod="openstack/watcher-api-0" Jan 27 14:51:43 crc kubenswrapper[4698]: I0127 14:51:43.081704 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1cff3663-449e-40a4-890d-7eaff54a5973-logs\") pod \"watcher-api-0\" (UID: \"1cff3663-449e-40a4-890d-7eaff54a5973\") " pod="openstack/watcher-api-0" Jan 27 14:51:43 crc kubenswrapper[4698]: I0127 14:51:43.081205 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1cff3663-449e-40a4-890d-7eaff54a5973-combined-ca-bundle\") pod \"watcher-api-0\" (UID: \"1cff3663-449e-40a4-890d-7eaff54a5973\") " pod="openstack/watcher-api-0" Jan 27 14:51:43 crc kubenswrapper[4698]: I0127 14:51:43.095458 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1cff3663-449e-40a4-890d-7eaff54a5973-combined-ca-bundle\") pod \"watcher-api-0\" (UID: \"1cff3663-449e-40a4-890d-7eaff54a5973\") " pod="openstack/watcher-api-0" Jan 27 14:51:43 crc kubenswrapper[4698]: I0127 14:51:43.102036 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/1cff3663-449e-40a4-890d-7eaff54a5973-custom-prometheus-ca\") pod \"watcher-api-0\" (UID: \"1cff3663-449e-40a4-890d-7eaff54a5973\") " pod="openstack/watcher-api-0" Jan 27 14:51:43 crc kubenswrapper[4698]: I0127 14:51:43.102282 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t58tn\" (UniqueName: \"kubernetes.io/projected/1cff3663-449e-40a4-890d-7eaff54a5973-kube-api-access-t58tn\") pod \"watcher-api-0\" (UID: \"1cff3663-449e-40a4-890d-7eaff54a5973\") " pod="openstack/watcher-api-0" Jan 27 14:51:43 crc kubenswrapper[4698]: I0127 14:51:43.114614 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1cff3663-449e-40a4-890d-7eaff54a5973-config-data\") pod \"watcher-api-0\" (UID: \"1cff3663-449e-40a4-890d-7eaff54a5973\") " pod="openstack/watcher-api-0" Jan 27 14:51:43 crc kubenswrapper[4698]: I0127 14:51:43.202907 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-api-0" Jan 27 14:51:43 crc kubenswrapper[4698]: I0127 14:51:43.387527 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-nwldv"] Jan 27 14:51:43 crc kubenswrapper[4698]: I0127 14:51:43.396315 4698 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-nwldv"] Jan 27 14:51:43 crc kubenswrapper[4698]: I0127 14:51:43.490276 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-pr5rl"] Jan 27 14:51:43 crc kubenswrapper[4698]: I0127 14:51:43.491485 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-pr5rl" Jan 27 14:51:43 crc kubenswrapper[4698]: I0127 14:51:43.502370 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Jan 27 14:51:43 crc kubenswrapper[4698]: I0127 14:51:43.502712 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 27 14:51:43 crc kubenswrapper[4698]: I0127 14:51:43.502866 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 27 14:51:43 crc kubenswrapper[4698]: I0127 14:51:43.503007 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 27 14:51:43 crc kubenswrapper[4698]: I0127 14:51:43.503155 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-nk52t" Jan 27 14:51:43 crc kubenswrapper[4698]: I0127 14:51:43.505027 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-pr5rl"] Jan 27 14:51:43 crc kubenswrapper[4698]: I0127 14:51:43.590628 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2fcd6ee1-8d2c-490d-8fd4-b582c497f336-scripts\") pod \"keystone-bootstrap-pr5rl\" (UID: \"2fcd6ee1-8d2c-490d-8fd4-b582c497f336\") " pod="openstack/keystone-bootstrap-pr5rl" Jan 27 14:51:43 crc kubenswrapper[4698]: I0127 14:51:43.590790 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2fcd6ee1-8d2c-490d-8fd4-b582c497f336-config-data\") pod \"keystone-bootstrap-pr5rl\" (UID: \"2fcd6ee1-8d2c-490d-8fd4-b582c497f336\") " pod="openstack/keystone-bootstrap-pr5rl" Jan 27 14:51:43 crc kubenswrapper[4698]: I0127 14:51:43.590820 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/2fcd6ee1-8d2c-490d-8fd4-b582c497f336-fernet-keys\") pod \"keystone-bootstrap-pr5rl\" (UID: \"2fcd6ee1-8d2c-490d-8fd4-b582c497f336\") " pod="openstack/keystone-bootstrap-pr5rl" Jan 27 14:51:43 crc kubenswrapper[4698]: I0127 14:51:43.590850 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2fcd6ee1-8d2c-490d-8fd4-b582c497f336-combined-ca-bundle\") pod \"keystone-bootstrap-pr5rl\" (UID: \"2fcd6ee1-8d2c-490d-8fd4-b582c497f336\") " pod="openstack/keystone-bootstrap-pr5rl" Jan 27 14:51:43 crc kubenswrapper[4698]: I0127 14:51:43.590874 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mnwhb\" (UniqueName: \"kubernetes.io/projected/2fcd6ee1-8d2c-490d-8fd4-b582c497f336-kube-api-access-mnwhb\") pod \"keystone-bootstrap-pr5rl\" (UID: \"2fcd6ee1-8d2c-490d-8fd4-b582c497f336\") " pod="openstack/keystone-bootstrap-pr5rl" Jan 27 14:51:43 crc kubenswrapper[4698]: I0127 14:51:43.591089 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/2fcd6ee1-8d2c-490d-8fd4-b582c497f336-credential-keys\") pod \"keystone-bootstrap-pr5rl\" (UID: \"2fcd6ee1-8d2c-490d-8fd4-b582c497f336\") " pod="openstack/keystone-bootstrap-pr5rl" Jan 27 14:51:43 crc kubenswrapper[4698]: I0127 14:51:43.692942 4698 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/2fcd6ee1-8d2c-490d-8fd4-b582c497f336-credential-keys\") pod \"keystone-bootstrap-pr5rl\" (UID: \"2fcd6ee1-8d2c-490d-8fd4-b582c497f336\") " pod="openstack/keystone-bootstrap-pr5rl" Jan 27 14:51:43 crc kubenswrapper[4698]: I0127 14:51:43.693013 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2fcd6ee1-8d2c-490d-8fd4-b582c497f336-scripts\") pod \"keystone-bootstrap-pr5rl\" (UID: \"2fcd6ee1-8d2c-490d-8fd4-b582c497f336\") " pod="openstack/keystone-bootstrap-pr5rl" Jan 27 14:51:43 crc kubenswrapper[4698]: I0127 14:51:43.693107 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2fcd6ee1-8d2c-490d-8fd4-b582c497f336-config-data\") pod \"keystone-bootstrap-pr5rl\" (UID: \"2fcd6ee1-8d2c-490d-8fd4-b582c497f336\") " pod="openstack/keystone-bootstrap-pr5rl" Jan 27 14:51:43 crc kubenswrapper[4698]: I0127 14:51:43.693137 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/2fcd6ee1-8d2c-490d-8fd4-b582c497f336-fernet-keys\") pod \"keystone-bootstrap-pr5rl\" (UID: \"2fcd6ee1-8d2c-490d-8fd4-b582c497f336\") " pod="openstack/keystone-bootstrap-pr5rl" Jan 27 14:51:43 crc kubenswrapper[4698]: I0127 14:51:43.693171 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2fcd6ee1-8d2c-490d-8fd4-b582c497f336-combined-ca-bundle\") pod \"keystone-bootstrap-pr5rl\" (UID: \"2fcd6ee1-8d2c-490d-8fd4-b582c497f336\") " pod="openstack/keystone-bootstrap-pr5rl" Jan 27 14:51:43 crc kubenswrapper[4698]: I0127 14:51:43.693187 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mnwhb\" (UniqueName: \"kubernetes.io/projected/2fcd6ee1-8d2c-490d-8fd4-b582c497f336-kube-api-access-mnwhb\") pod \"keystone-bootstrap-pr5rl\" (UID: \"2fcd6ee1-8d2c-490d-8fd4-b582c497f336\") " pod="openstack/keystone-bootstrap-pr5rl" Jan 27 14:51:43 crc kubenswrapper[4698]: I0127 14:51:43.697128 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2fcd6ee1-8d2c-490d-8fd4-b582c497f336-scripts\") pod \"keystone-bootstrap-pr5rl\" (UID: \"2fcd6ee1-8d2c-490d-8fd4-b582c497f336\") " pod="openstack/keystone-bootstrap-pr5rl" Jan 27 14:51:43 crc kubenswrapper[4698]: I0127 14:51:43.697457 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/2fcd6ee1-8d2c-490d-8fd4-b582c497f336-credential-keys\") pod \"keystone-bootstrap-pr5rl\" (UID: \"2fcd6ee1-8d2c-490d-8fd4-b582c497f336\") " pod="openstack/keystone-bootstrap-pr5rl" Jan 27 14:51:43 crc kubenswrapper[4698]: I0127 14:51:43.698531 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2fcd6ee1-8d2c-490d-8fd4-b582c497f336-config-data\") pod \"keystone-bootstrap-pr5rl\" (UID: \"2fcd6ee1-8d2c-490d-8fd4-b582c497f336\") " pod="openstack/keystone-bootstrap-pr5rl" Jan 27 14:51:43 crc kubenswrapper[4698]: I0127 14:51:43.698714 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2fcd6ee1-8d2c-490d-8fd4-b582c497f336-combined-ca-bundle\") pod \"keystone-bootstrap-pr5rl\" (UID: 
\"2fcd6ee1-8d2c-490d-8fd4-b582c497f336\") " pod="openstack/keystone-bootstrap-pr5rl" Jan 27 14:51:43 crc kubenswrapper[4698]: I0127 14:51:43.698805 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/2fcd6ee1-8d2c-490d-8fd4-b582c497f336-fernet-keys\") pod \"keystone-bootstrap-pr5rl\" (UID: \"2fcd6ee1-8d2c-490d-8fd4-b582c497f336\") " pod="openstack/keystone-bootstrap-pr5rl" Jan 27 14:51:43 crc kubenswrapper[4698]: I0127 14:51:43.710147 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mnwhb\" (UniqueName: \"kubernetes.io/projected/2fcd6ee1-8d2c-490d-8fd4-b582c497f336-kube-api-access-mnwhb\") pod \"keystone-bootstrap-pr5rl\" (UID: \"2fcd6ee1-8d2c-490d-8fd4-b582c497f336\") " pod="openstack/keystone-bootstrap-pr5rl" Jan 27 14:51:43 crc kubenswrapper[4698]: I0127 14:51:43.822859 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-pr5rl" Jan 27 14:51:44 crc kubenswrapper[4698]: I0127 14:51:44.725363 4698 generic.go:334] "Generic (PLEG): container finished" podID="b202d484-189a-4722-93b1-f72348e74aa4" containerID="ff17803b8805dd7d8fe5951bb07bcb464516773fae7756b0f504bda3b2b5f3b0" exitCode=0 Jan 27 14:51:44 crc kubenswrapper[4698]: I0127 14:51:44.725757 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-6z2gn" event={"ID":"b202d484-189a-4722-93b1-f72348e74aa4","Type":"ContainerDied","Data":"ff17803b8805dd7d8fe5951bb07bcb464516773fae7756b0f504bda3b2b5f3b0"} Jan 27 14:51:45 crc kubenswrapper[4698]: I0127 14:51:45.022851 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="69ce5b4f-ff07-4f6c-8c15-bef96f1be728" path="/var/lib/kubelet/pods/69ce5b4f-ff07-4f6c-8c15-bef96f1be728/volumes" Jan 27 14:51:45 crc kubenswrapper[4698]: I0127 14:51:45.501147 4698 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/watcher-api-0" podUID="cb72f892-e99c-447a-aea6-9529b57b01ac" containerName="watcher-api" probeResult="failure" output="Get \"http://10.217.0.149:9322/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 14:51:51 crc kubenswrapper[4698]: I0127 14:51:51.353180 4698 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-544c49b98f-4lvts" podUID="ee0893f2-f99d-4923-a85e-c0d764abff34" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.135:5353: i/o timeout" Jan 27 14:51:56 crc kubenswrapper[4698]: I0127 14:51:56.354743 4698 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-544c49b98f-4lvts" podUID="ee0893f2-f99d-4923-a85e-c0d764abff34" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.135:5353: i/o timeout" Jan 27 14:51:56 crc kubenswrapper[4698]: E0127 14:51:56.518027 4698 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.111:5001/podified-master-centos10/openstack-barbican-api:watcher_latest" Jan 27 14:51:56 crc kubenswrapper[4698]: E0127 14:51:56.518537 4698 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.111:5001/podified-master-centos10/openstack-barbican-api:watcher_latest" Jan 27 14:51:56 crc kubenswrapper[4698]: E0127 14:51:56.518913 4698 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:barbican-db-sync,Image:38.102.83.111:5001/podified-master-centos10/openstack-barbican-api:watcher_latest,Command:[/bin/bash],Args:[-c barbican-manage db upgrade],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/barbican/barbican.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hhskr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42403,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42403,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod barbican-db-sync-9jhwb_openstack(51ba2ef6-17ab-4974-a2c6-7f995343e24b): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 27 14:51:56 crc kubenswrapper[4698]: E0127 14:51:56.520262 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/barbican-db-sync-9jhwb" podUID="51ba2ef6-17ab-4974-a2c6-7f995343e24b" Jan 27 14:51:56 crc kubenswrapper[4698]: I0127 14:51:56.718170 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-544c49b98f-4lvts" Jan 27 14:51:56 crc kubenswrapper[4698]: I0127 14:51:56.723363 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-56b6f76549-v2fjv" Jan 27 14:51:56 crc kubenswrapper[4698]: I0127 14:51:56.730289 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-75cdd6b9b5-7lj2z" Jan 27 14:51:56 crc kubenswrapper[4698]: I0127 14:51:56.748309 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-76b57fc957-f9qxf" Jan 27 14:51:56 crc kubenswrapper[4698]: I0127 14:51:56.778554 4698 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-6z2gn" Jan 27 14:51:56 crc kubenswrapper[4698]: I0127 14:51:56.840756 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d531ab8d-5ac9-4a51-8044-c68a217e1843-scripts\") pod \"d531ab8d-5ac9-4a51-8044-c68a217e1843\" (UID: \"d531ab8d-5ac9-4a51-8044-c68a217e1843\") " Jan 27 14:51:56 crc kubenswrapper[4698]: I0127 14:51:56.840806 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ee0893f2-f99d-4923-a85e-c0d764abff34-dns-svc\") pod \"ee0893f2-f99d-4923-a85e-c0d764abff34\" (UID: \"ee0893f2-f99d-4923-a85e-c0d764abff34\") " Jan 27 14:51:56 crc kubenswrapper[4698]: I0127 14:51:56.840829 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vdvlk\" (UniqueName: \"kubernetes.io/projected/4e0c2380-35df-4719-8881-546b69c6225a-kube-api-access-vdvlk\") pod \"4e0c2380-35df-4719-8881-546b69c6225a\" (UID: \"4e0c2380-35df-4719-8881-546b69c6225a\") " Jan 27 14:51:56 crc kubenswrapper[4698]: I0127 14:51:56.840866 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzx6x\" (UniqueName: \"kubernetes.io/projected/732c4db7-cf20-4516-8ddf-40c801a8cf48-kube-api-access-nzx6x\") pod \"732c4db7-cf20-4516-8ddf-40c801a8cf48\" (UID: \"732c4db7-cf20-4516-8ddf-40c801a8cf48\") " Jan 27 14:51:56 crc kubenswrapper[4698]: I0127 14:51:56.840910 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d531ab8d-5ac9-4a51-8044-c68a217e1843-logs\") pod \"d531ab8d-5ac9-4a51-8044-c68a217e1843\" (UID: \"d531ab8d-5ac9-4a51-8044-c68a217e1843\") " Jan 27 14:51:56 crc kubenswrapper[4698]: I0127 14:51:56.840933 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/4e0c2380-35df-4719-8881-546b69c6225a-config-data\") pod \"4e0c2380-35df-4719-8881-546b69c6225a\" (UID: \"4e0c2380-35df-4719-8881-546b69c6225a\") " Jan 27 14:51:56 crc kubenswrapper[4698]: I0127 14:51:56.840959 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/4e0c2380-35df-4719-8881-546b69c6225a-horizon-secret-key\") pod \"4e0c2380-35df-4719-8881-546b69c6225a\" (UID: \"4e0c2380-35df-4719-8881-546b69c6225a\") " Jan 27 14:51:56 crc kubenswrapper[4698]: I0127 14:51:56.841014 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/732c4db7-cf20-4516-8ddf-40c801a8cf48-scripts\") pod \"732c4db7-cf20-4516-8ddf-40c801a8cf48\" (UID: \"732c4db7-cf20-4516-8ddf-40c801a8cf48\") " Jan 27 14:51:56 crc kubenswrapper[4698]: I0127 14:51:56.841043 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hj5jq\" (UniqueName: \"kubernetes.io/projected/ee0893f2-f99d-4923-a85e-c0d764abff34-kube-api-access-hj5jq\") pod \"ee0893f2-f99d-4923-a85e-c0d764abff34\" (UID: \"ee0893f2-f99d-4923-a85e-c0d764abff34\") " Jan 27 14:51:56 crc kubenswrapper[4698]: I0127 14:51:56.841064 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ee0893f2-f99d-4923-a85e-c0d764abff34-ovsdbserver-nb\") pod \"ee0893f2-f99d-4923-a85e-c0d764abff34\" (UID: 
\"ee0893f2-f99d-4923-a85e-c0d764abff34\") " Jan 27 14:51:56 crc kubenswrapper[4698]: I0127 14:51:56.841088 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ee0893f2-f99d-4923-a85e-c0d764abff34-config\") pod \"ee0893f2-f99d-4923-a85e-c0d764abff34\" (UID: \"ee0893f2-f99d-4923-a85e-c0d764abff34\") " Jan 27 14:51:56 crc kubenswrapper[4698]: I0127 14:51:56.841115 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ee0893f2-f99d-4923-a85e-c0d764abff34-ovsdbserver-sb\") pod \"ee0893f2-f99d-4923-a85e-c0d764abff34\" (UID: \"ee0893f2-f99d-4923-a85e-c0d764abff34\") " Jan 27 14:51:56 crc kubenswrapper[4698]: I0127 14:51:56.841168 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/d531ab8d-5ac9-4a51-8044-c68a217e1843-horizon-secret-key\") pod \"d531ab8d-5ac9-4a51-8044-c68a217e1843\" (UID: \"d531ab8d-5ac9-4a51-8044-c68a217e1843\") " Jan 27 14:51:56 crc kubenswrapper[4698]: I0127 14:51:56.841203 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4e0c2380-35df-4719-8881-546b69c6225a-scripts\") pod \"4e0c2380-35df-4719-8881-546b69c6225a\" (UID: \"4e0c2380-35df-4719-8881-546b69c6225a\") " Jan 27 14:51:56 crc kubenswrapper[4698]: I0127 14:51:56.841236 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/732c4db7-cf20-4516-8ddf-40c801a8cf48-config-data\") pod \"732c4db7-cf20-4516-8ddf-40c801a8cf48\" (UID: \"732c4db7-cf20-4516-8ddf-40c801a8cf48\") " Jan 27 14:51:56 crc kubenswrapper[4698]: I0127 14:51:56.841299 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4e0c2380-35df-4719-8881-546b69c6225a-logs\") pod \"4e0c2380-35df-4719-8881-546b69c6225a\" (UID: \"4e0c2380-35df-4719-8881-546b69c6225a\") " Jan 27 14:51:56 crc kubenswrapper[4698]: I0127 14:51:56.841346 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-spjkp\" (UniqueName: \"kubernetes.io/projected/d531ab8d-5ac9-4a51-8044-c68a217e1843-kube-api-access-spjkp\") pod \"d531ab8d-5ac9-4a51-8044-c68a217e1843\" (UID: \"d531ab8d-5ac9-4a51-8044-c68a217e1843\") " Jan 27 14:51:56 crc kubenswrapper[4698]: I0127 14:51:56.841374 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d531ab8d-5ac9-4a51-8044-c68a217e1843-scripts" (OuterVolumeSpecName: "scripts") pod "d531ab8d-5ac9-4a51-8044-c68a217e1843" (UID: "d531ab8d-5ac9-4a51-8044-c68a217e1843"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:51:56 crc kubenswrapper[4698]: I0127 14:51:56.841394 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/732c4db7-cf20-4516-8ddf-40c801a8cf48-logs\") pod \"732c4db7-cf20-4516-8ddf-40c801a8cf48\" (UID: \"732c4db7-cf20-4516-8ddf-40c801a8cf48\") " Jan 27 14:51:56 crc kubenswrapper[4698]: I0127 14:51:56.841431 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d531ab8d-5ac9-4a51-8044-c68a217e1843-config-data\") pod \"d531ab8d-5ac9-4a51-8044-c68a217e1843\" (UID: \"d531ab8d-5ac9-4a51-8044-c68a217e1843\") " Jan 27 14:51:56 crc kubenswrapper[4698]: I0127 14:51:56.841457 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ee0893f2-f99d-4923-a85e-c0d764abff34-dns-swift-storage-0\") pod \"ee0893f2-f99d-4923-a85e-c0d764abff34\" (UID: \"ee0893f2-f99d-4923-a85e-c0d764abff34\") " Jan 27 14:51:56 crc kubenswrapper[4698]: I0127 14:51:56.841518 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/732c4db7-cf20-4516-8ddf-40c801a8cf48-horizon-secret-key\") pod \"732c4db7-cf20-4516-8ddf-40c801a8cf48\" (UID: \"732c4db7-cf20-4516-8ddf-40c801a8cf48\") " Jan 27 14:51:56 crc kubenswrapper[4698]: I0127 14:51:56.842118 4698 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d531ab8d-5ac9-4a51-8044-c68a217e1843-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 14:51:56 crc kubenswrapper[4698]: I0127 14:51:56.842295 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/732c4db7-cf20-4516-8ddf-40c801a8cf48-config-data" (OuterVolumeSpecName: "config-data") pod "732c4db7-cf20-4516-8ddf-40c801a8cf48" (UID: "732c4db7-cf20-4516-8ddf-40c801a8cf48"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:51:56 crc kubenswrapper[4698]: I0127 14:51:56.842672 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d531ab8d-5ac9-4a51-8044-c68a217e1843-logs" (OuterVolumeSpecName: "logs") pod "d531ab8d-5ac9-4a51-8044-c68a217e1843" (UID: "d531ab8d-5ac9-4a51-8044-c68a217e1843"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 14:51:56 crc kubenswrapper[4698]: I0127 14:51:56.844333 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/732c4db7-cf20-4516-8ddf-40c801a8cf48-logs" (OuterVolumeSpecName: "logs") pod "732c4db7-cf20-4516-8ddf-40c801a8cf48" (UID: "732c4db7-cf20-4516-8ddf-40c801a8cf48"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 14:51:56 crc kubenswrapper[4698]: I0127 14:51:56.844731 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4e0c2380-35df-4719-8881-546b69c6225a-logs" (OuterVolumeSpecName: "logs") pod "4e0c2380-35df-4719-8881-546b69c6225a" (UID: "4e0c2380-35df-4719-8881-546b69c6225a"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 14:51:56 crc kubenswrapper[4698]: I0127 14:51:56.845718 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4e0c2380-35df-4719-8881-546b69c6225a-config-data" (OuterVolumeSpecName: "config-data") pod "4e0c2380-35df-4719-8881-546b69c6225a" (UID: "4e0c2380-35df-4719-8881-546b69c6225a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:51:56 crc kubenswrapper[4698]: I0127 14:51:56.845967 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-544c49b98f-4lvts" event={"ID":"ee0893f2-f99d-4923-a85e-c0d764abff34","Type":"ContainerDied","Data":"b286c807651340722eb50523551818a39bf96fc1b45b035fea24a1f566f6d49c"} Jan 27 14:51:56 crc kubenswrapper[4698]: I0127 14:51:56.846130 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-544c49b98f-4lvts" Jan 27 14:51:56 crc kubenswrapper[4698]: I0127 14:51:56.847747 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ee0893f2-f99d-4923-a85e-c0d764abff34-kube-api-access-hj5jq" (OuterVolumeSpecName: "kube-api-access-hj5jq") pod "ee0893f2-f99d-4923-a85e-c0d764abff34" (UID: "ee0893f2-f99d-4923-a85e-c0d764abff34"). InnerVolumeSpecName "kube-api-access-hj5jq". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:51:56 crc kubenswrapper[4698]: I0127 14:51:56.848111 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/732c4db7-cf20-4516-8ddf-40c801a8cf48-scripts" (OuterVolumeSpecName: "scripts") pod "732c4db7-cf20-4516-8ddf-40c801a8cf48" (UID: "732c4db7-cf20-4516-8ddf-40c801a8cf48"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:51:56 crc kubenswrapper[4698]: I0127 14:51:56.848695 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d531ab8d-5ac9-4a51-8044-c68a217e1843-config-data" (OuterVolumeSpecName: "config-data") pod "d531ab8d-5ac9-4a51-8044-c68a217e1843" (UID: "d531ab8d-5ac9-4a51-8044-c68a217e1843"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:51:56 crc kubenswrapper[4698]: I0127 14:51:56.848810 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d531ab8d-5ac9-4a51-8044-c68a217e1843-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "d531ab8d-5ac9-4a51-8044-c68a217e1843" (UID: "d531ab8d-5ac9-4a51-8044-c68a217e1843"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:51:56 crc kubenswrapper[4698]: I0127 14:51:56.851546 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/732c4db7-cf20-4516-8ddf-40c801a8cf48-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "732c4db7-cf20-4516-8ddf-40c801a8cf48" (UID: "732c4db7-cf20-4516-8ddf-40c801a8cf48"). InnerVolumeSpecName "horizon-secret-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:51:56 crc kubenswrapper[4698]: I0127 14:51:56.852275 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d531ab8d-5ac9-4a51-8044-c68a217e1843-kube-api-access-spjkp" (OuterVolumeSpecName: "kube-api-access-spjkp") pod "d531ab8d-5ac9-4a51-8044-c68a217e1843" (UID: "d531ab8d-5ac9-4a51-8044-c68a217e1843"). InnerVolumeSpecName "kube-api-access-spjkp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:51:56 crc kubenswrapper[4698]: I0127 14:51:56.852310 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4e0c2380-35df-4719-8881-546b69c6225a-scripts" (OuterVolumeSpecName: "scripts") pod "4e0c2380-35df-4719-8881-546b69c6225a" (UID: "4e0c2380-35df-4719-8881-546b69c6225a"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:51:56 crc kubenswrapper[4698]: I0127 14:51:56.854121 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4e0c2380-35df-4719-8881-546b69c6225a-kube-api-access-vdvlk" (OuterVolumeSpecName: "kube-api-access-vdvlk") pod "4e0c2380-35df-4719-8881-546b69c6225a" (UID: "4e0c2380-35df-4719-8881-546b69c6225a"). InnerVolumeSpecName "kube-api-access-vdvlk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:51:56 crc kubenswrapper[4698]: I0127 14:51:56.854605 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/732c4db7-cf20-4516-8ddf-40c801a8cf48-kube-api-access-nzx6x" (OuterVolumeSpecName: "kube-api-access-nzx6x") pod "732c4db7-cf20-4516-8ddf-40c801a8cf48" (UID: "732c4db7-cf20-4516-8ddf-40c801a8cf48"). InnerVolumeSpecName "kube-api-access-nzx6x". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:51:56 crc kubenswrapper[4698]: I0127 14:51:56.857501 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4e0c2380-35df-4719-8881-546b69c6225a-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "4e0c2380-35df-4719-8881-546b69c6225a" (UID: "4e0c2380-35df-4719-8881-546b69c6225a"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:51:56 crc kubenswrapper[4698]: I0127 14:51:56.867807 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-76b57fc957-f9qxf" event={"ID":"732c4db7-cf20-4516-8ddf-40c801a8cf48","Type":"ContainerDied","Data":"47a95f13c98199b6061c8d601ea1d7d4451e12d384aed676f385cec63d01e259"} Jan 27 14:51:56 crc kubenswrapper[4698]: I0127 14:51:56.867833 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-76b57fc957-f9qxf" Jan 27 14:51:56 crc kubenswrapper[4698]: I0127 14:51:56.869243 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-56b6f76549-v2fjv" event={"ID":"d531ab8d-5ac9-4a51-8044-c68a217e1843","Type":"ContainerDied","Data":"dc57cdab5c42177ac31a2a0cf472e20708241594805be2beaeed904c484d1a44"} Jan 27 14:51:56 crc kubenswrapper[4698]: I0127 14:51:56.869319 4698 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-56b6f76549-v2fjv" Jan 27 14:51:56 crc kubenswrapper[4698]: I0127 14:51:56.871360 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-6z2gn" event={"ID":"b202d484-189a-4722-93b1-f72348e74aa4","Type":"ContainerDied","Data":"b77d301a13f41fbe10f129be11b4da596e99b9ae56edd1ecff780f46137fdb71"} Jan 27 14:51:56 crc kubenswrapper[4698]: I0127 14:51:56.871398 4698 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b77d301a13f41fbe10f129be11b4da596e99b9ae56edd1ecff780f46137fdb71" Jan 27 14:51:56 crc kubenswrapper[4698]: I0127 14:51:56.871456 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-6z2gn" Jan 27 14:51:56 crc kubenswrapper[4698]: I0127 14:51:56.873345 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-75cdd6b9b5-7lj2z" event={"ID":"4e0c2380-35df-4719-8881-546b69c6225a","Type":"ContainerDied","Data":"efbd7f9e56ed7135364acc98cd66cc050d95c697bf4ea877b79cec1304b08c10"} Jan 27 14:51:56 crc kubenswrapper[4698]: I0127 14:51:56.873404 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-75cdd6b9b5-7lj2z" Jan 27 14:51:56 crc kubenswrapper[4698]: E0127 14:51:56.876273 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.111:5001/podified-master-centos10/openstack-barbican-api:watcher_latest\\\"\"" pod="openstack/barbican-db-sync-9jhwb" podUID="51ba2ef6-17ab-4974-a2c6-7f995343e24b" Jan 27 14:51:56 crc kubenswrapper[4698]: I0127 14:51:56.910168 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ee0893f2-f99d-4923-a85e-c0d764abff34-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "ee0893f2-f99d-4923-a85e-c0d764abff34" (UID: "ee0893f2-f99d-4923-a85e-c0d764abff34"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:51:56 crc kubenswrapper[4698]: I0127 14:51:56.914521 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ee0893f2-f99d-4923-a85e-c0d764abff34-config" (OuterVolumeSpecName: "config") pod "ee0893f2-f99d-4923-a85e-c0d764abff34" (UID: "ee0893f2-f99d-4923-a85e-c0d764abff34"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:51:56 crc kubenswrapper[4698]: I0127 14:51:56.916077 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ee0893f2-f99d-4923-a85e-c0d764abff34-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "ee0893f2-f99d-4923-a85e-c0d764abff34" (UID: "ee0893f2-f99d-4923-a85e-c0d764abff34"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:51:56 crc kubenswrapper[4698]: I0127 14:51:56.916499 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ee0893f2-f99d-4923-a85e-c0d764abff34-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "ee0893f2-f99d-4923-a85e-c0d764abff34" (UID: "ee0893f2-f99d-4923-a85e-c0d764abff34"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:51:56 crc kubenswrapper[4698]: I0127 14:51:56.922621 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ee0893f2-f99d-4923-a85e-c0d764abff34-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "ee0893f2-f99d-4923-a85e-c0d764abff34" (UID: "ee0893f2-f99d-4923-a85e-c0d764abff34"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:51:56 crc kubenswrapper[4698]: I0127 14:51:56.942965 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b202d484-189a-4722-93b1-f72348e74aa4-combined-ca-bundle\") pod \"b202d484-189a-4722-93b1-f72348e74aa4\" (UID: \"b202d484-189a-4722-93b1-f72348e74aa4\") " Jan 27 14:51:56 crc kubenswrapper[4698]: I0127 14:51:56.943143 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b202d484-189a-4722-93b1-f72348e74aa4-config-data\") pod \"b202d484-189a-4722-93b1-f72348e74aa4\" (UID: \"b202d484-189a-4722-93b1-f72348e74aa4\") " Jan 27 14:51:56 crc kubenswrapper[4698]: I0127 14:51:56.943234 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/b202d484-189a-4722-93b1-f72348e74aa4-db-sync-config-data\") pod \"b202d484-189a-4722-93b1-f72348e74aa4\" (UID: \"b202d484-189a-4722-93b1-f72348e74aa4\") " Jan 27 14:51:56 crc kubenswrapper[4698]: I0127 14:51:56.943318 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hxfgp\" (UniqueName: \"kubernetes.io/projected/b202d484-189a-4722-93b1-f72348e74aa4-kube-api-access-hxfgp\") pod \"b202d484-189a-4722-93b1-f72348e74aa4\" (UID: \"b202d484-189a-4722-93b1-f72348e74aa4\") " Jan 27 14:51:56 crc kubenswrapper[4698]: I0127 14:51:56.943858 4698 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4e0c2380-35df-4719-8881-546b69c6225a-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 14:51:56 crc kubenswrapper[4698]: I0127 14:51:56.943877 4698 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/d531ab8d-5ac9-4a51-8044-c68a217e1843-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Jan 27 14:51:56 crc kubenswrapper[4698]: I0127 14:51:56.943890 4698 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/732c4db7-cf20-4516-8ddf-40c801a8cf48-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 14:51:56 crc kubenswrapper[4698]: I0127 14:51:56.943900 4698 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4e0c2380-35df-4719-8881-546b69c6225a-logs\") on node \"crc\" DevicePath \"\"" Jan 27 14:51:56 crc kubenswrapper[4698]: I0127 14:51:56.943912 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-spjkp\" (UniqueName: \"kubernetes.io/projected/d531ab8d-5ac9-4a51-8044-c68a217e1843-kube-api-access-spjkp\") on node \"crc\" DevicePath \"\"" Jan 27 14:51:56 crc kubenswrapper[4698]: I0127 14:51:56.943922 4698 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/732c4db7-cf20-4516-8ddf-40c801a8cf48-logs\") on node \"crc\" DevicePath \"\"" Jan 27 14:51:56 crc kubenswrapper[4698]: I0127 14:51:56.943932 4698 
reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d531ab8d-5ac9-4a51-8044-c68a217e1843-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 14:51:56 crc kubenswrapper[4698]: I0127 14:51:56.943942 4698 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ee0893f2-f99d-4923-a85e-c0d764abff34-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 27 14:51:56 crc kubenswrapper[4698]: I0127 14:51:56.943974 4698 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/732c4db7-cf20-4516-8ddf-40c801a8cf48-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Jan 27 14:51:56 crc kubenswrapper[4698]: I0127 14:51:56.943984 4698 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ee0893f2-f99d-4923-a85e-c0d764abff34-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 27 14:51:56 crc kubenswrapper[4698]: I0127 14:51:56.943994 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vdvlk\" (UniqueName: \"kubernetes.io/projected/4e0c2380-35df-4719-8881-546b69c6225a-kube-api-access-vdvlk\") on node \"crc\" DevicePath \"\"" Jan 27 14:51:56 crc kubenswrapper[4698]: I0127 14:51:56.944004 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzx6x\" (UniqueName: \"kubernetes.io/projected/732c4db7-cf20-4516-8ddf-40c801a8cf48-kube-api-access-nzx6x\") on node \"crc\" DevicePath \"\"" Jan 27 14:51:56 crc kubenswrapper[4698]: I0127 14:51:56.944014 4698 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d531ab8d-5ac9-4a51-8044-c68a217e1843-logs\") on node \"crc\" DevicePath \"\"" Jan 27 14:51:56 crc kubenswrapper[4698]: I0127 14:51:56.944023 4698 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/4e0c2380-35df-4719-8881-546b69c6225a-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 14:51:56 crc kubenswrapper[4698]: I0127 14:51:56.944034 4698 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/4e0c2380-35df-4719-8881-546b69c6225a-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Jan 27 14:51:56 crc kubenswrapper[4698]: I0127 14:51:56.944046 4698 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/732c4db7-cf20-4516-8ddf-40c801a8cf48-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 14:51:56 crc kubenswrapper[4698]: I0127 14:51:56.944055 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hj5jq\" (UniqueName: \"kubernetes.io/projected/ee0893f2-f99d-4923-a85e-c0d764abff34-kube-api-access-hj5jq\") on node \"crc\" DevicePath \"\"" Jan 27 14:51:56 crc kubenswrapper[4698]: I0127 14:51:56.944065 4698 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ee0893f2-f99d-4923-a85e-c0d764abff34-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 27 14:51:56 crc kubenswrapper[4698]: I0127 14:51:56.944075 4698 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ee0893f2-f99d-4923-a85e-c0d764abff34-config\") on node \"crc\" DevicePath \"\"" Jan 27 14:51:56 crc kubenswrapper[4698]: I0127 14:51:56.944086 4698 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" 
(UniqueName: \"kubernetes.io/configmap/ee0893f2-f99d-4923-a85e-c0d764abff34-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 27 14:51:56 crc kubenswrapper[4698]: I0127 14:51:56.949705 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b202d484-189a-4722-93b1-f72348e74aa4-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "b202d484-189a-4722-93b1-f72348e74aa4" (UID: "b202d484-189a-4722-93b1-f72348e74aa4"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:51:56 crc kubenswrapper[4698]: I0127 14:51:56.949799 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b202d484-189a-4722-93b1-f72348e74aa4-kube-api-access-hxfgp" (OuterVolumeSpecName: "kube-api-access-hxfgp") pod "b202d484-189a-4722-93b1-f72348e74aa4" (UID: "b202d484-189a-4722-93b1-f72348e74aa4"). InnerVolumeSpecName "kube-api-access-hxfgp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:51:56 crc kubenswrapper[4698]: I0127 14:51:56.977631 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b202d484-189a-4722-93b1-f72348e74aa4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b202d484-189a-4722-93b1-f72348e74aa4" (UID: "b202d484-189a-4722-93b1-f72348e74aa4"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:51:57 crc kubenswrapper[4698]: I0127 14:51:57.047969 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b202d484-189a-4722-93b1-f72348e74aa4-config-data" (OuterVolumeSpecName: "config-data") pod "b202d484-189a-4722-93b1-f72348e74aa4" (UID: "b202d484-189a-4722-93b1-f72348e74aa4"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
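
Each UnmountVolume.TearDown entry above names the same volume twice: OuterVolumeSpecName is the volume name as it appears in the pod spec ("config-data", "combined-ca-bundle", ...), while the UniqueName/InnerVolumeSpecName embeds the volume plugin and the pod UID. A minimal client-go sketch of the pod-spec side of that mapping (illustrative only, assuming a reachable cluster and a default kubeconfig; the namespace and pod name are taken from the log):

```go
// List the volumes of one of the pods whose mounts appear in this log.
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	pod, err := client.CoreV1().Pods("openstack").Get(context.TODO(), "keystone-bootstrap-pr5rl", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	// v.Name below is what the kubelet logs as OuterVolumeSpecName; the
	// UniqueName it logs is roughly <plugin>/<pod UID>-<v.Name>.
	for _, v := range pod.Spec.Volumes {
		fmt.Printf("volume %q on pod UID %s\n", v.Name, pod.UID)
	}
}
```
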
Jan 27 14:51:57 crc kubenswrapper[4698]: I0127 14:51:57.048123 4698 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/b202d484-189a-4722-93b1-f72348e74aa4-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 14:51:57 crc kubenswrapper[4698]: I0127 14:51:57.048817 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hxfgp\" (UniqueName: \"kubernetes.io/projected/b202d484-189a-4722-93b1-f72348e74aa4-kube-api-access-hxfgp\") on node \"crc\" DevicePath \"\"" Jan 27 14:51:57 crc kubenswrapper[4698]: I0127 14:51:57.048837 4698 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b202d484-189a-4722-93b1-f72348e74aa4-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 14:51:57 crc kubenswrapper[4698]: I0127 14:51:57.066889 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-76b57fc957-f9qxf"] Jan 27 14:51:57 crc kubenswrapper[4698]: I0127 14:51:57.109403 4698 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-76b57fc957-f9qxf"] Jan 27 14:51:57 crc kubenswrapper[4698]: I0127 14:51:57.140712 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-56b6f76549-v2fjv"] Jan 27 14:51:57 crc kubenswrapper[4698]: I0127 14:51:57.149956 4698 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b202d484-189a-4722-93b1-f72348e74aa4-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 14:51:57 crc kubenswrapper[4698]: I0127 14:51:57.150740 4698 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-56b6f76549-v2fjv"] Jan 27 14:51:57 crc kubenswrapper[4698]: I0127 14:51:57.170807 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-75cdd6b9b5-7lj2z"] Jan 27 14:51:57 crc kubenswrapper[4698]: I0127 14:51:57.178576 4698 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-75cdd6b9b5-7lj2z"] Jan 27 14:51:57 crc kubenswrapper[4698]: I0127 14:51:57.187746 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-544c49b98f-4lvts"] Jan 27 14:51:57 crc kubenswrapper[4698]: I0127 14:51:57.195848 4698 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-544c49b98f-4lvts"] Jan 27 14:51:57 crc kubenswrapper[4698]: I0127 14:51:57.452077 4698 patch_prober.go:28] interesting pod/machine-config-daemon-ndrd6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 14:51:57 crc kubenswrapper[4698]: I0127 14:51:57.452157 4698 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ndrd6" podUID="3e403fc5-7005-474c-8c75-b7906b481677" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 14:51:58 crc kubenswrapper[4698]: I0127 14:51:58.211361 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-79c8598659-pm5vk"] Jan 27 14:51:58 crc kubenswrapper[4698]: E0127 14:51:58.212164 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ee0893f2-f99d-4923-a85e-c0d764abff34" containerName="dnsmasq-dns"
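
The probe failures in this stretch of the log are ordinary network checks the kubelet makes from the node: the watcher-api readiness probe is an HTTP GET that timed out waiting for response headers, the dnsmasq probe is a dial that timed out, and the machine-config-daemon liveness probe was refused outright. A rough sketch of the HTTP variant (illustrative only, not kubelet source; the URLs are taken from the log entries above):

```go
// Minimal stand-in for an HTTP readiness/liveness probe: a GET with a hard
// timeout, where a 2xx/3xx status counts as success (the kubelet's rule).
package main

import (
	"fmt"
	"net/http"
	"time"
)

func probe(url string, timeout time.Duration) error {
	client := &http.Client{Timeout: timeout}
	resp, err := client.Get(url)
	if err != nil {
		// A slow endpoint yields "context deadline exceeded (Client.Timeout
		// exceeded while awaiting headers)"; nothing listening yields
		// "connect: connection refused" - both shapes appear in the log.
		return err
	}
	defer resp.Body.Close()
	if resp.StatusCode < 200 || resp.StatusCode >= 400 {
		return fmt.Errorf("unexpected status %d", resp.StatusCode)
	}
	return nil
}

func main() {
	// Endpoints taken from the log entries above.
	for _, url := range []string{
		"http://10.217.0.149:9322/",    // watcher-api readiness
		"http://127.0.0.1:8798/health", // machine-config-daemon liveness
	} {
		if err := probe(url, 1*time.Second); err != nil {
			fmt.Printf("Probe failed: %v\n", err)
		}
	}
}
```

A failed readiness probe only marks the pod NotReady; a failed liveness probe, once it crosses its failure threshold, gets the container restarted, which is why the machine-config-daemon entry is the more consequential of the two.
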
Jan 27 14:51:58 crc kubenswrapper[4698]: I0127 14:51:58.212187 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="ee0893f2-f99d-4923-a85e-c0d764abff34" containerName="dnsmasq-dns" Jan 27 14:51:58 crc kubenswrapper[4698]: E0127 14:51:58.212210 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ee0893f2-f99d-4923-a85e-c0d764abff34" containerName="init" Jan 27 14:51:58 crc kubenswrapper[4698]: I0127 14:51:58.212220 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="ee0893f2-f99d-4923-a85e-c0d764abff34" containerName="init" Jan 27 14:51:58 crc kubenswrapper[4698]: E0127 14:51:58.212230 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b202d484-189a-4722-93b1-f72348e74aa4" containerName="glance-db-sync" Jan 27 14:51:58 crc kubenswrapper[4698]: I0127 14:51:58.212239 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="b202d484-189a-4722-93b1-f72348e74aa4" containerName="glance-db-sync" Jan 27 14:51:58 crc kubenswrapper[4698]: I0127 14:51:58.212521 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="ee0893f2-f99d-4923-a85e-c0d764abff34" containerName="dnsmasq-dns" Jan 27 14:51:58 crc kubenswrapper[4698]: I0127 14:51:58.212545 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="b202d484-189a-4722-93b1-f72348e74aa4" containerName="glance-db-sync" Jan 27 14:51:58 crc kubenswrapper[4698]: I0127 14:51:58.217885 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-79c8598659-pm5vk" Jan 27 14:51:58 crc kubenswrapper[4698]: I0127 14:51:58.223330 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-79c8598659-pm5vk"] Jan 27 14:51:58 crc kubenswrapper[4698]: I0127 14:51:58.371975 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sbxl8\" (UniqueName: \"kubernetes.io/projected/9ee879ea-5497-4147-ab6d-5e352fda0d9f-kube-api-access-sbxl8\") pod \"dnsmasq-dns-79c8598659-pm5vk\" (UID: \"9ee879ea-5497-4147-ab6d-5e352fda0d9f\") " pod="openstack/dnsmasq-dns-79c8598659-pm5vk" Jan 27 14:51:58 crc kubenswrapper[4698]: I0127 14:51:58.372055 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9ee879ea-5497-4147-ab6d-5e352fda0d9f-dns-svc\") pod \"dnsmasq-dns-79c8598659-pm5vk\" (UID: \"9ee879ea-5497-4147-ab6d-5e352fda0d9f\") " pod="openstack/dnsmasq-dns-79c8598659-pm5vk" Jan 27 14:51:58 crc kubenswrapper[4698]: I0127 14:51:58.372121 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9ee879ea-5497-4147-ab6d-5e352fda0d9f-ovsdbserver-nb\") pod \"dnsmasq-dns-79c8598659-pm5vk\" (UID: \"9ee879ea-5497-4147-ab6d-5e352fda0d9f\") " pod="openstack/dnsmasq-dns-79c8598659-pm5vk" Jan 27 14:51:58 crc kubenswrapper[4698]: I0127 14:51:58.372182 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9ee879ea-5497-4147-ab6d-5e352fda0d9f-config\") pod \"dnsmasq-dns-79c8598659-pm5vk\" (UID: \"9ee879ea-5497-4147-ab6d-5e352fda0d9f\") " pod="openstack/dnsmasq-dns-79c8598659-pm5vk" Jan 27 14:51:58 crc kubenswrapper[4698]: I0127 14:51:58.372220 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: 
\"kubernetes.io/configmap/9ee879ea-5497-4147-ab6d-5e352fda0d9f-ovsdbserver-sb\") pod \"dnsmasq-dns-79c8598659-pm5vk\" (UID: \"9ee879ea-5497-4147-ab6d-5e352fda0d9f\") " pod="openstack/dnsmasq-dns-79c8598659-pm5vk" Jan 27 14:51:58 crc kubenswrapper[4698]: I0127 14:51:58.372260 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9ee879ea-5497-4147-ab6d-5e352fda0d9f-dns-swift-storage-0\") pod \"dnsmasq-dns-79c8598659-pm5vk\" (UID: \"9ee879ea-5497-4147-ab6d-5e352fda0d9f\") " pod="openstack/dnsmasq-dns-79c8598659-pm5vk" Jan 27 14:51:58 crc kubenswrapper[4698]: I0127 14:51:58.449457 4698 scope.go:117] "RemoveContainer" containerID="ae0c1629d9f9b853b835305af0cd826781af79ee6d7c6e3c2ebdacecd2238a97" Jan 27 14:51:58 crc kubenswrapper[4698]: I0127 14:51:58.474423 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9ee879ea-5497-4147-ab6d-5e352fda0d9f-ovsdbserver-nb\") pod \"dnsmasq-dns-79c8598659-pm5vk\" (UID: \"9ee879ea-5497-4147-ab6d-5e352fda0d9f\") " pod="openstack/dnsmasq-dns-79c8598659-pm5vk" Jan 27 14:51:58 crc kubenswrapper[4698]: I0127 14:51:58.474539 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9ee879ea-5497-4147-ab6d-5e352fda0d9f-config\") pod \"dnsmasq-dns-79c8598659-pm5vk\" (UID: \"9ee879ea-5497-4147-ab6d-5e352fda0d9f\") " pod="openstack/dnsmasq-dns-79c8598659-pm5vk" Jan 27 14:51:58 crc kubenswrapper[4698]: I0127 14:51:58.474597 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9ee879ea-5497-4147-ab6d-5e352fda0d9f-ovsdbserver-sb\") pod \"dnsmasq-dns-79c8598659-pm5vk\" (UID: \"9ee879ea-5497-4147-ab6d-5e352fda0d9f\") " pod="openstack/dnsmasq-dns-79c8598659-pm5vk" Jan 27 14:51:58 crc kubenswrapper[4698]: I0127 14:51:58.474650 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9ee879ea-5497-4147-ab6d-5e352fda0d9f-dns-swift-storage-0\") pod \"dnsmasq-dns-79c8598659-pm5vk\" (UID: \"9ee879ea-5497-4147-ab6d-5e352fda0d9f\") " pod="openstack/dnsmasq-dns-79c8598659-pm5vk" Jan 27 14:51:58 crc kubenswrapper[4698]: I0127 14:51:58.474704 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sbxl8\" (UniqueName: \"kubernetes.io/projected/9ee879ea-5497-4147-ab6d-5e352fda0d9f-kube-api-access-sbxl8\") pod \"dnsmasq-dns-79c8598659-pm5vk\" (UID: \"9ee879ea-5497-4147-ab6d-5e352fda0d9f\") " pod="openstack/dnsmasq-dns-79c8598659-pm5vk" Jan 27 14:51:58 crc kubenswrapper[4698]: I0127 14:51:58.474728 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9ee879ea-5497-4147-ab6d-5e352fda0d9f-dns-svc\") pod \"dnsmasq-dns-79c8598659-pm5vk\" (UID: \"9ee879ea-5497-4147-ab6d-5e352fda0d9f\") " pod="openstack/dnsmasq-dns-79c8598659-pm5vk" Jan 27 14:51:58 crc kubenswrapper[4698]: I0127 14:51:58.475796 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9ee879ea-5497-4147-ab6d-5e352fda0d9f-dns-svc\") pod \"dnsmasq-dns-79c8598659-pm5vk\" (UID: \"9ee879ea-5497-4147-ab6d-5e352fda0d9f\") " pod="openstack/dnsmasq-dns-79c8598659-pm5vk" Jan 27 14:51:58 crc kubenswrapper[4698]: I0127 
14:51:58.476413 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9ee879ea-5497-4147-ab6d-5e352fda0d9f-ovsdbserver-sb\") pod \"dnsmasq-dns-79c8598659-pm5vk\" (UID: \"9ee879ea-5497-4147-ab6d-5e352fda0d9f\") " pod="openstack/dnsmasq-dns-79c8598659-pm5vk" Jan 27 14:51:58 crc kubenswrapper[4698]: I0127 14:51:58.476528 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9ee879ea-5497-4147-ab6d-5e352fda0d9f-dns-swift-storage-0\") pod \"dnsmasq-dns-79c8598659-pm5vk\" (UID: \"9ee879ea-5497-4147-ab6d-5e352fda0d9f\") " pod="openstack/dnsmasq-dns-79c8598659-pm5vk" Jan 27 14:51:58 crc kubenswrapper[4698]: I0127 14:51:58.476943 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9ee879ea-5497-4147-ab6d-5e352fda0d9f-ovsdbserver-nb\") pod \"dnsmasq-dns-79c8598659-pm5vk\" (UID: \"9ee879ea-5497-4147-ab6d-5e352fda0d9f\") " pod="openstack/dnsmasq-dns-79c8598659-pm5vk" Jan 27 14:51:58 crc kubenswrapper[4698]: I0127 14:51:58.477597 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9ee879ea-5497-4147-ab6d-5e352fda0d9f-config\") pod \"dnsmasq-dns-79c8598659-pm5vk\" (UID: \"9ee879ea-5497-4147-ab6d-5e352fda0d9f\") " pod="openstack/dnsmasq-dns-79c8598659-pm5vk" Jan 27 14:51:58 crc kubenswrapper[4698]: I0127 14:51:58.503333 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sbxl8\" (UniqueName: \"kubernetes.io/projected/9ee879ea-5497-4147-ab6d-5e352fda0d9f-kube-api-access-sbxl8\") pod \"dnsmasq-dns-79c8598659-pm5vk\" (UID: \"9ee879ea-5497-4147-ab6d-5e352fda0d9f\") " pod="openstack/dnsmasq-dns-79c8598659-pm5vk" Jan 27 14:51:58 crc kubenswrapper[4698]: I0127 14:51:58.538867 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-79c8598659-pm5vk" Jan 27 14:51:58 crc kubenswrapper[4698]: E0127 14:51:58.710173 4698 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.111:5001/podified-master-centos10/openstack-cinder-api:watcher_latest" Jan 27 14:51:58 crc kubenswrapper[4698]: E0127 14:51:58.710291 4698 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.111:5001/podified-master-centos10/openstack-cinder-api:watcher_latest" Jan 27 14:51:58 crc kubenswrapper[4698]: E0127 14:51:58.710525 4698 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cinder-db-sync,Image:38.102.83.111:5001/podified-master-centos10/openstack-cinder-api:watcher_latest,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_set_configs && /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-machine-id,ReadOnly:true,MountPath:/etc/machine-id,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/config-data/merged,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/cinder/cinder.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jcpmd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cinder-db-sync-mcnmn_openstack(74946770-13e5-4777-a645-bb6bee73c277): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 27 14:51:58 crc kubenswrapper[4698]: E0127 14:51:58.712482 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ErrImagePull: \"rpc error: code = Canceled 
desc = copying config: context canceled\"" pod="openstack/cinder-db-sync-mcnmn" podUID="74946770-13e5-4777-a645-bb6bee73c277" Jan 27 14:51:58 crc kubenswrapper[4698]: E0127 14:51:58.896061 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.111:5001/podified-master-centos10/openstack-cinder-api:watcher_latest\\\"\"" pod="openstack/cinder-db-sync-mcnmn" podUID="74946770-13e5-4777-a645-bb6bee73c277" Jan 27 14:51:59 crc kubenswrapper[4698]: I0127 14:51:59.017665 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4e0c2380-35df-4719-8881-546b69c6225a" path="/var/lib/kubelet/pods/4e0c2380-35df-4719-8881-546b69c6225a/volumes" Jan 27 14:51:59 crc kubenswrapper[4698]: I0127 14:51:59.030961 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="732c4db7-cf20-4516-8ddf-40c801a8cf48" path="/var/lib/kubelet/pods/732c4db7-cf20-4516-8ddf-40c801a8cf48/volumes" Jan 27 14:51:59 crc kubenswrapper[4698]: I0127 14:51:59.031435 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d531ab8d-5ac9-4a51-8044-c68a217e1843" path="/var/lib/kubelet/pods/d531ab8d-5ac9-4a51-8044-c68a217e1843/volumes" Jan 27 14:51:59 crc kubenswrapper[4698]: I0127 14:51:59.031800 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ee0893f2-f99d-4923-a85e-c0d764abff34" path="/var/lib/kubelet/pods/ee0893f2-f99d-4923-a85e-c0d764abff34/volumes" Jan 27 14:51:59 crc kubenswrapper[4698]: I0127 14:51:59.297160 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Jan 27 14:51:59 crc kubenswrapper[4698]: I0127 14:51:59.299035 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0"
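
The barbican and cinder db-sync containers both fail their pulls from 38.102.83.111:5001 with ErrImagePull and then settle into ImagePullBackOff: the kubelet retries the pull on an exponentially growing delay (by default starting around 10s and doubling up to a 5-minute cap, per the documented image back-off behavior). A minimal sketch of that retry shape; pullImage here is a stand-in, not the actual CRI call:

```go
// ErrImagePull -> ImagePullBackOff, as a plain retry loop with exponential
// back-off. The delays mirror the kubelet's documented defaults (10s doubling,
// capped at 5m); the error string mirrors the one in the log.
package main

import (
	"errors"
	"fmt"
	"time"
)

func pullImage(image string) error {
	// Stand-in for the CRI image pull that fails above.
	return errors.New("rpc error: code = Canceled desc = copying config: context canceled")
}

func main() {
	const image = "38.102.83.111:5001/podified-master-centos10/openstack-cinder-api:watcher_latest"
	const maxBackoff = 5 * time.Minute
	backoff := 10 * time.Second
	for attempt := 1; attempt <= 5; attempt++ {
		err := pullImage(image)
		if err == nil {
			return
		}
		// The failure itself surfaces as ErrImagePull; while the delay runs
		// down, the pod's status is reported as ImagePullBackOff.
		fmt.Printf("attempt %d: %v; backing off %s\n", attempt, err, backoff)
		time.Sleep(backoff)
		backoff *= 2
		if backoff > maxBackoff {
			backoff = maxBackoff
		}
	}
}
```
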
Jan 27 14:51:59 crc kubenswrapper[4698]: I0127 14:51:59.301436 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-qwq5p" Jan 27 14:51:59 crc kubenswrapper[4698]: I0127 14:51:59.303232 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Jan 27 14:51:59 crc kubenswrapper[4698]: I0127 14:51:59.303706 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Jan 27 14:51:59 crc kubenswrapper[4698]: I0127 14:51:59.312238 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 27 14:51:59 crc kubenswrapper[4698]: I0127 14:51:59.390805 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g27jp\" (UniqueName: \"kubernetes.io/projected/c82e845e-ad64-4e92-a6e6-061b14781155-kube-api-access-g27jp\") pod \"glance-default-external-api-0\" (UID: \"c82e845e-ad64-4e92-a6e6-061b14781155\") " pod="openstack/glance-default-external-api-0" Jan 27 14:51:59 crc kubenswrapper[4698]: I0127 14:51:59.390883 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c82e845e-ad64-4e92-a6e6-061b14781155-config-data\") pod \"glance-default-external-api-0\" (UID: \"c82e845e-ad64-4e92-a6e6-061b14781155\") " pod="openstack/glance-default-external-api-0" Jan 27 14:51:59 crc kubenswrapper[4698]: I0127 14:51:59.390996 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-external-api-0\" (UID: \"c82e845e-ad64-4e92-a6e6-061b14781155\") " pod="openstack/glance-default-external-api-0" Jan 27 14:51:59 crc kubenswrapper[4698]: I0127 14:51:59.391027 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c82e845e-ad64-4e92-a6e6-061b14781155-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"c82e845e-ad64-4e92-a6e6-061b14781155\") " pod="openstack/glance-default-external-api-0" Jan 27 14:51:59 crc kubenswrapper[4698]: I0127 14:51:59.391086 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/c82e845e-ad64-4e92-a6e6-061b14781155-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"c82e845e-ad64-4e92-a6e6-061b14781155\") " pod="openstack/glance-default-external-api-0" Jan 27 14:51:59 crc kubenswrapper[4698]: I0127 14:51:59.391120 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c82e845e-ad64-4e92-a6e6-061b14781155-logs\") pod \"glance-default-external-api-0\" (UID: \"c82e845e-ad64-4e92-a6e6-061b14781155\") " pod="openstack/glance-default-external-api-0" Jan 27 14:51:59 crc kubenswrapper[4698]: I0127 14:51:59.391158 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c82e845e-ad64-4e92-a6e6-061b14781155-scripts\") pod \"glance-default-external-api-0\" (UID: \"c82e845e-ad64-4e92-a6e6-061b14781155\") " 
pod="openstack/glance-default-external-api-0" Jan 27 14:51:59 crc kubenswrapper[4698]: I0127 14:51:59.432538 4698 scope.go:117] "RemoveContainer" containerID="843cbe300a2ca1e4f60afb2bcfdf54ac90525e46c383a5face8ae0f7e054c2fc" Jan 27 14:51:59 crc kubenswrapper[4698]: I0127 14:51:59.485288 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 27 14:51:59 crc kubenswrapper[4698]: I0127 14:51:59.490535 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 27 14:51:59 crc kubenswrapper[4698]: I0127 14:51:59.493713 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Jan 27 14:51:59 crc kubenswrapper[4698]: I0127 14:51:59.495241 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g27jp\" (UniqueName: \"kubernetes.io/projected/c82e845e-ad64-4e92-a6e6-061b14781155-kube-api-access-g27jp\") pod \"glance-default-external-api-0\" (UID: \"c82e845e-ad64-4e92-a6e6-061b14781155\") " pod="openstack/glance-default-external-api-0" Jan 27 14:51:59 crc kubenswrapper[4698]: I0127 14:51:59.495340 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c82e845e-ad64-4e92-a6e6-061b14781155-config-data\") pod \"glance-default-external-api-0\" (UID: \"c82e845e-ad64-4e92-a6e6-061b14781155\") " pod="openstack/glance-default-external-api-0" Jan 27 14:51:59 crc kubenswrapper[4698]: I0127 14:51:59.495435 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-external-api-0\" (UID: \"c82e845e-ad64-4e92-a6e6-061b14781155\") " pod="openstack/glance-default-external-api-0" Jan 27 14:51:59 crc kubenswrapper[4698]: I0127 14:51:59.495493 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c82e845e-ad64-4e92-a6e6-061b14781155-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"c82e845e-ad64-4e92-a6e6-061b14781155\") " pod="openstack/glance-default-external-api-0" Jan 27 14:51:59 crc kubenswrapper[4698]: I0127 14:51:59.495596 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/c82e845e-ad64-4e92-a6e6-061b14781155-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"c82e845e-ad64-4e92-a6e6-061b14781155\") " pod="openstack/glance-default-external-api-0" Jan 27 14:51:59 crc kubenswrapper[4698]: I0127 14:51:59.495682 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c82e845e-ad64-4e92-a6e6-061b14781155-logs\") pod \"glance-default-external-api-0\" (UID: \"c82e845e-ad64-4e92-a6e6-061b14781155\") " pod="openstack/glance-default-external-api-0" Jan 27 14:51:59 crc kubenswrapper[4698]: I0127 14:51:59.495762 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c82e845e-ad64-4e92-a6e6-061b14781155-scripts\") pod \"glance-default-external-api-0\" (UID: \"c82e845e-ad64-4e92-a6e6-061b14781155\") " pod="openstack/glance-default-external-api-0" Jan 27 14:51:59 crc kubenswrapper[4698]: I0127 14:51:59.496126 4698 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-external-api-0\" (UID: \"c82e845e-ad64-4e92-a6e6-061b14781155\") device mount path \"/mnt/openstack/pv03\"" pod="openstack/glance-default-external-api-0"
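
The MountVolume.MountDevice entry above (device mount path /mnt/openstack/pv03) shows the glance PVC bound to a kubernetes.io/local-volume PersistentVolume that resolves to a directory on the node itself. A sketch of what such a PV looks like in Go types (the shape is inferred from the log, not read from the cluster; the capacity and storage class name are assumptions):

```go
// Approximate shape of the "local-storage03-crc" PV behind the MountDevice
// entry above: a local path on the host, pinned to node "crc" by affinity.
package main

import (
	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func localPV() *corev1.PersistentVolume {
	return &corev1.PersistentVolume{
		ObjectMeta: metav1.ObjectMeta{Name: "local-storage03-crc"},
		Spec: corev1.PersistentVolumeSpec{
			StorageClassName: "local-storage", // assumed name
			Capacity: corev1.ResourceList{
				corev1.ResourceStorage: resource.MustParse("10Gi"), // assumed size
			},
			AccessModes: []corev1.PersistentVolumeAccessMode{corev1.ReadWriteOnce},
			PersistentVolumeSource: corev1.PersistentVolumeSource{
				// MountDevice resolved this to the device mount path in the log.
				Local: &corev1.LocalVolumeSource{Path: "/mnt/openstack/pv03"},
			},
			NodeAffinity: &corev1.VolumeNodeAffinity{
				Required: &corev1.NodeSelector{
					NodeSelectorTerms: []corev1.NodeSelectorTerm{{
						MatchExpressions: []corev1.NodeSelectorRequirement{{
							Key:      "kubernetes.io/hostname",
							Operator: corev1.NodeSelectorOpIn,
							Values:   []string{"crc"},
						}},
					}},
				},
			},
		},
	}
}

func main() { _ = localPV() }
```
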
Jan 27 14:51:59 crc kubenswrapper[4698]: I0127 14:51:59.497892 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/c82e845e-ad64-4e92-a6e6-061b14781155-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"c82e845e-ad64-4e92-a6e6-061b14781155\") " pod="openstack/glance-default-external-api-0" Jan 27 14:51:59 crc kubenswrapper[4698]: I0127 14:51:59.499064 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c82e845e-ad64-4e92-a6e6-061b14781155-logs\") pod \"glance-default-external-api-0\" (UID: \"c82e845e-ad64-4e92-a6e6-061b14781155\") " pod="openstack/glance-default-external-api-0" Jan 27 14:51:59 crc kubenswrapper[4698]: I0127 14:51:59.531850 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 27 14:51:59 crc kubenswrapper[4698]: I0127 14:51:59.541082 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c82e845e-ad64-4e92-a6e6-061b14781155-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"c82e845e-ad64-4e92-a6e6-061b14781155\") " pod="openstack/glance-default-external-api-0" Jan 27 14:51:59 crc kubenswrapper[4698]: I0127 14:51:59.544409 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g27jp\" (UniqueName: \"kubernetes.io/projected/c82e845e-ad64-4e92-a6e6-061b14781155-kube-api-access-g27jp\") pod \"glance-default-external-api-0\" (UID: \"c82e845e-ad64-4e92-a6e6-061b14781155\") " pod="openstack/glance-default-external-api-0" Jan 27 14:51:59 crc kubenswrapper[4698]: I0127 14:51:59.545291 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c82e845e-ad64-4e92-a6e6-061b14781155-scripts\") pod \"glance-default-external-api-0\" (UID: \"c82e845e-ad64-4e92-a6e6-061b14781155\") " pod="openstack/glance-default-external-api-0" Jan 27 14:51:59 crc kubenswrapper[4698]: I0127 14:51:59.545977 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c82e845e-ad64-4e92-a6e6-061b14781155-config-data\") pod \"glance-default-external-api-0\" (UID: \"c82e845e-ad64-4e92-a6e6-061b14781155\") " pod="openstack/glance-default-external-api-0" Jan 27 14:51:59 crc kubenswrapper[4698]: I0127 14:51:59.554160 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-external-api-0\" (UID: \"c82e845e-ad64-4e92-a6e6-061b14781155\") " pod="openstack/glance-default-external-api-0" Jan 27 14:51:59 crc kubenswrapper[4698]: I0127 14:51:59.596873 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-internal-api-0\" (UID: \"8faab9a7-c335-4f18-8ea1-311b5c567b31\") " pod="openstack/glance-default-internal-api-0" Jan 27 14:51:59 crc kubenswrapper[4698]: I0127 14:51:59.597168 4698 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8faab9a7-c335-4f18-8ea1-311b5c567b31-logs\") pod \"glance-default-internal-api-0\" (UID: \"8faab9a7-c335-4f18-8ea1-311b5c567b31\") " pod="openstack/glance-default-internal-api-0" Jan 27 14:51:59 crc kubenswrapper[4698]: I0127 14:51:59.597270 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8faab9a7-c335-4f18-8ea1-311b5c567b31-scripts\") pod \"glance-default-internal-api-0\" (UID: \"8faab9a7-c335-4f18-8ea1-311b5c567b31\") " pod="openstack/glance-default-internal-api-0" Jan 27 14:51:59 crc kubenswrapper[4698]: I0127 14:51:59.597375 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8faab9a7-c335-4f18-8ea1-311b5c567b31-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"8faab9a7-c335-4f18-8ea1-311b5c567b31\") " pod="openstack/glance-default-internal-api-0" Jan 27 14:51:59 crc kubenswrapper[4698]: I0127 14:51:59.597492 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/8faab9a7-c335-4f18-8ea1-311b5c567b31-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"8faab9a7-c335-4f18-8ea1-311b5c567b31\") " pod="openstack/glance-default-internal-api-0" Jan 27 14:51:59 crc kubenswrapper[4698]: I0127 14:51:59.597614 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8faab9a7-c335-4f18-8ea1-311b5c567b31-config-data\") pod \"glance-default-internal-api-0\" (UID: \"8faab9a7-c335-4f18-8ea1-311b5c567b31\") " pod="openstack/glance-default-internal-api-0" Jan 27 14:51:59 crc kubenswrapper[4698]: I0127 14:51:59.598091 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p2cpx\" (UniqueName: \"kubernetes.io/projected/8faab9a7-c335-4f18-8ea1-311b5c567b31-kube-api-access-p2cpx\") pod \"glance-default-internal-api-0\" (UID: \"8faab9a7-c335-4f18-8ea1-311b5c567b31\") " pod="openstack/glance-default-internal-api-0" Jan 27 14:51:59 crc kubenswrapper[4698]: I0127 14:51:59.621428 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 27 14:51:59 crc kubenswrapper[4698]: I0127 14:51:59.702080 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8faab9a7-c335-4f18-8ea1-311b5c567b31-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"8faab9a7-c335-4f18-8ea1-311b5c567b31\") " pod="openstack/glance-default-internal-api-0" Jan 27 14:51:59 crc kubenswrapper[4698]: I0127 14:51:59.702417 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/8faab9a7-c335-4f18-8ea1-311b5c567b31-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"8faab9a7-c335-4f18-8ea1-311b5c567b31\") " pod="openstack/glance-default-internal-api-0" Jan 27 14:51:59 crc kubenswrapper[4698]: I0127 14:51:59.702562 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8faab9a7-c335-4f18-8ea1-311b5c567b31-config-data\") pod \"glance-default-internal-api-0\" (UID: \"8faab9a7-c335-4f18-8ea1-311b5c567b31\") " pod="openstack/glance-default-internal-api-0" Jan 27 14:51:59 crc kubenswrapper[4698]: I0127 14:51:59.702627 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p2cpx\" (UniqueName: \"kubernetes.io/projected/8faab9a7-c335-4f18-8ea1-311b5c567b31-kube-api-access-p2cpx\") pod \"glance-default-internal-api-0\" (UID: \"8faab9a7-c335-4f18-8ea1-311b5c567b31\") " pod="openstack/glance-default-internal-api-0" Jan 27 14:51:59 crc kubenswrapper[4698]: I0127 14:51:59.702699 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-internal-api-0\" (UID: \"8faab9a7-c335-4f18-8ea1-311b5c567b31\") " pod="openstack/glance-default-internal-api-0" Jan 27 14:51:59 crc kubenswrapper[4698]: I0127 14:51:59.702746 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8faab9a7-c335-4f18-8ea1-311b5c567b31-logs\") pod \"glance-default-internal-api-0\" (UID: \"8faab9a7-c335-4f18-8ea1-311b5c567b31\") " pod="openstack/glance-default-internal-api-0" Jan 27 14:51:59 crc kubenswrapper[4698]: I0127 14:51:59.702774 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8faab9a7-c335-4f18-8ea1-311b5c567b31-scripts\") pod \"glance-default-internal-api-0\" (UID: \"8faab9a7-c335-4f18-8ea1-311b5c567b31\") " pod="openstack/glance-default-internal-api-0" Jan 27 14:51:59 crc kubenswrapper[4698]: I0127 14:51:59.706686 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/8faab9a7-c335-4f18-8ea1-311b5c567b31-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"8faab9a7-c335-4f18-8ea1-311b5c567b31\") " pod="openstack/glance-default-internal-api-0" Jan 27 14:51:59 crc kubenswrapper[4698]: I0127 14:51:59.706686 4698 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-internal-api-0\" (UID: \"8faab9a7-c335-4f18-8ea1-311b5c567b31\") device mount path \"/mnt/openstack/pv06\"" pod="openstack/glance-default-internal-api-0" Jan 27 14:51:59 crc 
kubenswrapper[4698]: I0127 14:51:59.706979 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8faab9a7-c335-4f18-8ea1-311b5c567b31-logs\") pod \"glance-default-internal-api-0\" (UID: \"8faab9a7-c335-4f18-8ea1-311b5c567b31\") " pod="openstack/glance-default-internal-api-0" Jan 27 14:51:59 crc kubenswrapper[4698]: I0127 14:51:59.710945 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8faab9a7-c335-4f18-8ea1-311b5c567b31-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"8faab9a7-c335-4f18-8ea1-311b5c567b31\") " pod="openstack/glance-default-internal-api-0" Jan 27 14:51:59 crc kubenswrapper[4698]: I0127 14:51:59.729454 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8faab9a7-c335-4f18-8ea1-311b5c567b31-config-data\") pod \"glance-default-internal-api-0\" (UID: \"8faab9a7-c335-4f18-8ea1-311b5c567b31\") " pod="openstack/glance-default-internal-api-0" Jan 27 14:51:59 crc kubenswrapper[4698]: I0127 14:51:59.731011 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8faab9a7-c335-4f18-8ea1-311b5c567b31-scripts\") pod \"glance-default-internal-api-0\" (UID: \"8faab9a7-c335-4f18-8ea1-311b5c567b31\") " pod="openstack/glance-default-internal-api-0" Jan 27 14:51:59 crc kubenswrapper[4698]: I0127 14:51:59.740757 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p2cpx\" (UniqueName: \"kubernetes.io/projected/8faab9a7-c335-4f18-8ea1-311b5c567b31-kube-api-access-p2cpx\") pod \"glance-default-internal-api-0\" (UID: \"8faab9a7-c335-4f18-8ea1-311b5c567b31\") " pod="openstack/glance-default-internal-api-0" Jan 27 14:51:59 crc kubenswrapper[4698]: I0127 14:51:59.788151 4698 scope.go:117] "RemoveContainer" containerID="fc17424f90b89f4fee6ff4e08da14ecfebdb310c748b905fb17be04a5b571a97" Jan 27 14:51:59 crc kubenswrapper[4698]: I0127 14:51:59.808000 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-internal-api-0\" (UID: \"8faab9a7-c335-4f18-8ea1-311b5c567b31\") " pod="openstack/glance-default-internal-api-0" Jan 27 14:51:59 crc kubenswrapper[4698]: I0127 14:51:59.878420 4698 util.go:30] "No sandbox for pod can be found. 
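The mount sequence above follows the kubelet volume manager's fixed order: VerifyControllerAttachedVolume, then MountVolume.MountDevice at the node level (the local volume surfaces at /mnt/openstack/pv03 and /mnt/openstack/pv06), then one MountVolume.SetUp per pod-scoped volume. A minimal Go sketch for measuring that window from a saved journal excerpt; the kubelet.log file name is an assumption (e.g. from journalctl -u kubelet), and the code only greps the messages visible above:

    package main

    import (
        "bufio"
        "fmt"
        "os"
        "regexp"
        "strings"
        "time"
    )

    func main() {
        pod := `pod="openstack/glance-default-external-api-0"`
        // klog timestamps look like "I0127 14:51:59.495241".
        ts := regexp.MustCompile(`[IWE]\d{4} (\d{2}:\d{2}:\d{2}\.\d{6})`)

        f, err := os.Open("kubelet.log") // hypothetical saved journal excerpt
        if err != nil {
            panic(err)
        }
        defer f.Close()

        var first, last time.Time
        sc := bufio.NewScanner(f)
        sc.Buffer(make([]byte, 1<<20), 1<<20) // journal lines can be long
        for sc.Scan() {
            line := sc.Text()
            if !strings.Contains(line, pod) {
                continue
            }
            m := ts.FindStringSubmatch(line)
            if m == nil {
                continue
            }
            t, err := time.Parse("15:04:05.000000", m[1])
            if err != nil {
                continue
            }
            if first.IsZero() && strings.Contains(line, "operationExecutor.MountVolume started") {
                first = t
            }
            if strings.Contains(line, "MountVolume.SetUp succeeded") {
                last = t
            }
        }
        fmt.Println("volume setup window:", last.Sub(first))
    }

For the external API pod above, that window runs from 14:51:59.495241 to 14:51:59.554160, roughly 59ms.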
Jan 27 14:52:00 crc kubenswrapper[4698]: I0127 14:52:00.027560 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-58b77c584b-9tl65"]
Jan 27 14:52:00 crc kubenswrapper[4698]: W0127 14:52:00.081893 4698 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2573c021_b642_4659_a97b_8c06bcf54afc.slice/crio-d6e247885db25606af135cf6591aeddb06a436393cb703f9a68720bb11018475 WatchSource:0}: Error finding container d6e247885db25606af135cf6591aeddb06a436393cb703f9a68720bb11018475: Status 404 returned error can't find the container with id d6e247885db25606af135cf6591aeddb06a436393cb703f9a68720bb11018475
Jan 27 14:52:00 crc kubenswrapper[4698]: I0127 14:52:00.107306 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-54778bbf88-5qkzn"]
Jan 27 14:52:00 crc kubenswrapper[4698]: I0127 14:52:00.341794 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-api-0"]
Jan 27 14:52:00 crc kubenswrapper[4698]: I0127 14:52:00.365472 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-pr5rl"]
Jan 27 14:52:00 crc kubenswrapper[4698]: I0127 14:52:00.375575 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-79c8598659-pm5vk"]
Jan 27 14:52:00 crc kubenswrapper[4698]: W0127 14:52:00.396380 4698 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1cff3663_449e_40a4_890d_7eaff54a5973.slice/crio-10d1d4a923216a6759651750d5dcf3136a986e5d09c2776038dbc2f4d4507dc8 WatchSource:0}: Error finding container 10d1d4a923216a6759651750d5dcf3136a986e5d09c2776038dbc2f4d4507dc8: Status 404 returned error can't find the container with id 10d1d4a923216a6759651750d5dcf3136a986e5d09c2776038dbc2f4d4507dc8
Jan 27 14:52:00 crc kubenswrapper[4698]: W0127 14:52:00.402277 4698 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9ee879ea_5497_4147_ab6d_5e352fda0d9f.slice/crio-49070fb6464e3dabb3a8657e5dce704170f52641d47045a617799f32548598e3 WatchSource:0}: Error finding container 49070fb6464e3dabb3a8657e5dce704170f52641d47045a617799f32548598e3: Status 404 returned error can't find the container with id 49070fb6464e3dabb3a8657e5dce704170f52641d47045a617799f32548598e3
Jan 27 14:52:01 crc kubenswrapper[4698]: I0127 14:52:00.585980 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"]
Jan 27 14:52:01 crc kubenswrapper[4698]: W0127 14:52:00.651947 4698 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc82e845e_ad64_4e92_a6e6_061b14781155.slice/crio-789327419b3afd15c2a93a5b4060b02d01fe7ba250c47aa5e3d7dcf08ff39a60 WatchSource:0}: Error finding container 789327419b3afd15c2a93a5b4060b02d01fe7ba250c47aa5e3d7dcf08ff39a60: Status 404 returned error can't find the container with id 789327419b3afd15c2a93a5b4060b02d01fe7ba250c47aa5e3d7dcf08ff39a60
Jan 27 14:52:01 crc kubenswrapper[4698]: I0127 14:52:00.786920 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"]
Jan 27 14:52:01 crc kubenswrapper[4698]: W0127 14:52:00.805912 4698 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8faab9a7_c335_4f18_8ea1_311b5c567b31.slice/crio-1ba28a3c5406a4205db3c9d43453530f2a1f2986398a515780d64639677e778d WatchSource:0}: Error finding container 1ba28a3c5406a4205db3c9d43453530f2a1f2986398a515780d64639677e778d: Status 404 returned error can't find the container with id 1ba28a3c5406a4205db3c9d43453530f2a1f2986398a515780d64639677e778d
Jan 27 14:52:01 crc kubenswrapper[4698]: I0127 14:52:00.937772 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-54778bbf88-5qkzn" event={"ID":"f9249911-c670-4cc4-895b-8c3a15d90d6f","Type":"ContainerStarted","Data":"3f3fe3dc1e4ae9c9adc6cca1a43c3045fafba7a95a8a73219bae0fd76032a789"}
Jan 27 14:52:01 crc kubenswrapper[4698]: I0127 14:52:00.937823 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-54778bbf88-5qkzn" event={"ID":"f9249911-c670-4cc4-895b-8c3a15d90d6f","Type":"ContainerStarted","Data":"3ed79f095df0e8b4bb54cf65921dc1cc7f33797e14766354836cdfe80fa4e288"}
Jan 27 14:52:01 crc kubenswrapper[4698]: I0127 14:52:00.940170 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0f9b9cd1-a9b3-4764-a897-44de30ff90ac","Type":"ContainerStarted","Data":"5e7f8b680f5f6e9b4074e4abd90191a37019e5bbf518e722493554092019665c"}
Jan 27 14:52:01 crc kubenswrapper[4698]: I0127 14:52:00.943038 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-applier-0" event={"ID":"445b01d2-0375-432b-808d-4045eb66c5da","Type":"ContainerStarted","Data":"f8a496dcbb962b06ce3c47831eb0fbff31387dd518458126b5ac2bd700893b41"}
Jan 27 14:52:01 crc kubenswrapper[4698]: I0127 14:52:00.979867 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/watcher-applier-0" podStartSLOduration=20.252617417 podStartE2EDuration="50.979836293s" podCreationTimestamp="2026-01-27 14:51:10 +0000 UTC" firstStartedPulling="2026-01-27 14:51:11.949783796 +0000 UTC m=+1327.626561261" lastFinishedPulling="2026-01-27 14:51:42.677002672 +0000 UTC m=+1358.353780137" observedRunningTime="2026-01-27 14:52:00.970537927 +0000 UTC m=+1376.647315412" watchObservedRunningTime="2026-01-27 14:52:00.979836293 +0000 UTC m=+1376.656613758"
Jan 27 14:52:01 crc kubenswrapper[4698]: I0127 14:52:00.980013 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"c82e845e-ad64-4e92-a6e6-061b14781155","Type":"ContainerStarted","Data":"789327419b3afd15c2a93a5b4060b02d01fe7ba250c47aa5e3d7dcf08ff39a60"}
Jan 27 14:52:01 crc kubenswrapper[4698]: I0127 14:52:00.991235 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-pr5rl" event={"ID":"2fcd6ee1-8d2c-490d-8fd4-b582c497f336","Type":"ContainerStarted","Data":"44ddfbf7295a96c64ea12bbd5051b08bb0a448ad0c444b3ff9d42a706ffa5cd7"}
Jan 27 14:52:01 crc kubenswrapper[4698]: I0127 14:52:01.030130 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"1cff3663-449e-40a4-890d-7eaff54a5973","Type":"ContainerStarted","Data":"10d1d4a923216a6759651750d5dcf3136a986e5d09c2776038dbc2f4d4507dc8"}
Jan 27 14:52:01 crc kubenswrapper[4698]: I0127 14:52:01.030165 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"8faab9a7-c335-4f18-8ea1-311b5c567b31","Type":"ContainerStarted","Data":"1ba28a3c5406a4205db3c9d43453530f2a1f2986398a515780d64639677e778d"}
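The pod_startup_latency_tracker entries encode a simple relationship: podStartSLOduration is the end-to-end startup time minus the image-pull window, which is why watcher-applier-0 reports an SLO duration of 20.25s against a 50.98s E2E after roughly 30.7s spent pulling. A small sketch reproducing the arithmetic from the timestamps above, using Go's time.String round-trip layout; the only assumption is that E2E is measured against the watchObservedRunningTime stamp, which is what the printed values imply:

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        const layout = "2006-01-02 15:04:05.999999999 -0700 MST"
        parse := func(s string) time.Time {
            t, err := time.Parse(layout, s)
            if err != nil {
                panic(err)
            }
            return t
        }

        // Timestamps copied from the watcher-applier-0 entry above.
        created := parse("2026-01-27 14:51:10 +0000 UTC")
        pullStart := parse("2026-01-27 14:51:11.949783796 +0000 UTC")
        pullEnd := parse("2026-01-27 14:51:42.677002672 +0000 UTC")
        observed := parse("2026-01-27 14:52:00.979836293 +0000 UTC")

        e2e := observed.Sub(created)        // podStartE2EDuration: 50.979836293s
        slo := e2e - pullEnd.Sub(pullStart) // podStartSLOduration: 20.252617417s
        fmt.Println("E2E:", e2e, "SLO:", slo)
    }

Entries with firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" (e.g. watcher-api-0 further down) pulled nothing, so their SLO and E2E durations coincide.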
Jan 27 14:52:01 crc kubenswrapper[4698]: I0127 14:52:01.030177 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-58b77c584b-9tl65" event={"ID":"2573c021-b642-4659-a97b-8c06bcf54afc","Type":"ContainerStarted","Data":"d6e247885db25606af135cf6591aeddb06a436393cb703f9a68720bb11018475"}
Jan 27 14:52:01 crc kubenswrapper[4698]: I0127 14:52:01.030189 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-79c8598659-pm5vk" event={"ID":"9ee879ea-5497-4147-ab6d-5e352fda0d9f","Type":"ContainerStarted","Data":"49070fb6464e3dabb3a8657e5dce704170f52641d47045a617799f32548598e3"}
Jan 27 14:52:01 crc kubenswrapper[4698]: I0127 14:52:01.040462 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-s4fks" event={"ID":"4e01edb9-1cd8-4c9a-a602-d35ff30d64fe","Type":"ContainerStarted","Data":"04b54089d35cbda06ca5e8923f174f55591e3add4e7eb6362a6681256322cf0b"}
Jan 27 14:52:01 crc kubenswrapper[4698]: I0127 14:52:01.068713 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-sync-s4fks" podStartSLOduration=7.107446332 podStartE2EDuration="51.068692352s" podCreationTimestamp="2026-01-27 14:51:10 +0000 UTC" firstStartedPulling="2026-01-27 14:51:12.612908669 +0000 UTC m=+1328.289686144" lastFinishedPulling="2026-01-27 14:51:56.574154699 +0000 UTC m=+1372.250932164" observedRunningTime="2026-01-27 14:52:01.063249449 +0000 UTC m=+1376.740026924" watchObservedRunningTime="2026-01-27 14:52:01.068692352 +0000 UTC m=+1376.745469837"
Jan 27 14:52:01 crc kubenswrapper[4698]: I0127 14:52:01.080760 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"c198be7c-95a9-47ea-80fd-252e5d8d9ac9","Type":"ContainerStarted","Data":"bb8fb01ad6c77ac6cc30475378c33705f30a11e02a8606e8f6ff5395462bd2fb"}
Jan 27 14:52:01 crc kubenswrapper[4698]: I0127 14:52:01.118375 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/watcher-decision-engine-0" podStartSLOduration=7.186325079 podStartE2EDuration="51.11835304s" podCreationTimestamp="2026-01-27 14:51:10 +0000 UTC" firstStartedPulling="2026-01-27 14:51:12.642352254 +0000 UTC m=+1328.319129719" lastFinishedPulling="2026-01-27 14:51:56.574380215 +0000 UTC m=+1372.251157680" observedRunningTime="2026-01-27 14:52:01.102830362 +0000 UTC m=+1376.779607857" watchObservedRunningTime="2026-01-27 14:52:01.11835304 +0000 UTC m=+1376.795130505"
Jan 27 14:52:01 crc kubenswrapper[4698]: I0127 14:52:01.357217 4698 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-544c49b98f-4lvts" podUID="ee0893f2-f99d-4923-a85e-c0d764abff34" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.135:5353: i/o timeout"
Jan 27 14:52:01 crc kubenswrapper[4698]: I0127 14:52:01.699496 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"]
Jan 27 14:52:01 crc kubenswrapper[4698]: I0127 14:52:01.800428 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"]
Jan 27 14:52:02 crc kubenswrapper[4698]: I0127 14:52:02.098361 4698 generic.go:334] "Generic (PLEG): container finished" podID="9ee879ea-5497-4147-ab6d-5e352fda0d9f" containerID="da0152041a608c7d64094ceeec76b3e69a115d6aed8fb115eaf0a8b44b3b7819" exitCode=0
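The dnsmasq readiness failure just above ("dial tcp 10.217.0.135:5353: i/o timeout") has the signature of a TCP-socket probe: the kubelet simply dials the pod IP and port and treats a completed connection as success. A minimal Go equivalent, reusing the address from the log; the one-second timeout is an illustrative assumption standing in for the probe's timeoutSeconds:

    package main

    import (
        "fmt"
        "net"
        "time"
    )

    func main() {
        conn, err := net.DialTimeout("tcp", "10.217.0.135:5353", time.Second)
        if err != nil {
            // e.g. "dial tcp 10.217.0.135:5353: i/o timeout"
            fmt.Println("probe fails:", err)
            return
        }
        conn.Close()
        fmt.Println("probe passes")
    }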
event={"ID":"9ee879ea-5497-4147-ab6d-5e352fda0d9f","Type":"ContainerDied","Data":"da0152041a608c7d64094ceeec76b3e69a115d6aed8fb115eaf0a8b44b3b7819"} Jan 27 14:52:02 crc kubenswrapper[4698]: I0127 14:52:02.102936 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-pr5rl" event={"ID":"2fcd6ee1-8d2c-490d-8fd4-b582c497f336","Type":"ContainerStarted","Data":"d058386674e08a8c4f0250d995ae1b6ee9fdffdc7441edd08de85c6565e35ff7"} Jan 27 14:52:02 crc kubenswrapper[4698]: I0127 14:52:02.105571 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"1cff3663-449e-40a4-890d-7eaff54a5973","Type":"ContainerStarted","Data":"391e9ed54be68bba95bc943d4b1749bdd3985fdd08ada5ed5eaeb7e44becb0fc"} Jan 27 14:52:02 crc kubenswrapper[4698]: I0127 14:52:02.105697 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"1cff3663-449e-40a4-890d-7eaff54a5973","Type":"ContainerStarted","Data":"3191ad124be801d51641db95a26fd5fe334822fe6a6d2722c7a4f718a2e4738b"} Jan 27 14:52:02 crc kubenswrapper[4698]: I0127 14:52:02.106186 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-api-0" Jan 27 14:52:02 crc kubenswrapper[4698]: I0127 14:52:02.125164 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-54778bbf88-5qkzn" event={"ID":"f9249911-c670-4cc4-895b-8c3a15d90d6f","Type":"ContainerStarted","Data":"b70fba9870924b61f3d07549ec76790cd4a35e0ce5b33665fa8f1270253f99d4"} Jan 27 14:52:02 crc kubenswrapper[4698]: I0127 14:52:02.133165 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-58b77c584b-9tl65" event={"ID":"2573c021-b642-4659-a97b-8c06bcf54afc","Type":"ContainerStarted","Data":"5854850d7e012ab315ae8c17136e56826d530bb56bfd3db7e41a47ebec3633a1"} Jan 27 14:52:02 crc kubenswrapper[4698]: I0127 14:52:02.133207 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-58b77c584b-9tl65" event={"ID":"2573c021-b642-4659-a97b-8c06bcf54afc","Type":"ContainerStarted","Data":"1dd9e638670767afe147060b8b877b23827741ca99584d820b8e17d9715e1c86"} Jan 27 14:52:02 crc kubenswrapper[4698]: I0127 14:52:02.161394 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/watcher-api-0" podStartSLOduration=20.161375768 podStartE2EDuration="20.161375768s" podCreationTimestamp="2026-01-27 14:51:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:52:02.14133943 +0000 UTC m=+1377.818116895" watchObservedRunningTime="2026-01-27 14:52:02.161375768 +0000 UTC m=+1377.838153233" Jan 27 14:52:02 crc kubenswrapper[4698]: I0127 14:52:02.217855 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-pr5rl" podStartSLOduration=19.217835975 podStartE2EDuration="19.217835975s" podCreationTimestamp="2026-01-27 14:51:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:52:02.171130175 +0000 UTC m=+1377.847907630" watchObservedRunningTime="2026-01-27 14:52:02.217835975 +0000 UTC m=+1377.894613430" Jan 27 14:52:02 crc kubenswrapper[4698]: I0127 14:52:02.222261 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-54778bbf88-5qkzn" podStartSLOduration=42.894988453 podStartE2EDuration="43.222239611s" 
Jan 27 14:52:02 crc kubenswrapper[4698]: I0127 14:52:02.222261 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-54778bbf88-5qkzn" podStartSLOduration=42.894988453 podStartE2EDuration="43.222239611s" podCreationTimestamp="2026-01-27 14:51:19 +0000 UTC" firstStartedPulling="2026-01-27 14:52:00.153806789 +0000 UTC m=+1375.830584254" lastFinishedPulling="2026-01-27 14:52:00.481057947 +0000 UTC m=+1376.157835412" observedRunningTime="2026-01-27 14:52:02.204566325 +0000 UTC m=+1377.881343800" watchObservedRunningTime="2026-01-27 14:52:02.222239611 +0000 UTC m=+1377.899017076"
Jan 27 14:52:02 crc kubenswrapper[4698]: I0127 14:52:02.251429 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-58b77c584b-9tl65" podStartSLOduration=43.853698955 podStartE2EDuration="44.251403709s" podCreationTimestamp="2026-01-27 14:51:18 +0000 UTC" firstStartedPulling="2026-01-27 14:52:00.084807421 +0000 UTC m=+1375.761584886" lastFinishedPulling="2026-01-27 14:52:00.482512185 +0000 UTC m=+1376.159289640" observedRunningTime="2026-01-27 14:52:02.230208141 +0000 UTC m=+1377.906985616" watchObservedRunningTime="2026-01-27 14:52:02.251403709 +0000 UTC m=+1377.928181174"
Jan 27 14:52:03 crc kubenswrapper[4698]: I0127 14:52:03.146048 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"c82e845e-ad64-4e92-a6e6-061b14781155","Type":"ContainerStarted","Data":"cba21f280c9e6188102736ea92888cd8b94e57e2056c20c6a25399e517695a37"}
Jan 27 14:52:03 crc kubenswrapper[4698]: I0127 14:52:03.154535 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"8faab9a7-c335-4f18-8ea1-311b5c567b31","Type":"ContainerStarted","Data":"cd084c36fc8374f3869ca9b11b23ec5905c44fc417b456d7cc76492044e2c15d"}
Jan 27 14:52:03 crc kubenswrapper[4698]: I0127 14:52:03.163331 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-79c8598659-pm5vk" event={"ID":"9ee879ea-5497-4147-ab6d-5e352fda0d9f","Type":"ContainerStarted","Data":"0cf18963bd797e184b883954078eb8d75002dc4c64f806d8dae9ff5cb2051adc"}
Jan 27 14:52:03 crc kubenswrapper[4698]: I0127 14:52:03.163877 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-79c8598659-pm5vk"
Jan 27 14:52:03 crc kubenswrapper[4698]: I0127 14:52:03.190762 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-79c8598659-pm5vk" podStartSLOduration=5.190739086 podStartE2EDuration="5.190739086s" podCreationTimestamp="2026-01-27 14:51:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:52:03.187546822 +0000 UTC m=+1378.864324287" watchObservedRunningTime="2026-01-27 14:52:03.190739086 +0000 UTC m=+1378.867516561"
Jan 27 14:52:03 crc kubenswrapper[4698]: I0127 14:52:03.203612 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-api-0"
Jan 27 14:52:03 crc kubenswrapper[4698]: I0127 14:52:03.203677 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-api-0"
Jan 27 14:52:04 crc kubenswrapper[4698]: I0127 14:52:04.201096 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"c82e845e-ad64-4e92-a6e6-061b14781155","Type":"ContainerStarted","Data":"e6d3497714c0fd43e14ef722737597c644ef4f3849f3892ab90e0ee6f12c3a12"}
Jan 27 14:52:04 crc kubenswrapper[4698]: I0127 14:52:04.201801 4698 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="c82e845e-ad64-4e92-a6e6-061b14781155" containerName="glance-log" containerID="cri-o://cba21f280c9e6188102736ea92888cd8b94e57e2056c20c6a25399e517695a37" gracePeriod=30
Jan 27 14:52:04 crc kubenswrapper[4698]: I0127 14:52:04.202717 4698 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="c82e845e-ad64-4e92-a6e6-061b14781155" containerName="glance-httpd" containerID="cri-o://e6d3497714c0fd43e14ef722737597c644ef4f3849f3892ab90e0ee6f12c3a12" gracePeriod=30
Jan 27 14:52:04 crc kubenswrapper[4698]: I0127 14:52:04.206717 4698 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jan 27 14:52:04 crc kubenswrapper[4698]: I0127 14:52:04.208253 4698 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="8faab9a7-c335-4f18-8ea1-311b5c567b31" containerName="glance-log" containerID="cri-o://cd084c36fc8374f3869ca9b11b23ec5905c44fc417b456d7cc76492044e2c15d" gracePeriod=30
Jan 27 14:52:04 crc kubenswrapper[4698]: I0127 14:52:04.208382 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"8faab9a7-c335-4f18-8ea1-311b5c567b31","Type":"ContainerStarted","Data":"7908c5f5bb51ca9e91009870ba79de3a33e05f345a1e6b0c2094ea01be292652"}
Jan 27 14:52:04 crc kubenswrapper[4698]: I0127 14:52:04.209151 4698 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="8faab9a7-c335-4f18-8ea1-311b5c567b31" containerName="glance-httpd" containerID="cri-o://7908c5f5bb51ca9e91009870ba79de3a33e05f345a1e6b0c2094ea01be292652" gracePeriod=30
Jan 27 14:52:04 crc kubenswrapper[4698]: I0127 14:52:04.247795 4698 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/watcher-api-0" podUID="1cff3663-449e-40a4-890d-7eaff54a5973" containerName="watcher-api-log" probeResult="failure" output="Get \"http://10.217.0.163:9322/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 27 14:52:04 crc kubenswrapper[4698]: I0127 14:52:04.254043 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=6.254022357 podStartE2EDuration="6.254022357s" podCreationTimestamp="2026-01-27 14:51:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:52:04.243789548 +0000 UTC m=+1379.920567013" watchObservedRunningTime="2026-01-27 14:52:04.254022357 +0000 UTC m=+1379.930799822"
Jan 27 14:52:04 crc kubenswrapper[4698]: I0127 14:52:04.279038 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=6.279013486 podStartE2EDuration="6.279013486s" podCreationTimestamp="2026-01-27 14:51:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:52:04.273591373 +0000 UTC m=+1379.950368848" watchObservedRunningTime="2026-01-27 14:52:04.279013486 +0000 UTC m=+1379.955790951"
Jan 27 14:52:04 crc kubenswrapper[4698]: E0127 14:52:04.849497 4698 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8faab9a7_c335_4f18_8ea1_311b5c567b31.slice/crio-conmon-cd084c36fc8374f3869ca9b11b23ec5905c44fc417b456d7cc76492044e2c15d.scope\": RecentStats: unable to find data in memory cache]"
Jan 27 14:52:05 crc kubenswrapper[4698]: I0127 14:52:05.128689 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/watcher-api-0"
Jan 27 14:52:05 crc kubenswrapper[4698]: I0127 14:52:05.217346 4698 generic.go:334] "Generic (PLEG): container finished" podID="8faab9a7-c335-4f18-8ea1-311b5c567b31" containerID="cd084c36fc8374f3869ca9b11b23ec5905c44fc417b456d7cc76492044e2c15d" exitCode=143
Jan 27 14:52:05 crc kubenswrapper[4698]: I0127 14:52:05.217432 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"8faab9a7-c335-4f18-8ea1-311b5c567b31","Type":"ContainerDied","Data":"cd084c36fc8374f3869ca9b11b23ec5905c44fc417b456d7cc76492044e2c15d"}
Jan 27 14:52:05 crc kubenswrapper[4698]: I0127 14:52:05.219589 4698 generic.go:334] "Generic (PLEG): container finished" podID="c82e845e-ad64-4e92-a6e6-061b14781155" containerID="cba21f280c9e6188102736ea92888cd8b94e57e2056c20c6a25399e517695a37" exitCode=143
Jan 27 14:52:05 crc kubenswrapper[4698]: I0127 14:52:05.219779 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"c82e845e-ad64-4e92-a6e6-061b14781155","Type":"ContainerDied","Data":"cba21f280c9e6188102736ea92888cd8b94e57e2056c20c6a25399e517695a37"}
Jan 27 14:52:05 crc kubenswrapper[4698]: I0127 14:52:05.518924 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-applier-0"
Jan 27 14:52:07 crc kubenswrapper[4698]: I0127 14:52:07.255195 4698 generic.go:334] "Generic (PLEG): container finished" podID="c82e845e-ad64-4e92-a6e6-061b14781155" containerID="e6d3497714c0fd43e14ef722737597c644ef4f3849f3892ab90e0ee6f12c3a12" exitCode=0
Jan 27 14:52:07 crc kubenswrapper[4698]: I0127 14:52:07.255284 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"c82e845e-ad64-4e92-a6e6-061b14781155","Type":"ContainerDied","Data":"e6d3497714c0fd43e14ef722737597c644ef4f3849f3892ab90e0ee6f12c3a12"}
Jan 27 14:52:07 crc kubenswrapper[4698]: I0127 14:52:07.257937 4698 generic.go:334] "Generic (PLEG): container finished" podID="8faab9a7-c335-4f18-8ea1-311b5c567b31" containerID="7908c5f5bb51ca9e91009870ba79de3a33e05f345a1e6b0c2094ea01be292652" exitCode=0
Jan 27 14:52:07 crc kubenswrapper[4698]: I0127 14:52:07.258021 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"8faab9a7-c335-4f18-8ea1-311b5c567b31","Type":"ContainerDied","Data":"7908c5f5bb51ca9e91009870ba79de3a33e05f345a1e6b0c2094ea01be292652"}
Jan 27 14:52:08 crc kubenswrapper[4698]: I0127 14:52:08.540655 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-79c8598659-pm5vk"
Jan 27 14:52:08 crc kubenswrapper[4698]: I0127 14:52:08.610252 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7b755cc99f-wsqq5"]
Jan 27 14:52:08 crc kubenswrapper[4698]: I0127 14:52:08.610497 4698 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7b755cc99f-wsqq5" podUID="705a412d-9e58-46df-8084-cb719c4c5b57" containerName="dnsmasq-dns" containerID="cri-o://bfa7799b273e8188d1172c651671df1f5d51924a2d8d478434b2876fc6b90604" gracePeriod=10
Jan 27 14:52:09 crc kubenswrapper[4698]: I0127 14:52:09.279479 4698 generic.go:334] "Generic (PLEG): container finished" podID="705a412d-9e58-46df-8084-cb719c4c5b57" containerID="bfa7799b273e8188d1172c651671df1f5d51924a2d8d478434b2876fc6b90604" exitCode=0
Jan 27 14:52:09 crc kubenswrapper[4698]: I0127 14:52:09.279554 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7b755cc99f-wsqq5" event={"ID":"705a412d-9e58-46df-8084-cb719c4c5b57","Type":"ContainerDied","Data":"bfa7799b273e8188d1172c651671df1f5d51924a2d8d478434b2876fc6b90604"}
Jan 27 14:52:09 crc kubenswrapper[4698]: I0127 14:52:09.450056 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-54778bbf88-5qkzn"
Jan 27 14:52:09 crc kubenswrapper[4698]: I0127 14:52:09.451159 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-54778bbf88-5qkzn"
Jan 27 14:52:09 crc kubenswrapper[4698]: I0127 14:52:09.656077 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-58b77c584b-9tl65"
Jan 27 14:52:09 crc kubenswrapper[4698]: I0127 14:52:09.657179 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-58b77c584b-9tl65"
Jan 27 14:52:10 crc kubenswrapper[4698]: I0127 14:52:10.518968 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-applier-0"
Jan 27 14:52:10 crc kubenswrapper[4698]: I0127 14:52:10.549934 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/watcher-applier-0"
Jan 27 14:52:11 crc kubenswrapper[4698]: I0127 14:52:11.065849 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-decision-engine-0"
Jan 27 14:52:11 crc kubenswrapper[4698]: I0127 14:52:11.065905 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-decision-engine-0"
Jan 27 14:52:11 crc kubenswrapper[4698]: I0127 14:52:11.093070 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/watcher-decision-engine-0"
Jan 27 14:52:11 crc kubenswrapper[4698]: I0127 14:52:11.329025 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/watcher-applier-0"
Jan 27 14:52:11 crc kubenswrapper[4698]: I0127 14:52:11.330768 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/watcher-decision-engine-0"
Jan 27 14:52:11 crc kubenswrapper[4698]: I0127 14:52:11.378626 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/watcher-applier-0"]
Jan 27 14:52:11 crc kubenswrapper[4698]: I0127 14:52:11.404819 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/watcher-decision-engine-0"]
Jan 27 14:52:11 crc kubenswrapper[4698]: I0127 14:52:11.471224 4698 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-7b755cc99f-wsqq5" podUID="705a412d-9e58-46df-8084-cb719c4c5b57" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.158:5353: connect: connection refused"
Jan 27 14:52:13 crc kubenswrapper[4698]: I0127 14:52:13.212397 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/watcher-api-0"
Jan 27 14:52:13 crc kubenswrapper[4698]: I0127 14:52:13.220117 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/watcher-api-0"
Jan 27 14:52:13 crc kubenswrapper[4698]: I0127 14:52:13.319930 4698 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/watcher-applier-0" podUID="445b01d2-0375-432b-808d-4045eb66c5da" containerName="watcher-applier" containerID="cri-o://f8a496dcbb962b06ce3c47831eb0fbff31387dd518458126b5ac2bd700893b41" gracePeriod=30
Jan 27 14:52:13 crc kubenswrapper[4698]: I0127 14:52:13.320473 4698 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/watcher-decision-engine-0" podUID="c198be7c-95a9-47ea-80fd-252e5d8d9ac9" containerName="watcher-decision-engine" containerID="cri-o://bb8fb01ad6c77ac6cc30475378c33705f30a11e02a8606e8f6ff5395462bd2fb" gracePeriod=30
Jan 27 14:52:15 crc kubenswrapper[4698]: E0127 14:52:15.520411 4698 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="f8a496dcbb962b06ce3c47831eb0fbff31387dd518458126b5ac2bd700893b41" cmd=["/usr/bin/pgrep","-r","DRST","watcher-applier"]
Jan 27 14:52:15 crc kubenswrapper[4698]: E0127 14:52:15.522077 4698 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="f8a496dcbb962b06ce3c47831eb0fbff31387dd518458126b5ac2bd700893b41" cmd=["/usr/bin/pgrep","-r","DRST","watcher-applier"]
Jan 27 14:52:15 crc kubenswrapper[4698]: E0127 14:52:15.523319 4698 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="f8a496dcbb962b06ce3c47831eb0fbff31387dd518458126b5ac2bd700893b41" cmd=["/usr/bin/pgrep","-r","DRST","watcher-applier"]
Jan 27 14:52:15 crc kubenswrapper[4698]: E0127 14:52:15.523384 4698 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/watcher-applier-0" podUID="445b01d2-0375-432b-808d-4045eb66c5da" containerName="watcher-applier"
Jan 27 14:52:16 crc kubenswrapper[4698]: I0127 14:52:16.254101 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/watcher-api-0"]
Jan 27 14:52:16 crc kubenswrapper[4698]: I0127 14:52:16.255176 4698 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/watcher-api-0" podUID="1cff3663-449e-40a4-890d-7eaff54a5973" containerName="watcher-api-log" containerID="cri-o://391e9ed54be68bba95bc943d4b1749bdd3985fdd08ada5ed5eaeb7e44becb0fc" gracePeriod=30
Jan 27 14:52:16 crc kubenswrapper[4698]: I0127 14:52:16.255206 4698 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/watcher-api-0" podUID="1cff3663-449e-40a4-890d-7eaff54a5973" containerName="watcher-api" containerID="cri-o://3191ad124be801d51641db95a26fd5fe334822fe6a6d2722c7a4f718a2e4738b" gracePeriod=30
Jan 27 14:52:16 crc kubenswrapper[4698]: I0127 14:52:16.470988 4698 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-7b755cc99f-wsqq5" podUID="705a412d-9e58-46df-8084-cb719c4c5b57" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.158:5353: connect: connection refused"
Jan 27 14:52:17 crc kubenswrapper[4698]: I0127 14:52:17.367669 4698 generic.go:334] "Generic (PLEG): container finished" podID="1cff3663-449e-40a4-890d-7eaff54a5973" containerID="391e9ed54be68bba95bc943d4b1749bdd3985fdd08ada5ed5eaeb7e44becb0fc" exitCode=143
Jan 27 14:52:17 crc kubenswrapper[4698]: I0127 14:52:17.367707 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"1cff3663-449e-40a4-890d-7eaff54a5973","Type":"ContainerDied","Data":"391e9ed54be68bba95bc943d4b1749bdd3985fdd08ada5ed5eaeb7e44becb0fc"}
Jan 27 14:52:18 crc kubenswrapper[4698]: I0127 14:52:18.204147 4698 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/watcher-api-0" podUID="1cff3663-449e-40a4-890d-7eaff54a5973" containerName="watcher-api-log" probeResult="failure" output="Get \"http://10.217.0.163:9322/\": dial tcp 10.217.0.163:9322: connect: connection refused"
Jan 27 14:52:18 crc kubenswrapper[4698]: I0127 14:52:18.204166 4698 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/watcher-api-0" podUID="1cff3663-449e-40a4-890d-7eaff54a5973" containerName="watcher-api" probeResult="failure" output="Get \"http://10.217.0.163:9322/\": dial tcp 10.217.0.163:9322: connect: connection refused"
Jan 27 14:52:18 crc kubenswrapper[4698]: I0127 14:52:18.379742 4698 generic.go:334] "Generic (PLEG): container finished" podID="1cff3663-449e-40a4-890d-7eaff54a5973" containerID="3191ad124be801d51641db95a26fd5fe334822fe6a6d2722c7a4f718a2e4738b" exitCode=0
Jan 27 14:52:18 crc kubenswrapper[4698]: I0127 14:52:18.379804 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"1cff3663-449e-40a4-890d-7eaff54a5973","Type":"ContainerDied","Data":"3191ad124be801d51641db95a26fd5fe334822fe6a6d2722c7a4f718a2e4738b"}
Jan 27 14:52:20 crc kubenswrapper[4698]: E0127 14:52:20.439997 4698 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/sg-core:latest"
Jan 27 14:52:20 crc kubenswrapper[4698]: E0127 14:52:20.440597 4698 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:sg-core,Image:quay.io/openstack-k8s-operators/sg-core:latest,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:sg-core-conf-yaml,ReadOnly:false,MountPath:/etc/sg-core.conf.yaml,SubPath:sg-core.conf.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-r4t7x,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(0f9b9cd1-a9b3-4764-a897-44de30ff90ac): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="f8a496dcbb962b06ce3c47831eb0fbff31387dd518458126b5ac2bd700893b41" cmd=["/usr/bin/pgrep","-r","DRST","watcher-applier"] Jan 27 14:52:20 crc kubenswrapper[4698]: E0127 14:52:20.538919 4698 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="f8a496dcbb962b06ce3c47831eb0fbff31387dd518458126b5ac2bd700893b41" cmd=["/usr/bin/pgrep","-r","DRST","watcher-applier"] Jan 27 14:52:20 crc kubenswrapper[4698]: I0127 14:52:20.539355 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 27 14:52:20 crc kubenswrapper[4698]: E0127 14:52:20.543195 4698 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="f8a496dcbb962b06ce3c47831eb0fbff31387dd518458126b5ac2bd700893b41" cmd=["/usr/bin/pgrep","-r","DRST","watcher-applier"] Jan 27 14:52:20 crc kubenswrapper[4698]: E0127 14:52:20.543253 4698 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/watcher-applier-0" podUID="445b01d2-0375-432b-808d-4045eb66c5da" containerName="watcher-applier" Jan 27 14:52:20 crc kubenswrapper[4698]: I0127 14:52:20.559966 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 27 14:52:20 crc kubenswrapper[4698]: I0127 14:52:20.652590 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c82e845e-ad64-4e92-a6e6-061b14781155-logs\") pod \"c82e845e-ad64-4e92-a6e6-061b14781155\" (UID: \"c82e845e-ad64-4e92-a6e6-061b14781155\") " Jan 27 14:52:20 crc kubenswrapper[4698]: I0127 14:52:20.652696 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g27jp\" (UniqueName: \"kubernetes.io/projected/c82e845e-ad64-4e92-a6e6-061b14781155-kube-api-access-g27jp\") pod \"c82e845e-ad64-4e92-a6e6-061b14781155\" (UID: \"c82e845e-ad64-4e92-a6e6-061b14781155\") " Jan 27 14:52:20 crc kubenswrapper[4698]: I0127 14:52:20.652733 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/8faab9a7-c335-4f18-8ea1-311b5c567b31-httpd-run\") pod \"8faab9a7-c335-4f18-8ea1-311b5c567b31\" (UID: \"8faab9a7-c335-4f18-8ea1-311b5c567b31\") " Jan 27 14:52:20 crc kubenswrapper[4698]: I0127 14:52:20.652764 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/c82e845e-ad64-4e92-a6e6-061b14781155-httpd-run\") pod \"c82e845e-ad64-4e92-a6e6-061b14781155\" (UID: \"c82e845e-ad64-4e92-a6e6-061b14781155\") " Jan 27 14:52:20 crc kubenswrapper[4698]: I0127 14:52:20.652873 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c82e845e-ad64-4e92-a6e6-061b14781155-config-data\") pod \"c82e845e-ad64-4e92-a6e6-061b14781155\" (UID: 
\"c82e845e-ad64-4e92-a6e6-061b14781155\") " Jan 27 14:52:20 crc kubenswrapper[4698]: I0127 14:52:20.652892 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8faab9a7-c335-4f18-8ea1-311b5c567b31-config-data\") pod \"8faab9a7-c335-4f18-8ea1-311b5c567b31\" (UID: \"8faab9a7-c335-4f18-8ea1-311b5c567b31\") " Jan 27 14:52:20 crc kubenswrapper[4698]: I0127 14:52:20.652942 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8faab9a7-c335-4f18-8ea1-311b5c567b31-combined-ca-bundle\") pod \"8faab9a7-c335-4f18-8ea1-311b5c567b31\" (UID: \"8faab9a7-c335-4f18-8ea1-311b5c567b31\") " Jan 27 14:52:20 crc kubenswrapper[4698]: I0127 14:52:20.653012 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c82e845e-ad64-4e92-a6e6-061b14781155-combined-ca-bundle\") pod \"c82e845e-ad64-4e92-a6e6-061b14781155\" (UID: \"c82e845e-ad64-4e92-a6e6-061b14781155\") " Jan 27 14:52:20 crc kubenswrapper[4698]: I0127 14:52:20.653093 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8faab9a7-c335-4f18-8ea1-311b5c567b31-logs\") pod \"8faab9a7-c335-4f18-8ea1-311b5c567b31\" (UID: \"8faab9a7-c335-4f18-8ea1-311b5c567b31\") " Jan 27 14:52:20 crc kubenswrapper[4698]: I0127 14:52:20.653121 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"8faab9a7-c335-4f18-8ea1-311b5c567b31\" (UID: \"8faab9a7-c335-4f18-8ea1-311b5c567b31\") " Jan 27 14:52:20 crc kubenswrapper[4698]: I0127 14:52:20.653148 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c82e845e-ad64-4e92-a6e6-061b14781155-scripts\") pod \"c82e845e-ad64-4e92-a6e6-061b14781155\" (UID: \"c82e845e-ad64-4e92-a6e6-061b14781155\") " Jan 27 14:52:20 crc kubenswrapper[4698]: I0127 14:52:20.653165 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"c82e845e-ad64-4e92-a6e6-061b14781155\" (UID: \"c82e845e-ad64-4e92-a6e6-061b14781155\") " Jan 27 14:52:20 crc kubenswrapper[4698]: I0127 14:52:20.653190 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8faab9a7-c335-4f18-8ea1-311b5c567b31-scripts\") pod \"8faab9a7-c335-4f18-8ea1-311b5c567b31\" (UID: \"8faab9a7-c335-4f18-8ea1-311b5c567b31\") " Jan 27 14:52:20 crc kubenswrapper[4698]: I0127 14:52:20.653212 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p2cpx\" (UniqueName: \"kubernetes.io/projected/8faab9a7-c335-4f18-8ea1-311b5c567b31-kube-api-access-p2cpx\") pod \"8faab9a7-c335-4f18-8ea1-311b5c567b31\" (UID: \"8faab9a7-c335-4f18-8ea1-311b5c567b31\") " Jan 27 14:52:20 crc kubenswrapper[4698]: I0127 14:52:20.653667 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8faab9a7-c335-4f18-8ea1-311b5c567b31-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "8faab9a7-c335-4f18-8ea1-311b5c567b31" (UID: "8faab9a7-c335-4f18-8ea1-311b5c567b31"). InnerVolumeSpecName "httpd-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 14:52:20 crc kubenswrapper[4698]: I0127 14:52:20.653909 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c82e845e-ad64-4e92-a6e6-061b14781155-logs" (OuterVolumeSpecName: "logs") pod "c82e845e-ad64-4e92-a6e6-061b14781155" (UID: "c82e845e-ad64-4e92-a6e6-061b14781155"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 14:52:20 crc kubenswrapper[4698]: I0127 14:52:20.654298 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c82e845e-ad64-4e92-a6e6-061b14781155-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "c82e845e-ad64-4e92-a6e6-061b14781155" (UID: "c82e845e-ad64-4e92-a6e6-061b14781155"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 14:52:20 crc kubenswrapper[4698]: I0127 14:52:20.656249 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8faab9a7-c335-4f18-8ea1-311b5c567b31-logs" (OuterVolumeSpecName: "logs") pod "8faab9a7-c335-4f18-8ea1-311b5c567b31" (UID: "8faab9a7-c335-4f18-8ea1-311b5c567b31"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 14:52:20 crc kubenswrapper[4698]: I0127 14:52:20.658117 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8faab9a7-c335-4f18-8ea1-311b5c567b31-kube-api-access-p2cpx" (OuterVolumeSpecName: "kube-api-access-p2cpx") pod "8faab9a7-c335-4f18-8ea1-311b5c567b31" (UID: "8faab9a7-c335-4f18-8ea1-311b5c567b31"). InnerVolumeSpecName "kube-api-access-p2cpx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:52:20 crc kubenswrapper[4698]: I0127 14:52:20.678251 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c82e845e-ad64-4e92-a6e6-061b14781155-scripts" (OuterVolumeSpecName: "scripts") pod "c82e845e-ad64-4e92-a6e6-061b14781155" (UID: "c82e845e-ad64-4e92-a6e6-061b14781155"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:52:20 crc kubenswrapper[4698]: I0127 14:52:20.686022 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c82e845e-ad64-4e92-a6e6-061b14781155-kube-api-access-g27jp" (OuterVolumeSpecName: "kube-api-access-g27jp") pod "c82e845e-ad64-4e92-a6e6-061b14781155" (UID: "c82e845e-ad64-4e92-a6e6-061b14781155"). InnerVolumeSpecName "kube-api-access-g27jp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:52:20 crc kubenswrapper[4698]: I0127 14:52:20.691196 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage03-crc" (OuterVolumeSpecName: "glance") pod "c82e845e-ad64-4e92-a6e6-061b14781155" (UID: "c82e845e-ad64-4e92-a6e6-061b14781155"). InnerVolumeSpecName "local-storage03-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 27 14:52:20 crc kubenswrapper[4698]: I0127 14:52:20.693325 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage06-crc" (OuterVolumeSpecName: "glance") pod "8faab9a7-c335-4f18-8ea1-311b5c567b31" (UID: "8faab9a7-c335-4f18-8ea1-311b5c567b31"). InnerVolumeSpecName "local-storage06-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 27 14:52:20 crc kubenswrapper[4698]: I0127 14:52:20.693385 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8faab9a7-c335-4f18-8ea1-311b5c567b31-scripts" (OuterVolumeSpecName: "scripts") pod "8faab9a7-c335-4f18-8ea1-311b5c567b31" (UID: "8faab9a7-c335-4f18-8ea1-311b5c567b31"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:52:20 crc kubenswrapper[4698]: I0127 14:52:20.710512 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8faab9a7-c335-4f18-8ea1-311b5c567b31-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8faab9a7-c335-4f18-8ea1-311b5c567b31" (UID: "8faab9a7-c335-4f18-8ea1-311b5c567b31"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:52:20 crc kubenswrapper[4698]: I0127 14:52:20.755447 4698 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") on node \"crc\" " Jan 27 14:52:20 crc kubenswrapper[4698]: I0127 14:52:20.755892 4698 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c82e845e-ad64-4e92-a6e6-061b14781155-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 14:52:20 crc kubenswrapper[4698]: I0127 14:52:20.755903 4698 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8faab9a7-c335-4f18-8ea1-311b5c567b31-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 14:52:20 crc kubenswrapper[4698]: I0127 14:52:20.755913 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p2cpx\" (UniqueName: \"kubernetes.io/projected/8faab9a7-c335-4f18-8ea1-311b5c567b31-kube-api-access-p2cpx\") on node \"crc\" DevicePath \"\"" Jan 27 14:52:20 crc kubenswrapper[4698]: I0127 14:52:20.755925 4698 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c82e845e-ad64-4e92-a6e6-061b14781155-logs\") on node \"crc\" DevicePath \"\"" Jan 27 14:52:20 crc kubenswrapper[4698]: I0127 14:52:20.755935 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g27jp\" (UniqueName: \"kubernetes.io/projected/c82e845e-ad64-4e92-a6e6-061b14781155-kube-api-access-g27jp\") on node \"crc\" DevicePath \"\"" Jan 27 14:52:20 crc kubenswrapper[4698]: I0127 14:52:20.755945 4698 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/8faab9a7-c335-4f18-8ea1-311b5c567b31-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 27 14:52:20 crc kubenswrapper[4698]: I0127 14:52:20.755954 4698 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/c82e845e-ad64-4e92-a6e6-061b14781155-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 27 14:52:20 crc kubenswrapper[4698]: I0127 14:52:20.755963 4698 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8faab9a7-c335-4f18-8ea1-311b5c567b31-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 14:52:20 crc kubenswrapper[4698]: I0127 14:52:20.755972 4698 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8faab9a7-c335-4f18-8ea1-311b5c567b31-logs\") on node \"crc\" DevicePath \"\"" Jan 27 14:52:20 crc 
kubenswrapper[4698]: I0127 14:52:20.755992 4698 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") on node \"crc\" " Jan 27 14:52:20 crc kubenswrapper[4698]: I0127 14:52:20.770529 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c82e845e-ad64-4e92-a6e6-061b14781155-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c82e845e-ad64-4e92-a6e6-061b14781155" (UID: "c82e845e-ad64-4e92-a6e6-061b14781155"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:52:20 crc kubenswrapper[4698]: I0127 14:52:20.803773 4698 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage06-crc" (UniqueName: "kubernetes.io/local-volume/local-storage06-crc") on node "crc" Jan 27 14:52:20 crc kubenswrapper[4698]: I0127 14:52:20.807737 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8faab9a7-c335-4f18-8ea1-311b5c567b31-config-data" (OuterVolumeSpecName: "config-data") pod "8faab9a7-c335-4f18-8ea1-311b5c567b31" (UID: "8faab9a7-c335-4f18-8ea1-311b5c567b31"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:52:20 crc kubenswrapper[4698]: I0127 14:52:20.828535 4698 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage03-crc" (UniqueName: "kubernetes.io/local-volume/local-storage03-crc") on node "crc" Jan 27 14:52:20 crc kubenswrapper[4698]: I0127 14:52:20.863260 4698 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8faab9a7-c335-4f18-8ea1-311b5c567b31-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 14:52:20 crc kubenswrapper[4698]: I0127 14:52:20.863302 4698 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c82e845e-ad64-4e92-a6e6-061b14781155-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 14:52:20 crc kubenswrapper[4698]: I0127 14:52:20.863317 4698 reconciler_common.go:293] "Volume detached for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") on node \"crc\" DevicePath \"\"" Jan 27 14:52:20 crc kubenswrapper[4698]: I0127 14:52:20.863327 4698 reconciler_common.go:293] "Volume detached for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") on node \"crc\" DevicePath \"\"" Jan 27 14:52:20 crc kubenswrapper[4698]: I0127 14:52:20.940290 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c82e845e-ad64-4e92-a6e6-061b14781155-config-data" (OuterVolumeSpecName: "config-data") pod "c82e845e-ad64-4e92-a6e6-061b14781155" (UID: "c82e845e-ad64-4e92-a6e6-061b14781155"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:52:20 crc kubenswrapper[4698]: I0127 14:52:20.942776 4698 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7b755cc99f-wsqq5" Jan 27 14:52:20 crc kubenswrapper[4698]: I0127 14:52:20.969165 4698 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c82e845e-ad64-4e92-a6e6-061b14781155-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 14:52:21 crc kubenswrapper[4698]: I0127 14:52:21.010351 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-api-0" Jan 27 14:52:21 crc kubenswrapper[4698]: I0127 14:52:21.071706 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/705a412d-9e58-46df-8084-cb719c4c5b57-ovsdbserver-nb\") pod \"705a412d-9e58-46df-8084-cb719c4c5b57\" (UID: \"705a412d-9e58-46df-8084-cb719c4c5b57\") " Jan 27 14:52:21 crc kubenswrapper[4698]: I0127 14:52:21.071819 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/705a412d-9e58-46df-8084-cb719c4c5b57-dns-swift-storage-0\") pod \"705a412d-9e58-46df-8084-cb719c4c5b57\" (UID: \"705a412d-9e58-46df-8084-cb719c4c5b57\") " Jan 27 14:52:21 crc kubenswrapper[4698]: I0127 14:52:21.071997 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/705a412d-9e58-46df-8084-cb719c4c5b57-config\") pod \"705a412d-9e58-46df-8084-cb719c4c5b57\" (UID: \"705a412d-9e58-46df-8084-cb719c4c5b57\") " Jan 27 14:52:21 crc kubenswrapper[4698]: I0127 14:52:21.072123 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/705a412d-9e58-46df-8084-cb719c4c5b57-dns-svc\") pod \"705a412d-9e58-46df-8084-cb719c4c5b57\" (UID: \"705a412d-9e58-46df-8084-cb719c4c5b57\") " Jan 27 14:52:21 crc kubenswrapper[4698]: I0127 14:52:21.072235 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/705a412d-9e58-46df-8084-cb719c4c5b57-ovsdbserver-sb\") pod \"705a412d-9e58-46df-8084-cb719c4c5b57\" (UID: \"705a412d-9e58-46df-8084-cb719c4c5b57\") " Jan 27 14:52:21 crc kubenswrapper[4698]: I0127 14:52:21.072332 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ggjxq\" (UniqueName: \"kubernetes.io/projected/705a412d-9e58-46df-8084-cb719c4c5b57-kube-api-access-ggjxq\") pod \"705a412d-9e58-46df-8084-cb719c4c5b57\" (UID: \"705a412d-9e58-46df-8084-cb719c4c5b57\") " Jan 27 14:52:21 crc kubenswrapper[4698]: I0127 14:52:21.093822 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/705a412d-9e58-46df-8084-cb719c4c5b57-kube-api-access-ggjxq" (OuterVolumeSpecName: "kube-api-access-ggjxq") pod "705a412d-9e58-46df-8084-cb719c4c5b57" (UID: "705a412d-9e58-46df-8084-cb719c4c5b57"). InnerVolumeSpecName "kube-api-access-ggjxq". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:52:21 crc kubenswrapper[4698]: I0127 14:52:21.167471 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/705a412d-9e58-46df-8084-cb719c4c5b57-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "705a412d-9e58-46df-8084-cb719c4c5b57" (UID: "705a412d-9e58-46df-8084-cb719c4c5b57"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:52:21 crc kubenswrapper[4698]: I0127 14:52:21.177872 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1cff3663-449e-40a4-890d-7eaff54a5973-config-data\") pod \"1cff3663-449e-40a4-890d-7eaff54a5973\" (UID: \"1cff3663-449e-40a4-890d-7eaff54a5973\") " Jan 27 14:52:21 crc kubenswrapper[4698]: I0127 14:52:21.177995 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1cff3663-449e-40a4-890d-7eaff54a5973-combined-ca-bundle\") pod \"1cff3663-449e-40a4-890d-7eaff54a5973\" (UID: \"1cff3663-449e-40a4-890d-7eaff54a5973\") " Jan 27 14:52:21 crc kubenswrapper[4698]: I0127 14:52:21.178146 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1cff3663-449e-40a4-890d-7eaff54a5973-logs\") pod \"1cff3663-449e-40a4-890d-7eaff54a5973\" (UID: \"1cff3663-449e-40a4-890d-7eaff54a5973\") " Jan 27 14:52:21 crc kubenswrapper[4698]: I0127 14:52:21.178207 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/1cff3663-449e-40a4-890d-7eaff54a5973-custom-prometheus-ca\") pod \"1cff3663-449e-40a4-890d-7eaff54a5973\" (UID: \"1cff3663-449e-40a4-890d-7eaff54a5973\") " Jan 27 14:52:21 crc kubenswrapper[4698]: I0127 14:52:21.178269 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t58tn\" (UniqueName: \"kubernetes.io/projected/1cff3663-449e-40a4-890d-7eaff54a5973-kube-api-access-t58tn\") pod \"1cff3663-449e-40a4-890d-7eaff54a5973\" (UID: \"1cff3663-449e-40a4-890d-7eaff54a5973\") " Jan 27 14:52:21 crc kubenswrapper[4698]: I0127 14:52:21.179020 4698 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/705a412d-9e58-46df-8084-cb719c4c5b57-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 27 14:52:21 crc kubenswrapper[4698]: I0127 14:52:21.179047 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ggjxq\" (UniqueName: \"kubernetes.io/projected/705a412d-9e58-46df-8084-cb719c4c5b57-kube-api-access-ggjxq\") on node \"crc\" DevicePath \"\"" Jan 27 14:52:21 crc kubenswrapper[4698]: I0127 14:52:21.182934 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/705a412d-9e58-46df-8084-cb719c4c5b57-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "705a412d-9e58-46df-8084-cb719c4c5b57" (UID: "705a412d-9e58-46df-8084-cb719c4c5b57"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:52:21 crc kubenswrapper[4698]: I0127 14:52:21.184482 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1cff3663-449e-40a4-890d-7eaff54a5973-logs" (OuterVolumeSpecName: "logs") pod "1cff3663-449e-40a4-890d-7eaff54a5973" (UID: "1cff3663-449e-40a4-890d-7eaff54a5973"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 14:52:21 crc kubenswrapper[4698]: I0127 14:52:21.185206 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/705a412d-9e58-46df-8084-cb719c4c5b57-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "705a412d-9e58-46df-8084-cb719c4c5b57" (UID: "705a412d-9e58-46df-8084-cb719c4c5b57"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:52:21 crc kubenswrapper[4698]: I0127 14:52:21.187228 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1cff3663-449e-40a4-890d-7eaff54a5973-kube-api-access-t58tn" (OuterVolumeSpecName: "kube-api-access-t58tn") pod "1cff3663-449e-40a4-890d-7eaff54a5973" (UID: "1cff3663-449e-40a4-890d-7eaff54a5973"). InnerVolumeSpecName "kube-api-access-t58tn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:52:21 crc kubenswrapper[4698]: I0127 14:52:21.198109 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/705a412d-9e58-46df-8084-cb719c4c5b57-config" (OuterVolumeSpecName: "config") pod "705a412d-9e58-46df-8084-cb719c4c5b57" (UID: "705a412d-9e58-46df-8084-cb719c4c5b57"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:52:21 crc kubenswrapper[4698]: I0127 14:52:21.227952 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1cff3663-449e-40a4-890d-7eaff54a5973-custom-prometheus-ca" (OuterVolumeSpecName: "custom-prometheus-ca") pod "1cff3663-449e-40a4-890d-7eaff54a5973" (UID: "1cff3663-449e-40a4-890d-7eaff54a5973"). InnerVolumeSpecName "custom-prometheus-ca". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:52:21 crc kubenswrapper[4698]: I0127 14:52:21.243020 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/705a412d-9e58-46df-8084-cb719c4c5b57-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "705a412d-9e58-46df-8084-cb719c4c5b57" (UID: "705a412d-9e58-46df-8084-cb719c4c5b57"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:52:21 crc kubenswrapper[4698]: I0127 14:52:21.249980 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1cff3663-449e-40a4-890d-7eaff54a5973-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1cff3663-449e-40a4-890d-7eaff54a5973" (UID: "1cff3663-449e-40a4-890d-7eaff54a5973"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:52:21 crc kubenswrapper[4698]: I0127 14:52:21.255241 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1cff3663-449e-40a4-890d-7eaff54a5973-config-data" (OuterVolumeSpecName: "config-data") pod "1cff3663-449e-40a4-890d-7eaff54a5973" (UID: "1cff3663-449e-40a4-890d-7eaff54a5973"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:52:21 crc kubenswrapper[4698]: I0127 14:52:21.281925 4698 reconciler_common.go:293] "Volume detached for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/1cff3663-449e-40a4-890d-7eaff54a5973-custom-prometheus-ca\") on node \"crc\" DevicePath \"\"" Jan 27 14:52:21 crc kubenswrapper[4698]: I0127 14:52:21.281969 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t58tn\" (UniqueName: \"kubernetes.io/projected/1cff3663-449e-40a4-890d-7eaff54a5973-kube-api-access-t58tn\") on node \"crc\" DevicePath \"\"" Jan 27 14:52:21 crc kubenswrapper[4698]: I0127 14:52:21.281980 4698 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/705a412d-9e58-46df-8084-cb719c4c5b57-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 27 14:52:21 crc kubenswrapper[4698]: I0127 14:52:21.281991 4698 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/705a412d-9e58-46df-8084-cb719c4c5b57-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 27 14:52:21 crc kubenswrapper[4698]: I0127 14:52:21.282011 4698 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1cff3663-449e-40a4-890d-7eaff54a5973-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 14:52:21 crc kubenswrapper[4698]: I0127 14:52:21.282024 4698 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/705a412d-9e58-46df-8084-cb719c4c5b57-config\") on node \"crc\" DevicePath \"\"" Jan 27 14:52:21 crc kubenswrapper[4698]: I0127 14:52:21.282036 4698 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1cff3663-449e-40a4-890d-7eaff54a5973-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 14:52:21 crc kubenswrapper[4698]: I0127 14:52:21.282047 4698 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/705a412d-9e58-46df-8084-cb719c4c5b57-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 27 14:52:21 crc kubenswrapper[4698]: I0127 14:52:21.282057 4698 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1cff3663-449e-40a4-890d-7eaff54a5973-logs\") on node \"crc\" DevicePath \"\"" Jan 27 14:52:21 crc kubenswrapper[4698]: I0127 14:52:21.436712 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7b755cc99f-wsqq5" event={"ID":"705a412d-9e58-46df-8084-cb719c4c5b57","Type":"ContainerDied","Data":"52f01d29426e07d377a81c4938b3e4f6ffb5d925af6e7135718691d4c10f5d1a"} Jan 27 14:52:21 crc kubenswrapper[4698]: I0127 14:52:21.436775 4698 scope.go:117] "RemoveContainer" containerID="bfa7799b273e8188d1172c651671df1f5d51924a2d8d478434b2876fc6b90604" Jan 27 14:52:21 crc kubenswrapper[4698]: I0127 14:52:21.436927 4698 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7b755cc99f-wsqq5" Jan 27 14:52:21 crc kubenswrapper[4698]: I0127 14:52:21.445594 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"c82e845e-ad64-4e92-a6e6-061b14781155","Type":"ContainerDied","Data":"789327419b3afd15c2a93a5b4060b02d01fe7ba250c47aa5e3d7dcf08ff39a60"} Jan 27 14:52:21 crc kubenswrapper[4698]: I0127 14:52:21.445762 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 27 14:52:21 crc kubenswrapper[4698]: I0127 14:52:21.457259 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"1cff3663-449e-40a4-890d-7eaff54a5973","Type":"ContainerDied","Data":"10d1d4a923216a6759651750d5dcf3136a986e5d09c2776038dbc2f4d4507dc8"} Jan 27 14:52:21 crc kubenswrapper[4698]: I0127 14:52:21.457360 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-api-0" Jan 27 14:52:21 crc kubenswrapper[4698]: I0127 14:52:21.473148 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"8faab9a7-c335-4f18-8ea1-311b5c567b31","Type":"ContainerDied","Data":"1ba28a3c5406a4205db3c9d43453530f2a1f2986398a515780d64639677e778d"} Jan 27 14:52:21 crc kubenswrapper[4698]: I0127 14:52:21.473269 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 27 14:52:21 crc kubenswrapper[4698]: I0127 14:52:21.478616 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-9jhwb" event={"ID":"51ba2ef6-17ab-4974-a2c6-7f995343e24b","Type":"ContainerStarted","Data":"4df5c0943ab012c25673290e4c14b44b6814a4422d5308e388a20962116a9f96"} Jan 27 14:52:21 crc kubenswrapper[4698]: I0127 14:52:21.504593 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-sync-9jhwb" podStartSLOduration=3.47869281 podStartE2EDuration="1m11.504570508s" podCreationTimestamp="2026-01-27 14:51:10 +0000 UTC" firstStartedPulling="2026-01-27 14:51:12.600542714 +0000 UTC m=+1328.277320179" lastFinishedPulling="2026-01-27 14:52:20.626420412 +0000 UTC m=+1396.303197877" observedRunningTime="2026-01-27 14:52:21.497389849 +0000 UTC m=+1397.174167314" watchObservedRunningTime="2026-01-27 14:52:21.504570508 +0000 UTC m=+1397.181347973" Jan 27 14:52:21 crc kubenswrapper[4698]: I0127 14:52:21.542336 4698 scope.go:117] "RemoveContainer" containerID="842f5fc1fa155d5f971e7ba4f5aea68d2480f9ad3ab31cc84bae9534373fabfe" Jan 27 14:52:21 crc kubenswrapper[4698]: I0127 14:52:21.599657 4698 scope.go:117] "RemoveContainer" containerID="e6d3497714c0fd43e14ef722737597c644ef4f3849f3892ab90e0ee6f12c3a12" Jan 27 14:52:21 crc kubenswrapper[4698]: I0127 14:52:21.694695 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 27 14:52:21 crc kubenswrapper[4698]: I0127 14:52:21.698629 4698 scope.go:117] "RemoveContainer" containerID="cba21f280c9e6188102736ea92888cd8b94e57e2056c20c6a25399e517695a37" Jan 27 14:52:21 crc kubenswrapper[4698]: I0127 14:52:21.711684 4698 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 27 14:52:21 crc kubenswrapper[4698]: I0127 14:52:21.738692 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 27 14:52:21 crc kubenswrapper[4698]: 
I0127 14:52:21.752093 4698 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 27 14:52:21 crc kubenswrapper[4698]: I0127 14:52:21.761332 4698 scope.go:117] "RemoveContainer" containerID="3191ad124be801d51641db95a26fd5fe334822fe6a6d2722c7a4f718a2e4738b" Jan 27 14:52:21 crc kubenswrapper[4698]: I0127 14:52:21.769852 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Jan 27 14:52:21 crc kubenswrapper[4698]: E0127 14:52:21.770303 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8faab9a7-c335-4f18-8ea1-311b5c567b31" containerName="glance-log" Jan 27 14:52:21 crc kubenswrapper[4698]: I0127 14:52:21.770319 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="8faab9a7-c335-4f18-8ea1-311b5c567b31" containerName="glance-log" Jan 27 14:52:21 crc kubenswrapper[4698]: E0127 14:52:21.770330 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1cff3663-449e-40a4-890d-7eaff54a5973" containerName="watcher-api-log" Jan 27 14:52:21 crc kubenswrapper[4698]: I0127 14:52:21.770338 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="1cff3663-449e-40a4-890d-7eaff54a5973" containerName="watcher-api-log" Jan 27 14:52:21 crc kubenswrapper[4698]: E0127 14:52:21.770355 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1cff3663-449e-40a4-890d-7eaff54a5973" containerName="watcher-api" Jan 27 14:52:21 crc kubenswrapper[4698]: I0127 14:52:21.770363 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="1cff3663-449e-40a4-890d-7eaff54a5973" containerName="watcher-api" Jan 27 14:52:21 crc kubenswrapper[4698]: E0127 14:52:21.770375 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c82e845e-ad64-4e92-a6e6-061b14781155" containerName="glance-log" Jan 27 14:52:21 crc kubenswrapper[4698]: I0127 14:52:21.770382 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="c82e845e-ad64-4e92-a6e6-061b14781155" containerName="glance-log" Jan 27 14:52:21 crc kubenswrapper[4698]: E0127 14:52:21.770397 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8faab9a7-c335-4f18-8ea1-311b5c567b31" containerName="glance-httpd" Jan 27 14:52:21 crc kubenswrapper[4698]: I0127 14:52:21.770404 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="8faab9a7-c335-4f18-8ea1-311b5c567b31" containerName="glance-httpd" Jan 27 14:52:21 crc kubenswrapper[4698]: E0127 14:52:21.770417 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="705a412d-9e58-46df-8084-cb719c4c5b57" containerName="dnsmasq-dns" Jan 27 14:52:21 crc kubenswrapper[4698]: I0127 14:52:21.770424 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="705a412d-9e58-46df-8084-cb719c4c5b57" containerName="dnsmasq-dns" Jan 27 14:52:21 crc kubenswrapper[4698]: E0127 14:52:21.770450 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="705a412d-9e58-46df-8084-cb719c4c5b57" containerName="init" Jan 27 14:52:21 crc kubenswrapper[4698]: I0127 14:52:21.770456 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="705a412d-9e58-46df-8084-cb719c4c5b57" containerName="init" Jan 27 14:52:21 crc kubenswrapper[4698]: E0127 14:52:21.770467 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c82e845e-ad64-4e92-a6e6-061b14781155" containerName="glance-httpd" Jan 27 14:52:21 crc kubenswrapper[4698]: I0127 14:52:21.770473 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="c82e845e-ad64-4e92-a6e6-061b14781155" 
containerName="glance-httpd" Jan 27 14:52:21 crc kubenswrapper[4698]: I0127 14:52:21.770622 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="1cff3663-449e-40a4-890d-7eaff54a5973" containerName="watcher-api" Jan 27 14:52:21 crc kubenswrapper[4698]: I0127 14:52:21.770700 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="c82e845e-ad64-4e92-a6e6-061b14781155" containerName="glance-log" Jan 27 14:52:21 crc kubenswrapper[4698]: I0127 14:52:21.770717 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="8faab9a7-c335-4f18-8ea1-311b5c567b31" containerName="glance-log" Jan 27 14:52:21 crc kubenswrapper[4698]: I0127 14:52:21.770729 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="c82e845e-ad64-4e92-a6e6-061b14781155" containerName="glance-httpd" Jan 27 14:52:21 crc kubenswrapper[4698]: I0127 14:52:21.770745 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="705a412d-9e58-46df-8084-cb719c4c5b57" containerName="dnsmasq-dns" Jan 27 14:52:21 crc kubenswrapper[4698]: I0127 14:52:21.770756 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="8faab9a7-c335-4f18-8ea1-311b5c567b31" containerName="glance-httpd" Jan 27 14:52:21 crc kubenswrapper[4698]: I0127 14:52:21.770767 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="1cff3663-449e-40a4-890d-7eaff54a5973" containerName="watcher-api-log" Jan 27 14:52:21 crc kubenswrapper[4698]: I0127 14:52:21.771907 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 27 14:52:21 crc kubenswrapper[4698]: I0127 14:52:21.776028 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Jan 27 14:52:21 crc kubenswrapper[4698]: I0127 14:52:21.776688 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-qwq5p" Jan 27 14:52:21 crc kubenswrapper[4698]: I0127 14:52:21.777024 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Jan 27 14:52:21 crc kubenswrapper[4698]: I0127 14:52:21.777193 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Jan 27 14:52:21 crc kubenswrapper[4698]: I0127 14:52:21.780945 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7b755cc99f-wsqq5"] Jan 27 14:52:21 crc kubenswrapper[4698]: I0127 14:52:21.788115 4698 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7b755cc99f-wsqq5"] Jan 27 14:52:21 crc kubenswrapper[4698]: I0127 14:52:21.794288 4698 scope.go:117] "RemoveContainer" containerID="391e9ed54be68bba95bc943d4b1749bdd3985fdd08ada5ed5eaeb7e44becb0fc" Jan 27 14:52:21 crc kubenswrapper[4698]: I0127 14:52:21.797275 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 27 14:52:21 crc kubenswrapper[4698]: I0127 14:52:21.815989 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 27 14:52:21 crc kubenswrapper[4698]: I0127 14:52:21.824393 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 27 14:52:21 crc kubenswrapper[4698]: I0127 14:52:21.830478 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Jan 27 14:52:21 crc kubenswrapper[4698]: I0127 14:52:21.832566 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Jan 27 14:52:21 crc kubenswrapper[4698]: I0127 14:52:21.840987 4698 scope.go:117] "RemoveContainer" containerID="7908c5f5bb51ca9e91009870ba79de3a33e05f345a1e6b0c2094ea01be292652" Jan 27 14:52:21 crc kubenswrapper[4698]: I0127 14:52:21.845585 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/watcher-api-0"] Jan 27 14:52:21 crc kubenswrapper[4698]: I0127 14:52:21.870073 4698 scope.go:117] "RemoveContainer" containerID="cd084c36fc8374f3869ca9b11b23ec5905c44fc417b456d7cc76492044e2c15d" Jan 27 14:52:21 crc kubenswrapper[4698]: I0127 14:52:21.881409 4698 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/watcher-api-0"] Jan 27 14:52:21 crc kubenswrapper[4698]: I0127 14:52:21.890067 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 27 14:52:21 crc kubenswrapper[4698]: I0127 14:52:21.895513 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1c98c523-0dc6-487f-a279-47837df87b61-config-data\") pod \"glance-default-external-api-0\" (UID: \"1c98c523-0dc6-487f-a279-47837df87b61\") " pod="openstack/glance-default-external-api-0" Jan 27 14:52:21 crc kubenswrapper[4698]: I0127 14:52:21.895565 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1c98c523-0dc6-487f-a279-47837df87b61-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"1c98c523-0dc6-487f-a279-47837df87b61\") " pod="openstack/glance-default-external-api-0" Jan 27 14:52:21 crc kubenswrapper[4698]: I0127 14:52:21.895604 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-external-api-0\" (UID: \"1c98c523-0dc6-487f-a279-47837df87b61\") " pod="openstack/glance-default-external-api-0" Jan 27 14:52:21 crc kubenswrapper[4698]: I0127 14:52:21.895620 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1c98c523-0dc6-487f-a279-47837df87b61-logs\") pod \"glance-default-external-api-0\" (UID: \"1c98c523-0dc6-487f-a279-47837df87b61\") " pod="openstack/glance-default-external-api-0" Jan 27 14:52:21 crc kubenswrapper[4698]: I0127 14:52:21.895747 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6865d\" (UniqueName: \"kubernetes.io/projected/1c98c523-0dc6-487f-a279-47837df87b61-kube-api-access-6865d\") pod \"glance-default-external-api-0\" (UID: \"1c98c523-0dc6-487f-a279-47837df87b61\") " pod="openstack/glance-default-external-api-0" Jan 27 14:52:21 crc kubenswrapper[4698]: I0127 14:52:21.895772 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/1c98c523-0dc6-487f-a279-47837df87b61-httpd-run\") pod 
\"glance-default-external-api-0\" (UID: \"1c98c523-0dc6-487f-a279-47837df87b61\") " pod="openstack/glance-default-external-api-0" Jan 27 14:52:21 crc kubenswrapper[4698]: I0127 14:52:21.895786 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1c98c523-0dc6-487f-a279-47837df87b61-scripts\") pod \"glance-default-external-api-0\" (UID: \"1c98c523-0dc6-487f-a279-47837df87b61\") " pod="openstack/glance-default-external-api-0" Jan 27 14:52:21 crc kubenswrapper[4698]: I0127 14:52:21.895811 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1c98c523-0dc6-487f-a279-47837df87b61-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"1c98c523-0dc6-487f-a279-47837df87b61\") " pod="openstack/glance-default-external-api-0" Jan 27 14:52:21 crc kubenswrapper[4698]: I0127 14:52:21.898276 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/watcher-api-0"] Jan 27 14:52:21 crc kubenswrapper[4698]: I0127 14:52:21.901087 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-api-0" Jan 27 14:52:21 crc kubenswrapper[4698]: I0127 14:52:21.903298 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-watcher-public-svc" Jan 27 14:52:21 crc kubenswrapper[4698]: I0127 14:52:21.903805 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-api-config-data" Jan 27 14:52:21 crc kubenswrapper[4698]: I0127 14:52:21.904256 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-watcher-internal-svc" Jan 27 14:52:21 crc kubenswrapper[4698]: I0127 14:52:21.915870 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-api-0"] Jan 27 14:52:21 crc kubenswrapper[4698]: I0127 14:52:21.997352 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/ac75c7a7-7556-4c40-bace-beafefc7a3cd-custom-prometheus-ca\") pod \"watcher-api-0\" (UID: \"ac75c7a7-7556-4c40-bace-beafefc7a3cd\") " pod="openstack/watcher-api-0" Jan 27 14:52:21 crc kubenswrapper[4698]: I0127 14:52:21.997448 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jvlzn\" (UniqueName: \"kubernetes.io/projected/803378a1-8dbd-4540-8e07-8b9d6fc29c6b-kube-api-access-jvlzn\") pod \"glance-default-internal-api-0\" (UID: \"803378a1-8dbd-4540-8e07-8b9d6fc29c6b\") " pod="openstack/glance-default-internal-api-0" Jan 27 14:52:21 crc kubenswrapper[4698]: I0127 14:52:21.997480 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/803378a1-8dbd-4540-8e07-8b9d6fc29c6b-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"803378a1-8dbd-4540-8e07-8b9d6fc29c6b\") " pod="openstack/glance-default-internal-api-0" Jan 27 14:52:21 crc kubenswrapper[4698]: I0127 14:52:21.997586 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1c98c523-0dc6-487f-a279-47837df87b61-config-data\") pod \"glance-default-external-api-0\" (UID: \"1c98c523-0dc6-487f-a279-47837df87b61\") " pod="openstack/glance-default-external-api-0" Jan 27 14:52:21 crc 
kubenswrapper[4698]: I0127 14:52:21.997676 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-internal-api-0\" (UID: \"803378a1-8dbd-4540-8e07-8b9d6fc29c6b\") " pod="openstack/glance-default-internal-api-0" Jan 27 14:52:21 crc kubenswrapper[4698]: I0127 14:52:21.997706 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1c98c523-0dc6-487f-a279-47837df87b61-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"1c98c523-0dc6-487f-a279-47837df87b61\") " pod="openstack/glance-default-external-api-0" Jan 27 14:52:21 crc kubenswrapper[4698]: I0127 14:52:21.997738 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qkmzg\" (UniqueName: \"kubernetes.io/projected/ac75c7a7-7556-4c40-bace-beafefc7a3cd-kube-api-access-qkmzg\") pod \"watcher-api-0\" (UID: \"ac75c7a7-7556-4c40-bace-beafefc7a3cd\") " pod="openstack/watcher-api-0" Jan 27 14:52:21 crc kubenswrapper[4698]: I0127 14:52:21.997763 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ac75c7a7-7556-4c40-bace-beafefc7a3cd-logs\") pod \"watcher-api-0\" (UID: \"ac75c7a7-7556-4c40-bace-beafefc7a3cd\") " pod="openstack/watcher-api-0" Jan 27 14:52:21 crc kubenswrapper[4698]: I0127 14:52:21.998089 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/803378a1-8dbd-4540-8e07-8b9d6fc29c6b-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"803378a1-8dbd-4540-8e07-8b9d6fc29c6b\") " pod="openstack/glance-default-internal-api-0" Jan 27 14:52:21 crc kubenswrapper[4698]: I0127 14:52:21.998148 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-external-api-0\" (UID: \"1c98c523-0dc6-487f-a279-47837df87b61\") " pod="openstack/glance-default-external-api-0" Jan 27 14:52:21 crc kubenswrapper[4698]: I0127 14:52:21.998178 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1c98c523-0dc6-487f-a279-47837df87b61-logs\") pod \"glance-default-external-api-0\" (UID: \"1c98c523-0dc6-487f-a279-47837df87b61\") " pod="openstack/glance-default-external-api-0" Jan 27 14:52:21 crc kubenswrapper[4698]: I0127 14:52:21.998274 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/803378a1-8dbd-4540-8e07-8b9d6fc29c6b-config-data\") pod \"glance-default-internal-api-0\" (UID: \"803378a1-8dbd-4540-8e07-8b9d6fc29c6b\") " pod="openstack/glance-default-internal-api-0" Jan 27 14:52:21 crc kubenswrapper[4698]: I0127 14:52:21.998338 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ac75c7a7-7556-4c40-bace-beafefc7a3cd-config-data\") pod \"watcher-api-0\" (UID: \"ac75c7a7-7556-4c40-bace-beafefc7a3cd\") " pod="openstack/watcher-api-0" Jan 27 14:52:21 crc kubenswrapper[4698]: I0127 14:52:21.998395 4698 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ac75c7a7-7556-4c40-bace-beafefc7a3cd-combined-ca-bundle\") pod \"watcher-api-0\" (UID: \"ac75c7a7-7556-4c40-bace-beafefc7a3cd\") " pod="openstack/watcher-api-0" Jan 27 14:52:21 crc kubenswrapper[4698]: I0127 14:52:21.998528 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ac75c7a7-7556-4c40-bace-beafefc7a3cd-internal-tls-certs\") pod \"watcher-api-0\" (UID: \"ac75c7a7-7556-4c40-bace-beafefc7a3cd\") " pod="openstack/watcher-api-0" Jan 27 14:52:21 crc kubenswrapper[4698]: I0127 14:52:21.998588 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6865d\" (UniqueName: \"kubernetes.io/projected/1c98c523-0dc6-487f-a279-47837df87b61-kube-api-access-6865d\") pod \"glance-default-external-api-0\" (UID: \"1c98c523-0dc6-487f-a279-47837df87b61\") " pod="openstack/glance-default-external-api-0" Jan 27 14:52:21 crc kubenswrapper[4698]: I0127 14:52:21.998777 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1c98c523-0dc6-487f-a279-47837df87b61-logs\") pod \"glance-default-external-api-0\" (UID: \"1c98c523-0dc6-487f-a279-47837df87b61\") " pod="openstack/glance-default-external-api-0" Jan 27 14:52:21 crc kubenswrapper[4698]: I0127 14:52:21.998793 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/1c98c523-0dc6-487f-a279-47837df87b61-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"1c98c523-0dc6-487f-a279-47837df87b61\") " pod="openstack/glance-default-external-api-0" Jan 27 14:52:21 crc kubenswrapper[4698]: I0127 14:52:21.998830 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1c98c523-0dc6-487f-a279-47837df87b61-scripts\") pod \"glance-default-external-api-0\" (UID: \"1c98c523-0dc6-487f-a279-47837df87b61\") " pod="openstack/glance-default-external-api-0" Jan 27 14:52:21 crc kubenswrapper[4698]: I0127 14:52:21.998880 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1c98c523-0dc6-487f-a279-47837df87b61-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"1c98c523-0dc6-487f-a279-47837df87b61\") " pod="openstack/glance-default-external-api-0" Jan 27 14:52:21 crc kubenswrapper[4698]: I0127 14:52:21.998913 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ac75c7a7-7556-4c40-bace-beafefc7a3cd-public-tls-certs\") pod \"watcher-api-0\" (UID: \"ac75c7a7-7556-4c40-bace-beafefc7a3cd\") " pod="openstack/watcher-api-0" Jan 27 14:52:21 crc kubenswrapper[4698]: I0127 14:52:21.998936 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/803378a1-8dbd-4540-8e07-8b9d6fc29c6b-logs\") pod \"glance-default-internal-api-0\" (UID: \"803378a1-8dbd-4540-8e07-8b9d6fc29c6b\") " pod="openstack/glance-default-internal-api-0" Jan 27 14:52:21 crc kubenswrapper[4698]: I0127 14:52:21.998960 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/803378a1-8dbd-4540-8e07-8b9d6fc29c6b-scripts\") pod \"glance-default-internal-api-0\" (UID: \"803378a1-8dbd-4540-8e07-8b9d6fc29c6b\") " pod="openstack/glance-default-internal-api-0" Jan 27 14:52:21 crc kubenswrapper[4698]: I0127 14:52:21.998980 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/803378a1-8dbd-4540-8e07-8b9d6fc29c6b-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"803378a1-8dbd-4540-8e07-8b9d6fc29c6b\") " pod="openstack/glance-default-internal-api-0" Jan 27 14:52:21 crc kubenswrapper[4698]: I0127 14:52:21.999209 4698 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-external-api-0\" (UID: \"1c98c523-0dc6-487f-a279-47837df87b61\") device mount path \"/mnt/openstack/pv03\"" pod="openstack/glance-default-external-api-0" Jan 27 14:52:21 crc kubenswrapper[4698]: I0127 14:52:21.999893 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/1c98c523-0dc6-487f-a279-47837df87b61-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"1c98c523-0dc6-487f-a279-47837df87b61\") " pod="openstack/glance-default-external-api-0" Jan 27 14:52:22 crc kubenswrapper[4698]: I0127 14:52:22.006210 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1c98c523-0dc6-487f-a279-47837df87b61-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"1c98c523-0dc6-487f-a279-47837df87b61\") " pod="openstack/glance-default-external-api-0" Jan 27 14:52:22 crc kubenswrapper[4698]: I0127 14:52:22.006340 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1c98c523-0dc6-487f-a279-47837df87b61-scripts\") pod \"glance-default-external-api-0\" (UID: \"1c98c523-0dc6-487f-a279-47837df87b61\") " pod="openstack/glance-default-external-api-0" Jan 27 14:52:22 crc kubenswrapper[4698]: I0127 14:52:22.006582 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1c98c523-0dc6-487f-a279-47837df87b61-config-data\") pod \"glance-default-external-api-0\" (UID: \"1c98c523-0dc6-487f-a279-47837df87b61\") " pod="openstack/glance-default-external-api-0" Jan 27 14:52:22 crc kubenswrapper[4698]: I0127 14:52:22.014411 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1c98c523-0dc6-487f-a279-47837df87b61-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"1c98c523-0dc6-487f-a279-47837df87b61\") " pod="openstack/glance-default-external-api-0" Jan 27 14:52:22 crc kubenswrapper[4698]: I0127 14:52:22.032364 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6865d\" (UniqueName: \"kubernetes.io/projected/1c98c523-0dc6-487f-a279-47837df87b61-kube-api-access-6865d\") pod \"glance-default-external-api-0\" (UID: \"1c98c523-0dc6-487f-a279-47837df87b61\") " pod="openstack/glance-default-external-api-0" Jan 27 14:52:22 crc kubenswrapper[4698]: I0127 14:52:22.033431 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod 
\"glance-default-external-api-0\" (UID: \"1c98c523-0dc6-487f-a279-47837df87b61\") " pod="openstack/glance-default-external-api-0" Jan 27 14:52:22 crc kubenswrapper[4698]: I0127 14:52:22.100619 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ac75c7a7-7556-4c40-bace-beafefc7a3cd-config-data\") pod \"watcher-api-0\" (UID: \"ac75c7a7-7556-4c40-bace-beafefc7a3cd\") " pod="openstack/watcher-api-0" Jan 27 14:52:22 crc kubenswrapper[4698]: I0127 14:52:22.100974 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ac75c7a7-7556-4c40-bace-beafefc7a3cd-combined-ca-bundle\") pod \"watcher-api-0\" (UID: \"ac75c7a7-7556-4c40-bace-beafefc7a3cd\") " pod="openstack/watcher-api-0" Jan 27 14:52:22 crc kubenswrapper[4698]: I0127 14:52:22.101027 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ac75c7a7-7556-4c40-bace-beafefc7a3cd-internal-tls-certs\") pod \"watcher-api-0\" (UID: \"ac75c7a7-7556-4c40-bace-beafefc7a3cd\") " pod="openstack/watcher-api-0" Jan 27 14:52:22 crc kubenswrapper[4698]: I0127 14:52:22.101071 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ac75c7a7-7556-4c40-bace-beafefc7a3cd-public-tls-certs\") pod \"watcher-api-0\" (UID: \"ac75c7a7-7556-4c40-bace-beafefc7a3cd\") " pod="openstack/watcher-api-0" Jan 27 14:52:22 crc kubenswrapper[4698]: I0127 14:52:22.101096 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/803378a1-8dbd-4540-8e07-8b9d6fc29c6b-logs\") pod \"glance-default-internal-api-0\" (UID: \"803378a1-8dbd-4540-8e07-8b9d6fc29c6b\") " pod="openstack/glance-default-internal-api-0" Jan 27 14:52:22 crc kubenswrapper[4698]: I0127 14:52:22.101115 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/803378a1-8dbd-4540-8e07-8b9d6fc29c6b-scripts\") pod \"glance-default-internal-api-0\" (UID: \"803378a1-8dbd-4540-8e07-8b9d6fc29c6b\") " pod="openstack/glance-default-internal-api-0" Jan 27 14:52:22 crc kubenswrapper[4698]: I0127 14:52:22.101144 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/803378a1-8dbd-4540-8e07-8b9d6fc29c6b-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"803378a1-8dbd-4540-8e07-8b9d6fc29c6b\") " pod="openstack/glance-default-internal-api-0" Jan 27 14:52:22 crc kubenswrapper[4698]: I0127 14:52:22.101179 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/ac75c7a7-7556-4c40-bace-beafefc7a3cd-custom-prometheus-ca\") pod \"watcher-api-0\" (UID: \"ac75c7a7-7556-4c40-bace-beafefc7a3cd\") " pod="openstack/watcher-api-0" Jan 27 14:52:22 crc kubenswrapper[4698]: I0127 14:52:22.101235 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jvlzn\" (UniqueName: \"kubernetes.io/projected/803378a1-8dbd-4540-8e07-8b9d6fc29c6b-kube-api-access-jvlzn\") pod \"glance-default-internal-api-0\" (UID: \"803378a1-8dbd-4540-8e07-8b9d6fc29c6b\") " pod="openstack/glance-default-internal-api-0" Jan 27 14:52:22 crc kubenswrapper[4698]: I0127 14:52:22.101259 4698 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/803378a1-8dbd-4540-8e07-8b9d6fc29c6b-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"803378a1-8dbd-4540-8e07-8b9d6fc29c6b\") " pod="openstack/glance-default-internal-api-0" Jan 27 14:52:22 crc kubenswrapper[4698]: I0127 14:52:22.101291 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-internal-api-0\" (UID: \"803378a1-8dbd-4540-8e07-8b9d6fc29c6b\") " pod="openstack/glance-default-internal-api-0" Jan 27 14:52:22 crc kubenswrapper[4698]: I0127 14:52:22.101309 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qkmzg\" (UniqueName: \"kubernetes.io/projected/ac75c7a7-7556-4c40-bace-beafefc7a3cd-kube-api-access-qkmzg\") pod \"watcher-api-0\" (UID: \"ac75c7a7-7556-4c40-bace-beafefc7a3cd\") " pod="openstack/watcher-api-0" Jan 27 14:52:22 crc kubenswrapper[4698]: I0127 14:52:22.101322 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ac75c7a7-7556-4c40-bace-beafefc7a3cd-logs\") pod \"watcher-api-0\" (UID: \"ac75c7a7-7556-4c40-bace-beafefc7a3cd\") " pod="openstack/watcher-api-0" Jan 27 14:52:22 crc kubenswrapper[4698]: I0127 14:52:22.101353 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/803378a1-8dbd-4540-8e07-8b9d6fc29c6b-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"803378a1-8dbd-4540-8e07-8b9d6fc29c6b\") " pod="openstack/glance-default-internal-api-0" Jan 27 14:52:22 crc kubenswrapper[4698]: I0127 14:52:22.101394 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/803378a1-8dbd-4540-8e07-8b9d6fc29c6b-config-data\") pod \"glance-default-internal-api-0\" (UID: \"803378a1-8dbd-4540-8e07-8b9d6fc29c6b\") " pod="openstack/glance-default-internal-api-0" Jan 27 14:52:22 crc kubenswrapper[4698]: I0127 14:52:22.103802 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 27 14:52:22 crc kubenswrapper[4698]: I0127 14:52:22.104288 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/803378a1-8dbd-4540-8e07-8b9d6fc29c6b-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"803378a1-8dbd-4540-8e07-8b9d6fc29c6b\") " pod="openstack/glance-default-internal-api-0" Jan 27 14:52:22 crc kubenswrapper[4698]: I0127 14:52:22.104281 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/803378a1-8dbd-4540-8e07-8b9d6fc29c6b-logs\") pod \"glance-default-internal-api-0\" (UID: \"803378a1-8dbd-4540-8e07-8b9d6fc29c6b\") " pod="openstack/glance-default-internal-api-0" Jan 27 14:52:22 crc kubenswrapper[4698]: I0127 14:52:22.104702 4698 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-internal-api-0\" (UID: \"803378a1-8dbd-4540-8e07-8b9d6fc29c6b\") device mount path \"/mnt/openstack/pv06\"" pod="openstack/glance-default-internal-api-0" Jan 27 14:52:22 crc kubenswrapper[4698]: I0127 14:52:22.107116 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/803378a1-8dbd-4540-8e07-8b9d6fc29c6b-scripts\") pod \"glance-default-internal-api-0\" (UID: \"803378a1-8dbd-4540-8e07-8b9d6fc29c6b\") " pod="openstack/glance-default-internal-api-0" Jan 27 14:52:22 crc kubenswrapper[4698]: I0127 14:52:22.118342 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/ac75c7a7-7556-4c40-bace-beafefc7a3cd-custom-prometheus-ca\") pod \"watcher-api-0\" (UID: \"ac75c7a7-7556-4c40-bace-beafefc7a3cd\") " pod="openstack/watcher-api-0" Jan 27 14:52:22 crc kubenswrapper[4698]: I0127 14:52:22.118817 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ac75c7a7-7556-4c40-bace-beafefc7a3cd-logs\") pod \"watcher-api-0\" (UID: \"ac75c7a7-7556-4c40-bace-beafefc7a3cd\") " pod="openstack/watcher-api-0" Jan 27 14:52:22 crc kubenswrapper[4698]: I0127 14:52:22.119503 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/803378a1-8dbd-4540-8e07-8b9d6fc29c6b-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"803378a1-8dbd-4540-8e07-8b9d6fc29c6b\") " pod="openstack/glance-default-internal-api-0" Jan 27 14:52:22 crc kubenswrapper[4698]: I0127 14:52:22.120475 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ac75c7a7-7556-4c40-bace-beafefc7a3cd-config-data\") pod \"watcher-api-0\" (UID: \"ac75c7a7-7556-4c40-bace-beafefc7a3cd\") " pod="openstack/watcher-api-0" Jan 27 14:52:22 crc kubenswrapper[4698]: I0127 14:52:22.120584 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/803378a1-8dbd-4540-8e07-8b9d6fc29c6b-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"803378a1-8dbd-4540-8e07-8b9d6fc29c6b\") " pod="openstack/glance-default-internal-api-0" Jan 27 14:52:22 crc kubenswrapper[4698]: I0127 14:52:22.122487 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" 
(UniqueName: \"kubernetes.io/secret/ac75c7a7-7556-4c40-bace-beafefc7a3cd-public-tls-certs\") pod \"watcher-api-0\" (UID: \"ac75c7a7-7556-4c40-bace-beafefc7a3cd\") " pod="openstack/watcher-api-0" Jan 27 14:52:22 crc kubenswrapper[4698]: I0127 14:52:22.124051 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/803378a1-8dbd-4540-8e07-8b9d6fc29c6b-config-data\") pod \"glance-default-internal-api-0\" (UID: \"803378a1-8dbd-4540-8e07-8b9d6fc29c6b\") " pod="openstack/glance-default-internal-api-0" Jan 27 14:52:22 crc kubenswrapper[4698]: I0127 14:52:22.124444 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ac75c7a7-7556-4c40-bace-beafefc7a3cd-combined-ca-bundle\") pod \"watcher-api-0\" (UID: \"ac75c7a7-7556-4c40-bace-beafefc7a3cd\") " pod="openstack/watcher-api-0" Jan 27 14:52:22 crc kubenswrapper[4698]: I0127 14:52:22.125804 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ac75c7a7-7556-4c40-bace-beafefc7a3cd-internal-tls-certs\") pod \"watcher-api-0\" (UID: \"ac75c7a7-7556-4c40-bace-beafefc7a3cd\") " pod="openstack/watcher-api-0" Jan 27 14:52:22 crc kubenswrapper[4698]: I0127 14:52:22.128426 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jvlzn\" (UniqueName: \"kubernetes.io/projected/803378a1-8dbd-4540-8e07-8b9d6fc29c6b-kube-api-access-jvlzn\") pod \"glance-default-internal-api-0\" (UID: \"803378a1-8dbd-4540-8e07-8b9d6fc29c6b\") " pod="openstack/glance-default-internal-api-0" Jan 27 14:52:22 crc kubenswrapper[4698]: I0127 14:52:22.129035 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qkmzg\" (UniqueName: \"kubernetes.io/projected/ac75c7a7-7556-4c40-bace-beafefc7a3cd-kube-api-access-qkmzg\") pod \"watcher-api-0\" (UID: \"ac75c7a7-7556-4c40-bace-beafefc7a3cd\") " pod="openstack/watcher-api-0" Jan 27 14:52:22 crc kubenswrapper[4698]: I0127 14:52:22.150903 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-internal-api-0\" (UID: \"803378a1-8dbd-4540-8e07-8b9d6fc29c6b\") " pod="openstack/glance-default-internal-api-0" Jan 27 14:52:22 crc kubenswrapper[4698]: I0127 14:52:22.172949 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/horizon-54778bbf88-5qkzn" Jan 27 14:52:22 crc kubenswrapper[4698]: I0127 14:52:22.223847 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-api-0" Jan 27 14:52:22 crc kubenswrapper[4698]: I0127 14:52:22.451793 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 27 14:52:22 crc kubenswrapper[4698]: I0127 14:52:22.502278 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-mcnmn" event={"ID":"74946770-13e5-4777-a645-bb6bee73c277","Type":"ContainerStarted","Data":"52df319c8fd5806e1b6d043e0c56391797aa95b270b1a3ecdf734c7dec22e5f1"} Jan 27 14:52:22 crc kubenswrapper[4698]: I0127 14:52:22.533541 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-sync-mcnmn" podStartSLOduration=4.529694368 podStartE2EDuration="1m12.533517945s" podCreationTimestamp="2026-01-27 14:51:10 +0000 UTC" firstStartedPulling="2026-01-27 14:51:12.621019963 +0000 UTC m=+1328.297797428" lastFinishedPulling="2026-01-27 14:52:20.62484354 +0000 UTC m=+1396.301621005" observedRunningTime="2026-01-27 14:52:22.527462585 +0000 UTC m=+1398.204240050" watchObservedRunningTime="2026-01-27 14:52:22.533517945 +0000 UTC m=+1398.210295410" Jan 27 14:52:22 crc kubenswrapper[4698]: I0127 14:52:22.538503 4698 generic.go:334] "Generic (PLEG): container finished" podID="2fcd6ee1-8d2c-490d-8fd4-b582c497f336" containerID="d058386674e08a8c4f0250d995ae1b6ee9fdffdc7441edd08de85c6565e35ff7" exitCode=0 Jan 27 14:52:22 crc kubenswrapper[4698]: I0127 14:52:22.538574 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-pr5rl" event={"ID":"2fcd6ee1-8d2c-490d-8fd4-b582c497f336","Type":"ContainerDied","Data":"d058386674e08a8c4f0250d995ae1b6ee9fdffdc7441edd08de85c6565e35ff7"} Jan 27 14:52:22 crc kubenswrapper[4698]: I0127 14:52:22.621684 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/horizon-58b77c584b-9tl65" Jan 27 14:52:22 crc kubenswrapper[4698]: I0127 14:52:22.723582 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 27 14:52:22 crc kubenswrapper[4698]: I0127 14:52:22.734773 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-api-0"] Jan 27 14:52:23 crc kubenswrapper[4698]: I0127 14:52:23.006301 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1cff3663-449e-40a4-890d-7eaff54a5973" path="/var/lib/kubelet/pods/1cff3663-449e-40a4-890d-7eaff54a5973/volumes" Jan 27 14:52:23 crc kubenswrapper[4698]: I0127 14:52:23.007380 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="705a412d-9e58-46df-8084-cb719c4c5b57" path="/var/lib/kubelet/pods/705a412d-9e58-46df-8084-cb719c4c5b57/volumes" Jan 27 14:52:23 crc kubenswrapper[4698]: I0127 14:52:23.008200 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8faab9a7-c335-4f18-8ea1-311b5c567b31" path="/var/lib/kubelet/pods/8faab9a7-c335-4f18-8ea1-311b5c567b31/volumes" Jan 27 14:52:23 crc kubenswrapper[4698]: I0127 14:52:23.009593 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c82e845e-ad64-4e92-a6e6-061b14781155" path="/var/lib/kubelet/pods/c82e845e-ad64-4e92-a6e6-061b14781155/volumes" Jan 27 14:52:23 crc kubenswrapper[4698]: I0127 14:52:23.088224 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 27 14:52:23 crc kubenswrapper[4698]: I0127 14:52:23.575227 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"803378a1-8dbd-4540-8e07-8b9d6fc29c6b","Type":"ContainerStarted","Data":"ff3a58ca7cb001f14463c65798e61aef3a2184b8258dfd78cc9e3ae37d01b336"} Jan 27 14:52:23 crc 
kubenswrapper[4698]: I0127 14:52:23.584514 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"1c98c523-0dc6-487f-a279-47837df87b61","Type":"ContainerStarted","Data":"f475ddfd2dced71257c2298994783a72cc02be3fe5f424507a519767978147be"} Jan 27 14:52:23 crc kubenswrapper[4698]: I0127 14:52:23.584575 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"1c98c523-0dc6-487f-a279-47837df87b61","Type":"ContainerStarted","Data":"0b14c322ec290ef2f535f17e195aa4767fef98eeb591ac098f93e527686d1cf8"} Jan 27 14:52:23 crc kubenswrapper[4698]: I0127 14:52:23.594766 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"ac75c7a7-7556-4c40-bace-beafefc7a3cd","Type":"ContainerStarted","Data":"332fd69df7080205be165ebc0d022d791229f704e908debfa02a8f5db7a4a3a4"} Jan 27 14:52:23 crc kubenswrapper[4698]: I0127 14:52:23.594814 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"ac75c7a7-7556-4c40-bace-beafefc7a3cd","Type":"ContainerStarted","Data":"afd1791ab509f38b7dafb81c965aa86d01486c8f87da133a176d4b5a09f9069f"} Jan 27 14:52:23 crc kubenswrapper[4698]: I0127 14:52:23.594830 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"ac75c7a7-7556-4c40-bace-beafefc7a3cd","Type":"ContainerStarted","Data":"80242fb032edb8711ec10d247b5b3ccdd55a121eba98c076764921dd426fff56"} Jan 27 14:52:23 crc kubenswrapper[4698]: I0127 14:52:23.595073 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-api-0" Jan 27 14:52:23 crc kubenswrapper[4698]: I0127 14:52:23.625772 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/watcher-api-0" podStartSLOduration=2.6257354680000002 podStartE2EDuration="2.625735468s" podCreationTimestamp="2026-01-27 14:52:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:52:23.624813594 +0000 UTC m=+1399.301591079" watchObservedRunningTime="2026-01-27 14:52:23.625735468 +0000 UTC m=+1399.302512943" Jan 27 14:52:24 crc kubenswrapper[4698]: I0127 14:52:24.150025 4698 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-pr5rl" Jan 27 14:52:24 crc kubenswrapper[4698]: I0127 14:52:24.177017 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2fcd6ee1-8d2c-490d-8fd4-b582c497f336-config-data\") pod \"2fcd6ee1-8d2c-490d-8fd4-b582c497f336\" (UID: \"2fcd6ee1-8d2c-490d-8fd4-b582c497f336\") " Jan 27 14:52:24 crc kubenswrapper[4698]: I0127 14:52:24.177189 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnwhb\" (UniqueName: \"kubernetes.io/projected/2fcd6ee1-8d2c-490d-8fd4-b582c497f336-kube-api-access-mnwhb\") pod \"2fcd6ee1-8d2c-490d-8fd4-b582c497f336\" (UID: \"2fcd6ee1-8d2c-490d-8fd4-b582c497f336\") " Jan 27 14:52:24 crc kubenswrapper[4698]: I0127 14:52:24.177277 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/2fcd6ee1-8d2c-490d-8fd4-b582c497f336-fernet-keys\") pod \"2fcd6ee1-8d2c-490d-8fd4-b582c497f336\" (UID: \"2fcd6ee1-8d2c-490d-8fd4-b582c497f336\") " Jan 27 14:52:24 crc kubenswrapper[4698]: I0127 14:52:24.177338 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2fcd6ee1-8d2c-490d-8fd4-b582c497f336-scripts\") pod \"2fcd6ee1-8d2c-490d-8fd4-b582c497f336\" (UID: \"2fcd6ee1-8d2c-490d-8fd4-b582c497f336\") " Jan 27 14:52:24 crc kubenswrapper[4698]: I0127 14:52:24.177401 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/2fcd6ee1-8d2c-490d-8fd4-b582c497f336-credential-keys\") pod \"2fcd6ee1-8d2c-490d-8fd4-b582c497f336\" (UID: \"2fcd6ee1-8d2c-490d-8fd4-b582c497f336\") " Jan 27 14:52:24 crc kubenswrapper[4698]: I0127 14:52:24.177443 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2fcd6ee1-8d2c-490d-8fd4-b582c497f336-combined-ca-bundle\") pod \"2fcd6ee1-8d2c-490d-8fd4-b582c497f336\" (UID: \"2fcd6ee1-8d2c-490d-8fd4-b582c497f336\") " Jan 27 14:52:24 crc kubenswrapper[4698]: I0127 14:52:24.186891 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2fcd6ee1-8d2c-490d-8fd4-b582c497f336-kube-api-access-mnwhb" (OuterVolumeSpecName: "kube-api-access-mnwhb") pod "2fcd6ee1-8d2c-490d-8fd4-b582c497f336" (UID: "2fcd6ee1-8d2c-490d-8fd4-b582c497f336"). InnerVolumeSpecName "kube-api-access-mnwhb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:52:24 crc kubenswrapper[4698]: I0127 14:52:24.188491 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2fcd6ee1-8d2c-490d-8fd4-b582c497f336-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "2fcd6ee1-8d2c-490d-8fd4-b582c497f336" (UID: "2fcd6ee1-8d2c-490d-8fd4-b582c497f336"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:52:24 crc kubenswrapper[4698]: I0127 14:52:24.192879 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2fcd6ee1-8d2c-490d-8fd4-b582c497f336-scripts" (OuterVolumeSpecName: "scripts") pod "2fcd6ee1-8d2c-490d-8fd4-b582c497f336" (UID: "2fcd6ee1-8d2c-490d-8fd4-b582c497f336"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:52:24 crc kubenswrapper[4698]: I0127 14:52:24.197824 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2fcd6ee1-8d2c-490d-8fd4-b582c497f336-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "2fcd6ee1-8d2c-490d-8fd4-b582c497f336" (UID: "2fcd6ee1-8d2c-490d-8fd4-b582c497f336"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:52:24 crc kubenswrapper[4698]: I0127 14:52:24.214334 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2fcd6ee1-8d2c-490d-8fd4-b582c497f336-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2fcd6ee1-8d2c-490d-8fd4-b582c497f336" (UID: "2fcd6ee1-8d2c-490d-8fd4-b582c497f336"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:52:24 crc kubenswrapper[4698]: I0127 14:52:24.223423 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2fcd6ee1-8d2c-490d-8fd4-b582c497f336-config-data" (OuterVolumeSpecName: "config-data") pod "2fcd6ee1-8d2c-490d-8fd4-b582c497f336" (UID: "2fcd6ee1-8d2c-490d-8fd4-b582c497f336"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:52:24 crc kubenswrapper[4698]: I0127 14:52:24.279059 4698 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/2fcd6ee1-8d2c-490d-8fd4-b582c497f336-credential-keys\") on node \"crc\" DevicePath \"\"" Jan 27 14:52:24 crc kubenswrapper[4698]: I0127 14:52:24.279092 4698 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2fcd6ee1-8d2c-490d-8fd4-b582c497f336-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 14:52:24 crc kubenswrapper[4698]: I0127 14:52:24.279101 4698 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2fcd6ee1-8d2c-490d-8fd4-b582c497f336-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 14:52:24 crc kubenswrapper[4698]: I0127 14:52:24.279109 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnwhb\" (UniqueName: \"kubernetes.io/projected/2fcd6ee1-8d2c-490d-8fd4-b582c497f336-kube-api-access-mnwhb\") on node \"crc\" DevicePath \"\"" Jan 27 14:52:24 crc kubenswrapper[4698]: I0127 14:52:24.279120 4698 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/2fcd6ee1-8d2c-490d-8fd4-b582c497f336-fernet-keys\") on node \"crc\" DevicePath \"\"" Jan 27 14:52:24 crc kubenswrapper[4698]: I0127 14:52:24.279129 4698 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2fcd6ee1-8d2c-490d-8fd4-b582c497f336-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 14:52:24 crc kubenswrapper[4698]: I0127 14:52:24.610735 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-pr5rl" event={"ID":"2fcd6ee1-8d2c-490d-8fd4-b582c497f336","Type":"ContainerDied","Data":"44ddfbf7295a96c64ea12bbd5051b08bb0a448ad0c444b3ff9d42a706ffa5cd7"} Jan 27 14:52:24 crc kubenswrapper[4698]: I0127 14:52:24.611008 4698 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="44ddfbf7295a96c64ea12bbd5051b08bb0a448ad0c444b3ff9d42a706ffa5cd7" Jan 27 14:52:24 crc kubenswrapper[4698]: I0127 14:52:24.610738 4698 util.go:48] 
"No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-pr5rl" Jan 27 14:52:24 crc kubenswrapper[4698]: I0127 14:52:24.613792 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"803378a1-8dbd-4540-8e07-8b9d6fc29c6b","Type":"ContainerStarted","Data":"4cbd31462283703c3ca2ab8011b320af50638594665ca991f17b3cc1e3f582b5"} Jan 27 14:52:24 crc kubenswrapper[4698]: I0127 14:52:24.617368 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"1c98c523-0dc6-487f-a279-47837df87b61","Type":"ContainerStarted","Data":"12a57213ca737ce5381887b439702260fc2192b36b4211eace579e843e648443"} Jan 27 14:52:24 crc kubenswrapper[4698]: I0127 14:52:24.656323 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=3.656240767 podStartE2EDuration="3.656240767s" podCreationTimestamp="2026-01-27 14:52:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:52:24.645207716 +0000 UTC m=+1400.321985181" watchObservedRunningTime="2026-01-27 14:52:24.656240767 +0000 UTC m=+1400.333018252" Jan 27 14:52:24 crc kubenswrapper[4698]: I0127 14:52:24.875603 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-646ddbcdff-wcvmz"] Jan 27 14:52:24 crc kubenswrapper[4698]: E0127 14:52:24.876144 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2fcd6ee1-8d2c-490d-8fd4-b582c497f336" containerName="keystone-bootstrap" Jan 27 14:52:24 crc kubenswrapper[4698]: I0127 14:52:24.876162 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="2fcd6ee1-8d2c-490d-8fd4-b582c497f336" containerName="keystone-bootstrap" Jan 27 14:52:24 crc kubenswrapper[4698]: I0127 14:52:24.876415 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="2fcd6ee1-8d2c-490d-8fd4-b582c497f336" containerName="keystone-bootstrap" Jan 27 14:52:24 crc kubenswrapper[4698]: I0127 14:52:24.877305 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-646ddbcdff-wcvmz" Jan 27 14:52:24 crc kubenswrapper[4698]: I0127 14:52:24.884908 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-internal-svc" Jan 27 14:52:24 crc kubenswrapper[4698]: I0127 14:52:24.885128 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 27 14:52:24 crc kubenswrapper[4698]: I0127 14:52:24.885220 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-public-svc" Jan 27 14:52:24 crc kubenswrapper[4698]: I0127 14:52:24.885304 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 27 14:52:24 crc kubenswrapper[4698]: I0127 14:52:24.885400 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-nk52t" Jan 27 14:52:24 crc kubenswrapper[4698]: I0127 14:52:24.885848 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 27 14:52:24 crc kubenswrapper[4698]: I0127 14:52:24.898087 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/horizon-54778bbf88-5qkzn" Jan 27 14:52:24 crc kubenswrapper[4698]: I0127 14:52:24.916506 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-646ddbcdff-wcvmz"] Jan 27 14:52:25 crc kubenswrapper[4698]: I0127 14:52:25.022711 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pp57v\" (UniqueName: \"kubernetes.io/projected/6b7d39fa-1512-4cbd-a3b0-1169b55e8e61-kube-api-access-pp57v\") pod \"keystone-646ddbcdff-wcvmz\" (UID: \"6b7d39fa-1512-4cbd-a3b0-1169b55e8e61\") " pod="openstack/keystone-646ddbcdff-wcvmz" Jan 27 14:52:25 crc kubenswrapper[4698]: I0127 14:52:25.023155 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/6b7d39fa-1512-4cbd-a3b0-1169b55e8e61-fernet-keys\") pod \"keystone-646ddbcdff-wcvmz\" (UID: \"6b7d39fa-1512-4cbd-a3b0-1169b55e8e61\") " pod="openstack/keystone-646ddbcdff-wcvmz" Jan 27 14:52:25 crc kubenswrapper[4698]: I0127 14:52:25.023295 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/6b7d39fa-1512-4cbd-a3b0-1169b55e8e61-credential-keys\") pod \"keystone-646ddbcdff-wcvmz\" (UID: \"6b7d39fa-1512-4cbd-a3b0-1169b55e8e61\") " pod="openstack/keystone-646ddbcdff-wcvmz" Jan 27 14:52:25 crc kubenswrapper[4698]: I0127 14:52:25.023537 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6b7d39fa-1512-4cbd-a3b0-1169b55e8e61-public-tls-certs\") pod \"keystone-646ddbcdff-wcvmz\" (UID: \"6b7d39fa-1512-4cbd-a3b0-1169b55e8e61\") " pod="openstack/keystone-646ddbcdff-wcvmz" Jan 27 14:52:25 crc kubenswrapper[4698]: I0127 14:52:25.024305 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6b7d39fa-1512-4cbd-a3b0-1169b55e8e61-config-data\") pod \"keystone-646ddbcdff-wcvmz\" (UID: \"6b7d39fa-1512-4cbd-a3b0-1169b55e8e61\") " pod="openstack/keystone-646ddbcdff-wcvmz" Jan 27 14:52:25 crc kubenswrapper[4698]: I0127 14:52:25.024366 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6b7d39fa-1512-4cbd-a3b0-1169b55e8e61-combined-ca-bundle\") pod \"keystone-646ddbcdff-wcvmz\" (UID: \"6b7d39fa-1512-4cbd-a3b0-1169b55e8e61\") " pod="openstack/keystone-646ddbcdff-wcvmz" Jan 27 14:52:25 crc kubenswrapper[4698]: I0127 14:52:25.024398 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6b7d39fa-1512-4cbd-a3b0-1169b55e8e61-internal-tls-certs\") pod \"keystone-646ddbcdff-wcvmz\" (UID: \"6b7d39fa-1512-4cbd-a3b0-1169b55e8e61\") " pod="openstack/keystone-646ddbcdff-wcvmz" Jan 27 14:52:25 crc kubenswrapper[4698]: I0127 14:52:25.024519 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6b7d39fa-1512-4cbd-a3b0-1169b55e8e61-scripts\") pod \"keystone-646ddbcdff-wcvmz\" (UID: \"6b7d39fa-1512-4cbd-a3b0-1169b55e8e61\") " pod="openstack/keystone-646ddbcdff-wcvmz" Jan 27 14:52:25 crc kubenswrapper[4698]: I0127 14:52:25.053545 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-58b77c584b-9tl65"] Jan 27 14:52:25 crc kubenswrapper[4698]: I0127 14:52:25.053886 4698 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-58b77c584b-9tl65" podUID="2573c021-b642-4659-a97b-8c06bcf54afc" containerName="horizon-log" containerID="cri-o://1dd9e638670767afe147060b8b877b23827741ca99584d820b8e17d9715e1c86" gracePeriod=30 Jan 27 14:52:25 crc kubenswrapper[4698]: I0127 14:52:25.054205 4698 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-58b77c584b-9tl65" podUID="2573c021-b642-4659-a97b-8c06bcf54afc" containerName="horizon" containerID="cri-o://5854850d7e012ab315ae8c17136e56826d530bb56bfd3db7e41a47ebec3633a1" gracePeriod=30 Jan 27 14:52:25 crc kubenswrapper[4698]: I0127 14:52:25.059289 4698 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-58b77c584b-9tl65" podUID="2573c021-b642-4659-a97b-8c06bcf54afc" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.161:8443/dashboard/auth/login/?next=/dashboard/\": EOF" Jan 27 14:52:25 crc kubenswrapper[4698]: I0127 14:52:25.127681 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6b7d39fa-1512-4cbd-a3b0-1169b55e8e61-scripts\") pod \"keystone-646ddbcdff-wcvmz\" (UID: \"6b7d39fa-1512-4cbd-a3b0-1169b55e8e61\") " pod="openstack/keystone-646ddbcdff-wcvmz" Jan 27 14:52:25 crc kubenswrapper[4698]: I0127 14:52:25.128312 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pp57v\" (UniqueName: \"kubernetes.io/projected/6b7d39fa-1512-4cbd-a3b0-1169b55e8e61-kube-api-access-pp57v\") pod \"keystone-646ddbcdff-wcvmz\" (UID: \"6b7d39fa-1512-4cbd-a3b0-1169b55e8e61\") " pod="openstack/keystone-646ddbcdff-wcvmz" Jan 27 14:52:25 crc kubenswrapper[4698]: I0127 14:52:25.128560 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/6b7d39fa-1512-4cbd-a3b0-1169b55e8e61-fernet-keys\") pod \"keystone-646ddbcdff-wcvmz\" (UID: \"6b7d39fa-1512-4cbd-a3b0-1169b55e8e61\") " pod="openstack/keystone-646ddbcdff-wcvmz" Jan 27 14:52:25 crc kubenswrapper[4698]: I0127 14:52:25.128800 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"credential-keys\" (UniqueName: \"kubernetes.io/secret/6b7d39fa-1512-4cbd-a3b0-1169b55e8e61-credential-keys\") pod \"keystone-646ddbcdff-wcvmz\" (UID: \"6b7d39fa-1512-4cbd-a3b0-1169b55e8e61\") " pod="openstack/keystone-646ddbcdff-wcvmz" Jan 27 14:52:25 crc kubenswrapper[4698]: I0127 14:52:25.128923 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6b7d39fa-1512-4cbd-a3b0-1169b55e8e61-public-tls-certs\") pod \"keystone-646ddbcdff-wcvmz\" (UID: \"6b7d39fa-1512-4cbd-a3b0-1169b55e8e61\") " pod="openstack/keystone-646ddbcdff-wcvmz" Jan 27 14:52:25 crc kubenswrapper[4698]: I0127 14:52:25.129058 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6b7d39fa-1512-4cbd-a3b0-1169b55e8e61-config-data\") pod \"keystone-646ddbcdff-wcvmz\" (UID: \"6b7d39fa-1512-4cbd-a3b0-1169b55e8e61\") " pod="openstack/keystone-646ddbcdff-wcvmz" Jan 27 14:52:25 crc kubenswrapper[4698]: I0127 14:52:25.129183 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6b7d39fa-1512-4cbd-a3b0-1169b55e8e61-combined-ca-bundle\") pod \"keystone-646ddbcdff-wcvmz\" (UID: \"6b7d39fa-1512-4cbd-a3b0-1169b55e8e61\") " pod="openstack/keystone-646ddbcdff-wcvmz" Jan 27 14:52:25 crc kubenswrapper[4698]: I0127 14:52:25.129306 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6b7d39fa-1512-4cbd-a3b0-1169b55e8e61-internal-tls-certs\") pod \"keystone-646ddbcdff-wcvmz\" (UID: \"6b7d39fa-1512-4cbd-a3b0-1169b55e8e61\") " pod="openstack/keystone-646ddbcdff-wcvmz" Jan 27 14:52:25 crc kubenswrapper[4698]: I0127 14:52:25.137967 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6b7d39fa-1512-4cbd-a3b0-1169b55e8e61-scripts\") pod \"keystone-646ddbcdff-wcvmz\" (UID: \"6b7d39fa-1512-4cbd-a3b0-1169b55e8e61\") " pod="openstack/keystone-646ddbcdff-wcvmz" Jan 27 14:52:25 crc kubenswrapper[4698]: I0127 14:52:25.142955 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6b7d39fa-1512-4cbd-a3b0-1169b55e8e61-config-data\") pod \"keystone-646ddbcdff-wcvmz\" (UID: \"6b7d39fa-1512-4cbd-a3b0-1169b55e8e61\") " pod="openstack/keystone-646ddbcdff-wcvmz" Jan 27 14:52:25 crc kubenswrapper[4698]: I0127 14:52:25.144207 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/6b7d39fa-1512-4cbd-a3b0-1169b55e8e61-credential-keys\") pod \"keystone-646ddbcdff-wcvmz\" (UID: \"6b7d39fa-1512-4cbd-a3b0-1169b55e8e61\") " pod="openstack/keystone-646ddbcdff-wcvmz" Jan 27 14:52:25 crc kubenswrapper[4698]: I0127 14:52:25.144317 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6b7d39fa-1512-4cbd-a3b0-1169b55e8e61-internal-tls-certs\") pod \"keystone-646ddbcdff-wcvmz\" (UID: \"6b7d39fa-1512-4cbd-a3b0-1169b55e8e61\") " pod="openstack/keystone-646ddbcdff-wcvmz" Jan 27 14:52:25 crc kubenswrapper[4698]: I0127 14:52:25.145248 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6b7d39fa-1512-4cbd-a3b0-1169b55e8e61-public-tls-certs\") pod \"keystone-646ddbcdff-wcvmz\" (UID: 
\"6b7d39fa-1512-4cbd-a3b0-1169b55e8e61\") " pod="openstack/keystone-646ddbcdff-wcvmz" Jan 27 14:52:25 crc kubenswrapper[4698]: I0127 14:52:25.160812 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pp57v\" (UniqueName: \"kubernetes.io/projected/6b7d39fa-1512-4cbd-a3b0-1169b55e8e61-kube-api-access-pp57v\") pod \"keystone-646ddbcdff-wcvmz\" (UID: \"6b7d39fa-1512-4cbd-a3b0-1169b55e8e61\") " pod="openstack/keystone-646ddbcdff-wcvmz" Jan 27 14:52:25 crc kubenswrapper[4698]: I0127 14:52:25.161969 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6b7d39fa-1512-4cbd-a3b0-1169b55e8e61-combined-ca-bundle\") pod \"keystone-646ddbcdff-wcvmz\" (UID: \"6b7d39fa-1512-4cbd-a3b0-1169b55e8e61\") " pod="openstack/keystone-646ddbcdff-wcvmz" Jan 27 14:52:25 crc kubenswrapper[4698]: I0127 14:52:25.167272 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/6b7d39fa-1512-4cbd-a3b0-1169b55e8e61-fernet-keys\") pod \"keystone-646ddbcdff-wcvmz\" (UID: \"6b7d39fa-1512-4cbd-a3b0-1169b55e8e61\") " pod="openstack/keystone-646ddbcdff-wcvmz" Jan 27 14:52:25 crc kubenswrapper[4698]: I0127 14:52:25.219335 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-646ddbcdff-wcvmz" Jan 27 14:52:25 crc kubenswrapper[4698]: E0127 14:52:25.521516 4698 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="f8a496dcbb962b06ce3c47831eb0fbff31387dd518458126b5ac2bd700893b41" cmd=["/usr/bin/pgrep","-r","DRST","watcher-applier"] Jan 27 14:52:25 crc kubenswrapper[4698]: E0127 14:52:25.539001 4698 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="f8a496dcbb962b06ce3c47831eb0fbff31387dd518458126b5ac2bd700893b41" cmd=["/usr/bin/pgrep","-r","DRST","watcher-applier"] Jan 27 14:52:25 crc kubenswrapper[4698]: E0127 14:52:25.546282 4698 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="f8a496dcbb962b06ce3c47831eb0fbff31387dd518458126b5ac2bd700893b41" cmd=["/usr/bin/pgrep","-r","DRST","watcher-applier"] Jan 27 14:52:25 crc kubenswrapper[4698]: E0127 14:52:25.546394 4698 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/watcher-applier-0" podUID="445b01d2-0375-432b-808d-4045eb66c5da" containerName="watcher-applier" Jan 27 14:52:25 crc kubenswrapper[4698]: I0127 14:52:25.633318 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"803378a1-8dbd-4540-8e07-8b9d6fc29c6b","Type":"ContainerStarted","Data":"89350b3dedb5252f10ece7b4309f2e1e45131485a2dac9829d72c744e659f269"} Jan 27 14:52:25 crc kubenswrapper[4698]: I0127 14:52:25.633652 4698 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 27 14:52:25 crc kubenswrapper[4698]: I0127 14:52:25.671682 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/glance-default-internal-api-0" podStartSLOduration=4.671663088 podStartE2EDuration="4.671663088s" podCreationTimestamp="2026-01-27 14:52:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:52:25.662825445 +0000 UTC m=+1401.339602900" watchObservedRunningTime="2026-01-27 14:52:25.671663088 +0000 UTC m=+1401.348440563" Jan 27 14:52:25 crc kubenswrapper[4698]: I0127 14:52:25.837451 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-646ddbcdff-wcvmz"] Jan 27 14:52:25 crc kubenswrapper[4698]: W0127 14:52:25.845229 4698 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6b7d39fa_1512_4cbd_a3b0_1169b55e8e61.slice/crio-6851b4150495e66102b8c955daf449f099d85648a2062f0946d2ea60cb763cab WatchSource:0}: Error finding container 6851b4150495e66102b8c955daf449f099d85648a2062f0946d2ea60cb763cab: Status 404 returned error can't find the container with id 6851b4150495e66102b8c955daf449f099d85648a2062f0946d2ea60cb763cab Jan 27 14:52:26 crc kubenswrapper[4698]: I0127 14:52:26.650134 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-646ddbcdff-wcvmz" event={"ID":"6b7d39fa-1512-4cbd-a3b0-1169b55e8e61","Type":"ContainerStarted","Data":"7ccd0278fe27b09cafa81380069748e0b2fb4686ebc14f09992a03d3cf9e3896"} Jan 27 14:52:26 crc kubenswrapper[4698]: I0127 14:52:26.650559 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-646ddbcdff-wcvmz" event={"ID":"6b7d39fa-1512-4cbd-a3b0-1169b55e8e61","Type":"ContainerStarted","Data":"6851b4150495e66102b8c955daf449f099d85648a2062f0946d2ea60cb763cab"} Jan 27 14:52:26 crc kubenswrapper[4698]: I0127 14:52:26.650589 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/keystone-646ddbcdff-wcvmz" Jan 27 14:52:26 crc kubenswrapper[4698]: I0127 14:52:26.965251 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/watcher-api-0" Jan 27 14:52:27 crc kubenswrapper[4698]: I0127 14:52:27.005246 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-646ddbcdff-wcvmz" podStartSLOduration=3.005219756 podStartE2EDuration="3.005219756s" podCreationTimestamp="2026-01-27 14:52:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:52:26.681137562 +0000 UTC m=+1402.357915037" watchObservedRunningTime="2026-01-27 14:52:27.005219756 +0000 UTC m=+1402.681997221" Jan 27 14:52:27 crc kubenswrapper[4698]: I0127 14:52:27.175454 4698 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-58b77c584b-9tl65" podUID="2573c021-b642-4659-a97b-8c06bcf54afc" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.161:8443/dashboard/auth/login/?next=/dashboard/\": read tcp 10.217.0.2:56976->10.217.0.161:8443: read: connection reset by peer" Jan 27 14:52:27 crc kubenswrapper[4698]: I0127 14:52:27.224021 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-api-0" Jan 27 14:52:27 crc kubenswrapper[4698]: I0127 14:52:27.451730 4698 patch_prober.go:28] interesting pod/machine-config-daemon-ndrd6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: 
connect: connection refused" start-of-body= Jan 27 14:52:27 crc kubenswrapper[4698]: I0127 14:52:27.451810 4698 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ndrd6" podUID="3e403fc5-7005-474c-8c75-b7906b481677" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 14:52:27 crc kubenswrapper[4698]: I0127 14:52:27.451866 4698 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-ndrd6" Jan 27 14:52:27 crc kubenswrapper[4698]: I0127 14:52:27.452743 4698 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"00b91e8534deca64edb3a0ddf67d35e5d274bc19ba7571ee5f99b20522a916c8"} pod="openshift-machine-config-operator/machine-config-daemon-ndrd6" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 27 14:52:27 crc kubenswrapper[4698]: I0127 14:52:27.452810 4698 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-ndrd6" podUID="3e403fc5-7005-474c-8c75-b7906b481677" containerName="machine-config-daemon" containerID="cri-o://00b91e8534deca64edb3a0ddf67d35e5d274bc19ba7571ee5f99b20522a916c8" gracePeriod=600 Jan 27 14:52:27 crc kubenswrapper[4698]: I0127 14:52:27.704900 4698 generic.go:334] "Generic (PLEG): container finished" podID="2573c021-b642-4659-a97b-8c06bcf54afc" containerID="5854850d7e012ab315ae8c17136e56826d530bb56bfd3db7e41a47ebec3633a1" exitCode=0 Jan 27 14:52:27 crc kubenswrapper[4698]: I0127 14:52:27.705021 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-58b77c584b-9tl65" event={"ID":"2573c021-b642-4659-a97b-8c06bcf54afc","Type":"ContainerDied","Data":"5854850d7e012ab315ae8c17136e56826d530bb56bfd3db7e41a47ebec3633a1"} Jan 27 14:52:27 crc kubenswrapper[4698]: I0127 14:52:27.754452 4698 generic.go:334] "Generic (PLEG): container finished" podID="3e403fc5-7005-474c-8c75-b7906b481677" containerID="00b91e8534deca64edb3a0ddf67d35e5d274bc19ba7571ee5f99b20522a916c8" exitCode=0 Jan 27 14:52:27 crc kubenswrapper[4698]: I0127 14:52:27.755290 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ndrd6" event={"ID":"3e403fc5-7005-474c-8c75-b7906b481677","Type":"ContainerDied","Data":"00b91e8534deca64edb3a0ddf67d35e5d274bc19ba7571ee5f99b20522a916c8"} Jan 27 14:52:27 crc kubenswrapper[4698]: I0127 14:52:27.755327 4698 scope.go:117] "RemoveContainer" containerID="12b905ab61ac76551a3e2b33bba7698de71a27292af8be5d463cd0b69aa90d97" Jan 27 14:52:29 crc kubenswrapper[4698]: I0127 14:52:29.656200 4698 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-58b77c584b-9tl65" podUID="2573c021-b642-4659-a97b-8c06bcf54afc" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.161:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.161:8443: connect: connection refused" Jan 27 14:52:30 crc kubenswrapper[4698]: E0127 14:52:30.519875 4698 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="f8a496dcbb962b06ce3c47831eb0fbff31387dd518458126b5ac2bd700893b41" 
cmd=["/usr/bin/pgrep","-r","DRST","watcher-applier"] Jan 27 14:52:30 crc kubenswrapper[4698]: E0127 14:52:30.521377 4698 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="f8a496dcbb962b06ce3c47831eb0fbff31387dd518458126b5ac2bd700893b41" cmd=["/usr/bin/pgrep","-r","DRST","watcher-applier"] Jan 27 14:52:30 crc kubenswrapper[4698]: E0127 14:52:30.522512 4698 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="f8a496dcbb962b06ce3c47831eb0fbff31387dd518458126b5ac2bd700893b41" cmd=["/usr/bin/pgrep","-r","DRST","watcher-applier"] Jan 27 14:52:30 crc kubenswrapper[4698]: E0127 14:52:30.522549 4698 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/watcher-applier-0" podUID="445b01d2-0375-432b-808d-4045eb66c5da" containerName="watcher-applier" Jan 27 14:52:32 crc kubenswrapper[4698]: I0127 14:52:32.105080 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Jan 27 14:52:32 crc kubenswrapper[4698]: I0127 14:52:32.105621 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Jan 27 14:52:32 crc kubenswrapper[4698]: I0127 14:52:32.142391 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Jan 27 14:52:32 crc kubenswrapper[4698]: I0127 14:52:32.149717 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Jan 27 14:52:32 crc kubenswrapper[4698]: I0127 14:52:32.224904 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-api-0" Jan 27 14:52:32 crc kubenswrapper[4698]: I0127 14:52:32.233973 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/watcher-api-0" Jan 27 14:52:32 crc kubenswrapper[4698]: I0127 14:52:32.452664 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Jan 27 14:52:32 crc kubenswrapper[4698]: I0127 14:52:32.452757 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Jan 27 14:52:32 crc kubenswrapper[4698]: I0127 14:52:32.489419 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Jan 27 14:52:32 crc kubenswrapper[4698]: I0127 14:52:32.502306 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Jan 27 14:52:32 crc kubenswrapper[4698]: I0127 14:52:32.805349 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Jan 27 14:52:32 crc kubenswrapper[4698]: I0127 14:52:32.805409 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Jan 27 14:52:32 crc kubenswrapper[4698]: I0127 14:52:32.805429 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" 
Jan 27 14:52:32 crc kubenswrapper[4698]: I0127 14:52:32.805444 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Jan 27 14:52:32 crc kubenswrapper[4698]: I0127 14:52:32.817168 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/watcher-api-0" Jan 27 14:52:35 crc kubenswrapper[4698]: I0127 14:52:35.285817 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Jan 27 14:52:35 crc kubenswrapper[4698]: I0127 14:52:35.286405 4698 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 27 14:52:35 crc kubenswrapper[4698]: I0127 14:52:35.361529 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Jan 27 14:52:35 crc kubenswrapper[4698]: I0127 14:52:35.361618 4698 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 27 14:52:35 crc kubenswrapper[4698]: I0127 14:52:35.374443 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Jan 27 14:52:35 crc kubenswrapper[4698]: I0127 14:52:35.404109 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Jan 27 14:52:35 crc kubenswrapper[4698]: E0127 14:52:35.528526 4698 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="f8a496dcbb962b06ce3c47831eb0fbff31387dd518458126b5ac2bd700893b41" cmd=["/usr/bin/pgrep","-r","DRST","watcher-applier"] Jan 27 14:52:35 crc kubenswrapper[4698]: E0127 14:52:35.537068 4698 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="f8a496dcbb962b06ce3c47831eb0fbff31387dd518458126b5ac2bd700893b41" cmd=["/usr/bin/pgrep","-r","DRST","watcher-applier"] Jan 27 14:52:35 crc kubenswrapper[4698]: E0127 14:52:35.538941 4698 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="f8a496dcbb962b06ce3c47831eb0fbff31387dd518458126b5ac2bd700893b41" cmd=["/usr/bin/pgrep","-r","DRST","watcher-applier"] Jan 27 14:52:35 crc kubenswrapper[4698]: E0127 14:52:35.539039 4698 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/watcher-applier-0" podUID="445b01d2-0375-432b-808d-4045eb66c5da" containerName="watcher-applier" Jan 27 14:52:35 crc kubenswrapper[4698]: E0127 14:52:35.648898 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"sg-core\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"]" pod="openstack/ceilometer-0" podUID="0f9b9cd1-a9b3-4764-a897-44de30ff90ac" Jan 27 14:52:35 crc kubenswrapper[4698]: I0127 14:52:35.840816 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"0f9b9cd1-a9b3-4764-a897-44de30ff90ac","Type":"ContainerStarted","Data":"00e4c16573055bab76d4523545729bbfd1c915b91d3c2b0465f0fc5d1b8e3b12"} Jan 27 14:52:35 crc kubenswrapper[4698]: I0127 14:52:35.840884 4698 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="0f9b9cd1-a9b3-4764-a897-44de30ff90ac" containerName="ceilometer-notification-agent" containerID="cri-o://5e7f8b680f5f6e9b4074e4abd90191a37019e5bbf518e722493554092019665c" gracePeriod=30 Jan 27 14:52:35 crc kubenswrapper[4698]: I0127 14:52:35.840923 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 27 14:52:35 crc kubenswrapper[4698]: I0127 14:52:35.840967 4698 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="0f9b9cd1-a9b3-4764-a897-44de30ff90ac" containerName="proxy-httpd" containerID="cri-o://00e4c16573055bab76d4523545729bbfd1c915b91d3c2b0465f0fc5d1b8e3b12" gracePeriod=30 Jan 27 14:52:35 crc kubenswrapper[4698]: I0127 14:52:35.846463 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ndrd6" event={"ID":"3e403fc5-7005-474c-8c75-b7906b481677","Type":"ContainerStarted","Data":"b5e738b87b0ce5279fe16e0f062d8436b14ec328939703048a41953bc1584a16"} Jan 27 14:52:36 crc kubenswrapper[4698]: I0127 14:52:36.857173 4698 generic.go:334] "Generic (PLEG): container finished" podID="0f9b9cd1-a9b3-4764-a897-44de30ff90ac" containerID="00e4c16573055bab76d4523545729bbfd1c915b91d3c2b0465f0fc5d1b8e3b12" exitCode=0 Jan 27 14:52:36 crc kubenswrapper[4698]: I0127 14:52:36.857750 4698 generic.go:334] "Generic (PLEG): container finished" podID="0f9b9cd1-a9b3-4764-a897-44de30ff90ac" containerID="5e7f8b680f5f6e9b4074e4abd90191a37019e5bbf518e722493554092019665c" exitCode=0 Jan 27 14:52:36 crc kubenswrapper[4698]: I0127 14:52:36.857244 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0f9b9cd1-a9b3-4764-a897-44de30ff90ac","Type":"ContainerDied","Data":"00e4c16573055bab76d4523545729bbfd1c915b91d3c2b0465f0fc5d1b8e3b12"} Jan 27 14:52:36 crc kubenswrapper[4698]: I0127 14:52:36.857858 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0f9b9cd1-a9b3-4764-a897-44de30ff90ac","Type":"ContainerDied","Data":"5e7f8b680f5f6e9b4074e4abd90191a37019e5bbf518e722493554092019665c"} Jan 27 14:52:37 crc kubenswrapper[4698]: I0127 14:52:37.202774 4698 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 27 14:52:37 crc kubenswrapper[4698]: I0127 14:52:37.276822 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0f9b9cd1-a9b3-4764-a897-44de30ff90ac-scripts\") pod \"0f9b9cd1-a9b3-4764-a897-44de30ff90ac\" (UID: \"0f9b9cd1-a9b3-4764-a897-44de30ff90ac\") " Jan 27 14:52:37 crc kubenswrapper[4698]: I0127 14:52:37.276882 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0f9b9cd1-a9b3-4764-a897-44de30ff90ac-run-httpd\") pod \"0f9b9cd1-a9b3-4764-a897-44de30ff90ac\" (UID: \"0f9b9cd1-a9b3-4764-a897-44de30ff90ac\") " Jan 27 14:52:37 crc kubenswrapper[4698]: I0127 14:52:37.276902 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0f9b9cd1-a9b3-4764-a897-44de30ff90ac-config-data\") pod \"0f9b9cd1-a9b3-4764-a897-44de30ff90ac\" (UID: \"0f9b9cd1-a9b3-4764-a897-44de30ff90ac\") " Jan 27 14:52:37 crc kubenswrapper[4698]: I0127 14:52:37.276997 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r4t7x\" (UniqueName: \"kubernetes.io/projected/0f9b9cd1-a9b3-4764-a897-44de30ff90ac-kube-api-access-r4t7x\") pod \"0f9b9cd1-a9b3-4764-a897-44de30ff90ac\" (UID: \"0f9b9cd1-a9b3-4764-a897-44de30ff90ac\") " Jan 27 14:52:37 crc kubenswrapper[4698]: I0127 14:52:37.277033 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0f9b9cd1-a9b3-4764-a897-44de30ff90ac-combined-ca-bundle\") pod \"0f9b9cd1-a9b3-4764-a897-44de30ff90ac\" (UID: \"0f9b9cd1-a9b3-4764-a897-44de30ff90ac\") " Jan 27 14:52:37 crc kubenswrapper[4698]: I0127 14:52:37.277152 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0f9b9cd1-a9b3-4764-a897-44de30ff90ac-log-httpd\") pod \"0f9b9cd1-a9b3-4764-a897-44de30ff90ac\" (UID: \"0f9b9cd1-a9b3-4764-a897-44de30ff90ac\") " Jan 27 14:52:37 crc kubenswrapper[4698]: I0127 14:52:37.277206 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0f9b9cd1-a9b3-4764-a897-44de30ff90ac-sg-core-conf-yaml\") pod \"0f9b9cd1-a9b3-4764-a897-44de30ff90ac\" (UID: \"0f9b9cd1-a9b3-4764-a897-44de30ff90ac\") " Jan 27 14:52:37 crc kubenswrapper[4698]: I0127 14:52:37.277384 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0f9b9cd1-a9b3-4764-a897-44de30ff90ac-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "0f9b9cd1-a9b3-4764-a897-44de30ff90ac" (UID: "0f9b9cd1-a9b3-4764-a897-44de30ff90ac"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 14:52:37 crc kubenswrapper[4698]: I0127 14:52:37.277937 4698 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0f9b9cd1-a9b3-4764-a897-44de30ff90ac-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 27 14:52:37 crc kubenswrapper[4698]: I0127 14:52:37.278078 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0f9b9cd1-a9b3-4764-a897-44de30ff90ac-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "0f9b9cd1-a9b3-4764-a897-44de30ff90ac" (UID: "0f9b9cd1-a9b3-4764-a897-44de30ff90ac"). 
InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 14:52:37 crc kubenswrapper[4698]: I0127 14:52:37.283416 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0f9b9cd1-a9b3-4764-a897-44de30ff90ac-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "0f9b9cd1-a9b3-4764-a897-44de30ff90ac" (UID: "0f9b9cd1-a9b3-4764-a897-44de30ff90ac"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:52:37 crc kubenswrapper[4698]: I0127 14:52:37.283509 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0f9b9cd1-a9b3-4764-a897-44de30ff90ac-kube-api-access-r4t7x" (OuterVolumeSpecName: "kube-api-access-r4t7x") pod "0f9b9cd1-a9b3-4764-a897-44de30ff90ac" (UID: "0f9b9cd1-a9b3-4764-a897-44de30ff90ac"). InnerVolumeSpecName "kube-api-access-r4t7x". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:52:37 crc kubenswrapper[4698]: I0127 14:52:37.284735 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0f9b9cd1-a9b3-4764-a897-44de30ff90ac-scripts" (OuterVolumeSpecName: "scripts") pod "0f9b9cd1-a9b3-4764-a897-44de30ff90ac" (UID: "0f9b9cd1-a9b3-4764-a897-44de30ff90ac"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:52:37 crc kubenswrapper[4698]: I0127 14:52:37.346674 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0f9b9cd1-a9b3-4764-a897-44de30ff90ac-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0f9b9cd1-a9b3-4764-a897-44de30ff90ac" (UID: "0f9b9cd1-a9b3-4764-a897-44de30ff90ac"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:52:37 crc kubenswrapper[4698]: I0127 14:52:37.379208 4698 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0f9b9cd1-a9b3-4764-a897-44de30ff90ac-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 27 14:52:37 crc kubenswrapper[4698]: I0127 14:52:37.379244 4698 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0f9b9cd1-a9b3-4764-a897-44de30ff90ac-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 14:52:37 crc kubenswrapper[4698]: I0127 14:52:37.379256 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r4t7x\" (UniqueName: \"kubernetes.io/projected/0f9b9cd1-a9b3-4764-a897-44de30ff90ac-kube-api-access-r4t7x\") on node \"crc\" DevicePath \"\"" Jan 27 14:52:37 crc kubenswrapper[4698]: I0127 14:52:37.379266 4698 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0f9b9cd1-a9b3-4764-a897-44de30ff90ac-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 14:52:37 crc kubenswrapper[4698]: I0127 14:52:37.379279 4698 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0f9b9cd1-a9b3-4764-a897-44de30ff90ac-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 27 14:52:37 crc kubenswrapper[4698]: I0127 14:52:37.380086 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0f9b9cd1-a9b3-4764-a897-44de30ff90ac-config-data" (OuterVolumeSpecName: "config-data") pod "0f9b9cd1-a9b3-4764-a897-44de30ff90ac" (UID: "0f9b9cd1-a9b3-4764-a897-44de30ff90ac"). 
InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:52:37 crc kubenswrapper[4698]: I0127 14:52:37.481172 4698 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0f9b9cd1-a9b3-4764-a897-44de30ff90ac-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 14:52:37 crc kubenswrapper[4698]: I0127 14:52:37.872586 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0f9b9cd1-a9b3-4764-a897-44de30ff90ac","Type":"ContainerDied","Data":"7b65f87dabef4b0e82ba6792ba6bad3f280136dd11fec634e34eb7578fac132c"} Jan 27 14:52:37 crc kubenswrapper[4698]: I0127 14:52:37.872702 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 27 14:52:37 crc kubenswrapper[4698]: I0127 14:52:37.873034 4698 scope.go:117] "RemoveContainer" containerID="00e4c16573055bab76d4523545729bbfd1c915b91d3c2b0465f0fc5d1b8e3b12" Jan 27 14:52:37 crc kubenswrapper[4698]: I0127 14:52:37.895419 4698 scope.go:117] "RemoveContainer" containerID="5e7f8b680f5f6e9b4074e4abd90191a37019e5bbf518e722493554092019665c" Jan 27 14:52:37 crc kubenswrapper[4698]: I0127 14:52:37.935294 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 27 14:52:37 crc kubenswrapper[4698]: I0127 14:52:37.943491 4698 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 27 14:52:37 crc kubenswrapper[4698]: I0127 14:52:37.954804 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 27 14:52:37 crc kubenswrapper[4698]: E0127 14:52:37.955150 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0f9b9cd1-a9b3-4764-a897-44de30ff90ac" containerName="ceilometer-notification-agent" Jan 27 14:52:37 crc kubenswrapper[4698]: I0127 14:52:37.955168 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="0f9b9cd1-a9b3-4764-a897-44de30ff90ac" containerName="ceilometer-notification-agent" Jan 27 14:52:37 crc kubenswrapper[4698]: E0127 14:52:37.955196 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0f9b9cd1-a9b3-4764-a897-44de30ff90ac" containerName="proxy-httpd" Jan 27 14:52:37 crc kubenswrapper[4698]: I0127 14:52:37.955202 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="0f9b9cd1-a9b3-4764-a897-44de30ff90ac" containerName="proxy-httpd" Jan 27 14:52:37 crc kubenswrapper[4698]: I0127 14:52:37.955360 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="0f9b9cd1-a9b3-4764-a897-44de30ff90ac" containerName="proxy-httpd" Jan 27 14:52:37 crc kubenswrapper[4698]: I0127 14:52:37.955381 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="0f9b9cd1-a9b3-4764-a897-44de30ff90ac" containerName="ceilometer-notification-agent" Jan 27 14:52:37 crc kubenswrapper[4698]: I0127 14:52:37.957041 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 27 14:52:37 crc kubenswrapper[4698]: I0127 14:52:37.966240 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 27 14:52:37 crc kubenswrapper[4698]: I0127 14:52:37.966926 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 27 14:52:38 crc kubenswrapper[4698]: I0127 14:52:37.969899 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 27 14:52:38 crc kubenswrapper[4698]: I0127 14:52:38.086369 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 27 14:52:38 crc kubenswrapper[4698]: E0127 14:52:38.087149 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[combined-ca-bundle config-data kube-api-access-w464n log-httpd run-httpd scripts sg-core-conf-yaml], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openstack/ceilometer-0" podUID="6a7967f1-2f97-4e12-8ebc-48b2b6cdc845" Jan 27 14:52:38 crc kubenswrapper[4698]: I0127 14:52:38.097005 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6a7967f1-2f97-4e12-8ebc-48b2b6cdc845-run-httpd\") pod \"ceilometer-0\" (UID: \"6a7967f1-2f97-4e12-8ebc-48b2b6cdc845\") " pod="openstack/ceilometer-0" Jan 27 14:52:38 crc kubenswrapper[4698]: I0127 14:52:38.097092 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w464n\" (UniqueName: \"kubernetes.io/projected/6a7967f1-2f97-4e12-8ebc-48b2b6cdc845-kube-api-access-w464n\") pod \"ceilometer-0\" (UID: \"6a7967f1-2f97-4e12-8ebc-48b2b6cdc845\") " pod="openstack/ceilometer-0" Jan 27 14:52:38 crc kubenswrapper[4698]: I0127 14:52:38.097113 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6a7967f1-2f97-4e12-8ebc-48b2b6cdc845-config-data\") pod \"ceilometer-0\" (UID: \"6a7967f1-2f97-4e12-8ebc-48b2b6cdc845\") " pod="openstack/ceilometer-0" Jan 27 14:52:38 crc kubenswrapper[4698]: I0127 14:52:38.097131 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/6a7967f1-2f97-4e12-8ebc-48b2b6cdc845-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"6a7967f1-2f97-4e12-8ebc-48b2b6cdc845\") " pod="openstack/ceilometer-0" Jan 27 14:52:38 crc kubenswrapper[4698]: I0127 14:52:38.097329 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6a7967f1-2f97-4e12-8ebc-48b2b6cdc845-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"6a7967f1-2f97-4e12-8ebc-48b2b6cdc845\") " pod="openstack/ceilometer-0" Jan 27 14:52:38 crc kubenswrapper[4698]: I0127 14:52:38.097352 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6a7967f1-2f97-4e12-8ebc-48b2b6cdc845-scripts\") pod \"ceilometer-0\" (UID: \"6a7967f1-2f97-4e12-8ebc-48b2b6cdc845\") " pod="openstack/ceilometer-0" Jan 27 14:52:38 crc kubenswrapper[4698]: I0127 14:52:38.097373 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/6a7967f1-2f97-4e12-8ebc-48b2b6cdc845-log-httpd\") pod \"ceilometer-0\" (UID: \"6a7967f1-2f97-4e12-8ebc-48b2b6cdc845\") " pod="openstack/ceilometer-0" Jan 27 14:52:38 crc kubenswrapper[4698]: I0127 14:52:38.199087 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6a7967f1-2f97-4e12-8ebc-48b2b6cdc845-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"6a7967f1-2f97-4e12-8ebc-48b2b6cdc845\") " pod="openstack/ceilometer-0" Jan 27 14:52:38 crc kubenswrapper[4698]: I0127 14:52:38.199148 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6a7967f1-2f97-4e12-8ebc-48b2b6cdc845-scripts\") pod \"ceilometer-0\" (UID: \"6a7967f1-2f97-4e12-8ebc-48b2b6cdc845\") " pod="openstack/ceilometer-0" Jan 27 14:52:38 crc kubenswrapper[4698]: I0127 14:52:38.199179 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6a7967f1-2f97-4e12-8ebc-48b2b6cdc845-log-httpd\") pod \"ceilometer-0\" (UID: \"6a7967f1-2f97-4e12-8ebc-48b2b6cdc845\") " pod="openstack/ceilometer-0" Jan 27 14:52:38 crc kubenswrapper[4698]: I0127 14:52:38.199244 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6a7967f1-2f97-4e12-8ebc-48b2b6cdc845-run-httpd\") pod \"ceilometer-0\" (UID: \"6a7967f1-2f97-4e12-8ebc-48b2b6cdc845\") " pod="openstack/ceilometer-0" Jan 27 14:52:38 crc kubenswrapper[4698]: I0127 14:52:38.199328 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w464n\" (UniqueName: \"kubernetes.io/projected/6a7967f1-2f97-4e12-8ebc-48b2b6cdc845-kube-api-access-w464n\") pod \"ceilometer-0\" (UID: \"6a7967f1-2f97-4e12-8ebc-48b2b6cdc845\") " pod="openstack/ceilometer-0" Jan 27 14:52:38 crc kubenswrapper[4698]: I0127 14:52:38.199353 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6a7967f1-2f97-4e12-8ebc-48b2b6cdc845-config-data\") pod \"ceilometer-0\" (UID: \"6a7967f1-2f97-4e12-8ebc-48b2b6cdc845\") " pod="openstack/ceilometer-0" Jan 27 14:52:38 crc kubenswrapper[4698]: I0127 14:52:38.199379 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/6a7967f1-2f97-4e12-8ebc-48b2b6cdc845-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"6a7967f1-2f97-4e12-8ebc-48b2b6cdc845\") " pod="openstack/ceilometer-0" Jan 27 14:52:38 crc kubenswrapper[4698]: I0127 14:52:38.199859 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6a7967f1-2f97-4e12-8ebc-48b2b6cdc845-run-httpd\") pod \"ceilometer-0\" (UID: \"6a7967f1-2f97-4e12-8ebc-48b2b6cdc845\") " pod="openstack/ceilometer-0" Jan 27 14:52:38 crc kubenswrapper[4698]: I0127 14:52:38.199860 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6a7967f1-2f97-4e12-8ebc-48b2b6cdc845-log-httpd\") pod \"ceilometer-0\" (UID: \"6a7967f1-2f97-4e12-8ebc-48b2b6cdc845\") " pod="openstack/ceilometer-0" Jan 27 14:52:38 crc kubenswrapper[4698]: I0127 14:52:38.204189 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: 
\"kubernetes.io/secret/6a7967f1-2f97-4e12-8ebc-48b2b6cdc845-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"6a7967f1-2f97-4e12-8ebc-48b2b6cdc845\") " pod="openstack/ceilometer-0" Jan 27 14:52:38 crc kubenswrapper[4698]: I0127 14:52:38.205028 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6a7967f1-2f97-4e12-8ebc-48b2b6cdc845-config-data\") pod \"ceilometer-0\" (UID: \"6a7967f1-2f97-4e12-8ebc-48b2b6cdc845\") " pod="openstack/ceilometer-0" Jan 27 14:52:38 crc kubenswrapper[4698]: I0127 14:52:38.205253 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6a7967f1-2f97-4e12-8ebc-48b2b6cdc845-scripts\") pod \"ceilometer-0\" (UID: \"6a7967f1-2f97-4e12-8ebc-48b2b6cdc845\") " pod="openstack/ceilometer-0" Jan 27 14:52:38 crc kubenswrapper[4698]: I0127 14:52:38.205962 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6a7967f1-2f97-4e12-8ebc-48b2b6cdc845-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"6a7967f1-2f97-4e12-8ebc-48b2b6cdc845\") " pod="openstack/ceilometer-0" Jan 27 14:52:38 crc kubenswrapper[4698]: I0127 14:52:38.221871 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w464n\" (UniqueName: \"kubernetes.io/projected/6a7967f1-2f97-4e12-8ebc-48b2b6cdc845-kube-api-access-w464n\") pod \"ceilometer-0\" (UID: \"6a7967f1-2f97-4e12-8ebc-48b2b6cdc845\") " pod="openstack/ceilometer-0" Jan 27 14:52:38 crc kubenswrapper[4698]: I0127 14:52:38.884132 4698 generic.go:334] "Generic (PLEG): container finished" podID="c198be7c-95a9-47ea-80fd-252e5d8d9ac9" containerID="bb8fb01ad6c77ac6cc30475378c33705f30a11e02a8606e8f6ff5395462bd2fb" exitCode=1 Jan 27 14:52:38 crc kubenswrapper[4698]: I0127 14:52:38.884270 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"c198be7c-95a9-47ea-80fd-252e5d8d9ac9","Type":"ContainerDied","Data":"bb8fb01ad6c77ac6cc30475378c33705f30a11e02a8606e8f6ff5395462bd2fb"} Jan 27 14:52:38 crc kubenswrapper[4698]: I0127 14:52:38.886370 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 27 14:52:38 crc kubenswrapper[4698]: I0127 14:52:38.896534 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 27 14:52:39 crc kubenswrapper[4698]: I0127 14:52:39.005096 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0f9b9cd1-a9b3-4764-a897-44de30ff90ac" path="/var/lib/kubelet/pods/0f9b9cd1-a9b3-4764-a897-44de30ff90ac/volumes" Jan 27 14:52:39 crc kubenswrapper[4698]: I0127 14:52:39.012366 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6a7967f1-2f97-4e12-8ebc-48b2b6cdc845-scripts\") pod \"6a7967f1-2f97-4e12-8ebc-48b2b6cdc845\" (UID: \"6a7967f1-2f97-4e12-8ebc-48b2b6cdc845\") " Jan 27 14:52:39 crc kubenswrapper[4698]: I0127 14:52:39.012435 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/6a7967f1-2f97-4e12-8ebc-48b2b6cdc845-sg-core-conf-yaml\") pod \"6a7967f1-2f97-4e12-8ebc-48b2b6cdc845\" (UID: \"6a7967f1-2f97-4e12-8ebc-48b2b6cdc845\") " Jan 27 14:52:39 crc kubenswrapper[4698]: I0127 14:52:39.012503 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w464n\" (UniqueName: \"kubernetes.io/projected/6a7967f1-2f97-4e12-8ebc-48b2b6cdc845-kube-api-access-w464n\") pod \"6a7967f1-2f97-4e12-8ebc-48b2b6cdc845\" (UID: \"6a7967f1-2f97-4e12-8ebc-48b2b6cdc845\") " Jan 27 14:52:39 crc kubenswrapper[4698]: I0127 14:52:39.012592 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6a7967f1-2f97-4e12-8ebc-48b2b6cdc845-config-data\") pod \"6a7967f1-2f97-4e12-8ebc-48b2b6cdc845\" (UID: \"6a7967f1-2f97-4e12-8ebc-48b2b6cdc845\") " Jan 27 14:52:39 crc kubenswrapper[4698]: I0127 14:52:39.012629 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6a7967f1-2f97-4e12-8ebc-48b2b6cdc845-combined-ca-bundle\") pod \"6a7967f1-2f97-4e12-8ebc-48b2b6cdc845\" (UID: \"6a7967f1-2f97-4e12-8ebc-48b2b6cdc845\") " Jan 27 14:52:39 crc kubenswrapper[4698]: I0127 14:52:39.012685 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6a7967f1-2f97-4e12-8ebc-48b2b6cdc845-log-httpd\") pod \"6a7967f1-2f97-4e12-8ebc-48b2b6cdc845\" (UID: \"6a7967f1-2f97-4e12-8ebc-48b2b6cdc845\") " Jan 27 14:52:39 crc kubenswrapper[4698]: I0127 14:52:39.012720 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6a7967f1-2f97-4e12-8ebc-48b2b6cdc845-run-httpd\") pod \"6a7967f1-2f97-4e12-8ebc-48b2b6cdc845\" (UID: \"6a7967f1-2f97-4e12-8ebc-48b2b6cdc845\") " Jan 27 14:52:39 crc kubenswrapper[4698]: I0127 14:52:39.013322 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6a7967f1-2f97-4e12-8ebc-48b2b6cdc845-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "6a7967f1-2f97-4e12-8ebc-48b2b6cdc845" (UID: "6a7967f1-2f97-4e12-8ebc-48b2b6cdc845"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 14:52:39 crc kubenswrapper[4698]: I0127 14:52:39.013424 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6a7967f1-2f97-4e12-8ebc-48b2b6cdc845-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "6a7967f1-2f97-4e12-8ebc-48b2b6cdc845" (UID: "6a7967f1-2f97-4e12-8ebc-48b2b6cdc845"). 
InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 14:52:39 crc kubenswrapper[4698]: I0127 14:52:39.017099 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6a7967f1-2f97-4e12-8ebc-48b2b6cdc845-scripts" (OuterVolumeSpecName: "scripts") pod "6a7967f1-2f97-4e12-8ebc-48b2b6cdc845" (UID: "6a7967f1-2f97-4e12-8ebc-48b2b6cdc845"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:52:39 crc kubenswrapper[4698]: I0127 14:52:39.017554 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6a7967f1-2f97-4e12-8ebc-48b2b6cdc845-kube-api-access-w464n" (OuterVolumeSpecName: "kube-api-access-w464n") pod "6a7967f1-2f97-4e12-8ebc-48b2b6cdc845" (UID: "6a7967f1-2f97-4e12-8ebc-48b2b6cdc845"). InnerVolumeSpecName "kube-api-access-w464n". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:52:39 crc kubenswrapper[4698]: I0127 14:52:39.018367 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6a7967f1-2f97-4e12-8ebc-48b2b6cdc845-config-data" (OuterVolumeSpecName: "config-data") pod "6a7967f1-2f97-4e12-8ebc-48b2b6cdc845" (UID: "6a7967f1-2f97-4e12-8ebc-48b2b6cdc845"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:52:39 crc kubenswrapper[4698]: I0127 14:52:39.019962 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6a7967f1-2f97-4e12-8ebc-48b2b6cdc845-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6a7967f1-2f97-4e12-8ebc-48b2b6cdc845" (UID: "6a7967f1-2f97-4e12-8ebc-48b2b6cdc845"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:52:39 crc kubenswrapper[4698]: I0127 14:52:39.022371 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6a7967f1-2f97-4e12-8ebc-48b2b6cdc845-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "6a7967f1-2f97-4e12-8ebc-48b2b6cdc845" (UID: "6a7967f1-2f97-4e12-8ebc-48b2b6cdc845"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:52:39 crc kubenswrapper[4698]: I0127 14:52:39.114554 4698 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/6a7967f1-2f97-4e12-8ebc-48b2b6cdc845-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 27 14:52:39 crc kubenswrapper[4698]: I0127 14:52:39.114585 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w464n\" (UniqueName: \"kubernetes.io/projected/6a7967f1-2f97-4e12-8ebc-48b2b6cdc845-kube-api-access-w464n\") on node \"crc\" DevicePath \"\"" Jan 27 14:52:39 crc kubenswrapper[4698]: I0127 14:52:39.114595 4698 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6a7967f1-2f97-4e12-8ebc-48b2b6cdc845-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 14:52:39 crc kubenswrapper[4698]: I0127 14:52:39.114603 4698 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6a7967f1-2f97-4e12-8ebc-48b2b6cdc845-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 14:52:39 crc kubenswrapper[4698]: I0127 14:52:39.114612 4698 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6a7967f1-2f97-4e12-8ebc-48b2b6cdc845-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 27 14:52:39 crc kubenswrapper[4698]: I0127 14:52:39.114620 4698 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6a7967f1-2f97-4e12-8ebc-48b2b6cdc845-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 27 14:52:39 crc kubenswrapper[4698]: I0127 14:52:39.114628 4698 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6a7967f1-2f97-4e12-8ebc-48b2b6cdc845-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 14:52:39 crc kubenswrapper[4698]: I0127 14:52:39.348130 4698 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-decision-engine-0" Jan 27 14:52:39 crc kubenswrapper[4698]: I0127 14:52:39.421098 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c198be7c-95a9-47ea-80fd-252e5d8d9ac9-combined-ca-bundle\") pod \"c198be7c-95a9-47ea-80fd-252e5d8d9ac9\" (UID: \"c198be7c-95a9-47ea-80fd-252e5d8d9ac9\") " Jan 27 14:52:39 crc kubenswrapper[4698]: I0127 14:52:39.421289 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/c198be7c-95a9-47ea-80fd-252e5d8d9ac9-custom-prometheus-ca\") pod \"c198be7c-95a9-47ea-80fd-252e5d8d9ac9\" (UID: \"c198be7c-95a9-47ea-80fd-252e5d8d9ac9\") " Jan 27 14:52:39 crc kubenswrapper[4698]: I0127 14:52:39.421336 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c198be7c-95a9-47ea-80fd-252e5d8d9ac9-logs\") pod \"c198be7c-95a9-47ea-80fd-252e5d8d9ac9\" (UID: \"c198be7c-95a9-47ea-80fd-252e5d8d9ac9\") " Jan 27 14:52:39 crc kubenswrapper[4698]: I0127 14:52:39.421433 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c198be7c-95a9-47ea-80fd-252e5d8d9ac9-config-data\") pod \"c198be7c-95a9-47ea-80fd-252e5d8d9ac9\" (UID: \"c198be7c-95a9-47ea-80fd-252e5d8d9ac9\") " Jan 27 14:52:39 crc kubenswrapper[4698]: I0127 14:52:39.421461 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jbvlv\" (UniqueName: \"kubernetes.io/projected/c198be7c-95a9-47ea-80fd-252e5d8d9ac9-kube-api-access-jbvlv\") pod \"c198be7c-95a9-47ea-80fd-252e5d8d9ac9\" (UID: \"c198be7c-95a9-47ea-80fd-252e5d8d9ac9\") " Jan 27 14:52:39 crc kubenswrapper[4698]: I0127 14:52:39.422143 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c198be7c-95a9-47ea-80fd-252e5d8d9ac9-logs" (OuterVolumeSpecName: "logs") pod "c198be7c-95a9-47ea-80fd-252e5d8d9ac9" (UID: "c198be7c-95a9-47ea-80fd-252e5d8d9ac9"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 14:52:39 crc kubenswrapper[4698]: I0127 14:52:39.426086 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c198be7c-95a9-47ea-80fd-252e5d8d9ac9-kube-api-access-jbvlv" (OuterVolumeSpecName: "kube-api-access-jbvlv") pod "c198be7c-95a9-47ea-80fd-252e5d8d9ac9" (UID: "c198be7c-95a9-47ea-80fd-252e5d8d9ac9"). InnerVolumeSpecName "kube-api-access-jbvlv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:52:39 crc kubenswrapper[4698]: I0127 14:52:39.465402 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c198be7c-95a9-47ea-80fd-252e5d8d9ac9-custom-prometheus-ca" (OuterVolumeSpecName: "custom-prometheus-ca") pod "c198be7c-95a9-47ea-80fd-252e5d8d9ac9" (UID: "c198be7c-95a9-47ea-80fd-252e5d8d9ac9"). InnerVolumeSpecName "custom-prometheus-ca". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:52:39 crc kubenswrapper[4698]: I0127 14:52:39.484793 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c198be7c-95a9-47ea-80fd-252e5d8d9ac9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c198be7c-95a9-47ea-80fd-252e5d8d9ac9" (UID: "c198be7c-95a9-47ea-80fd-252e5d8d9ac9"). 
InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:52:39 crc kubenswrapper[4698]: I0127 14:52:39.484806 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c198be7c-95a9-47ea-80fd-252e5d8d9ac9-config-data" (OuterVolumeSpecName: "config-data") pod "c198be7c-95a9-47ea-80fd-252e5d8d9ac9" (UID: "c198be7c-95a9-47ea-80fd-252e5d8d9ac9"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:52:39 crc kubenswrapper[4698]: I0127 14:52:39.523097 4698 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c198be7c-95a9-47ea-80fd-252e5d8d9ac9-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 14:52:39 crc kubenswrapper[4698]: I0127 14:52:39.523138 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jbvlv\" (UniqueName: \"kubernetes.io/projected/c198be7c-95a9-47ea-80fd-252e5d8d9ac9-kube-api-access-jbvlv\") on node \"crc\" DevicePath \"\"" Jan 27 14:52:39 crc kubenswrapper[4698]: I0127 14:52:39.523150 4698 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c198be7c-95a9-47ea-80fd-252e5d8d9ac9-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 14:52:39 crc kubenswrapper[4698]: I0127 14:52:39.523159 4698 reconciler_common.go:293] "Volume detached for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/c198be7c-95a9-47ea-80fd-252e5d8d9ac9-custom-prometheus-ca\") on node \"crc\" DevicePath \"\"" Jan 27 14:52:39 crc kubenswrapper[4698]: I0127 14:52:39.523167 4698 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c198be7c-95a9-47ea-80fd-252e5d8d9ac9-logs\") on node \"crc\" DevicePath \"\"" Jan 27 14:52:39 crc kubenswrapper[4698]: I0127 14:52:39.655403 4698 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-58b77c584b-9tl65" podUID="2573c021-b642-4659-a97b-8c06bcf54afc" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.161:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.161:8443: connect: connection refused" Jan 27 14:52:39 crc kubenswrapper[4698]: I0127 14:52:39.896902 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 27 14:52:39 crc kubenswrapper[4698]: I0127 14:52:39.897917 4698 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-decision-engine-0" Jan 27 14:52:39 crc kubenswrapper[4698]: I0127 14:52:39.897934 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"c198be7c-95a9-47ea-80fd-252e5d8d9ac9","Type":"ContainerDied","Data":"9949729f6f51087c1dae0d7a0e0a63a5f2f5f12d1834f8685a3963bdd9cff3ea"} Jan 27 14:52:39 crc kubenswrapper[4698]: I0127 14:52:39.898175 4698 scope.go:117] "RemoveContainer" containerID="bb8fb01ad6c77ac6cc30475378c33705f30a11e02a8606e8f6ff5395462bd2fb" Jan 27 14:52:39 crc kubenswrapper[4698]: I0127 14:52:39.948763 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 27 14:52:39 crc kubenswrapper[4698]: I0127 14:52:39.963184 4698 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 27 14:52:39 crc kubenswrapper[4698]: I0127 14:52:39.971040 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/watcher-decision-engine-0"] Jan 27 14:52:39 crc kubenswrapper[4698]: I0127 14:52:39.981008 4698 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/watcher-decision-engine-0"] Jan 27 14:52:39 crc kubenswrapper[4698]: I0127 14:52:39.990310 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 27 14:52:39 crc kubenswrapper[4698]: E0127 14:52:39.990803 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c198be7c-95a9-47ea-80fd-252e5d8d9ac9" containerName="watcher-decision-engine" Jan 27 14:52:39 crc kubenswrapper[4698]: I0127 14:52:39.990830 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="c198be7c-95a9-47ea-80fd-252e5d8d9ac9" containerName="watcher-decision-engine" Jan 27 14:52:39 crc kubenswrapper[4698]: I0127 14:52:39.996035 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="c198be7c-95a9-47ea-80fd-252e5d8d9ac9" containerName="watcher-decision-engine" Jan 27 14:52:39 crc kubenswrapper[4698]: I0127 14:52:39.999593 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 27 14:52:40 crc kubenswrapper[4698]: I0127 14:52:40.013271 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 27 14:52:40 crc kubenswrapper[4698]: I0127 14:52:40.014852 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 27 14:52:40 crc kubenswrapper[4698]: I0127 14:52:40.020379 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 27 14:52:40 crc kubenswrapper[4698]: I0127 14:52:40.029192 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/watcher-decision-engine-0"] Jan 27 14:52:40 crc kubenswrapper[4698]: I0127 14:52:40.033191 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-decision-engine-0" Jan 27 14:52:40 crc kubenswrapper[4698]: I0127 14:52:40.042860 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-decision-engine-0"] Jan 27 14:52:40 crc kubenswrapper[4698]: I0127 14:52:40.053656 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-decision-engine-config-data" Jan 27 14:52:40 crc kubenswrapper[4698]: I0127 14:52:40.145300 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7dngt\" (UniqueName: \"kubernetes.io/projected/4129eb47-beba-4bec-8cb2-59818e8908a5-kube-api-access-7dngt\") pod \"watcher-decision-engine-0\" (UID: \"4129eb47-beba-4bec-8cb2-59818e8908a5\") " pod="openstack/watcher-decision-engine-0" Jan 27 14:52:40 crc kubenswrapper[4698]: I0127 14:52:40.145346 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4129eb47-beba-4bec-8cb2-59818e8908a5-combined-ca-bundle\") pod \"watcher-decision-engine-0\" (UID: \"4129eb47-beba-4bec-8cb2-59818e8908a5\") " pod="openstack/watcher-decision-engine-0" Jan 27 14:52:40 crc kubenswrapper[4698]: I0127 14:52:40.145377 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4129eb47-beba-4bec-8cb2-59818e8908a5-config-data\") pod \"watcher-decision-engine-0\" (UID: \"4129eb47-beba-4bec-8cb2-59818e8908a5\") " pod="openstack/watcher-decision-engine-0" Jan 27 14:52:40 crc kubenswrapper[4698]: I0127 14:52:40.145407 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8fvtv\" (UniqueName: \"kubernetes.io/projected/aa09210f-2d05-4dc9-bf03-3af614176a09-kube-api-access-8fvtv\") pod \"ceilometer-0\" (UID: \"aa09210f-2d05-4dc9-bf03-3af614176a09\") " pod="openstack/ceilometer-0" Jan 27 14:52:40 crc kubenswrapper[4698]: I0127 14:52:40.145512 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/aa09210f-2d05-4dc9-bf03-3af614176a09-scripts\") pod \"ceilometer-0\" (UID: \"aa09210f-2d05-4dc9-bf03-3af614176a09\") " pod="openstack/ceilometer-0" Jan 27 14:52:40 crc kubenswrapper[4698]: I0127 14:52:40.145557 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4129eb47-beba-4bec-8cb2-59818e8908a5-logs\") pod \"watcher-decision-engine-0\" (UID: \"4129eb47-beba-4bec-8cb2-59818e8908a5\") " pod="openstack/watcher-decision-engine-0" Jan 27 14:52:40 crc kubenswrapper[4698]: I0127 14:52:40.145582 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/aa09210f-2d05-4dc9-bf03-3af614176a09-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"aa09210f-2d05-4dc9-bf03-3af614176a09\") " pod="openstack/ceilometer-0" Jan 27 14:52:40 crc kubenswrapper[4698]: I0127 14:52:40.145650 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aa09210f-2d05-4dc9-bf03-3af614176a09-config-data\") pod \"ceilometer-0\" (UID: \"aa09210f-2d05-4dc9-bf03-3af614176a09\") " pod="openstack/ceilometer-0" Jan 27 14:52:40 crc kubenswrapper[4698]: I0127 
Jan 27 14:52:40 crc kubenswrapper[4698]: I0127 14:52:40.145716 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aa09210f-2d05-4dc9-bf03-3af614176a09-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"aa09210f-2d05-4dc9-bf03-3af614176a09\") " pod="openstack/ceilometer-0"
Jan 27 14:52:40 crc kubenswrapper[4698]: I0127 14:52:40.145739 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/aa09210f-2d05-4dc9-bf03-3af614176a09-log-httpd\") pod \"ceilometer-0\" (UID: \"aa09210f-2d05-4dc9-bf03-3af614176a09\") " pod="openstack/ceilometer-0"
Jan 27 14:52:40 crc kubenswrapper[4698]: I0127 14:52:40.145768 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/aa09210f-2d05-4dc9-bf03-3af614176a09-run-httpd\") pod \"ceilometer-0\" (UID: \"aa09210f-2d05-4dc9-bf03-3af614176a09\") " pod="openstack/ceilometer-0"
Jan 27 14:52:40 crc kubenswrapper[4698]: I0127 14:52:40.247519 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4129eb47-beba-4bec-8cb2-59818e8908a5-combined-ca-bundle\") pod \"watcher-decision-engine-0\" (UID: \"4129eb47-beba-4bec-8cb2-59818e8908a5\") " pod="openstack/watcher-decision-engine-0"
Jan 27 14:52:40 crc kubenswrapper[4698]: I0127 14:52:40.247897 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4129eb47-beba-4bec-8cb2-59818e8908a5-config-data\") pod \"watcher-decision-engine-0\" (UID: \"4129eb47-beba-4bec-8cb2-59818e8908a5\") " pod="openstack/watcher-decision-engine-0"
Jan 27 14:52:40 crc kubenswrapper[4698]: I0127 14:52:40.247929 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8fvtv\" (UniqueName: \"kubernetes.io/projected/aa09210f-2d05-4dc9-bf03-3af614176a09-kube-api-access-8fvtv\") pod \"ceilometer-0\" (UID: \"aa09210f-2d05-4dc9-bf03-3af614176a09\") " pod="openstack/ceilometer-0"
Jan 27 14:52:40 crc kubenswrapper[4698]: I0127 14:52:40.248024 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/aa09210f-2d05-4dc9-bf03-3af614176a09-scripts\") pod \"ceilometer-0\" (UID: \"aa09210f-2d05-4dc9-bf03-3af614176a09\") " pod="openstack/ceilometer-0"
Jan 27 14:52:40 crc kubenswrapper[4698]: I0127 14:52:40.248060 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4129eb47-beba-4bec-8cb2-59818e8908a5-logs\") pod \"watcher-decision-engine-0\" (UID: \"4129eb47-beba-4bec-8cb2-59818e8908a5\") " pod="openstack/watcher-decision-engine-0"
Jan 27 14:52:40 crc kubenswrapper[4698]: I0127 14:52:40.248083 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/aa09210f-2d05-4dc9-bf03-3af614176a09-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"aa09210f-2d05-4dc9-bf03-3af614176a09\") " pod="openstack/ceilometer-0"
Jan 27 14:52:40 crc kubenswrapper[4698]: I0127 14:52:40.248130 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aa09210f-2d05-4dc9-bf03-3af614176a09-config-data\") pod \"ceilometer-0\" (UID: \"aa09210f-2d05-4dc9-bf03-3af614176a09\") " pod="openstack/ceilometer-0"
Jan 27 14:52:40 crc kubenswrapper[4698]: I0127 14:52:40.248173 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/4129eb47-beba-4bec-8cb2-59818e8908a5-custom-prometheus-ca\") pod \"watcher-decision-engine-0\" (UID: \"4129eb47-beba-4bec-8cb2-59818e8908a5\") " pod="openstack/watcher-decision-engine-0"
Jan 27 14:52:40 crc kubenswrapper[4698]: I0127 14:52:40.248205 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aa09210f-2d05-4dc9-bf03-3af614176a09-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"aa09210f-2d05-4dc9-bf03-3af614176a09\") " pod="openstack/ceilometer-0"
Jan 27 14:52:40 crc kubenswrapper[4698]: I0127 14:52:40.248229 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/aa09210f-2d05-4dc9-bf03-3af614176a09-log-httpd\") pod \"ceilometer-0\" (UID: \"aa09210f-2d05-4dc9-bf03-3af614176a09\") " pod="openstack/ceilometer-0"
Jan 27 14:52:40 crc kubenswrapper[4698]: I0127 14:52:40.248265 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/aa09210f-2d05-4dc9-bf03-3af614176a09-run-httpd\") pod \"ceilometer-0\" (UID: \"aa09210f-2d05-4dc9-bf03-3af614176a09\") " pod="openstack/ceilometer-0"
Jan 27 14:52:40 crc kubenswrapper[4698]: I0127 14:52:40.248390 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7dngt\" (UniqueName: \"kubernetes.io/projected/4129eb47-beba-4bec-8cb2-59818e8908a5-kube-api-access-7dngt\") pod \"watcher-decision-engine-0\" (UID: \"4129eb47-beba-4bec-8cb2-59818e8908a5\") " pod="openstack/watcher-decision-engine-0"
Jan 27 14:52:40 crc kubenswrapper[4698]: I0127 14:52:40.250683 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/aa09210f-2d05-4dc9-bf03-3af614176a09-log-httpd\") pod \"ceilometer-0\" (UID: \"aa09210f-2d05-4dc9-bf03-3af614176a09\") " pod="openstack/ceilometer-0"
Jan 27 14:52:40 crc kubenswrapper[4698]: I0127 14:52:40.250736 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4129eb47-beba-4bec-8cb2-59818e8908a5-logs\") pod \"watcher-decision-engine-0\" (UID: \"4129eb47-beba-4bec-8cb2-59818e8908a5\") " pod="openstack/watcher-decision-engine-0"
Jan 27 14:52:40 crc kubenswrapper[4698]: I0127 14:52:40.251096 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/aa09210f-2d05-4dc9-bf03-3af614176a09-run-httpd\") pod \"ceilometer-0\" (UID: \"aa09210f-2d05-4dc9-bf03-3af614176a09\") " pod="openstack/ceilometer-0"
Jan 27 14:52:40 crc kubenswrapper[4698]: I0127 14:52:40.253888 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/4129eb47-beba-4bec-8cb2-59818e8908a5-custom-prometheus-ca\") pod \"watcher-decision-engine-0\" (UID: \"4129eb47-beba-4bec-8cb2-59818e8908a5\") " pod="openstack/watcher-decision-engine-0"
\"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/4129eb47-beba-4bec-8cb2-59818e8908a5-custom-prometheus-ca\") pod \"watcher-decision-engine-0\" (UID: \"4129eb47-beba-4bec-8cb2-59818e8908a5\") " pod="openstack/watcher-decision-engine-0" Jan 27 14:52:40 crc kubenswrapper[4698]: I0127 14:52:40.255312 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/aa09210f-2d05-4dc9-bf03-3af614176a09-scripts\") pod \"ceilometer-0\" (UID: \"aa09210f-2d05-4dc9-bf03-3af614176a09\") " pod="openstack/ceilometer-0" Jan 27 14:52:40 crc kubenswrapper[4698]: I0127 14:52:40.258123 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aa09210f-2d05-4dc9-bf03-3af614176a09-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"aa09210f-2d05-4dc9-bf03-3af614176a09\") " pod="openstack/ceilometer-0" Jan 27 14:52:40 crc kubenswrapper[4698]: I0127 14:52:40.258434 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/aa09210f-2d05-4dc9-bf03-3af614176a09-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"aa09210f-2d05-4dc9-bf03-3af614176a09\") " pod="openstack/ceilometer-0" Jan 27 14:52:40 crc kubenswrapper[4698]: I0127 14:52:40.259073 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4129eb47-beba-4bec-8cb2-59818e8908a5-combined-ca-bundle\") pod \"watcher-decision-engine-0\" (UID: \"4129eb47-beba-4bec-8cb2-59818e8908a5\") " pod="openstack/watcher-decision-engine-0" Jan 27 14:52:40 crc kubenswrapper[4698]: I0127 14:52:40.259898 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4129eb47-beba-4bec-8cb2-59818e8908a5-config-data\") pod \"watcher-decision-engine-0\" (UID: \"4129eb47-beba-4bec-8cb2-59818e8908a5\") " pod="openstack/watcher-decision-engine-0" Jan 27 14:52:40 crc kubenswrapper[4698]: I0127 14:52:40.262905 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aa09210f-2d05-4dc9-bf03-3af614176a09-config-data\") pod \"ceilometer-0\" (UID: \"aa09210f-2d05-4dc9-bf03-3af614176a09\") " pod="openstack/ceilometer-0" Jan 27 14:52:40 crc kubenswrapper[4698]: I0127 14:52:40.268265 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7dngt\" (UniqueName: \"kubernetes.io/projected/4129eb47-beba-4bec-8cb2-59818e8908a5-kube-api-access-7dngt\") pod \"watcher-decision-engine-0\" (UID: \"4129eb47-beba-4bec-8cb2-59818e8908a5\") " pod="openstack/watcher-decision-engine-0" Jan 27 14:52:40 crc kubenswrapper[4698]: I0127 14:52:40.270837 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8fvtv\" (UniqueName: \"kubernetes.io/projected/aa09210f-2d05-4dc9-bf03-3af614176a09-kube-api-access-8fvtv\") pod \"ceilometer-0\" (UID: \"aa09210f-2d05-4dc9-bf03-3af614176a09\") " pod="openstack/ceilometer-0" Jan 27 14:52:40 crc kubenswrapper[4698]: I0127 14:52:40.398404 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 27 14:52:40 crc kubenswrapper[4698]: I0127 14:52:40.406197 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-decision-engine-0" Jan 27 14:52:40 crc kubenswrapper[4698]: E0127 14:52:40.524393 4698 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="f8a496dcbb962b06ce3c47831eb0fbff31387dd518458126b5ac2bd700893b41" cmd=["/usr/bin/pgrep","-r","DRST","watcher-applier"] Jan 27 14:52:40 crc kubenswrapper[4698]: E0127 14:52:40.529592 4698 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="f8a496dcbb962b06ce3c47831eb0fbff31387dd518458126b5ac2bd700893b41" cmd=["/usr/bin/pgrep","-r","DRST","watcher-applier"] Jan 27 14:52:40 crc kubenswrapper[4698]: E0127 14:52:40.531098 4698 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="f8a496dcbb962b06ce3c47831eb0fbff31387dd518458126b5ac2bd700893b41" cmd=["/usr/bin/pgrep","-r","DRST","watcher-applier"] Jan 27 14:52:40 crc kubenswrapper[4698]: E0127 14:52:40.531145 4698 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/watcher-applier-0" podUID="445b01d2-0375-432b-808d-4045eb66c5da" containerName="watcher-applier" Jan 27 14:52:40 crc kubenswrapper[4698]: I0127 14:52:40.866878 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 27 14:52:40 crc kubenswrapper[4698]: I0127 14:52:40.940926 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-decision-engine-0"] Jan 27 14:52:40 crc kubenswrapper[4698]: W0127 14:52:40.974346 4698 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podaa09210f_2d05_4dc9_bf03_3af614176a09.slice/crio-f4aee7d0377a83a7f0085cf031e2c8a10942389e73769bba15022bcef67cdad4 WatchSource:0}: Error finding container f4aee7d0377a83a7f0085cf031e2c8a10942389e73769bba15022bcef67cdad4: Status 404 returned error can't find the container with id f4aee7d0377a83a7f0085cf031e2c8a10942389e73769bba15022bcef67cdad4 Jan 27 14:52:40 crc kubenswrapper[4698]: W0127 14:52:40.975129 4698 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4129eb47_beba_4bec_8cb2_59818e8908a5.slice/crio-febed9f710b1af49f4e1169ba30db878ee2b78ad32a40f722e4c0e0ad74c0cb0 WatchSource:0}: Error finding container febed9f710b1af49f4e1169ba30db878ee2b78ad32a40f722e4c0e0ad74c0cb0: Status 404 returned error can't find the container with id febed9f710b1af49f4e1169ba30db878ee2b78ad32a40f722e4c0e0ad74c0cb0 Jan 27 14:52:40 crc kubenswrapper[4698]: I0127 14:52:40.977030 4698 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 27 14:52:41 crc kubenswrapper[4698]: I0127 14:52:41.006075 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6a7967f1-2f97-4e12-8ebc-48b2b6cdc845" path="/var/lib/kubelet/pods/6a7967f1-2f97-4e12-8ebc-48b2b6cdc845/volumes" Jan 27 14:52:41 crc kubenswrapper[4698]: I0127 14:52:41.006694 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="c198be7c-95a9-47ea-80fd-252e5d8d9ac9" path="/var/lib/kubelet/pods/c198be7c-95a9-47ea-80fd-252e5d8d9ac9/volumes" Jan 27 14:52:41 crc kubenswrapper[4698]: I0127 14:52:41.919132 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"aa09210f-2d05-4dc9-bf03-3af614176a09","Type":"ContainerStarted","Data":"1ab95197cc99fbdad3e726bc7b972c4c3e64739f16d6449344c8acb9f11caf99"} Jan 27 14:52:41 crc kubenswrapper[4698]: I0127 14:52:41.919700 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"aa09210f-2d05-4dc9-bf03-3af614176a09","Type":"ContainerStarted","Data":"f4aee7d0377a83a7f0085cf031e2c8a10942389e73769bba15022bcef67cdad4"} Jan 27 14:52:41 crc kubenswrapper[4698]: I0127 14:52:41.921330 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"4129eb47-beba-4bec-8cb2-59818e8908a5","Type":"ContainerStarted","Data":"32e27f8ccc73a467b421a07dc215a6afe98e8a7df744f5251d227a4c0f791966"} Jan 27 14:52:41 crc kubenswrapper[4698]: I0127 14:52:41.921366 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"4129eb47-beba-4bec-8cb2-59818e8908a5","Type":"ContainerStarted","Data":"febed9f710b1af49f4e1169ba30db878ee2b78ad32a40f722e4c0e0ad74c0cb0"} Jan 27 14:52:41 crc kubenswrapper[4698]: I0127 14:52:41.939354 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/watcher-decision-engine-0" podStartSLOduration=2.939330403 podStartE2EDuration="2.939330403s" podCreationTimestamp="2026-01-27 14:52:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:52:41.936610082 +0000 UTC m=+1417.613387567" watchObservedRunningTime="2026-01-27 14:52:41.939330403 +0000 UTC m=+1417.616107868" Jan 27 14:52:42 crc kubenswrapper[4698]: I0127 14:52:42.933443 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"aa09210f-2d05-4dc9-bf03-3af614176a09","Type":"ContainerStarted","Data":"6804ba764d4a6d87d037f1c26220e544d6cfd46db87688ebffb08cf729e0c954"} Jan 27 14:52:43 crc kubenswrapper[4698]: I0127 14:52:43.942911 4698 generic.go:334] "Generic (PLEG): container finished" podID="445b01d2-0375-432b-808d-4045eb66c5da" containerID="f8a496dcbb962b06ce3c47831eb0fbff31387dd518458126b5ac2bd700893b41" exitCode=137 Jan 27 14:52:43 crc kubenswrapper[4698]: I0127 14:52:43.942965 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-applier-0" event={"ID":"445b01d2-0375-432b-808d-4045eb66c5da","Type":"ContainerDied","Data":"f8a496dcbb962b06ce3c47831eb0fbff31387dd518458126b5ac2bd700893b41"} Jan 27 14:52:45 crc kubenswrapper[4698]: I0127 14:52:45.213599 4698 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-applier-0" Jan 27 14:52:45 crc kubenswrapper[4698]: I0127 14:52:45.364312 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/445b01d2-0375-432b-808d-4045eb66c5da-combined-ca-bundle\") pod \"445b01d2-0375-432b-808d-4045eb66c5da\" (UID: \"445b01d2-0375-432b-808d-4045eb66c5da\") " Jan 27 14:52:45 crc kubenswrapper[4698]: I0127 14:52:45.364465 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9q25c\" (UniqueName: \"kubernetes.io/projected/445b01d2-0375-432b-808d-4045eb66c5da-kube-api-access-9q25c\") pod \"445b01d2-0375-432b-808d-4045eb66c5da\" (UID: \"445b01d2-0375-432b-808d-4045eb66c5da\") " Jan 27 14:52:45 crc kubenswrapper[4698]: I0127 14:52:45.364551 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/445b01d2-0375-432b-808d-4045eb66c5da-logs\") pod \"445b01d2-0375-432b-808d-4045eb66c5da\" (UID: \"445b01d2-0375-432b-808d-4045eb66c5da\") " Jan 27 14:52:45 crc kubenswrapper[4698]: I0127 14:52:45.364666 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/445b01d2-0375-432b-808d-4045eb66c5da-config-data\") pod \"445b01d2-0375-432b-808d-4045eb66c5da\" (UID: \"445b01d2-0375-432b-808d-4045eb66c5da\") " Jan 27 14:52:45 crc kubenswrapper[4698]: I0127 14:52:45.365001 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/445b01d2-0375-432b-808d-4045eb66c5da-logs" (OuterVolumeSpecName: "logs") pod "445b01d2-0375-432b-808d-4045eb66c5da" (UID: "445b01d2-0375-432b-808d-4045eb66c5da"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 14:52:45 crc kubenswrapper[4698]: I0127 14:52:45.365496 4698 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/445b01d2-0375-432b-808d-4045eb66c5da-logs\") on node \"crc\" DevicePath \"\"" Jan 27 14:52:45 crc kubenswrapper[4698]: I0127 14:52:45.375162 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/445b01d2-0375-432b-808d-4045eb66c5da-kube-api-access-9q25c" (OuterVolumeSpecName: "kube-api-access-9q25c") pod "445b01d2-0375-432b-808d-4045eb66c5da" (UID: "445b01d2-0375-432b-808d-4045eb66c5da"). InnerVolumeSpecName "kube-api-access-9q25c". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:52:45 crc kubenswrapper[4698]: I0127 14:52:45.405808 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/445b01d2-0375-432b-808d-4045eb66c5da-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "445b01d2-0375-432b-808d-4045eb66c5da" (UID: "445b01d2-0375-432b-808d-4045eb66c5da"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:52:45 crc kubenswrapper[4698]: I0127 14:52:45.423278 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/445b01d2-0375-432b-808d-4045eb66c5da-config-data" (OuterVolumeSpecName: "config-data") pod "445b01d2-0375-432b-808d-4045eb66c5da" (UID: "445b01d2-0375-432b-808d-4045eb66c5da"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:52:45 crc kubenswrapper[4698]: I0127 14:52:45.467716 4698 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/445b01d2-0375-432b-808d-4045eb66c5da-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 14:52:45 crc kubenswrapper[4698]: I0127 14:52:45.467758 4698 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/445b01d2-0375-432b-808d-4045eb66c5da-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 14:52:45 crc kubenswrapper[4698]: I0127 14:52:45.467769 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9q25c\" (UniqueName: \"kubernetes.io/projected/445b01d2-0375-432b-808d-4045eb66c5da-kube-api-access-9q25c\") on node \"crc\" DevicePath \"\"" Jan 27 14:52:45 crc kubenswrapper[4698]: I0127 14:52:45.964760 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-applier-0" Jan 27 14:52:45 crc kubenswrapper[4698]: I0127 14:52:45.964760 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-applier-0" event={"ID":"445b01d2-0375-432b-808d-4045eb66c5da","Type":"ContainerDied","Data":"841222290a6ca76eac4d640c3e716440677664559bb3ffaff83d6396bf871a3d"} Jan 27 14:52:45 crc kubenswrapper[4698]: I0127 14:52:45.964908 4698 scope.go:117] "RemoveContainer" containerID="f8a496dcbb962b06ce3c47831eb0fbff31387dd518458126b5ac2bd700893b41" Jan 27 14:52:45 crc kubenswrapper[4698]: I0127 14:52:45.971900 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"aa09210f-2d05-4dc9-bf03-3af614176a09","Type":"ContainerStarted","Data":"9939565094b72088e31edaea1834ee5f03f5bc9ad71838ac70b839170591e4d6"} Jan 27 14:52:45 crc kubenswrapper[4698]: I0127 14:52:45.992293 4698 generic.go:334] "Generic (PLEG): container finished" podID="4129eb47-beba-4bec-8cb2-59818e8908a5" containerID="32e27f8ccc73a467b421a07dc215a6afe98e8a7df744f5251d227a4c0f791966" exitCode=1 Jan 27 14:52:45 crc kubenswrapper[4698]: I0127 14:52:45.992386 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"4129eb47-beba-4bec-8cb2-59818e8908a5","Type":"ContainerDied","Data":"32e27f8ccc73a467b421a07dc215a6afe98e8a7df744f5251d227a4c0f791966"} Jan 27 14:52:45 crc kubenswrapper[4698]: I0127 14:52:45.993230 4698 scope.go:117] "RemoveContainer" containerID="32e27f8ccc73a467b421a07dc215a6afe98e8a7df744f5251d227a4c0f791966" Jan 27 14:52:45 crc kubenswrapper[4698]: I0127 14:52:45.996906 4698 generic.go:334] "Generic (PLEG): container finished" podID="4e01edb9-1cd8-4c9a-a602-d35ff30d64fe" containerID="04b54089d35cbda06ca5e8923f174f55591e3add4e7eb6362a6681256322cf0b" exitCode=0 Jan 27 14:52:45 crc kubenswrapper[4698]: I0127 14:52:45.996949 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-s4fks" event={"ID":"4e01edb9-1cd8-4c9a-a602-d35ff30d64fe","Type":"ContainerDied","Data":"04b54089d35cbda06ca5e8923f174f55591e3add4e7eb6362a6681256322cf0b"} Jan 27 14:52:46 crc kubenswrapper[4698]: I0127 14:52:46.009546 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/watcher-applier-0"] Jan 27 14:52:46 crc kubenswrapper[4698]: I0127 14:52:46.025007 4698 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/watcher-applier-0"] Jan 27 14:52:46 crc kubenswrapper[4698]: E0127 14:52:46.038723 4698 cadvisor_stats_provider.go:516] "Partial failure 
issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4129eb47_beba_4bec_8cb2_59818e8908a5.slice/crio-conmon-32e27f8ccc73a467b421a07dc215a6afe98e8a7df744f5251d227a4c0f791966.scope\": RecentStats: unable to find data in memory cache]" Jan 27 14:52:46 crc kubenswrapper[4698]: I0127 14:52:46.066540 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/watcher-applier-0"] Jan 27 14:52:46 crc kubenswrapper[4698]: E0127 14:52:46.067019 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="445b01d2-0375-432b-808d-4045eb66c5da" containerName="watcher-applier" Jan 27 14:52:46 crc kubenswrapper[4698]: I0127 14:52:46.067041 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="445b01d2-0375-432b-808d-4045eb66c5da" containerName="watcher-applier" Jan 27 14:52:46 crc kubenswrapper[4698]: I0127 14:52:46.069022 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="445b01d2-0375-432b-808d-4045eb66c5da" containerName="watcher-applier" Jan 27 14:52:46 crc kubenswrapper[4698]: I0127 14:52:46.070248 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-applier-0" Jan 27 14:52:46 crc kubenswrapper[4698]: I0127 14:52:46.081593 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-applier-config-data" Jan 27 14:52:46 crc kubenswrapper[4698]: I0127 14:52:46.118803 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-applier-0"] Jan 27 14:52:46 crc kubenswrapper[4698]: I0127 14:52:46.189288 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d3bcf72a-4e77-4609-9796-a712514b59de-config-data\") pod \"watcher-applier-0\" (UID: \"d3bcf72a-4e77-4609-9796-a712514b59de\") " pod="openstack/watcher-applier-0" Jan 27 14:52:46 crc kubenswrapper[4698]: I0127 14:52:46.189357 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d3bcf72a-4e77-4609-9796-a712514b59de-logs\") pod \"watcher-applier-0\" (UID: \"d3bcf72a-4e77-4609-9796-a712514b59de\") " pod="openstack/watcher-applier-0" Jan 27 14:52:46 crc kubenswrapper[4698]: I0127 14:52:46.189392 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fk8qr\" (UniqueName: \"kubernetes.io/projected/d3bcf72a-4e77-4609-9796-a712514b59de-kube-api-access-fk8qr\") pod \"watcher-applier-0\" (UID: \"d3bcf72a-4e77-4609-9796-a712514b59de\") " pod="openstack/watcher-applier-0" Jan 27 14:52:46 crc kubenswrapper[4698]: I0127 14:52:46.189741 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d3bcf72a-4e77-4609-9796-a712514b59de-combined-ca-bundle\") pod \"watcher-applier-0\" (UID: \"d3bcf72a-4e77-4609-9796-a712514b59de\") " pod="openstack/watcher-applier-0" Jan 27 14:52:46 crc kubenswrapper[4698]: I0127 14:52:46.291924 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d3bcf72a-4e77-4609-9796-a712514b59de-combined-ca-bundle\") pod \"watcher-applier-0\" (UID: \"d3bcf72a-4e77-4609-9796-a712514b59de\") " pod="openstack/watcher-applier-0" Jan 27 14:52:46 crc kubenswrapper[4698]: I0127 14:52:46.292052 4698 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d3bcf72a-4e77-4609-9796-a712514b59de-config-data\") pod \"watcher-applier-0\" (UID: \"d3bcf72a-4e77-4609-9796-a712514b59de\") " pod="openstack/watcher-applier-0" Jan 27 14:52:46 crc kubenswrapper[4698]: I0127 14:52:46.292082 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d3bcf72a-4e77-4609-9796-a712514b59de-logs\") pod \"watcher-applier-0\" (UID: \"d3bcf72a-4e77-4609-9796-a712514b59de\") " pod="openstack/watcher-applier-0" Jan 27 14:52:46 crc kubenswrapper[4698]: I0127 14:52:46.292105 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fk8qr\" (UniqueName: \"kubernetes.io/projected/d3bcf72a-4e77-4609-9796-a712514b59de-kube-api-access-fk8qr\") pod \"watcher-applier-0\" (UID: \"d3bcf72a-4e77-4609-9796-a712514b59de\") " pod="openstack/watcher-applier-0" Jan 27 14:52:46 crc kubenswrapper[4698]: I0127 14:52:46.293302 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d3bcf72a-4e77-4609-9796-a712514b59de-logs\") pod \"watcher-applier-0\" (UID: \"d3bcf72a-4e77-4609-9796-a712514b59de\") " pod="openstack/watcher-applier-0" Jan 27 14:52:46 crc kubenswrapper[4698]: I0127 14:52:46.298087 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d3bcf72a-4e77-4609-9796-a712514b59de-combined-ca-bundle\") pod \"watcher-applier-0\" (UID: \"d3bcf72a-4e77-4609-9796-a712514b59de\") " pod="openstack/watcher-applier-0" Jan 27 14:52:46 crc kubenswrapper[4698]: I0127 14:52:46.298354 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d3bcf72a-4e77-4609-9796-a712514b59de-config-data\") pod \"watcher-applier-0\" (UID: \"d3bcf72a-4e77-4609-9796-a712514b59de\") " pod="openstack/watcher-applier-0" Jan 27 14:52:46 crc kubenswrapper[4698]: I0127 14:52:46.309558 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fk8qr\" (UniqueName: \"kubernetes.io/projected/d3bcf72a-4e77-4609-9796-a712514b59de-kube-api-access-fk8qr\") pod \"watcher-applier-0\" (UID: \"d3bcf72a-4e77-4609-9796-a712514b59de\") " pod="openstack/watcher-applier-0" Jan 27 14:52:46 crc kubenswrapper[4698]: I0127 14:52:46.477278 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-applier-0" Jan 27 14:52:46 crc kubenswrapper[4698]: I0127 14:52:46.915756 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-applier-0"] Jan 27 14:52:46 crc kubenswrapper[4698]: W0127 14:52:46.921908 4698 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd3bcf72a_4e77_4609_9796_a712514b59de.slice/crio-ff6d9f29cb052b99f867d413eff6af44a0994bacc9806b3f4d618aa1a4d0320f WatchSource:0}: Error finding container ff6d9f29cb052b99f867d413eff6af44a0994bacc9806b3f4d618aa1a4d0320f: Status 404 returned error can't find the container with id ff6d9f29cb052b99f867d413eff6af44a0994bacc9806b3f4d618aa1a4d0320f Jan 27 14:52:47 crc kubenswrapper[4698]: I0127 14:52:47.003250 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="445b01d2-0375-432b-808d-4045eb66c5da" path="/var/lib/kubelet/pods/445b01d2-0375-432b-808d-4045eb66c5da/volumes" Jan 27 14:52:47 crc kubenswrapper[4698]: I0127 14:52:47.008495 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-applier-0" event={"ID":"d3bcf72a-4e77-4609-9796-a712514b59de","Type":"ContainerStarted","Data":"ff6d9f29cb052b99f867d413eff6af44a0994bacc9806b3f4d618aa1a4d0320f"} Jan 27 14:52:47 crc kubenswrapper[4698]: I0127 14:52:47.013315 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"aa09210f-2d05-4dc9-bf03-3af614176a09","Type":"ContainerStarted","Data":"e180fc33b37136a8beea775412a972cdd73e909916fe36ab7f07411cf2f93635"} Jan 27 14:52:47 crc kubenswrapper[4698]: I0127 14:52:47.013375 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 27 14:52:47 crc kubenswrapper[4698]: I0127 14:52:47.016172 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"4129eb47-beba-4bec-8cb2-59818e8908a5","Type":"ContainerStarted","Data":"8da5bc0af3d507c438f5062981c57f9b3bc194f49ed6a075d61a10c7dba95272"} Jan 27 14:52:47 crc kubenswrapper[4698]: I0127 14:52:47.069706 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.462890108 podStartE2EDuration="8.069687391s" podCreationTimestamp="2026-01-27 14:52:39 +0000 UTC" firstStartedPulling="2026-01-27 14:52:40.976692773 +0000 UTC m=+1416.653470238" lastFinishedPulling="2026-01-27 14:52:46.583490056 +0000 UTC m=+1422.260267521" observedRunningTime="2026-01-27 14:52:47.03321578 +0000 UTC m=+1422.709993245" watchObservedRunningTime="2026-01-27 14:52:47.069687391 +0000 UTC m=+1422.746464856" Jan 27 14:52:47 crc kubenswrapper[4698]: I0127 14:52:47.413365 4698 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-s4fks" Jan 27 14:52:47 crc kubenswrapper[4698]: I0127 14:52:47.520700 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4e01edb9-1cd8-4c9a-a602-d35ff30d64fe-combined-ca-bundle\") pod \"4e01edb9-1cd8-4c9a-a602-d35ff30d64fe\" (UID: \"4e01edb9-1cd8-4c9a-a602-d35ff30d64fe\") " Jan 27 14:52:47 crc kubenswrapper[4698]: I0127 14:52:47.520843 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8pdpx\" (UniqueName: \"kubernetes.io/projected/4e01edb9-1cd8-4c9a-a602-d35ff30d64fe-kube-api-access-8pdpx\") pod \"4e01edb9-1cd8-4c9a-a602-d35ff30d64fe\" (UID: \"4e01edb9-1cd8-4c9a-a602-d35ff30d64fe\") " Jan 27 14:52:47 crc kubenswrapper[4698]: I0127 14:52:47.520878 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4e01edb9-1cd8-4c9a-a602-d35ff30d64fe-logs\") pod \"4e01edb9-1cd8-4c9a-a602-d35ff30d64fe\" (UID: \"4e01edb9-1cd8-4c9a-a602-d35ff30d64fe\") " Jan 27 14:52:47 crc kubenswrapper[4698]: I0127 14:52:47.520925 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4e01edb9-1cd8-4c9a-a602-d35ff30d64fe-config-data\") pod \"4e01edb9-1cd8-4c9a-a602-d35ff30d64fe\" (UID: \"4e01edb9-1cd8-4c9a-a602-d35ff30d64fe\") " Jan 27 14:52:47 crc kubenswrapper[4698]: I0127 14:52:47.520952 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4e01edb9-1cd8-4c9a-a602-d35ff30d64fe-scripts\") pod \"4e01edb9-1cd8-4c9a-a602-d35ff30d64fe\" (UID: \"4e01edb9-1cd8-4c9a-a602-d35ff30d64fe\") " Jan 27 14:52:47 crc kubenswrapper[4698]: I0127 14:52:47.521660 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4e01edb9-1cd8-4c9a-a602-d35ff30d64fe-logs" (OuterVolumeSpecName: "logs") pod "4e01edb9-1cd8-4c9a-a602-d35ff30d64fe" (UID: "4e01edb9-1cd8-4c9a-a602-d35ff30d64fe"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 14:52:47 crc kubenswrapper[4698]: I0127 14:52:47.526102 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4e01edb9-1cd8-4c9a-a602-d35ff30d64fe-scripts" (OuterVolumeSpecName: "scripts") pod "4e01edb9-1cd8-4c9a-a602-d35ff30d64fe" (UID: "4e01edb9-1cd8-4c9a-a602-d35ff30d64fe"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:52:47 crc kubenswrapper[4698]: I0127 14:52:47.534520 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4e01edb9-1cd8-4c9a-a602-d35ff30d64fe-kube-api-access-8pdpx" (OuterVolumeSpecName: "kube-api-access-8pdpx") pod "4e01edb9-1cd8-4c9a-a602-d35ff30d64fe" (UID: "4e01edb9-1cd8-4c9a-a602-d35ff30d64fe"). InnerVolumeSpecName "kube-api-access-8pdpx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:52:47 crc kubenswrapper[4698]: I0127 14:52:47.546960 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4e01edb9-1cd8-4c9a-a602-d35ff30d64fe-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4e01edb9-1cd8-4c9a-a602-d35ff30d64fe" (UID: "4e01edb9-1cd8-4c9a-a602-d35ff30d64fe"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:52:47 crc kubenswrapper[4698]: I0127 14:52:47.559263 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4e01edb9-1cd8-4c9a-a602-d35ff30d64fe-config-data" (OuterVolumeSpecName: "config-data") pod "4e01edb9-1cd8-4c9a-a602-d35ff30d64fe" (UID: "4e01edb9-1cd8-4c9a-a602-d35ff30d64fe"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:52:47 crc kubenswrapper[4698]: I0127 14:52:47.623182 4698 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4e01edb9-1cd8-4c9a-a602-d35ff30d64fe-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 14:52:47 crc kubenswrapper[4698]: I0127 14:52:47.623217 4698 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4e01edb9-1cd8-4c9a-a602-d35ff30d64fe-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 14:52:47 crc kubenswrapper[4698]: I0127 14:52:47.623226 4698 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4e01edb9-1cd8-4c9a-a602-d35ff30d64fe-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 14:52:47 crc kubenswrapper[4698]: I0127 14:52:47.623237 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8pdpx\" (UniqueName: \"kubernetes.io/projected/4e01edb9-1cd8-4c9a-a602-d35ff30d64fe-kube-api-access-8pdpx\") on node \"crc\" DevicePath \"\"" Jan 27 14:52:47 crc kubenswrapper[4698]: I0127 14:52:47.623268 4698 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4e01edb9-1cd8-4c9a-a602-d35ff30d64fe-logs\") on node \"crc\" DevicePath \"\"" Jan 27 14:52:48 crc kubenswrapper[4698]: I0127 14:52:48.029124 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-applier-0" event={"ID":"d3bcf72a-4e77-4609-9796-a712514b59de","Type":"ContainerStarted","Data":"f8a2a85d9cbf6b837f272483797090b72afe3453baafe880652638f25843e5d6"} Jan 27 14:52:48 crc kubenswrapper[4698]: I0127 14:52:48.036880 4698 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-s4fks" Jan 27 14:52:48 crc kubenswrapper[4698]: I0127 14:52:48.037539 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-s4fks" event={"ID":"4e01edb9-1cd8-4c9a-a602-d35ff30d64fe","Type":"ContainerDied","Data":"6345d311ff16ce6000a6bb77ba4981b402f2d944d3e6417f4b68c67386614771"} Jan 27 14:52:48 crc kubenswrapper[4698]: I0127 14:52:48.038468 4698 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6345d311ff16ce6000a6bb77ba4981b402f2d944d3e6417f4b68c67386614771" Jan 27 14:52:48 crc kubenswrapper[4698]: I0127 14:52:48.062630 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/watcher-applier-0" podStartSLOduration=2.062604339 podStartE2EDuration="2.062604339s" podCreationTimestamp="2026-01-27 14:52:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:52:48.051034474 +0000 UTC m=+1423.727811969" watchObservedRunningTime="2026-01-27 14:52:48.062604339 +0000 UTC m=+1423.739381824" Jan 27 14:52:48 crc kubenswrapper[4698]: I0127 14:52:48.213585 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-79c59487f6-d4xj7"] Jan 27 14:52:48 crc kubenswrapper[4698]: E0127 14:52:48.214092 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4e01edb9-1cd8-4c9a-a602-d35ff30d64fe" containerName="placement-db-sync" Jan 27 14:52:48 crc kubenswrapper[4698]: I0127 14:52:48.214116 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="4e01edb9-1cd8-4c9a-a602-d35ff30d64fe" containerName="placement-db-sync" Jan 27 14:52:48 crc kubenswrapper[4698]: I0127 14:52:48.214391 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="4e01edb9-1cd8-4c9a-a602-d35ff30d64fe" containerName="placement-db-sync" Jan 27 14:52:48 crc kubenswrapper[4698]: I0127 14:52:48.215828 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-79c59487f6-d4xj7" Jan 27 14:52:48 crc kubenswrapper[4698]: I0127 14:52:48.220332 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-internal-svc" Jan 27 14:52:48 crc kubenswrapper[4698]: I0127 14:52:48.220625 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Jan 27 14:52:48 crc kubenswrapper[4698]: I0127 14:52:48.221682 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Jan 27 14:52:48 crc kubenswrapper[4698]: I0127 14:52:48.222122 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-vpn8j" Jan 27 14:52:48 crc kubenswrapper[4698]: I0127 14:52:48.222494 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-public-svc" Jan 27 14:52:48 crc kubenswrapper[4698]: I0127 14:52:48.237061 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-79c59487f6-d4xj7"] Jan 27 14:52:48 crc kubenswrapper[4698]: I0127 14:52:48.338190 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cda38994-c355-459e-af24-3fb060e62625-logs\") pod \"placement-79c59487f6-d4xj7\" (UID: \"cda38994-c355-459e-af24-3fb060e62625\") " pod="openstack/placement-79c59487f6-d4xj7" Jan 27 14:52:48 crc kubenswrapper[4698]: I0127 14:52:48.338244 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cda38994-c355-459e-af24-3fb060e62625-scripts\") pod \"placement-79c59487f6-d4xj7\" (UID: \"cda38994-c355-459e-af24-3fb060e62625\") " pod="openstack/placement-79c59487f6-d4xj7" Jan 27 14:52:48 crc kubenswrapper[4698]: I0127 14:52:48.338338 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cda38994-c355-459e-af24-3fb060e62625-config-data\") pod \"placement-79c59487f6-d4xj7\" (UID: \"cda38994-c355-459e-af24-3fb060e62625\") " pod="openstack/placement-79c59487f6-d4xj7" Jan 27 14:52:48 crc kubenswrapper[4698]: I0127 14:52:48.338435 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s8b6b\" (UniqueName: \"kubernetes.io/projected/cda38994-c355-459e-af24-3fb060e62625-kube-api-access-s8b6b\") pod \"placement-79c59487f6-d4xj7\" (UID: \"cda38994-c355-459e-af24-3fb060e62625\") " pod="openstack/placement-79c59487f6-d4xj7" Jan 27 14:52:48 crc kubenswrapper[4698]: I0127 14:52:48.338532 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cda38994-c355-459e-af24-3fb060e62625-combined-ca-bundle\") pod \"placement-79c59487f6-d4xj7\" (UID: \"cda38994-c355-459e-af24-3fb060e62625\") " pod="openstack/placement-79c59487f6-d4xj7" Jan 27 14:52:48 crc kubenswrapper[4698]: I0127 14:52:48.338571 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/cda38994-c355-459e-af24-3fb060e62625-internal-tls-certs\") pod \"placement-79c59487f6-d4xj7\" (UID: \"cda38994-c355-459e-af24-3fb060e62625\") " pod="openstack/placement-79c59487f6-d4xj7" Jan 27 14:52:48 crc kubenswrapper[4698]: I0127 14:52:48.338768 4698 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/cda38994-c355-459e-af24-3fb060e62625-public-tls-certs\") pod \"placement-79c59487f6-d4xj7\" (UID: \"cda38994-c355-459e-af24-3fb060e62625\") " pod="openstack/placement-79c59487f6-d4xj7" Jan 27 14:52:48 crc kubenswrapper[4698]: I0127 14:52:48.440700 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cda38994-c355-459e-af24-3fb060e62625-config-data\") pod \"placement-79c59487f6-d4xj7\" (UID: \"cda38994-c355-459e-af24-3fb060e62625\") " pod="openstack/placement-79c59487f6-d4xj7" Jan 27 14:52:48 crc kubenswrapper[4698]: I0127 14:52:48.440841 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s8b6b\" (UniqueName: \"kubernetes.io/projected/cda38994-c355-459e-af24-3fb060e62625-kube-api-access-s8b6b\") pod \"placement-79c59487f6-d4xj7\" (UID: \"cda38994-c355-459e-af24-3fb060e62625\") " pod="openstack/placement-79c59487f6-d4xj7" Jan 27 14:52:48 crc kubenswrapper[4698]: I0127 14:52:48.440873 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/cda38994-c355-459e-af24-3fb060e62625-internal-tls-certs\") pod \"placement-79c59487f6-d4xj7\" (UID: \"cda38994-c355-459e-af24-3fb060e62625\") " pod="openstack/placement-79c59487f6-d4xj7" Jan 27 14:52:48 crc kubenswrapper[4698]: I0127 14:52:48.440899 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cda38994-c355-459e-af24-3fb060e62625-combined-ca-bundle\") pod \"placement-79c59487f6-d4xj7\" (UID: \"cda38994-c355-459e-af24-3fb060e62625\") " pod="openstack/placement-79c59487f6-d4xj7" Jan 27 14:52:48 crc kubenswrapper[4698]: I0127 14:52:48.440960 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/cda38994-c355-459e-af24-3fb060e62625-public-tls-certs\") pod \"placement-79c59487f6-d4xj7\" (UID: \"cda38994-c355-459e-af24-3fb060e62625\") " pod="openstack/placement-79c59487f6-d4xj7" Jan 27 14:52:48 crc kubenswrapper[4698]: I0127 14:52:48.441028 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cda38994-c355-459e-af24-3fb060e62625-logs\") pod \"placement-79c59487f6-d4xj7\" (UID: \"cda38994-c355-459e-af24-3fb060e62625\") " pod="openstack/placement-79c59487f6-d4xj7" Jan 27 14:52:48 crc kubenswrapper[4698]: I0127 14:52:48.441062 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cda38994-c355-459e-af24-3fb060e62625-scripts\") pod \"placement-79c59487f6-d4xj7\" (UID: \"cda38994-c355-459e-af24-3fb060e62625\") " pod="openstack/placement-79c59487f6-d4xj7" Jan 27 14:52:48 crc kubenswrapper[4698]: I0127 14:52:48.441519 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cda38994-c355-459e-af24-3fb060e62625-logs\") pod \"placement-79c59487f6-d4xj7\" (UID: \"cda38994-c355-459e-af24-3fb060e62625\") " pod="openstack/placement-79c59487f6-d4xj7" Jan 27 14:52:48 crc kubenswrapper[4698]: I0127 14:52:48.444769 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/cda38994-c355-459e-af24-3fb060e62625-public-tls-certs\") pod \"placement-79c59487f6-d4xj7\" (UID: \"cda38994-c355-459e-af24-3fb060e62625\") " pod="openstack/placement-79c59487f6-d4xj7" Jan 27 14:52:48 crc kubenswrapper[4698]: I0127 14:52:48.445651 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cda38994-c355-459e-af24-3fb060e62625-config-data\") pod \"placement-79c59487f6-d4xj7\" (UID: \"cda38994-c355-459e-af24-3fb060e62625\") " pod="openstack/placement-79c59487f6-d4xj7" Jan 27 14:52:48 crc kubenswrapper[4698]: I0127 14:52:48.446021 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cda38994-c355-459e-af24-3fb060e62625-combined-ca-bundle\") pod \"placement-79c59487f6-d4xj7\" (UID: \"cda38994-c355-459e-af24-3fb060e62625\") " pod="openstack/placement-79c59487f6-d4xj7" Jan 27 14:52:48 crc kubenswrapper[4698]: I0127 14:52:48.452710 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/cda38994-c355-459e-af24-3fb060e62625-internal-tls-certs\") pod \"placement-79c59487f6-d4xj7\" (UID: \"cda38994-c355-459e-af24-3fb060e62625\") " pod="openstack/placement-79c59487f6-d4xj7" Jan 27 14:52:48 crc kubenswrapper[4698]: I0127 14:52:48.454266 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cda38994-c355-459e-af24-3fb060e62625-scripts\") pod \"placement-79c59487f6-d4xj7\" (UID: \"cda38994-c355-459e-af24-3fb060e62625\") " pod="openstack/placement-79c59487f6-d4xj7" Jan 27 14:52:48 crc kubenswrapper[4698]: I0127 14:52:48.478924 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s8b6b\" (UniqueName: \"kubernetes.io/projected/cda38994-c355-459e-af24-3fb060e62625-kube-api-access-s8b6b\") pod \"placement-79c59487f6-d4xj7\" (UID: \"cda38994-c355-459e-af24-3fb060e62625\") " pod="openstack/placement-79c59487f6-d4xj7" Jan 27 14:52:48 crc kubenswrapper[4698]: I0127 14:52:48.541985 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-79c59487f6-d4xj7" Jan 27 14:52:49 crc kubenswrapper[4698]: I0127 14:52:49.049700 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-79c59487f6-d4xj7"] Jan 27 14:52:49 crc kubenswrapper[4698]: I0127 14:52:49.655436 4698 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-58b77c584b-9tl65" podUID="2573c021-b642-4659-a97b-8c06bcf54afc" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.161:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.161:8443: connect: connection refused" Jan 27 14:52:50 crc kubenswrapper[4698]: I0127 14:52:50.069964 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-79c59487f6-d4xj7" event={"ID":"cda38994-c355-459e-af24-3fb060e62625","Type":"ContainerStarted","Data":"b94809c56f183c39b2aaba081a05c153a39dccd2b18ca42a6859e6e1517493fa"} Jan 27 14:52:50 crc kubenswrapper[4698]: I0127 14:52:50.406790 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-decision-engine-0" Jan 27 14:52:50 crc kubenswrapper[4698]: I0127 14:52:50.443738 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/watcher-decision-engine-0" Jan 27 14:52:51 crc kubenswrapper[4698]: I0127 14:52:51.086965 4698 generic.go:334] "Generic (PLEG): container finished" podID="4129eb47-beba-4bec-8cb2-59818e8908a5" containerID="8da5bc0af3d507c438f5062981c57f9b3bc194f49ed6a075d61a10c7dba95272" exitCode=1 Jan 27 14:52:51 crc kubenswrapper[4698]: I0127 14:52:51.087150 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"4129eb47-beba-4bec-8cb2-59818e8908a5","Type":"ContainerDied","Data":"8da5bc0af3d507c438f5062981c57f9b3bc194f49ed6a075d61a10c7dba95272"} Jan 27 14:52:51 crc kubenswrapper[4698]: I0127 14:52:51.087626 4698 scope.go:117] "RemoveContainer" containerID="8da5bc0af3d507c438f5062981c57f9b3bc194f49ed6a075d61a10c7dba95272" Jan 27 14:52:51 crc kubenswrapper[4698]: I0127 14:52:51.088170 4698 scope.go:117] "RemoveContainer" containerID="32e27f8ccc73a467b421a07dc215a6afe98e8a7df744f5251d227a4c0f791966" Jan 27 14:52:51 crc kubenswrapper[4698]: E0127 14:52:51.088558 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-decision-engine\" with CrashLoopBackOff: \"back-off 10s restarting failed container=watcher-decision-engine pod=watcher-decision-engine-0_openstack(4129eb47-beba-4bec-8cb2-59818e8908a5)\"" pod="openstack/watcher-decision-engine-0" podUID="4129eb47-beba-4bec-8cb2-59818e8908a5" Jan 27 14:52:51 crc kubenswrapper[4698]: I0127 14:52:51.093412 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-79c59487f6-d4xj7" event={"ID":"cda38994-c355-459e-af24-3fb060e62625","Type":"ContainerStarted","Data":"6b5c9e850971fd958a0f04ecc1859ee9748a62293fb6b4b78b548f29d634c3a5"} Jan 27 14:52:51 crc kubenswrapper[4698]: I0127 14:52:51.093476 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-79c59487f6-d4xj7" event={"ID":"cda38994-c355-459e-af24-3fb060e62625","Type":"ContainerStarted","Data":"98634bf9d21e9eaa1efee53c175b79d7d4511a59435571c2594e879642e7ab08"} Jan 27 14:52:51 crc kubenswrapper[4698]: I0127 14:52:51.093757 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-79c59487f6-d4xj7" Jan 27 14:52:51 crc kubenswrapper[4698]: I0127 14:52:51.093814 4698 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-79c59487f6-d4xj7" Jan 27 14:52:51 crc kubenswrapper[4698]: I0127 14:52:51.141784 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-79c59487f6-d4xj7" podStartSLOduration=3.141759138 podStartE2EDuration="3.141759138s" podCreationTimestamp="2026-01-27 14:52:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:52:51.130747467 +0000 UTC m=+1426.807524952" watchObservedRunningTime="2026-01-27 14:52:51.141759138 +0000 UTC m=+1426.818536623" Jan 27 14:52:51 crc kubenswrapper[4698]: I0127 14:52:51.478456 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-applier-0" Jan 27 14:52:52 crc kubenswrapper[4698]: I0127 14:52:52.106371 4698 scope.go:117] "RemoveContainer" containerID="8da5bc0af3d507c438f5062981c57f9b3bc194f49ed6a075d61a10c7dba95272" Jan 27 14:52:52 crc kubenswrapper[4698]: E0127 14:52:52.106672 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-decision-engine\" with CrashLoopBackOff: \"back-off 10s restarting failed container=watcher-decision-engine pod=watcher-decision-engine-0_openstack(4129eb47-beba-4bec-8cb2-59818e8908a5)\"" pod="openstack/watcher-decision-engine-0" podUID="4129eb47-beba-4bec-8cb2-59818e8908a5" Jan 27 14:52:54 crc kubenswrapper[4698]: I0127 14:52:54.124400 4698 generic.go:334] "Generic (PLEG): container finished" podID="51ba2ef6-17ab-4974-a2c6-7f995343e24b" containerID="4df5c0943ab012c25673290e4c14b44b6814a4422d5308e388a20962116a9f96" exitCode=0 Jan 27 14:52:54 crc kubenswrapper[4698]: I0127 14:52:54.124961 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-9jhwb" event={"ID":"51ba2ef6-17ab-4974-a2c6-7f995343e24b","Type":"ContainerDied","Data":"4df5c0943ab012c25673290e4c14b44b6814a4422d5308e388a20962116a9f96"} Jan 27 14:52:55 crc kubenswrapper[4698]: I0127 14:52:55.139333 4698 generic.go:334] "Generic (PLEG): container finished" podID="2573c021-b642-4659-a97b-8c06bcf54afc" containerID="1dd9e638670767afe147060b8b877b23827741ca99584d820b8e17d9715e1c86" exitCode=137 Jan 27 14:52:55 crc kubenswrapper[4698]: I0127 14:52:55.139428 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-58b77c584b-9tl65" event={"ID":"2573c021-b642-4659-a97b-8c06bcf54afc","Type":"ContainerDied","Data":"1dd9e638670767afe147060b8b877b23827741ca99584d820b8e17d9715e1c86"} Jan 27 14:52:55 crc kubenswrapper[4698]: I0127 14:52:55.631499 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-9jhwb" Jan 27 14:52:55 crc kubenswrapper[4698]: I0127 14:52:55.654932 4698 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-58b77c584b-9tl65" Jan 27 14:52:55 crc kubenswrapper[4698]: I0127 14:52:55.786573 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2573c021-b642-4659-a97b-8c06bcf54afc-logs\") pod \"2573c021-b642-4659-a97b-8c06bcf54afc\" (UID: \"2573c021-b642-4659-a97b-8c06bcf54afc\") " Jan 27 14:52:55 crc kubenswrapper[4698]: I0127 14:52:55.786742 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/2573c021-b642-4659-a97b-8c06bcf54afc-config-data\") pod \"2573c021-b642-4659-a97b-8c06bcf54afc\" (UID: \"2573c021-b642-4659-a97b-8c06bcf54afc\") " Jan 27 14:52:55 crc kubenswrapper[4698]: I0127 14:52:55.786796 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hhskr\" (UniqueName: \"kubernetes.io/projected/51ba2ef6-17ab-4974-a2c6-7f995343e24b-kube-api-access-hhskr\") pod \"51ba2ef6-17ab-4974-a2c6-7f995343e24b\" (UID: \"51ba2ef6-17ab-4974-a2c6-7f995343e24b\") " Jan 27 14:52:55 crc kubenswrapper[4698]: I0127 14:52:55.786848 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/2573c021-b642-4659-a97b-8c06bcf54afc-horizon-secret-key\") pod \"2573c021-b642-4659-a97b-8c06bcf54afc\" (UID: \"2573c021-b642-4659-a97b-8c06bcf54afc\") " Jan 27 14:52:55 crc kubenswrapper[4698]: I0127 14:52:55.786906 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2573c021-b642-4659-a97b-8c06bcf54afc-combined-ca-bundle\") pod \"2573c021-b642-4659-a97b-8c06bcf54afc\" (UID: \"2573c021-b642-4659-a97b-8c06bcf54afc\") " Jan 27 14:52:55 crc kubenswrapper[4698]: I0127 14:52:55.787089 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/51ba2ef6-17ab-4974-a2c6-7f995343e24b-combined-ca-bundle\") pod \"51ba2ef6-17ab-4974-a2c6-7f995343e24b\" (UID: \"51ba2ef6-17ab-4974-a2c6-7f995343e24b\") " Jan 27 14:52:55 crc kubenswrapper[4698]: I0127 14:52:55.787263 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t2rts\" (UniqueName: \"kubernetes.io/projected/2573c021-b642-4659-a97b-8c06bcf54afc-kube-api-access-t2rts\") pod \"2573c021-b642-4659-a97b-8c06bcf54afc\" (UID: \"2573c021-b642-4659-a97b-8c06bcf54afc\") " Jan 27 14:52:55 crc kubenswrapper[4698]: I0127 14:52:55.787360 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/2573c021-b642-4659-a97b-8c06bcf54afc-horizon-tls-certs\") pod \"2573c021-b642-4659-a97b-8c06bcf54afc\" (UID: \"2573c021-b642-4659-a97b-8c06bcf54afc\") " Jan 27 14:52:55 crc kubenswrapper[4698]: I0127 14:52:55.787421 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2573c021-b642-4659-a97b-8c06bcf54afc-scripts\") pod \"2573c021-b642-4659-a97b-8c06bcf54afc\" (UID: \"2573c021-b642-4659-a97b-8c06bcf54afc\") " Jan 27 14:52:55 crc kubenswrapper[4698]: I0127 14:52:55.787514 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/51ba2ef6-17ab-4974-a2c6-7f995343e24b-db-sync-config-data\") pod 
\"51ba2ef6-17ab-4974-a2c6-7f995343e24b\" (UID: \"51ba2ef6-17ab-4974-a2c6-7f995343e24b\") " Jan 27 14:52:55 crc kubenswrapper[4698]: I0127 14:52:55.789999 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2573c021-b642-4659-a97b-8c06bcf54afc-logs" (OuterVolumeSpecName: "logs") pod "2573c021-b642-4659-a97b-8c06bcf54afc" (UID: "2573c021-b642-4659-a97b-8c06bcf54afc"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 14:52:55 crc kubenswrapper[4698]: I0127 14:52:55.796692 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2573c021-b642-4659-a97b-8c06bcf54afc-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "2573c021-b642-4659-a97b-8c06bcf54afc" (UID: "2573c021-b642-4659-a97b-8c06bcf54afc"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:52:55 crc kubenswrapper[4698]: I0127 14:52:55.796901 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/51ba2ef6-17ab-4974-a2c6-7f995343e24b-kube-api-access-hhskr" (OuterVolumeSpecName: "kube-api-access-hhskr") pod "51ba2ef6-17ab-4974-a2c6-7f995343e24b" (UID: "51ba2ef6-17ab-4974-a2c6-7f995343e24b"). InnerVolumeSpecName "kube-api-access-hhskr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:52:55 crc kubenswrapper[4698]: I0127 14:52:55.801940 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2573c021-b642-4659-a97b-8c06bcf54afc-kube-api-access-t2rts" (OuterVolumeSpecName: "kube-api-access-t2rts") pod "2573c021-b642-4659-a97b-8c06bcf54afc" (UID: "2573c021-b642-4659-a97b-8c06bcf54afc"). InnerVolumeSpecName "kube-api-access-t2rts". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:52:55 crc kubenswrapper[4698]: I0127 14:52:55.810717 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/51ba2ef6-17ab-4974-a2c6-7f995343e24b-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "51ba2ef6-17ab-4974-a2c6-7f995343e24b" (UID: "51ba2ef6-17ab-4974-a2c6-7f995343e24b"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:52:55 crc kubenswrapper[4698]: I0127 14:52:55.825616 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/51ba2ef6-17ab-4974-a2c6-7f995343e24b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "51ba2ef6-17ab-4974-a2c6-7f995343e24b" (UID: "51ba2ef6-17ab-4974-a2c6-7f995343e24b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:52:55 crc kubenswrapper[4698]: I0127 14:52:55.827487 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2573c021-b642-4659-a97b-8c06bcf54afc-config-data" (OuterVolumeSpecName: "config-data") pod "2573c021-b642-4659-a97b-8c06bcf54afc" (UID: "2573c021-b642-4659-a97b-8c06bcf54afc"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:52:55 crc kubenswrapper[4698]: I0127 14:52:55.832201 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2573c021-b642-4659-a97b-8c06bcf54afc-scripts" (OuterVolumeSpecName: "scripts") pod "2573c021-b642-4659-a97b-8c06bcf54afc" (UID: "2573c021-b642-4659-a97b-8c06bcf54afc"). 
InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:52:55 crc kubenswrapper[4698]: I0127 14:52:55.838668 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2573c021-b642-4659-a97b-8c06bcf54afc-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2573c021-b642-4659-a97b-8c06bcf54afc" (UID: "2573c021-b642-4659-a97b-8c06bcf54afc"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:52:55 crc kubenswrapper[4698]: I0127 14:52:55.852876 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2573c021-b642-4659-a97b-8c06bcf54afc-horizon-tls-certs" (OuterVolumeSpecName: "horizon-tls-certs") pod "2573c021-b642-4659-a97b-8c06bcf54afc" (UID: "2573c021-b642-4659-a97b-8c06bcf54afc"). InnerVolumeSpecName "horizon-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:52:55 crc kubenswrapper[4698]: I0127 14:52:55.890982 4698 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2573c021-b642-4659-a97b-8c06bcf54afc-logs\") on node \"crc\" DevicePath \"\"" Jan 27 14:52:55 crc kubenswrapper[4698]: I0127 14:52:55.891037 4698 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/2573c021-b642-4659-a97b-8c06bcf54afc-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 14:52:55 crc kubenswrapper[4698]: I0127 14:52:55.891053 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hhskr\" (UniqueName: \"kubernetes.io/projected/51ba2ef6-17ab-4974-a2c6-7f995343e24b-kube-api-access-hhskr\") on node \"crc\" DevicePath \"\"" Jan 27 14:52:55 crc kubenswrapper[4698]: I0127 14:52:55.891067 4698 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/2573c021-b642-4659-a97b-8c06bcf54afc-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Jan 27 14:52:55 crc kubenswrapper[4698]: I0127 14:52:55.891080 4698 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2573c021-b642-4659-a97b-8c06bcf54afc-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 14:52:55 crc kubenswrapper[4698]: I0127 14:52:55.891092 4698 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/51ba2ef6-17ab-4974-a2c6-7f995343e24b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 14:52:55 crc kubenswrapper[4698]: I0127 14:52:55.891104 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t2rts\" (UniqueName: \"kubernetes.io/projected/2573c021-b642-4659-a97b-8c06bcf54afc-kube-api-access-t2rts\") on node \"crc\" DevicePath \"\"" Jan 27 14:52:55 crc kubenswrapper[4698]: I0127 14:52:55.891118 4698 reconciler_common.go:293] "Volume detached for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/2573c021-b642-4659-a97b-8c06bcf54afc-horizon-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 27 14:52:55 crc kubenswrapper[4698]: I0127 14:52:55.891129 4698 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2573c021-b642-4659-a97b-8c06bcf54afc-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 14:52:55 crc kubenswrapper[4698]: I0127 14:52:55.891139 4698 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" 
(UniqueName: \"kubernetes.io/secret/51ba2ef6-17ab-4974-a2c6-7f995343e24b-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 14:52:56 crc kubenswrapper[4698]: I0127 14:52:56.153741 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-58b77c584b-9tl65" Jan 27 14:52:56 crc kubenswrapper[4698]: I0127 14:52:56.153758 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-58b77c584b-9tl65" event={"ID":"2573c021-b642-4659-a97b-8c06bcf54afc","Type":"ContainerDied","Data":"d6e247885db25606af135cf6591aeddb06a436393cb703f9a68720bb11018475"} Jan 27 14:52:56 crc kubenswrapper[4698]: I0127 14:52:56.153836 4698 scope.go:117] "RemoveContainer" containerID="5854850d7e012ab315ae8c17136e56826d530bb56bfd3db7e41a47ebec3633a1" Jan 27 14:52:56 crc kubenswrapper[4698]: I0127 14:52:56.162844 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-9jhwb" event={"ID":"51ba2ef6-17ab-4974-a2c6-7f995343e24b","Type":"ContainerDied","Data":"b72dee7fa8d69e01713b8e64e9d435b102e9c2dbf569c400a4dd451d8420eb62"} Jan 27 14:52:56 crc kubenswrapper[4698]: I0127 14:52:56.162891 4698 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b72dee7fa8d69e01713b8e64e9d435b102e9c2dbf569c400a4dd451d8420eb62" Jan 27 14:52:56 crc kubenswrapper[4698]: I0127 14:52:56.163029 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-9jhwb" Jan 27 14:52:56 crc kubenswrapper[4698]: I0127 14:52:56.231203 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-58b77c584b-9tl65"] Jan 27 14:52:56 crc kubenswrapper[4698]: I0127 14:52:56.240326 4698 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-58b77c584b-9tl65"] Jan 27 14:52:56 crc kubenswrapper[4698]: E0127 14:52:56.373287 4698 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod51ba2ef6_17ab_4974_a2c6_7f995343e24b.slice/crio-b72dee7fa8d69e01713b8e64e9d435b102e9c2dbf569c400a4dd451d8420eb62\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod51ba2ef6_17ab_4974_a2c6_7f995343e24b.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2573c021_b642_4659_a97b_8c06bcf54afc.slice\": RecentStats: unable to find data in memory cache]" Jan 27 14:52:56 crc kubenswrapper[4698]: I0127 14:52:56.380385 4698 scope.go:117] "RemoveContainer" containerID="1dd9e638670767afe147060b8b877b23827741ca99584d820b8e17d9715e1c86" Jan 27 14:52:56 crc kubenswrapper[4698]: I0127 14:52:56.477687 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-applier-0" Jan 27 14:52:56 crc kubenswrapper[4698]: I0127 14:52:56.522803 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-keystone-listener-b67df786-qljcv"] Jan 27 14:52:56 crc kubenswrapper[4698]: E0127 14:52:56.523230 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2573c021-b642-4659-a97b-8c06bcf54afc" containerName="horizon" Jan 27 14:52:56 crc kubenswrapper[4698]: I0127 14:52:56.523254 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="2573c021-b642-4659-a97b-8c06bcf54afc" containerName="horizon" Jan 27 14:52:56 crc kubenswrapper[4698]: E0127 14:52:56.523288 4698 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2573c021-b642-4659-a97b-8c06bcf54afc" containerName="horizon-log" Jan 27 14:52:56 crc kubenswrapper[4698]: I0127 14:52:56.523298 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="2573c021-b642-4659-a97b-8c06bcf54afc" containerName="horizon-log" Jan 27 14:52:56 crc kubenswrapper[4698]: E0127 14:52:56.523317 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="51ba2ef6-17ab-4974-a2c6-7f995343e24b" containerName="barbican-db-sync" Jan 27 14:52:56 crc kubenswrapper[4698]: I0127 14:52:56.523327 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="51ba2ef6-17ab-4974-a2c6-7f995343e24b" containerName="barbican-db-sync" Jan 27 14:52:56 crc kubenswrapper[4698]: I0127 14:52:56.524120 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="2573c021-b642-4659-a97b-8c06bcf54afc" containerName="horizon" Jan 27 14:52:56 crc kubenswrapper[4698]: I0127 14:52:56.525668 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="51ba2ef6-17ab-4974-a2c6-7f995343e24b" containerName="barbican-db-sync" Jan 27 14:52:56 crc kubenswrapper[4698]: I0127 14:52:56.525708 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="2573c021-b642-4659-a97b-8c06bcf54afc" containerName="horizon-log" Jan 27 14:52:56 crc kubenswrapper[4698]: I0127 14:52:56.527751 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-b67df786-qljcv" Jan 27 14:52:56 crc kubenswrapper[4698]: I0127 14:52:56.535292 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-xwvcj" Jan 27 14:52:56 crc kubenswrapper[4698]: I0127 14:52:56.535370 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-keystone-listener-config-data" Jan 27 14:52:56 crc kubenswrapper[4698]: I0127 14:52:56.535303 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Jan 27 14:52:56 crc kubenswrapper[4698]: I0127 14:52:56.560946 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-b67df786-qljcv"] Jan 27 14:52:56 crc kubenswrapper[4698]: I0127 14:52:56.574795 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-worker-86766cdbdc-v9752"] Jan 27 14:52:56 crc kubenswrapper[4698]: I0127 14:52:56.578016 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-worker-86766cdbdc-v9752" Jan 27 14:52:56 crc kubenswrapper[4698]: I0127 14:52:56.588783 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-worker-config-data" Jan 27 14:52:56 crc kubenswrapper[4698]: I0127 14:52:56.608151 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hzvzp\" (UniqueName: \"kubernetes.io/projected/aac0eb10-05fd-4d02-84f0-9c34458ef3ad-kube-api-access-hzvzp\") pod \"barbican-keystone-listener-b67df786-qljcv\" (UID: \"aac0eb10-05fd-4d02-84f0-9c34458ef3ad\") " pod="openstack/barbican-keystone-listener-b67df786-qljcv" Jan 27 14:52:56 crc kubenswrapper[4698]: I0127 14:52:56.608609 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/aac0eb10-05fd-4d02-84f0-9c34458ef3ad-logs\") pod \"barbican-keystone-listener-b67df786-qljcv\" (UID: \"aac0eb10-05fd-4d02-84f0-9c34458ef3ad\") " pod="openstack/barbican-keystone-listener-b67df786-qljcv" Jan 27 14:52:56 crc kubenswrapper[4698]: I0127 14:52:56.608667 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aac0eb10-05fd-4d02-84f0-9c34458ef3ad-combined-ca-bundle\") pod \"barbican-keystone-listener-b67df786-qljcv\" (UID: \"aac0eb10-05fd-4d02-84f0-9c34458ef3ad\") " pod="openstack/barbican-keystone-listener-b67df786-qljcv" Jan 27 14:52:56 crc kubenswrapper[4698]: I0127 14:52:56.608769 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aac0eb10-05fd-4d02-84f0-9c34458ef3ad-config-data\") pod \"barbican-keystone-listener-b67df786-qljcv\" (UID: \"aac0eb10-05fd-4d02-84f0-9c34458ef3ad\") " pod="openstack/barbican-keystone-listener-b67df786-qljcv" Jan 27 14:52:56 crc kubenswrapper[4698]: I0127 14:52:56.608870 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/aac0eb10-05fd-4d02-84f0-9c34458ef3ad-config-data-custom\") pod \"barbican-keystone-listener-b67df786-qljcv\" (UID: \"aac0eb10-05fd-4d02-84f0-9c34458ef3ad\") " pod="openstack/barbican-keystone-listener-b67df786-qljcv" Jan 27 14:52:56 crc kubenswrapper[4698]: I0127 14:52:56.646486 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/watcher-applier-0" Jan 27 14:52:56 crc kubenswrapper[4698]: I0127 14:52:56.652348 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-86766cdbdc-v9752"] Jan 27 14:52:56 crc kubenswrapper[4698]: I0127 14:52:56.718447 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-49d8g\" (UniqueName: \"kubernetes.io/projected/62afb2bc-2a78-4012-9287-cd5812694245-kube-api-access-49d8g\") pod \"barbican-worker-86766cdbdc-v9752\" (UID: \"62afb2bc-2a78-4012-9287-cd5812694245\") " pod="openstack/barbican-worker-86766cdbdc-v9752" Jan 27 14:52:56 crc kubenswrapper[4698]: I0127 14:52:56.718523 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/aac0eb10-05fd-4d02-84f0-9c34458ef3ad-config-data-custom\") pod \"barbican-keystone-listener-b67df786-qljcv\" (UID: \"aac0eb10-05fd-4d02-84f0-9c34458ef3ad\") " 
pod="openstack/barbican-keystone-listener-b67df786-qljcv" Jan 27 14:52:56 crc kubenswrapper[4698]: I0127 14:52:56.718570 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/62afb2bc-2a78-4012-9287-cd5812694245-config-data-custom\") pod \"barbican-worker-86766cdbdc-v9752\" (UID: \"62afb2bc-2a78-4012-9287-cd5812694245\") " pod="openstack/barbican-worker-86766cdbdc-v9752" Jan 27 14:52:56 crc kubenswrapper[4698]: I0127 14:52:56.718663 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/62afb2bc-2a78-4012-9287-cd5812694245-logs\") pod \"barbican-worker-86766cdbdc-v9752\" (UID: \"62afb2bc-2a78-4012-9287-cd5812694245\") " pod="openstack/barbican-worker-86766cdbdc-v9752" Jan 27 14:52:56 crc kubenswrapper[4698]: I0127 14:52:56.718723 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/62afb2bc-2a78-4012-9287-cd5812694245-combined-ca-bundle\") pod \"barbican-worker-86766cdbdc-v9752\" (UID: \"62afb2bc-2a78-4012-9287-cd5812694245\") " pod="openstack/barbican-worker-86766cdbdc-v9752" Jan 27 14:52:56 crc kubenswrapper[4698]: I0127 14:52:56.718872 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hzvzp\" (UniqueName: \"kubernetes.io/projected/aac0eb10-05fd-4d02-84f0-9c34458ef3ad-kube-api-access-hzvzp\") pod \"barbican-keystone-listener-b67df786-qljcv\" (UID: \"aac0eb10-05fd-4d02-84f0-9c34458ef3ad\") " pod="openstack/barbican-keystone-listener-b67df786-qljcv" Jan 27 14:52:56 crc kubenswrapper[4698]: I0127 14:52:56.718896 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/aac0eb10-05fd-4d02-84f0-9c34458ef3ad-logs\") pod \"barbican-keystone-listener-b67df786-qljcv\" (UID: \"aac0eb10-05fd-4d02-84f0-9c34458ef3ad\") " pod="openstack/barbican-keystone-listener-b67df786-qljcv" Jan 27 14:52:56 crc kubenswrapper[4698]: I0127 14:52:56.718921 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aac0eb10-05fd-4d02-84f0-9c34458ef3ad-combined-ca-bundle\") pod \"barbican-keystone-listener-b67df786-qljcv\" (UID: \"aac0eb10-05fd-4d02-84f0-9c34458ef3ad\") " pod="openstack/barbican-keystone-listener-b67df786-qljcv" Jan 27 14:52:56 crc kubenswrapper[4698]: I0127 14:52:56.718989 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aac0eb10-05fd-4d02-84f0-9c34458ef3ad-config-data\") pod \"barbican-keystone-listener-b67df786-qljcv\" (UID: \"aac0eb10-05fd-4d02-84f0-9c34458ef3ad\") " pod="openstack/barbican-keystone-listener-b67df786-qljcv" Jan 27 14:52:56 crc kubenswrapper[4698]: I0127 14:52:56.719036 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/62afb2bc-2a78-4012-9287-cd5812694245-config-data\") pod \"barbican-worker-86766cdbdc-v9752\" (UID: \"62afb2bc-2a78-4012-9287-cd5812694245\") " pod="openstack/barbican-worker-86766cdbdc-v9752" Jan 27 14:52:56 crc kubenswrapper[4698]: I0127 14:52:56.720674 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/aac0eb10-05fd-4d02-84f0-9c34458ef3ad-logs\") pod \"barbican-keystone-listener-b67df786-qljcv\" (UID: \"aac0eb10-05fd-4d02-84f0-9c34458ef3ad\") " pod="openstack/barbican-keystone-listener-b67df786-qljcv" Jan 27 14:52:56 crc kubenswrapper[4698]: I0127 14:52:56.746161 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aac0eb10-05fd-4d02-84f0-9c34458ef3ad-config-data\") pod \"barbican-keystone-listener-b67df786-qljcv\" (UID: \"aac0eb10-05fd-4d02-84f0-9c34458ef3ad\") " pod="openstack/barbican-keystone-listener-b67df786-qljcv" Jan 27 14:52:56 crc kubenswrapper[4698]: I0127 14:52:56.751349 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/aac0eb10-05fd-4d02-84f0-9c34458ef3ad-config-data-custom\") pod \"barbican-keystone-listener-b67df786-qljcv\" (UID: \"aac0eb10-05fd-4d02-84f0-9c34458ef3ad\") " pod="openstack/barbican-keystone-listener-b67df786-qljcv" Jan 27 14:52:56 crc kubenswrapper[4698]: I0127 14:52:56.752563 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aac0eb10-05fd-4d02-84f0-9c34458ef3ad-combined-ca-bundle\") pod \"barbican-keystone-listener-b67df786-qljcv\" (UID: \"aac0eb10-05fd-4d02-84f0-9c34458ef3ad\") " pod="openstack/barbican-keystone-listener-b67df786-qljcv" Jan 27 14:52:56 crc kubenswrapper[4698]: I0127 14:52:56.769748 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-757b4bbb85-rxcnp"] Jan 27 14:52:56 crc kubenswrapper[4698]: I0127 14:52:56.771762 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-757b4bbb85-rxcnp" Jan 27 14:52:56 crc kubenswrapper[4698]: I0127 14:52:56.781465 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hzvzp\" (UniqueName: \"kubernetes.io/projected/aac0eb10-05fd-4d02-84f0-9c34458ef3ad-kube-api-access-hzvzp\") pod \"barbican-keystone-listener-b67df786-qljcv\" (UID: \"aac0eb10-05fd-4d02-84f0-9c34458ef3ad\") " pod="openstack/barbican-keystone-listener-b67df786-qljcv" Jan 27 14:52:56 crc kubenswrapper[4698]: I0127 14:52:56.812683 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-757b4bbb85-rxcnp"] Jan 27 14:52:56 crc kubenswrapper[4698]: I0127 14:52:56.821820 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-49d8g\" (UniqueName: \"kubernetes.io/projected/62afb2bc-2a78-4012-9287-cd5812694245-kube-api-access-49d8g\") pod \"barbican-worker-86766cdbdc-v9752\" (UID: \"62afb2bc-2a78-4012-9287-cd5812694245\") " pod="openstack/barbican-worker-86766cdbdc-v9752" Jan 27 14:52:56 crc kubenswrapper[4698]: I0127 14:52:56.821897 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/62afb2bc-2a78-4012-9287-cd5812694245-config-data-custom\") pod \"barbican-worker-86766cdbdc-v9752\" (UID: \"62afb2bc-2a78-4012-9287-cd5812694245\") " pod="openstack/barbican-worker-86766cdbdc-v9752" Jan 27 14:52:56 crc kubenswrapper[4698]: I0127 14:52:56.821957 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/62afb2bc-2a78-4012-9287-cd5812694245-logs\") pod \"barbican-worker-86766cdbdc-v9752\" (UID: \"62afb2bc-2a78-4012-9287-cd5812694245\") " 
pod="openstack/barbican-worker-86766cdbdc-v9752" Jan 27 14:52:56 crc kubenswrapper[4698]: I0127 14:52:56.822002 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/62afb2bc-2a78-4012-9287-cd5812694245-combined-ca-bundle\") pod \"barbican-worker-86766cdbdc-v9752\" (UID: \"62afb2bc-2a78-4012-9287-cd5812694245\") " pod="openstack/barbican-worker-86766cdbdc-v9752" Jan 27 14:52:56 crc kubenswrapper[4698]: I0127 14:52:56.822144 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/62afb2bc-2a78-4012-9287-cd5812694245-config-data\") pod \"barbican-worker-86766cdbdc-v9752\" (UID: \"62afb2bc-2a78-4012-9287-cd5812694245\") " pod="openstack/barbican-worker-86766cdbdc-v9752" Jan 27 14:52:56 crc kubenswrapper[4698]: I0127 14:52:56.824597 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/62afb2bc-2a78-4012-9287-cd5812694245-logs\") pod \"barbican-worker-86766cdbdc-v9752\" (UID: \"62afb2bc-2a78-4012-9287-cd5812694245\") " pod="openstack/barbican-worker-86766cdbdc-v9752" Jan 27 14:52:56 crc kubenswrapper[4698]: I0127 14:52:56.829887 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/62afb2bc-2a78-4012-9287-cd5812694245-combined-ca-bundle\") pod \"barbican-worker-86766cdbdc-v9752\" (UID: \"62afb2bc-2a78-4012-9287-cd5812694245\") " pod="openstack/barbican-worker-86766cdbdc-v9752" Jan 27 14:52:56 crc kubenswrapper[4698]: I0127 14:52:56.830946 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/62afb2bc-2a78-4012-9287-cd5812694245-config-data-custom\") pod \"barbican-worker-86766cdbdc-v9752\" (UID: \"62afb2bc-2a78-4012-9287-cd5812694245\") " pod="openstack/barbican-worker-86766cdbdc-v9752" Jan 27 14:52:56 crc kubenswrapper[4698]: I0127 14:52:56.831539 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/62afb2bc-2a78-4012-9287-cd5812694245-config-data\") pod \"barbican-worker-86766cdbdc-v9752\" (UID: \"62afb2bc-2a78-4012-9287-cd5812694245\") " pod="openstack/barbican-worker-86766cdbdc-v9752" Jan 27 14:52:56 crc kubenswrapper[4698]: I0127 14:52:56.849624 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-b67df786-qljcv" Jan 27 14:52:56 crc kubenswrapper[4698]: I0127 14:52:56.870597 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-49d8g\" (UniqueName: \"kubernetes.io/projected/62afb2bc-2a78-4012-9287-cd5812694245-kube-api-access-49d8g\") pod \"barbican-worker-86766cdbdc-v9752\" (UID: \"62afb2bc-2a78-4012-9287-cd5812694245\") " pod="openstack/barbican-worker-86766cdbdc-v9752" Jan 27 14:52:56 crc kubenswrapper[4698]: I0127 14:52:56.910818 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-86766cdbdc-v9752" Jan 27 14:52:56 crc kubenswrapper[4698]: I0127 14:52:56.923896 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-6dd4d766f4-p4fgg"] Jan 27 14:52:56 crc kubenswrapper[4698]: I0127 14:52:56.926624 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-6dd4d766f4-p4fgg" Jan 27 14:52:56 crc kubenswrapper[4698]: I0127 14:52:56.933212 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-api-config-data" Jan 27 14:52:56 crc kubenswrapper[4698]: I0127 14:52:56.937128 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/eb3c705e-7883-4e67-a66c-2b4120a30543-ovsdbserver-sb\") pod \"dnsmasq-dns-757b4bbb85-rxcnp\" (UID: \"eb3c705e-7883-4e67-a66c-2b4120a30543\") " pod="openstack/dnsmasq-dns-757b4bbb85-rxcnp" Jan 27 14:52:56 crc kubenswrapper[4698]: I0127 14:52:56.937281 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/eb3c705e-7883-4e67-a66c-2b4120a30543-dns-swift-storage-0\") pod \"dnsmasq-dns-757b4bbb85-rxcnp\" (UID: \"eb3c705e-7883-4e67-a66c-2b4120a30543\") " pod="openstack/dnsmasq-dns-757b4bbb85-rxcnp" Jan 27 14:52:56 crc kubenswrapper[4698]: I0127 14:52:56.937328 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/eb3c705e-7883-4e67-a66c-2b4120a30543-ovsdbserver-nb\") pod \"dnsmasq-dns-757b4bbb85-rxcnp\" (UID: \"eb3c705e-7883-4e67-a66c-2b4120a30543\") " pod="openstack/dnsmasq-dns-757b4bbb85-rxcnp" Jan 27 14:52:56 crc kubenswrapper[4698]: I0127 14:52:56.937475 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x597t\" (UniqueName: \"kubernetes.io/projected/eb3c705e-7883-4e67-a66c-2b4120a30543-kube-api-access-x597t\") pod \"dnsmasq-dns-757b4bbb85-rxcnp\" (UID: \"eb3c705e-7883-4e67-a66c-2b4120a30543\") " pod="openstack/dnsmasq-dns-757b4bbb85-rxcnp" Jan 27 14:52:56 crc kubenswrapper[4698]: I0127 14:52:56.937589 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eb3c705e-7883-4e67-a66c-2b4120a30543-config\") pod \"dnsmasq-dns-757b4bbb85-rxcnp\" (UID: \"eb3c705e-7883-4e67-a66c-2b4120a30543\") " pod="openstack/dnsmasq-dns-757b4bbb85-rxcnp" Jan 27 14:52:56 crc kubenswrapper[4698]: I0127 14:52:56.937652 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/eb3c705e-7883-4e67-a66c-2b4120a30543-dns-svc\") pod \"dnsmasq-dns-757b4bbb85-rxcnp\" (UID: \"eb3c705e-7883-4e67-a66c-2b4120a30543\") " pod="openstack/dnsmasq-dns-757b4bbb85-rxcnp" Jan 27 14:52:56 crc kubenswrapper[4698]: I0127 14:52:56.967288 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-6dd4d766f4-p4fgg"] Jan 27 14:52:57 crc kubenswrapper[4698]: I0127 14:52:57.038982 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2573c021-b642-4659-a97b-8c06bcf54afc" path="/var/lib/kubelet/pods/2573c021-b642-4659-a97b-8c06bcf54afc/volumes" Jan 27 14:52:57 crc kubenswrapper[4698]: I0127 14:52:57.041583 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/eb3c705e-7883-4e67-a66c-2b4120a30543-dns-swift-storage-0\") pod \"dnsmasq-dns-757b4bbb85-rxcnp\" (UID: \"eb3c705e-7883-4e67-a66c-2b4120a30543\") " pod="openstack/dnsmasq-dns-757b4bbb85-rxcnp" Jan 27 14:52:57 crc kubenswrapper[4698]: I0127 
14:52:57.041620 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/eb3c705e-7883-4e67-a66c-2b4120a30543-ovsdbserver-nb\") pod \"dnsmasq-dns-757b4bbb85-rxcnp\" (UID: \"eb3c705e-7883-4e67-a66c-2b4120a30543\") " pod="openstack/dnsmasq-dns-757b4bbb85-rxcnp"
Jan 27 14:52:57 crc kubenswrapper[4698]: I0127 14:52:57.041982 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x597t\" (UniqueName: \"kubernetes.io/projected/eb3c705e-7883-4e67-a66c-2b4120a30543-kube-api-access-x597t\") pod \"dnsmasq-dns-757b4bbb85-rxcnp\" (UID: \"eb3c705e-7883-4e67-a66c-2b4120a30543\") " pod="openstack/dnsmasq-dns-757b4bbb85-rxcnp"
Jan 27 14:52:57 crc kubenswrapper[4698]: I0127 14:52:57.042040 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ed44350c-d49e-427e-9aaf-b4d3fb49aee4-combined-ca-bundle\") pod \"barbican-api-6dd4d766f4-p4fgg\" (UID: \"ed44350c-d49e-427e-9aaf-b4d3fb49aee4\") " pod="openstack/barbican-api-6dd4d766f4-p4fgg"
Jan 27 14:52:57 crc kubenswrapper[4698]: I0127 14:52:57.042059 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ed44350c-d49e-427e-9aaf-b4d3fb49aee4-config-data\") pod \"barbican-api-6dd4d766f4-p4fgg\" (UID: \"ed44350c-d49e-427e-9aaf-b4d3fb49aee4\") " pod="openstack/barbican-api-6dd4d766f4-p4fgg"
Jan 27 14:52:57 crc kubenswrapper[4698]: I0127 14:52:57.042083 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tqqgk\" (UniqueName: \"kubernetes.io/projected/ed44350c-d49e-427e-9aaf-b4d3fb49aee4-kube-api-access-tqqgk\") pod \"barbican-api-6dd4d766f4-p4fgg\" (UID: \"ed44350c-d49e-427e-9aaf-b4d3fb49aee4\") " pod="openstack/barbican-api-6dd4d766f4-p4fgg"
Jan 27 14:52:57 crc kubenswrapper[4698]: I0127 14:52:57.042114 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eb3c705e-7883-4e67-a66c-2b4120a30543-config\") pod \"dnsmasq-dns-757b4bbb85-rxcnp\" (UID: \"eb3c705e-7883-4e67-a66c-2b4120a30543\") " pod="openstack/dnsmasq-dns-757b4bbb85-rxcnp"
Jan 27 14:52:57 crc kubenswrapper[4698]: I0127 14:52:57.042142 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/eb3c705e-7883-4e67-a66c-2b4120a30543-dns-svc\") pod \"dnsmasq-dns-757b4bbb85-rxcnp\" (UID: \"eb3c705e-7883-4e67-a66c-2b4120a30543\") " pod="openstack/dnsmasq-dns-757b4bbb85-rxcnp"
Jan 27 14:52:57 crc kubenswrapper[4698]: I0127 14:52:57.042169 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ed44350c-d49e-427e-9aaf-b4d3fb49aee4-logs\") pod \"barbican-api-6dd4d766f4-p4fgg\" (UID: \"ed44350c-d49e-427e-9aaf-b4d3fb49aee4\") " pod="openstack/barbican-api-6dd4d766f4-p4fgg"
Jan 27 14:52:57 crc kubenswrapper[4698]: I0127 14:52:57.042191 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ed44350c-d49e-427e-9aaf-b4d3fb49aee4-config-data-custom\") pod \"barbican-api-6dd4d766f4-p4fgg\" (UID: \"ed44350c-d49e-427e-9aaf-b4d3fb49aee4\") " pod="openstack/barbican-api-6dd4d766f4-p4fgg"
Jan 27 14:52:57 crc kubenswrapper[4698]: I0127 14:52:57.042212 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/eb3c705e-7883-4e67-a66c-2b4120a30543-ovsdbserver-sb\") pod \"dnsmasq-dns-757b4bbb85-rxcnp\" (UID: \"eb3c705e-7883-4e67-a66c-2b4120a30543\") " pod="openstack/dnsmasq-dns-757b4bbb85-rxcnp"
Jan 27 14:52:57 crc kubenswrapper[4698]: I0127 14:52:57.043545 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/eb3c705e-7883-4e67-a66c-2b4120a30543-ovsdbserver-sb\") pod \"dnsmasq-dns-757b4bbb85-rxcnp\" (UID: \"eb3c705e-7883-4e67-a66c-2b4120a30543\") " pod="openstack/dnsmasq-dns-757b4bbb85-rxcnp"
Jan 27 14:52:57 crc kubenswrapper[4698]: I0127 14:52:57.044332 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/eb3c705e-7883-4e67-a66c-2b4120a30543-dns-swift-storage-0\") pod \"dnsmasq-dns-757b4bbb85-rxcnp\" (UID: \"eb3c705e-7883-4e67-a66c-2b4120a30543\") " pod="openstack/dnsmasq-dns-757b4bbb85-rxcnp"
Jan 27 14:52:57 crc kubenswrapper[4698]: I0127 14:52:57.045937 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/eb3c705e-7883-4e67-a66c-2b4120a30543-ovsdbserver-nb\") pod \"dnsmasq-dns-757b4bbb85-rxcnp\" (UID: \"eb3c705e-7883-4e67-a66c-2b4120a30543\") " pod="openstack/dnsmasq-dns-757b4bbb85-rxcnp"
Jan 27 14:52:57 crc kubenswrapper[4698]: I0127 14:52:57.046179 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/eb3c705e-7883-4e67-a66c-2b4120a30543-dns-svc\") pod \"dnsmasq-dns-757b4bbb85-rxcnp\" (UID: \"eb3c705e-7883-4e67-a66c-2b4120a30543\") " pod="openstack/dnsmasq-dns-757b4bbb85-rxcnp"
Jan 27 14:52:57 crc kubenswrapper[4698]: I0127 14:52:57.047035 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eb3c705e-7883-4e67-a66c-2b4120a30543-config\") pod \"dnsmasq-dns-757b4bbb85-rxcnp\" (UID: \"eb3c705e-7883-4e67-a66c-2b4120a30543\") " pod="openstack/dnsmasq-dns-757b4bbb85-rxcnp"
Jan 27 14:52:57 crc kubenswrapper[4698]: I0127 14:52:57.076193 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x597t\" (UniqueName: \"kubernetes.io/projected/eb3c705e-7883-4e67-a66c-2b4120a30543-kube-api-access-x597t\") pod \"dnsmasq-dns-757b4bbb85-rxcnp\" (UID: \"eb3c705e-7883-4e67-a66c-2b4120a30543\") " pod="openstack/dnsmasq-dns-757b4bbb85-rxcnp"
Jan 27 14:52:57 crc kubenswrapper[4698]: I0127 14:52:57.144117 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ed44350c-d49e-427e-9aaf-b4d3fb49aee4-combined-ca-bundle\") pod \"barbican-api-6dd4d766f4-p4fgg\" (UID: \"ed44350c-d49e-427e-9aaf-b4d3fb49aee4\") " pod="openstack/barbican-api-6dd4d766f4-p4fgg"
Jan 27 14:52:57 crc kubenswrapper[4698]: I0127 14:52:57.144177 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ed44350c-d49e-427e-9aaf-b4d3fb49aee4-config-data\") pod \"barbican-api-6dd4d766f4-p4fgg\" (UID: \"ed44350c-d49e-427e-9aaf-b4d3fb49aee4\") " pod="openstack/barbican-api-6dd4d766f4-p4fgg"
Jan 27 14:52:57 crc kubenswrapper[4698]: I0127 14:52:57.144222 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tqqgk\" (UniqueName: \"kubernetes.io/projected/ed44350c-d49e-427e-9aaf-b4d3fb49aee4-kube-api-access-tqqgk\") pod \"barbican-api-6dd4d766f4-p4fgg\" (UID: \"ed44350c-d49e-427e-9aaf-b4d3fb49aee4\") " pod="openstack/barbican-api-6dd4d766f4-p4fgg"
Jan 27 14:52:57 crc kubenswrapper[4698]: I0127 14:52:57.144315 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ed44350c-d49e-427e-9aaf-b4d3fb49aee4-logs\") pod \"barbican-api-6dd4d766f4-p4fgg\" (UID: \"ed44350c-d49e-427e-9aaf-b4d3fb49aee4\") " pod="openstack/barbican-api-6dd4d766f4-p4fgg"
Jan 27 14:52:57 crc kubenswrapper[4698]: I0127 14:52:57.144348 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ed44350c-d49e-427e-9aaf-b4d3fb49aee4-config-data-custom\") pod \"barbican-api-6dd4d766f4-p4fgg\" (UID: \"ed44350c-d49e-427e-9aaf-b4d3fb49aee4\") " pod="openstack/barbican-api-6dd4d766f4-p4fgg"
Jan 27 14:52:57 crc kubenswrapper[4698]: I0127 14:52:57.148178 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ed44350c-d49e-427e-9aaf-b4d3fb49aee4-logs\") pod \"barbican-api-6dd4d766f4-p4fgg\" (UID: \"ed44350c-d49e-427e-9aaf-b4d3fb49aee4\") " pod="openstack/barbican-api-6dd4d766f4-p4fgg"
Jan 27 14:52:57 crc kubenswrapper[4698]: I0127 14:52:57.154956 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ed44350c-d49e-427e-9aaf-b4d3fb49aee4-config-data\") pod \"barbican-api-6dd4d766f4-p4fgg\" (UID: \"ed44350c-d49e-427e-9aaf-b4d3fb49aee4\") " pod="openstack/barbican-api-6dd4d766f4-p4fgg"
Jan 27 14:52:57 crc kubenswrapper[4698]: I0127 14:52:57.167027 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ed44350c-d49e-427e-9aaf-b4d3fb49aee4-config-data-custom\") pod \"barbican-api-6dd4d766f4-p4fgg\" (UID: \"ed44350c-d49e-427e-9aaf-b4d3fb49aee4\") " pod="openstack/barbican-api-6dd4d766f4-p4fgg"
Jan 27 14:52:57 crc kubenswrapper[4698]: I0127 14:52:57.168538 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tqqgk\" (UniqueName: \"kubernetes.io/projected/ed44350c-d49e-427e-9aaf-b4d3fb49aee4-kube-api-access-tqqgk\") pod \"barbican-api-6dd4d766f4-p4fgg\" (UID: \"ed44350c-d49e-427e-9aaf-b4d3fb49aee4\") " pod="openstack/barbican-api-6dd4d766f4-p4fgg"
Jan 27 14:52:57 crc kubenswrapper[4698]: I0127 14:52:57.290012 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/watcher-applier-0"
Jan 27 14:52:57 crc kubenswrapper[4698]: I0127 14:52:57.310691 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-757b4bbb85-rxcnp"
Jan 27 14:52:57 crc kubenswrapper[4698]: I0127 14:52:57.330554 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ed44350c-d49e-427e-9aaf-b4d3fb49aee4-combined-ca-bundle\") pod \"barbican-api-6dd4d766f4-p4fgg\" (UID: \"ed44350c-d49e-427e-9aaf-b4d3fb49aee4\") " pod="openstack/barbican-api-6dd4d766f4-p4fgg"
Jan 27 14:52:57 crc kubenswrapper[4698]: I0127 14:52:57.557256 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-86766cdbdc-v9752"]
Jan 27 14:52:57 crc kubenswrapper[4698]: I0127 14:52:57.576446 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-b67df786-qljcv"]
Jan 27 14:52:57 crc kubenswrapper[4698]: I0127 14:52:57.621267 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-6dd4d766f4-p4fgg"
Jan 27 14:52:57 crc kubenswrapper[4698]: I0127 14:52:57.961134 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-757b4bbb85-rxcnp"]
Jan 27 14:52:57 crc kubenswrapper[4698]: W0127 14:52:57.968065 4698 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podeb3c705e_7883_4e67_a66c_2b4120a30543.slice/crio-37fce07a312a6f3656c8625c2a612ee7e97ec4c3b49760a04c005122b0a6f764 WatchSource:0}: Error finding container 37fce07a312a6f3656c8625c2a612ee7e97ec4c3b49760a04c005122b0a6f764: Status 404 returned error can't find the container with id 37fce07a312a6f3656c8625c2a612ee7e97ec4c3b49760a04c005122b0a6f764
Jan 27 14:52:58 crc kubenswrapper[4698]: I0127 14:52:58.211812 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-6dd4d766f4-p4fgg"]
Jan 27 14:52:58 crc kubenswrapper[4698]: W0127 14:52:58.222982 4698 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poded44350c_d49e_427e_9aaf_b4d3fb49aee4.slice/crio-70d0e89ff3068a2e3bf16ade5f620fc44501418f171d82a423897bacbf9fc78f WatchSource:0}: Error finding container 70d0e89ff3068a2e3bf16ade5f620fc44501418f171d82a423897bacbf9fc78f: Status 404 returned error can't find the container with id 70d0e89ff3068a2e3bf16ade5f620fc44501418f171d82a423897bacbf9fc78f
Jan 27 14:52:58 crc kubenswrapper[4698]: I0127 14:52:58.231404 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-b67df786-qljcv" event={"ID":"aac0eb10-05fd-4d02-84f0-9c34458ef3ad","Type":"ContainerStarted","Data":"a04b90520ba159ef5ae0fc938607407223216d8833afca1680ff8324b54f84de"}
Jan 27 14:52:58 crc kubenswrapper[4698]: I0127 14:52:58.241299 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-86766cdbdc-v9752" event={"ID":"62afb2bc-2a78-4012-9287-cd5812694245","Type":"ContainerStarted","Data":"be681c6eb12d0684dbffc28e8c01dfb5d3c54e197420532c8c26c79e2a84eaf7"}
Jan 27 14:52:58 crc kubenswrapper[4698]: I0127 14:52:58.245126 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-757b4bbb85-rxcnp" event={"ID":"eb3c705e-7883-4e67-a66c-2b4120a30543","Type":"ContainerStarted","Data":"37fce07a312a6f3656c8625c2a612ee7e97ec4c3b49760a04c005122b0a6f764"}
Jan 27 14:52:58 crc kubenswrapper[4698]: I0127 14:52:58.602479 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/keystone-646ddbcdff-wcvmz"
Jan 27 14:52:59 crc kubenswrapper[4698]: I0127 14:52:59.257085 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6dd4d766f4-p4fgg" event={"ID":"ed44350c-d49e-427e-9aaf-b4d3fb49aee4","Type":"ContainerStarted","Data":"f733025fdb94056d3759a11f9b172377baf6d7d42a3298eba530b1c095f22557"}
Jan 27 14:52:59 crc kubenswrapper[4698]: I0127 14:52:59.257444 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6dd4d766f4-p4fgg" event={"ID":"ed44350c-d49e-427e-9aaf-b4d3fb49aee4","Type":"ContainerStarted","Data":"70d0e89ff3068a2e3bf16ade5f620fc44501418f171d82a423897bacbf9fc78f"}
Jan 27 14:52:59 crc kubenswrapper[4698]: I0127 14:52:59.259189 4698 generic.go:334] "Generic (PLEG): container finished" podID="eb3c705e-7883-4e67-a66c-2b4120a30543" containerID="2c47170d5e5cd77cb80dfc0920394a4bf2565cbc3d06b528001eb01d9d95d50b" exitCode=0
Jan 27 14:52:59 crc kubenswrapper[4698]: I0127 14:52:59.259247 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-757b4bbb85-rxcnp" event={"ID":"eb3c705e-7883-4e67-a66c-2b4120a30543","Type":"ContainerDied","Data":"2c47170d5e5cd77cb80dfc0920394a4bf2565cbc3d06b528001eb01d9d95d50b"}
Jan 27 14:52:59 crc kubenswrapper[4698]: I0127 14:52:59.900819 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-7bcc7f5b5b-nhf4c"]
Jan 27 14:52:59 crc kubenswrapper[4698]: I0127 14:52:59.902668 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-7bcc7f5b5b-nhf4c"
Jan 27 14:52:59 crc kubenswrapper[4698]: I0127 14:52:59.905873 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-public-svc"
Jan 27 14:52:59 crc kubenswrapper[4698]: I0127 14:52:59.910840 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-internal-svc"
Jan 27 14:52:59 crc kubenswrapper[4698]: I0127 14:52:59.958970 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-7bcc7f5b5b-nhf4c"]
Jan 27 14:53:00 crc kubenswrapper[4698]: I0127 14:53:00.032872 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/92693e85-5559-4e51-8da7-b0ca1780cff8-combined-ca-bundle\") pod \"barbican-api-7bcc7f5b5b-nhf4c\" (UID: \"92693e85-5559-4e51-8da7-b0ca1780cff8\") " pod="openstack/barbican-api-7bcc7f5b5b-nhf4c"
Jan 27 14:53:00 crc kubenswrapper[4698]: I0127 14:53:00.032923 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bqtpg\" (UniqueName: \"kubernetes.io/projected/92693e85-5559-4e51-8da7-b0ca1780cff8-kube-api-access-bqtpg\") pod \"barbican-api-7bcc7f5b5b-nhf4c\" (UID: \"92693e85-5559-4e51-8da7-b0ca1780cff8\") " pod="openstack/barbican-api-7bcc7f5b5b-nhf4c"
Jan 27 14:53:00 crc kubenswrapper[4698]: I0127 14:53:00.032946 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/92693e85-5559-4e51-8da7-b0ca1780cff8-logs\") pod \"barbican-api-7bcc7f5b5b-nhf4c\" (UID: \"92693e85-5559-4e51-8da7-b0ca1780cff8\") " pod="openstack/barbican-api-7bcc7f5b5b-nhf4c"
Jan 27 14:53:00 crc kubenswrapper[4698]: I0127 14:53:00.032997 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/92693e85-5559-4e51-8da7-b0ca1780cff8-internal-tls-certs\") pod \"barbican-api-7bcc7f5b5b-nhf4c\" (UID: \"92693e85-5559-4e51-8da7-b0ca1780cff8\") " pod="openstack/barbican-api-7bcc7f5b5b-nhf4c"
Jan 27 14:53:00 crc kubenswrapper[4698]: I0127 14:53:00.033044 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/92693e85-5559-4e51-8da7-b0ca1780cff8-config-data\") pod \"barbican-api-7bcc7f5b5b-nhf4c\" (UID: \"92693e85-5559-4e51-8da7-b0ca1780cff8\") " pod="openstack/barbican-api-7bcc7f5b5b-nhf4c"
Jan 27 14:53:00 crc kubenswrapper[4698]: I0127 14:53:00.033092 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/92693e85-5559-4e51-8da7-b0ca1780cff8-config-data-custom\") pod \"barbican-api-7bcc7f5b5b-nhf4c\" (UID: \"92693e85-5559-4e51-8da7-b0ca1780cff8\") " pod="openstack/barbican-api-7bcc7f5b5b-nhf4c"
Jan 27 14:53:00 crc kubenswrapper[4698]: I0127 14:53:00.033119 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/92693e85-5559-4e51-8da7-b0ca1780cff8-public-tls-certs\") pod \"barbican-api-7bcc7f5b5b-nhf4c\" (UID: \"92693e85-5559-4e51-8da7-b0ca1780cff8\") " pod="openstack/barbican-api-7bcc7f5b5b-nhf4c"
Jan 27 14:53:00 crc kubenswrapper[4698]: I0127 14:53:00.136892 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/92693e85-5559-4e51-8da7-b0ca1780cff8-combined-ca-bundle\") pod \"barbican-api-7bcc7f5b5b-nhf4c\" (UID: \"92693e85-5559-4e51-8da7-b0ca1780cff8\") " pod="openstack/barbican-api-7bcc7f5b5b-nhf4c"
Jan 27 14:53:00 crc kubenswrapper[4698]: I0127 14:53:00.136949 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bqtpg\" (UniqueName: \"kubernetes.io/projected/92693e85-5559-4e51-8da7-b0ca1780cff8-kube-api-access-bqtpg\") pod \"barbican-api-7bcc7f5b5b-nhf4c\" (UID: \"92693e85-5559-4e51-8da7-b0ca1780cff8\") " pod="openstack/barbican-api-7bcc7f5b5b-nhf4c"
Jan 27 14:53:00 crc kubenswrapper[4698]: I0127 14:53:00.136975 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/92693e85-5559-4e51-8da7-b0ca1780cff8-logs\") pod \"barbican-api-7bcc7f5b5b-nhf4c\" (UID: \"92693e85-5559-4e51-8da7-b0ca1780cff8\") " pod="openstack/barbican-api-7bcc7f5b5b-nhf4c"
Jan 27 14:53:00 crc kubenswrapper[4698]: I0127 14:53:00.137035 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/92693e85-5559-4e51-8da7-b0ca1780cff8-internal-tls-certs\") pod \"barbican-api-7bcc7f5b5b-nhf4c\" (UID: \"92693e85-5559-4e51-8da7-b0ca1780cff8\") " pod="openstack/barbican-api-7bcc7f5b5b-nhf4c"
Jan 27 14:53:00 crc kubenswrapper[4698]: I0127 14:53:00.137088 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/92693e85-5559-4e51-8da7-b0ca1780cff8-config-data\") pod \"barbican-api-7bcc7f5b5b-nhf4c\" (UID: \"92693e85-5559-4e51-8da7-b0ca1780cff8\") " pod="openstack/barbican-api-7bcc7f5b5b-nhf4c"
Jan 27 14:53:00 crc kubenswrapper[4698]: I0127 14:53:00.137145 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/92693e85-5559-4e51-8da7-b0ca1780cff8-config-data-custom\") pod \"barbican-api-7bcc7f5b5b-nhf4c\" (UID: \"92693e85-5559-4e51-8da7-b0ca1780cff8\") " pod="openstack/barbican-api-7bcc7f5b5b-nhf4c"
Jan 27 14:53:00 crc kubenswrapper[4698]: I0127 14:53:00.137178 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/92693e85-5559-4e51-8da7-b0ca1780cff8-public-tls-certs\") pod \"barbican-api-7bcc7f5b5b-nhf4c\" (UID: \"92693e85-5559-4e51-8da7-b0ca1780cff8\") " pod="openstack/barbican-api-7bcc7f5b5b-nhf4c"
Jan 27 14:53:00 crc kubenswrapper[4698]: I0127 14:53:00.138191 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/92693e85-5559-4e51-8da7-b0ca1780cff8-logs\") pod \"barbican-api-7bcc7f5b5b-nhf4c\" (UID: \"92693e85-5559-4e51-8da7-b0ca1780cff8\") " pod="openstack/barbican-api-7bcc7f5b5b-nhf4c"
Jan 27 14:53:00 crc kubenswrapper[4698]: I0127 14:53:00.151242 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/92693e85-5559-4e51-8da7-b0ca1780cff8-internal-tls-certs\") pod \"barbican-api-7bcc7f5b5b-nhf4c\" (UID: \"92693e85-5559-4e51-8da7-b0ca1780cff8\") " pod="openstack/barbican-api-7bcc7f5b5b-nhf4c"
Jan 27 14:53:00 crc kubenswrapper[4698]: I0127 14:53:00.165440 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/92693e85-5559-4e51-8da7-b0ca1780cff8-combined-ca-bundle\") pod \"barbican-api-7bcc7f5b5b-nhf4c\" (UID: \"92693e85-5559-4e51-8da7-b0ca1780cff8\") " pod="openstack/barbican-api-7bcc7f5b5b-nhf4c"
Jan 27 14:53:00 crc kubenswrapper[4698]: I0127 14:53:00.169267 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/92693e85-5559-4e51-8da7-b0ca1780cff8-public-tls-certs\") pod \"barbican-api-7bcc7f5b5b-nhf4c\" (UID: \"92693e85-5559-4e51-8da7-b0ca1780cff8\") " pod="openstack/barbican-api-7bcc7f5b5b-nhf4c"
Jan 27 14:53:00 crc kubenswrapper[4698]: I0127 14:53:00.172301 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/92693e85-5559-4e51-8da7-b0ca1780cff8-config-data-custom\") pod \"barbican-api-7bcc7f5b5b-nhf4c\" (UID: \"92693e85-5559-4e51-8da7-b0ca1780cff8\") " pod="openstack/barbican-api-7bcc7f5b5b-nhf4c"
Jan 27 14:53:00 crc kubenswrapper[4698]: I0127 14:53:00.179356 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/92693e85-5559-4e51-8da7-b0ca1780cff8-config-data\") pod \"barbican-api-7bcc7f5b5b-nhf4c\" (UID: \"92693e85-5559-4e51-8da7-b0ca1780cff8\") " pod="openstack/barbican-api-7bcc7f5b5b-nhf4c"
Jan 27 14:53:00 crc kubenswrapper[4698]: I0127 14:53:00.184290 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bqtpg\" (UniqueName: \"kubernetes.io/projected/92693e85-5559-4e51-8da7-b0ca1780cff8-kube-api-access-bqtpg\") pod \"barbican-api-7bcc7f5b5b-nhf4c\" (UID: \"92693e85-5559-4e51-8da7-b0ca1780cff8\") " pod="openstack/barbican-api-7bcc7f5b5b-nhf4c"
Jan 27 14:53:00 crc kubenswrapper[4698]: I0127 14:53:00.289406 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-7bcc7f5b5b-nhf4c"
Jan 27 14:53:00 crc kubenswrapper[4698]: I0127 14:53:00.313281 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-86766cdbdc-v9752" event={"ID":"62afb2bc-2a78-4012-9287-cd5812694245","Type":"ContainerStarted","Data":"ac503454b3d75a74f523b337ed6a7604ae5c7bc7604ae6ef7cf4accda10972f5"}
Jan 27 14:53:00 crc kubenswrapper[4698]: I0127 14:53:00.323678 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6dd4d766f4-p4fgg" event={"ID":"ed44350c-d49e-427e-9aaf-b4d3fb49aee4","Type":"ContainerStarted","Data":"8ee235ed10c9edc8df18e26672b2303e97d289db6177997a07c52c720d1b7a2a"}
Jan 27 14:53:00 crc kubenswrapper[4698]: I0127 14:53:00.324322 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-6dd4d766f4-p4fgg"
Jan 27 14:53:00 crc kubenswrapper[4698]: I0127 14:53:00.324365 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-6dd4d766f4-p4fgg"
Jan 27 14:53:00 crc kubenswrapper[4698]: I0127 14:53:00.338722 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-757b4bbb85-rxcnp" event={"ID":"eb3c705e-7883-4e67-a66c-2b4120a30543","Type":"ContainerStarted","Data":"1a7c5f9e925dc1b3241be99bb2e9f6e1908562dff026a19650c068e0a8b86a16"}
Jan 27 14:53:00 crc kubenswrapper[4698]: I0127 14:53:00.338798 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-757b4bbb85-rxcnp"
Jan 27 14:53:00 crc kubenswrapper[4698]: I0127 14:53:00.350033 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-b67df786-qljcv" event={"ID":"aac0eb10-05fd-4d02-84f0-9c34458ef3ad","Type":"ContainerStarted","Data":"e686021cf86dd47ac6a3e0d1f4f29d4afd205dff0f3220dade3a91a77dd58c99"}
Jan 27 14:53:00 crc kubenswrapper[4698]: I0127 14:53:00.350232 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-b67df786-qljcv" event={"ID":"aac0eb10-05fd-4d02-84f0-9c34458ef3ad","Type":"ContainerStarted","Data":"586ab912dbb11e92cfef693f469fb65269d734c150558ca10f4da45306319400"}
Jan 27 14:53:00 crc kubenswrapper[4698]: I0127 14:53:00.407799 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-decision-engine-0"
Jan 27 14:53:00 crc kubenswrapper[4698]: I0127 14:53:00.409911 4698 scope.go:117] "RemoveContainer" containerID="8da5bc0af3d507c438f5062981c57f9b3bc194f49ed6a075d61a10c7dba95272"
Jan 27 14:53:00 crc kubenswrapper[4698]: E0127 14:53:00.410353 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-decision-engine\" with CrashLoopBackOff: \"back-off 10s restarting failed container=watcher-decision-engine pod=watcher-decision-engine-0_openstack(4129eb47-beba-4bec-8cb2-59818e8908a5)\"" pod="openstack/watcher-decision-engine-0" podUID="4129eb47-beba-4bec-8cb2-59818e8908a5"
Jan 27 14:53:00 crc kubenswrapper[4698]: I0127 14:53:00.431856 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-6dd4d766f4-p4fgg" podStartSLOduration=4.43183819 podStartE2EDuration="4.43183819s" podCreationTimestamp="2026-01-27 14:52:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:53:00.357950534 +0000 UTC m=+1436.034728029" watchObservedRunningTime="2026-01-27 14:53:00.43183819 +0000 UTC m=+1436.108615655"
Jan 27 14:53:00 crc kubenswrapper[4698]: I0127 14:53:00.446502 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-keystone-listener-b67df786-qljcv" podStartSLOduration=2.516913949 podStartE2EDuration="4.446479525s" podCreationTimestamp="2026-01-27 14:52:56 +0000 UTC" firstStartedPulling="2026-01-27 14:52:57.585969194 +0000 UTC m=+1433.262746669" lastFinishedPulling="2026-01-27 14:52:59.51553478 +0000 UTC m=+1435.192312245" observedRunningTime="2026-01-27 14:53:00.385934171 +0000 UTC m=+1436.062711666" watchObservedRunningTime="2026-01-27 14:53:00.446479525 +0000 UTC m=+1436.123256990"
Jan 27 14:53:00 crc kubenswrapper[4698]: I0127 14:53:00.459760 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-757b4bbb85-rxcnp" podStartSLOduration=4.459743265 podStartE2EDuration="4.459743265s" podCreationTimestamp="2026-01-27 14:52:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:53:00.430069154 +0000 UTC m=+1436.106846609" watchObservedRunningTime="2026-01-27 14:53:00.459743265 +0000 UTC m=+1436.136520730"
Jan 27 14:53:00 crc kubenswrapper[4698]: W0127 14:53:00.938581 4698 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod92693e85_5559_4e51_8da7_b0ca1780cff8.slice/crio-bdca8af975100a5dafcd9009d50dcefd39c4bf9e490b7f0297067016a36f6adf WatchSource:0}: Error finding container bdca8af975100a5dafcd9009d50dcefd39c4bf9e490b7f0297067016a36f6adf: Status 404 returned error can't find the container with id bdca8af975100a5dafcd9009d50dcefd39c4bf9e490b7f0297067016a36f6adf
Jan 27 14:53:00 crc kubenswrapper[4698]: I0127 14:53:00.943059 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-7bcc7f5b5b-nhf4c"]
Jan 27 14:53:01 crc kubenswrapper[4698]: I0127 14:53:01.369226 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-86766cdbdc-v9752" event={"ID":"62afb2bc-2a78-4012-9287-cd5812694245","Type":"ContainerStarted","Data":"8fdbd5501d6f2cb6f6b492c7b46347484e2f8f3bb4caf081fa29fe700cd10481"}
Jan 27 14:53:01 crc kubenswrapper[4698]: I0127 14:53:01.390256 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7bcc7f5b5b-nhf4c" event={"ID":"92693e85-5559-4e51-8da7-b0ca1780cff8","Type":"ContainerStarted","Data":"4b61ffb3f523f80f1fc5df38636b428def9bfcbc8556d8c9227e73290d1dce60"}
Jan 27 14:53:01 crc kubenswrapper[4698]: I0127 14:53:01.390340 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7bcc7f5b5b-nhf4c" event={"ID":"92693e85-5559-4e51-8da7-b0ca1780cff8","Type":"ContainerStarted","Data":"bdca8af975100a5dafcd9009d50dcefd39c4bf9e490b7f0297067016a36f6adf"}
Jan 27 14:53:01 crc kubenswrapper[4698]: I0127 14:53:01.400889 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-worker-86766cdbdc-v9752" podStartSLOduration=3.440341719 podStartE2EDuration="5.400866499s" podCreationTimestamp="2026-01-27 14:52:56 +0000 UTC" firstStartedPulling="2026-01-27 14:52:57.58045508 +0000 UTC m=+1433.257232555" lastFinishedPulling="2026-01-27 14:52:59.54097987 +0000 UTC m=+1435.217757335" observedRunningTime="2026-01-27 14:53:01.389150261 +0000 UTC m=+1437.065927746" watchObservedRunningTime="2026-01-27 14:53:01.400866499 +0000 UTC m=+1437.077643974"
Jan 27 14:53:01 crc kubenswrapper[4698]: I0127 14:53:01.473085 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"]
Jan 27 14:53:01 crc kubenswrapper[4698]: I0127 14:53:01.474492 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient"
Jan 27 14:53:01 crc kubenswrapper[4698]: I0127 14:53:01.476690 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstackclient-openstackclient-dockercfg-p9gvn"
Jan 27 14:53:01 crc kubenswrapper[4698]: I0127 14:53:01.477587 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-config-secret"
Jan 27 14:53:01 crc kubenswrapper[4698]: I0127 14:53:01.477927 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config"
Jan 27 14:53:01 crc kubenswrapper[4698]: I0127 14:53:01.501745 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"]
Jan 27 14:53:01 crc kubenswrapper[4698]: I0127 14:53:01.593144 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/a11616dd-8398-4c71-829f-1a389df9495f-openstack-config-secret\") pod \"openstackclient\" (UID: \"a11616dd-8398-4c71-829f-1a389df9495f\") " pod="openstack/openstackclient"
Jan 27 14:53:01 crc kubenswrapper[4698]: I0127 14:53:01.593565 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a11616dd-8398-4c71-829f-1a389df9495f-combined-ca-bundle\") pod \"openstackclient\" (UID: \"a11616dd-8398-4c71-829f-1a389df9495f\") " pod="openstack/openstackclient"
Jan 27 14:53:01 crc kubenswrapper[4698]: I0127 14:53:01.593867 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zndr9\" (UniqueName: \"kubernetes.io/projected/a11616dd-8398-4c71-829f-1a389df9495f-kube-api-access-zndr9\") pod \"openstackclient\" (UID: \"a11616dd-8398-4c71-829f-1a389df9495f\") " pod="openstack/openstackclient"
Jan 27 14:53:01 crc kubenswrapper[4698]: I0127 14:53:01.594130 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/a11616dd-8398-4c71-829f-1a389df9495f-openstack-config\") pod \"openstackclient\" (UID: \"a11616dd-8398-4c71-829f-1a389df9495f\") " pod="openstack/openstackclient"
Jan 27 14:53:01 crc kubenswrapper[4698]: I0127 14:53:01.697596 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/a11616dd-8398-4c71-829f-1a389df9495f-openstack-config\") pod \"openstackclient\" (UID: \"a11616dd-8398-4c71-829f-1a389df9495f\") " pod="openstack/openstackclient"
Jan 27 14:53:01 crc kubenswrapper[4698]: I0127 14:53:01.697745 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/a11616dd-8398-4c71-829f-1a389df9495f-openstack-config-secret\") pod \"openstackclient\" (UID: \"a11616dd-8398-4c71-829f-1a389df9495f\") " pod="openstack/openstackclient"
Jan 27 14:53:01 crc kubenswrapper[4698]: I0127 14:53:01.697774 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a11616dd-8398-4c71-829f-1a389df9495f-combined-ca-bundle\") pod \"openstackclient\" (UID: \"a11616dd-8398-4c71-829f-1a389df9495f\") " pod="openstack/openstackclient"
Jan 27 14:53:01 crc kubenswrapper[4698]: I0127 14:53:01.697839 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zndr9\" (UniqueName: \"kubernetes.io/projected/a11616dd-8398-4c71-829f-1a389df9495f-kube-api-access-zndr9\") pod \"openstackclient\" (UID: \"a11616dd-8398-4c71-829f-1a389df9495f\") " pod="openstack/openstackclient"
Jan 27 14:53:01 crc kubenswrapper[4698]: I0127 14:53:01.700913 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/a11616dd-8398-4c71-829f-1a389df9495f-openstack-config\") pod \"openstackclient\" (UID: \"a11616dd-8398-4c71-829f-1a389df9495f\") " pod="openstack/openstackclient"
Jan 27 14:53:01 crc kubenswrapper[4698]: I0127 14:53:01.706162 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a11616dd-8398-4c71-829f-1a389df9495f-combined-ca-bundle\") pod \"openstackclient\" (UID: \"a11616dd-8398-4c71-829f-1a389df9495f\") " pod="openstack/openstackclient"
Jan 27 14:53:01 crc kubenswrapper[4698]: I0127 14:53:01.708919 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/a11616dd-8398-4c71-829f-1a389df9495f-openstack-config-secret\") pod \"openstackclient\" (UID: \"a11616dd-8398-4c71-829f-1a389df9495f\") " pod="openstack/openstackclient"
Jan 27 14:53:01 crc kubenswrapper[4698]: I0127 14:53:01.730332 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zndr9\" (UniqueName: \"kubernetes.io/projected/a11616dd-8398-4c71-829f-1a389df9495f-kube-api-access-zndr9\") pod \"openstackclient\" (UID: \"a11616dd-8398-4c71-829f-1a389df9495f\") " pod="openstack/openstackclient"
Jan 27 14:53:01 crc kubenswrapper[4698]: I0127 14:53:01.809518 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient"
Jan 27 14:53:02 crc kubenswrapper[4698]: I0127 14:53:02.389401 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"]
Jan 27 14:53:02 crc kubenswrapper[4698]: W0127 14:53:02.395708 4698 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda11616dd_8398_4c71_829f_1a389df9495f.slice/crio-85f154b7419aad57ff3306f4e212bc96f510ec97633de91d9dd01706bc8dca3a WatchSource:0}: Error finding container 85f154b7419aad57ff3306f4e212bc96f510ec97633de91d9dd01706bc8dca3a: Status 404 returned error can't find the container with id 85f154b7419aad57ff3306f4e212bc96f510ec97633de91d9dd01706bc8dca3a
Jan 27 14:53:02 crc kubenswrapper[4698]: I0127 14:53:02.404097 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7bcc7f5b5b-nhf4c" event={"ID":"92693e85-5559-4e51-8da7-b0ca1780cff8","Type":"ContainerStarted","Data":"b8beed0fc0fd64bb0fcf9c2e04c64c8a96edd0022ab2cda1de7e2be5f0d50d8f"}
Jan 27 14:53:02 crc kubenswrapper[4698]: I0127 14:53:02.434837 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-7bcc7f5b5b-nhf4c" podStartSLOduration=3.434816068 podStartE2EDuration="3.434816068s" podCreationTimestamp="2026-01-27 14:52:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:53:02.427148006 +0000 UTC m=+1438.103925471" watchObservedRunningTime="2026-01-27 14:53:02.434816068 +0000 UTC m=+1438.111593533"
Jan 27 14:53:03 crc kubenswrapper[4698]: I0127 14:53:03.421034 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"a11616dd-8398-4c71-829f-1a389df9495f","Type":"ContainerStarted","Data":"85f154b7419aad57ff3306f4e212bc96f510ec97633de91d9dd01706bc8dca3a"}
Jan 27 14:53:03 crc kubenswrapper[4698]: I0127 14:53:03.421375 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-7bcc7f5b5b-nhf4c"
Jan 27 14:53:03 crc kubenswrapper[4698]: I0127 14:53:03.421398 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-7bcc7f5b5b-nhf4c"
Jan 27 14:53:04 crc kubenswrapper[4698]: I0127 14:53:04.434284 4698 generic.go:334] "Generic (PLEG): container finished" podID="74946770-13e5-4777-a645-bb6bee73c277" containerID="52df319c8fd5806e1b6d043e0c56391797aa95b270b1a3ecdf734c7dec22e5f1" exitCode=0
Jan 27 14:53:04 crc kubenswrapper[4698]: I0127 14:53:04.434370 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-mcnmn" event={"ID":"74946770-13e5-4777-a645-bb6bee73c277","Type":"ContainerDied","Data":"52df319c8fd5806e1b6d043e0c56391797aa95b270b1a3ecdf734c7dec22e5f1"}
Jan 27 14:53:05 crc kubenswrapper[4698]: I0127 14:53:05.927417 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-mcnmn"
Jan 27 14:53:05 crc kubenswrapper[4698]: I0127 14:53:05.970468 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/74946770-13e5-4777-a645-bb6bee73c277-etc-machine-id\") pod \"74946770-13e5-4777-a645-bb6bee73c277\" (UID: \"74946770-13e5-4777-a645-bb6bee73c277\") "
Jan 27 14:53:05 crc kubenswrapper[4698]: I0127 14:53:05.970541 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/74946770-13e5-4777-a645-bb6bee73c277-config-data\") pod \"74946770-13e5-4777-a645-bb6bee73c277\" (UID: \"74946770-13e5-4777-a645-bb6bee73c277\") "
Jan 27 14:53:05 crc kubenswrapper[4698]: I0127 14:53:05.970580 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/74946770-13e5-4777-a645-bb6bee73c277-combined-ca-bundle\") pod \"74946770-13e5-4777-a645-bb6bee73c277\" (UID: \"74946770-13e5-4777-a645-bb6bee73c277\") "
Jan 27 14:53:05 crc kubenswrapper[4698]: I0127 14:53:05.970620 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/74946770-13e5-4777-a645-bb6bee73c277-db-sync-config-data\") pod \"74946770-13e5-4777-a645-bb6bee73c277\" (UID: \"74946770-13e5-4777-a645-bb6bee73c277\") "
Jan 27 14:53:05 crc kubenswrapper[4698]: I0127 14:53:05.970712 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/74946770-13e5-4777-a645-bb6bee73c277-scripts\") pod \"74946770-13e5-4777-a645-bb6bee73c277\" (UID: \"74946770-13e5-4777-a645-bb6bee73c277\") "
Jan 27 14:53:05 crc kubenswrapper[4698]: I0127 14:53:05.970755 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jcpmd\" (UniqueName: \"kubernetes.io/projected/74946770-13e5-4777-a645-bb6bee73c277-kube-api-access-jcpmd\") pod \"74946770-13e5-4777-a645-bb6bee73c277\" (UID: \"74946770-13e5-4777-a645-bb6bee73c277\") "
Jan 27 14:53:05 crc kubenswrapper[4698]: I0127 14:53:05.971434 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/74946770-13e5-4777-a645-bb6bee73c277-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "74946770-13e5-4777-a645-bb6bee73c277" (UID: "74946770-13e5-4777-a645-bb6bee73c277"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 27 14:53:05 crc kubenswrapper[4698]: I0127 14:53:05.980861 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/74946770-13e5-4777-a645-bb6bee73c277-kube-api-access-jcpmd" (OuterVolumeSpecName: "kube-api-access-jcpmd") pod "74946770-13e5-4777-a645-bb6bee73c277" (UID: "74946770-13e5-4777-a645-bb6bee73c277"). InnerVolumeSpecName "kube-api-access-jcpmd". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 14:53:05 crc kubenswrapper[4698]: I0127 14:53:05.980927 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/74946770-13e5-4777-a645-bb6bee73c277-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "74946770-13e5-4777-a645-bb6bee73c277" (UID: "74946770-13e5-4777-a645-bb6bee73c277"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 14:53:05 crc kubenswrapper[4698]: I0127 14:53:05.996368 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/74946770-13e5-4777-a645-bb6bee73c277-scripts" (OuterVolumeSpecName: "scripts") pod "74946770-13e5-4777-a645-bb6bee73c277" (UID: "74946770-13e5-4777-a645-bb6bee73c277"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 14:53:06 crc kubenswrapper[4698]: I0127 14:53:06.037306 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/74946770-13e5-4777-a645-bb6bee73c277-config-data" (OuterVolumeSpecName: "config-data") pod "74946770-13e5-4777-a645-bb6bee73c277" (UID: "74946770-13e5-4777-a645-bb6bee73c277"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 14:53:06 crc kubenswrapper[4698]: I0127 14:53:06.052829 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/74946770-13e5-4777-a645-bb6bee73c277-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "74946770-13e5-4777-a645-bb6bee73c277" (UID: "74946770-13e5-4777-a645-bb6bee73c277"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 14:53:06 crc kubenswrapper[4698]: I0127 14:53:06.073795 4698 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/74946770-13e5-4777-a645-bb6bee73c277-scripts\") on node \"crc\" DevicePath \"\""
Jan 27 14:53:06 crc kubenswrapper[4698]: I0127 14:53:06.073842 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jcpmd\" (UniqueName: \"kubernetes.io/projected/74946770-13e5-4777-a645-bb6bee73c277-kube-api-access-jcpmd\") on node \"crc\" DevicePath \"\""
Jan 27 14:53:06 crc kubenswrapper[4698]: I0127 14:53:06.073856 4698 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/74946770-13e5-4777-a645-bb6bee73c277-etc-machine-id\") on node \"crc\" DevicePath \"\""
Jan 27 14:53:06 crc kubenswrapper[4698]: I0127 14:53:06.073867 4698 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/74946770-13e5-4777-a645-bb6bee73c277-config-data\") on node \"crc\" DevicePath \"\""
Jan 27 14:53:06 crc kubenswrapper[4698]: I0127 14:53:06.073878 4698 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/74946770-13e5-4777-a645-bb6bee73c277-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 27 14:53:06 crc kubenswrapper[4698]: I0127 14:53:06.073888 4698 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/74946770-13e5-4777-a645-bb6bee73c277-db-sync-config-data\") on node \"crc\" DevicePath \"\""
Jan 27 14:53:06 crc kubenswrapper[4698]: I0127 14:53:06.492152 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-mcnmn" event={"ID":"74946770-13e5-4777-a645-bb6bee73c277","Type":"ContainerDied","Data":"f9d55be61f3884eba26ac43ba06a8b6c8ce4de7a43ae7eb87d1b0e1850cc4feb"}
Jan 27 14:53:06 crc kubenswrapper[4698]: I0127 14:53:06.492508 4698 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f9d55be61f3884eba26ac43ba06a8b6c8ce4de7a43ae7eb87d1b0e1850cc4feb"
Jan 27 14:53:06 crc kubenswrapper[4698]: I0127 14:53:06.492400 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-mcnmn"
Jan 27 14:53:06 crc kubenswrapper[4698]: I0127 14:53:06.788904 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"]
Jan 27 14:53:06 crc kubenswrapper[4698]: E0127 14:53:06.789321 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="74946770-13e5-4777-a645-bb6bee73c277" containerName="cinder-db-sync"
Jan 27 14:53:06 crc kubenswrapper[4698]: I0127 14:53:06.789332 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="74946770-13e5-4777-a645-bb6bee73c277" containerName="cinder-db-sync"
Jan 27 14:53:06 crc kubenswrapper[4698]: I0127 14:53:06.789502 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="74946770-13e5-4777-a645-bb6bee73c277" containerName="cinder-db-sync"
Jan 27 14:53:06 crc kubenswrapper[4698]: I0127 14:53:06.790505 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0"
Jan 27 14:53:06 crc kubenswrapper[4698]: I0127 14:53:06.792897 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2be2eddc-2e24-4483-83a7-6a01aaae7f3c-scripts\") pod \"cinder-scheduler-0\" (UID: \"2be2eddc-2e24-4483-83a7-6a01aaae7f3c\") " pod="openstack/cinder-scheduler-0"
Jan 27 14:53:06 crc kubenswrapper[4698]: I0127 14:53:06.793141 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x72wr\" (UniqueName: \"kubernetes.io/projected/2be2eddc-2e24-4483-83a7-6a01aaae7f3c-kube-api-access-x72wr\") pod \"cinder-scheduler-0\" (UID: \"2be2eddc-2e24-4483-83a7-6a01aaae7f3c\") " pod="openstack/cinder-scheduler-0"
Jan 27 14:53:06 crc kubenswrapper[4698]: I0127 14:53:06.793221 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2be2eddc-2e24-4483-83a7-6a01aaae7f3c-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"2be2eddc-2e24-4483-83a7-6a01aaae7f3c\") " pod="openstack/cinder-scheduler-0"
Jan 27 14:53:06 crc kubenswrapper[4698]: I0127 14:53:06.793354 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/2be2eddc-2e24-4483-83a7-6a01aaae7f3c-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"2be2eddc-2e24-4483-83a7-6a01aaae7f3c\") " pod="openstack/cinder-scheduler-0"
Jan 27 14:53:06 crc kubenswrapper[4698]: I0127 14:53:06.793512 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2be2eddc-2e24-4483-83a7-6a01aaae7f3c-config-data\") pod \"cinder-scheduler-0\" (UID: \"2be2eddc-2e24-4483-83a7-6a01aaae7f3c\") " pod="openstack/cinder-scheduler-0"
Jan 27 14:53:06 crc kubenswrapper[4698]: I0127 14:53:06.793606 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2be2eddc-2e24-4483-83a7-6a01aaae7f3c-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"2be2eddc-2e24-4483-83a7-6a01aaae7f3c\") " pod="openstack/cinder-scheduler-0"
Jan 27 14:53:06 crc kubenswrapper[4698]: I0127 14:53:06.799345 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-zqtt2"
Jan 27 14:53:06 crc kubenswrapper[4698]: I0127 14:53:06.799681 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data"
Jan 27 14:53:06 crc kubenswrapper[4698]: I0127 14:53:06.799824 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data"
Jan 27 14:53:06 crc kubenswrapper[4698]: I0127 14:53:06.799962 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts"
Jan 27 14:53:06 crc kubenswrapper[4698]: I0127 14:53:06.803421 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"]
Jan 27 14:53:06 crc kubenswrapper[4698]: E0127 14:53:06.851603 4698 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod74946770_13e5_4777_a645_bb6bee73c277.slice\": RecentStats: unable to find data in memory cache]"
Jan 27 14:53:06 crc kubenswrapper[4698]: I0127 14:53:06.862155 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-757b4bbb85-rxcnp"]
Jan 27 14:53:06 crc kubenswrapper[4698]: I0127 14:53:06.862399 4698 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-757b4bbb85-rxcnp" podUID="eb3c705e-7883-4e67-a66c-2b4120a30543" containerName="dnsmasq-dns" containerID="cri-o://1a7c5f9e925dc1b3241be99bb2e9f6e1908562dff026a19650c068e0a8b86a16" gracePeriod=10
Jan 27 14:53:06 crc kubenswrapper[4698]: I0127 14:53:06.866806 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-757b4bbb85-rxcnp"
Jan 27 14:53:06 crc kubenswrapper[4698]: I0127 14:53:06.897240 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/2be2eddc-2e24-4483-83a7-6a01aaae7f3c-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"2be2eddc-2e24-4483-83a7-6a01aaae7f3c\") " pod="openstack/cinder-scheduler-0"
Jan 27 14:53:06 crc kubenswrapper[4698]: I0127 14:53:06.897293 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2be2eddc-2e24-4483-83a7-6a01aaae7f3c-config-data\") pod \"cinder-scheduler-0\" (UID: \"2be2eddc-2e24-4483-83a7-6a01aaae7f3c\") " pod="openstack/cinder-scheduler-0"
Jan 27 14:53:06 crc kubenswrapper[4698]: I0127 14:53:06.897894 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/2be2eddc-2e24-4483-83a7-6a01aaae7f3c-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"2be2eddc-2e24-4483-83a7-6a01aaae7f3c\") " pod="openstack/cinder-scheduler-0"
Jan 27 14:53:06 crc kubenswrapper[4698]: I0127 14:53:06.897979 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2be2eddc-2e24-4483-83a7-6a01aaae7f3c-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"2be2eddc-2e24-4483-83a7-6a01aaae7f3c\") " pod="openstack/cinder-scheduler-0"
Jan 27 14:53:06 crc kubenswrapper[4698]: I0127 14:53:06.898021 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2be2eddc-2e24-4483-83a7-6a01aaae7f3c-scripts\") pod \"cinder-scheduler-0\" (UID: \"2be2eddc-2e24-4483-83a7-6a01aaae7f3c\") " pod="openstack/cinder-scheduler-0"
Jan 27 14:53:06 crc kubenswrapper[4698]: I0127 14:53:06.898121 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x72wr\" (UniqueName: \"kubernetes.io/projected/2be2eddc-2e24-4483-83a7-6a01aaae7f3c-kube-api-access-x72wr\") pod \"cinder-scheduler-0\" (UID: \"2be2eddc-2e24-4483-83a7-6a01aaae7f3c\") " pod="openstack/cinder-scheduler-0"
Jan 27 14:53:06 crc kubenswrapper[4698]: I0127 14:53:06.898160 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2be2eddc-2e24-4483-83a7-6a01aaae7f3c-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"2be2eddc-2e24-4483-83a7-6a01aaae7f3c\") " pod="openstack/cinder-scheduler-0"
Jan 27 14:53:06 crc kubenswrapper[4698]: I0127 14:53:06.906339 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2be2eddc-2e24-4483-83a7-6a01aaae7f3c-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"2be2eddc-2e24-4483-83a7-6a01aaae7f3c\") " pod="openstack/cinder-scheduler-0"
Jan 27 14:53:06 crc kubenswrapper[4698]: I0127 14:53:06.910264 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2be2eddc-2e24-4483-83a7-6a01aaae7f3c-scripts\") pod \"cinder-scheduler-0\" (UID: \"2be2eddc-2e24-4483-83a7-6a01aaae7f3c\") " pod="openstack/cinder-scheduler-0"
Jan 27 14:53:06 crc kubenswrapper[4698]: I0127 14:53:06.915143 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2be2eddc-2e24-4483-83a7-6a01aaae7f3c-config-data\") pod \"cinder-scheduler-0\" (UID: \"2be2eddc-2e24-4483-83a7-6a01aaae7f3c\") " pod="openstack/cinder-scheduler-0"
Jan 27 14:53:06 crc kubenswrapper[4698]: I0127 14:53:06.932495 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-754ff55b87-tpb84"]
Jan 27 14:53:06 crc kubenswrapper[4698]: I0127 14:53:06.939580 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-754ff55b87-tpb84"
Jan 27 14:53:06 crc kubenswrapper[4698]: I0127 14:53:06.956348 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2be2eddc-2e24-4483-83a7-6a01aaae7f3c-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"2be2eddc-2e24-4483-83a7-6a01aaae7f3c\") " pod="openstack/cinder-scheduler-0"
Jan 27 14:53:06 crc kubenswrapper[4698]: I0127 14:53:06.976544 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x72wr\" (UniqueName: \"kubernetes.io/projected/2be2eddc-2e24-4483-83a7-6a01aaae7f3c-kube-api-access-x72wr\") pod \"cinder-scheduler-0\" (UID: \"2be2eddc-2e24-4483-83a7-6a01aaae7f3c\") " pod="openstack/cinder-scheduler-0"
Jan 27 14:53:07 crc kubenswrapper[4698]: I0127 14:53:07.038904 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-754ff55b87-tpb84"]
Jan 27 14:53:07 crc kubenswrapper[4698]: I0127 14:53:07.109312 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0a624b91-5853-4b9f-a75c-101d75550a84-config\") pod \"dnsmasq-dns-754ff55b87-tpb84\" (UID: \"0a624b91-5853-4b9f-a75c-101d75550a84\") " pod="openstack/dnsmasq-dns-754ff55b87-tpb84"
Jan 27 14:53:07 crc kubenswrapper[4698]: I0127 14:53:07.109386 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x49tr\" (UniqueName: \"kubernetes.io/projected/0a624b91-5853-4b9f-a75c-101d75550a84-kube-api-access-x49tr\") pod \"dnsmasq-dns-754ff55b87-tpb84\" (UID: \"0a624b91-5853-4b9f-a75c-101d75550a84\") " pod="openstack/dnsmasq-dns-754ff55b87-tpb84"
Jan 27 14:53:07 crc kubenswrapper[4698]: I0127 14:53:07.109449 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0a624b91-5853-4b9f-a75c-101d75550a84-ovsdbserver-nb\") pod \"dnsmasq-dns-754ff55b87-tpb84\" (UID: \"0a624b91-5853-4b9f-a75c-101d75550a84\") " pod="openstack/dnsmasq-dns-754ff55b87-tpb84"
Jan 27 14:53:07 crc kubenswrapper[4698]: I0127 14:53:07.109572 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0a624b91-5853-4b9f-a75c-101d75550a84-ovsdbserver-sb\") pod \"dnsmasq-dns-754ff55b87-tpb84\" (UID: \"0a624b91-5853-4b9f-a75c-101d75550a84\") " pod="openstack/dnsmasq-dns-754ff55b87-tpb84"
Jan 27 14:53:07 crc kubenswrapper[4698]: I0127 14:53:07.109779 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/0a624b91-5853-4b9f-a75c-101d75550a84-dns-swift-storage-0\") pod \"dnsmasq-dns-754ff55b87-tpb84\" (UID: \"0a624b91-5853-4b9f-a75c-101d75550a84\") " pod="openstack/dnsmasq-dns-754ff55b87-tpb84"
Jan 27 14:53:07 crc kubenswrapper[4698]: I0127 14:53:07.110839 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0a624b91-5853-4b9f-a75c-101d75550a84-dns-svc\") pod \"dnsmasq-dns-754ff55b87-tpb84\" (UID: \"0a624b91-5853-4b9f-a75c-101d75550a84\") " pod="openstack/dnsmasq-dns-754ff55b87-tpb84"
Jan 27 14:53:07 crc kubenswrapper[4698]: I0127 14:53:07.118722 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"]
Jan 27 14:53:07 crc kubenswrapper[4698]: I0127 14:53:07.120869 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0"
Jan 27 14:53:07 crc kubenswrapper[4698]: I0127 14:53:07.128879 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data"
Jan 27 14:53:07 crc kubenswrapper[4698]: I0127 14:53:07.149292 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0"
Jan 27 14:53:07 crc kubenswrapper[4698]: I0127 14:53:07.161009 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"]
Jan 27 14:53:07 crc kubenswrapper[4698]: I0127 14:53:07.213112 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/0a624b91-5853-4b9f-a75c-101d75550a84-dns-swift-storage-0\") pod \"dnsmasq-dns-754ff55b87-tpb84\" (UID: \"0a624b91-5853-4b9f-a75c-101d75550a84\") " pod="openstack/dnsmasq-dns-754ff55b87-tpb84"
Jan 27 14:53:07 crc kubenswrapper[4698]: I0127 14:53:07.213184 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0a624b91-5853-4b9f-a75c-101d75550a84-dns-svc\") pod \"dnsmasq-dns-754ff55b87-tpb84\" (UID: \"0a624b91-5853-4b9f-a75c-101d75550a84\") " pod="openstack/dnsmasq-dns-754ff55b87-tpb84"
Jan 27 14:53:07 crc kubenswrapper[4698]: I0127 14:53:07.213401 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0a624b91-5853-4b9f-a75c-101d75550a84-config\") pod \"dnsmasq-dns-754ff55b87-tpb84\" (UID: \"0a624b91-5853-4b9f-a75c-101d75550a84\") " pod="openstack/dnsmasq-dns-754ff55b87-tpb84"
Jan 27 14:53:07 crc kubenswrapper[4698]: I0127 14:53:07.213449 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x49tr\" (UniqueName: \"kubernetes.io/projected/0a624b91-5853-4b9f-a75c-101d75550a84-kube-api-access-x49tr\") pod \"dnsmasq-dns-754ff55b87-tpb84\" (UID: \"0a624b91-5853-4b9f-a75c-101d75550a84\") " pod="openstack/dnsmasq-dns-754ff55b87-tpb84"
Jan 27 14:53:07 crc kubenswrapper[4698]: I0127 14:53:07.216112 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0a624b91-5853-4b9f-a75c-101d75550a84-ovsdbserver-nb\") pod \"dnsmasq-dns-754ff55b87-tpb84\" (UID: \"0a624b91-5853-4b9f-a75c-101d75550a84\") " pod="openstack/dnsmasq-dns-754ff55b87-tpb84"
Jan 27 14:53:07 crc kubenswrapper[4698]: I0127 14:53:07.216512 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0a624b91-5853-4b9f-a75c-101d75550a84-dns-svc\") pod \"dnsmasq-dns-754ff55b87-tpb84\" (UID: \"0a624b91-5853-4b9f-a75c-101d75550a84\") " pod="openstack/dnsmasq-dns-754ff55b87-tpb84"
Jan 27 14:53:07 crc kubenswrapper[4698]: I0127 14:53:07.216668 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/0a624b91-5853-4b9f-a75c-101d75550a84-dns-swift-storage-0\") pod \"dnsmasq-dns-754ff55b87-tpb84\" (UID: \"0a624b91-5853-4b9f-a75c-101d75550a84\") " pod="openstack/dnsmasq-dns-754ff55b87-tpb84"
Jan 27 14:53:07 crc kubenswrapper[4698]: I0127 14:53:07.216443 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0a624b91-5853-4b9f-a75c-101d75550a84-ovsdbserver-sb\") pod \"dnsmasq-dns-754ff55b87-tpb84\" (UID: \"0a624b91-5853-4b9f-a75c-101d75550a84\") " pod="openstack/dnsmasq-dns-754ff55b87-tpb84"
Jan 27 14:53:07 crc kubenswrapper[4698]: I0127 14:53:07.216999 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0a624b91-5853-4b9f-a75c-101d75550a84-config\") pod \"dnsmasq-dns-754ff55b87-tpb84\" (UID: \"0a624b91-5853-4b9f-a75c-101d75550a84\") " pod="openstack/dnsmasq-dns-754ff55b87-tpb84"
Jan 27 14:53:07 crc kubenswrapper[4698]: I0127 14:53:07.217712 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0a624b91-5853-4b9f-a75c-101d75550a84-ovsdbserver-nb\") pod \"dnsmasq-dns-754ff55b87-tpb84\" (UID: \"0a624b91-5853-4b9f-a75c-101d75550a84\") " pod="openstack/dnsmasq-dns-754ff55b87-tpb84"
Jan 27 14:53:07 crc kubenswrapper[4698]: I0127 14:53:07.217852 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0a624b91-5853-4b9f-a75c-101d75550a84-ovsdbserver-sb\") pod \"dnsmasq-dns-754ff55b87-tpb84\" (UID: \"0a624b91-5853-4b9f-a75c-101d75550a84\") " pod="openstack/dnsmasq-dns-754ff55b87-tpb84"
Jan 27 14:53:07 crc kubenswrapper[4698]: I0127 14:53:07.242100 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x49tr\" (UniqueName: \"kubernetes.io/projected/0a624b91-5853-4b9f-a75c-101d75550a84-kube-api-access-x49tr\") pod \"dnsmasq-dns-754ff55b87-tpb84\" (UID: \"0a624b91-5853-4b9f-a75c-101d75550a84\") " pod="openstack/dnsmasq-dns-754ff55b87-tpb84"
Jan 27 14:53:07 crc kubenswrapper[4698]: I0127 14:53:07.311997 4698 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-757b4bbb85-rxcnp" podUID="eb3c705e-7883-4e67-a66c-2b4120a30543" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.179:5353: connect: connection refused"
Jan 27 14:53:07 crc kubenswrapper[4698]: I0127 14:53:07.322541 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lvlzx\" (UniqueName: \"kubernetes.io/projected/75bcd64d-b81b-456e-b9e6-1f26a52942d9-kube-api-access-lvlzx\") pod \"cinder-api-0\" (UID: \"75bcd64d-b81b-456e-b9e6-1f26a52942d9\") " pod="openstack/cinder-api-0"
Jan 27 14:53:07 crc kubenswrapper[4698]: I0127 14:53:07.322626 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/75bcd64d-b81b-456e-b9e6-1f26a52942d9-config-data\") pod \"cinder-api-0\" (UID: \"75bcd64d-b81b-456e-b9e6-1f26a52942d9\") " pod="openstack/cinder-api-0"
Jan 27 14:53:07 crc kubenswrapper[4698]: I0127 14:53:07.322705 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/75bcd64d-b81b-456e-b9e6-1f26a52942d9-scripts\") pod \"cinder-api-0\" (UID: \"75bcd64d-b81b-456e-b9e6-1f26a52942d9\") " pod="openstack/cinder-api-0"
Jan 27 14:53:07 crc kubenswrapper[4698]: I0127 14:53:07.322727 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/75bcd64d-b81b-456e-b9e6-1f26a52942d9-etc-machine-id\") pod \"cinder-api-0\" (UID: \"75bcd64d-b81b-456e-b9e6-1f26a52942d9\") " pod="openstack/cinder-api-0"
Jan 27 14:53:07 crc kubenswrapper[4698]: I0127 14:53:07.322777 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/75bcd64d-b81b-456e-b9e6-1f26a52942d9-logs\") pod \"cinder-api-0\" (UID: \"75bcd64d-b81b-456e-b9e6-1f26a52942d9\") " pod="openstack/cinder-api-0"
Jan 27 14:53:07 crc kubenswrapper[4698]: I0127 14:53:07.322808 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/75bcd64d-b81b-456e-b9e6-1f26a52942d9-config-data-custom\") pod \"cinder-api-0\" (UID: \"75bcd64d-b81b-456e-b9e6-1f26a52942d9\") " pod="openstack/cinder-api-0"
Jan 27 14:53:07 crc kubenswrapper[4698]: I0127 14:53:07.322824 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/75bcd64d-b81b-456e-b9e6-1f26a52942d9-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"75bcd64d-b81b-456e-b9e6-1f26a52942d9\") " pod="openstack/cinder-api-0"
Jan 27 14:53:07 crc kubenswrapper[4698]: I0127 14:53:07.408596 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-754ff55b87-tpb84"
Jan 27 14:53:07 crc kubenswrapper[4698]: I0127 14:53:07.424449 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lvlzx\" (UniqueName: \"kubernetes.io/projected/75bcd64d-b81b-456e-b9e6-1f26a52942d9-kube-api-access-lvlzx\") pod \"cinder-api-0\" (UID: \"75bcd64d-b81b-456e-b9e6-1f26a52942d9\") " pod="openstack/cinder-api-0"
Jan 27 14:53:07 crc kubenswrapper[4698]: I0127 14:53:07.424550 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/75bcd64d-b81b-456e-b9e6-1f26a52942d9-config-data\") pod \"cinder-api-0\" (UID: \"75bcd64d-b81b-456e-b9e6-1f26a52942d9\") " pod="openstack/cinder-api-0"
Jan 27 14:53:07 crc kubenswrapper[4698]: I0127 14:53:07.424626 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/75bcd64d-b81b-456e-b9e6-1f26a52942d9-scripts\") pod \"cinder-api-0\" (UID: \"75bcd64d-b81b-456e-b9e6-1f26a52942d9\") " pod="openstack/cinder-api-0"
Jan 27 14:53:07 crc kubenswrapper[4698]: I0127 14:53:07.424681 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/75bcd64d-b81b-456e-b9e6-1f26a52942d9-etc-machine-id\") pod \"cinder-api-0\" (UID: \"75bcd64d-b81b-456e-b9e6-1f26a52942d9\") " pod="openstack/cinder-api-0"
Jan 27 14:53:07 crc kubenswrapper[4698]: I0127 14:53:07.424753 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/75bcd64d-b81b-456e-b9e6-1f26a52942d9-logs\") pod \"cinder-api-0\" (UID: \"75bcd64d-b81b-456e-b9e6-1f26a52942d9\") " pod="openstack/cinder-api-0"
Jan 27 14:53:07 crc kubenswrapper[4698]: I0127 14:53:07.424794 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/75bcd64d-b81b-456e-b9e6-1f26a52942d9-config-data-custom\") pod \"cinder-api-0\" (UID: \"75bcd64d-b81b-456e-b9e6-1f26a52942d9\") " pod="openstack/cinder-api-0"
Jan 27 14:53:07 crc kubenswrapper[4698]: I0127 14:53:07.424818 4698 reconciler_common.go:218]
"operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/75bcd64d-b81b-456e-b9e6-1f26a52942d9-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"75bcd64d-b81b-456e-b9e6-1f26a52942d9\") " pod="openstack/cinder-api-0" Jan 27 14:53:07 crc kubenswrapper[4698]: I0127 14:53:07.425674 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/75bcd64d-b81b-456e-b9e6-1f26a52942d9-logs\") pod \"cinder-api-0\" (UID: \"75bcd64d-b81b-456e-b9e6-1f26a52942d9\") " pod="openstack/cinder-api-0" Jan 27 14:53:07 crc kubenswrapper[4698]: I0127 14:53:07.425733 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/75bcd64d-b81b-456e-b9e6-1f26a52942d9-etc-machine-id\") pod \"cinder-api-0\" (UID: \"75bcd64d-b81b-456e-b9e6-1f26a52942d9\") " pod="openstack/cinder-api-0" Jan 27 14:53:07 crc kubenswrapper[4698]: I0127 14:53:07.432343 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/75bcd64d-b81b-456e-b9e6-1f26a52942d9-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"75bcd64d-b81b-456e-b9e6-1f26a52942d9\") " pod="openstack/cinder-api-0" Jan 27 14:53:07 crc kubenswrapper[4698]: I0127 14:53:07.437274 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/75bcd64d-b81b-456e-b9e6-1f26a52942d9-config-data-custom\") pod \"cinder-api-0\" (UID: \"75bcd64d-b81b-456e-b9e6-1f26a52942d9\") " pod="openstack/cinder-api-0" Jan 27 14:53:07 crc kubenswrapper[4698]: I0127 14:53:07.441420 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/75bcd64d-b81b-456e-b9e6-1f26a52942d9-config-data\") pod \"cinder-api-0\" (UID: \"75bcd64d-b81b-456e-b9e6-1f26a52942d9\") " pod="openstack/cinder-api-0" Jan 27 14:53:07 crc kubenswrapper[4698]: I0127 14:53:07.441848 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/75bcd64d-b81b-456e-b9e6-1f26a52942d9-scripts\") pod \"cinder-api-0\" (UID: \"75bcd64d-b81b-456e-b9e6-1f26a52942d9\") " pod="openstack/cinder-api-0" Jan 27 14:53:07 crc kubenswrapper[4698]: I0127 14:53:07.452619 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lvlzx\" (UniqueName: \"kubernetes.io/projected/75bcd64d-b81b-456e-b9e6-1f26a52942d9-kube-api-access-lvlzx\") pod \"cinder-api-0\" (UID: \"75bcd64d-b81b-456e-b9e6-1f26a52942d9\") " pod="openstack/cinder-api-0" Jan 27 14:53:07 crc kubenswrapper[4698]: I0127 14:53:07.491623 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Jan 27 14:53:07 crc kubenswrapper[4698]: I0127 14:53:07.541245 4698 generic.go:334] "Generic (PLEG): container finished" podID="eb3c705e-7883-4e67-a66c-2b4120a30543" containerID="1a7c5f9e925dc1b3241be99bb2e9f6e1908562dff026a19650c068e0a8b86a16" exitCode=0 Jan 27 14:53:07 crc kubenswrapper[4698]: I0127 14:53:07.541300 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-757b4bbb85-rxcnp" event={"ID":"eb3c705e-7883-4e67-a66c-2b4120a30543","Type":"ContainerDied","Data":"1a7c5f9e925dc1b3241be99bb2e9f6e1908562dff026a19650c068e0a8b86a16"} Jan 27 14:53:09 crc kubenswrapper[4698]: I0127 14:53:09.587377 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Jan 27 14:53:09 crc kubenswrapper[4698]: I0127 14:53:09.948089 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-proxy-6f99cfdc45-gkb5v"] Jan 27 14:53:09 crc kubenswrapper[4698]: I0127 14:53:09.949806 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-6f99cfdc45-gkb5v" Jan 27 14:53:09 crc kubenswrapper[4698]: I0127 14:53:09.952574 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Jan 27 14:53:09 crc kubenswrapper[4698]: I0127 14:53:09.952618 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-public-svc" Jan 27 14:53:09 crc kubenswrapper[4698]: I0127 14:53:09.958179 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-internal-svc" Jan 27 14:53:09 crc kubenswrapper[4698]: I0127 14:53:09.966021 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-6f99cfdc45-gkb5v"] Jan 27 14:53:10 crc kubenswrapper[4698]: I0127 14:53:10.001220 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/296c42dd-3876-4f34-9d1c-f0b1cc1b3303-config-data\") pod \"swift-proxy-6f99cfdc45-gkb5v\" (UID: \"296c42dd-3876-4f34-9d1c-f0b1cc1b3303\") " pod="openstack/swift-proxy-6f99cfdc45-gkb5v" Jan 27 14:53:10 crc kubenswrapper[4698]: I0127 14:53:10.001301 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/296c42dd-3876-4f34-9d1c-f0b1cc1b3303-combined-ca-bundle\") pod \"swift-proxy-6f99cfdc45-gkb5v\" (UID: \"296c42dd-3876-4f34-9d1c-f0b1cc1b3303\") " pod="openstack/swift-proxy-6f99cfdc45-gkb5v" Jan 27 14:53:10 crc kubenswrapper[4698]: I0127 14:53:10.001340 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jlc5t\" (UniqueName: \"kubernetes.io/projected/296c42dd-3876-4f34-9d1c-f0b1cc1b3303-kube-api-access-jlc5t\") pod \"swift-proxy-6f99cfdc45-gkb5v\" (UID: \"296c42dd-3876-4f34-9d1c-f0b1cc1b3303\") " pod="openstack/swift-proxy-6f99cfdc45-gkb5v" Jan 27 14:53:10 crc kubenswrapper[4698]: I0127 14:53:10.001390 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/296c42dd-3876-4f34-9d1c-f0b1cc1b3303-run-httpd\") pod \"swift-proxy-6f99cfdc45-gkb5v\" (UID: \"296c42dd-3876-4f34-9d1c-f0b1cc1b3303\") " pod="openstack/swift-proxy-6f99cfdc45-gkb5v" Jan 27 14:53:10 crc kubenswrapper[4698]: I0127 14:53:10.001414 4698 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/296c42dd-3876-4f34-9d1c-f0b1cc1b3303-internal-tls-certs\") pod \"swift-proxy-6f99cfdc45-gkb5v\" (UID: \"296c42dd-3876-4f34-9d1c-f0b1cc1b3303\") " pod="openstack/swift-proxy-6f99cfdc45-gkb5v" Jan 27 14:53:10 crc kubenswrapper[4698]: I0127 14:53:10.001456 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/296c42dd-3876-4f34-9d1c-f0b1cc1b3303-log-httpd\") pod \"swift-proxy-6f99cfdc45-gkb5v\" (UID: \"296c42dd-3876-4f34-9d1c-f0b1cc1b3303\") " pod="openstack/swift-proxy-6f99cfdc45-gkb5v" Jan 27 14:53:10 crc kubenswrapper[4698]: I0127 14:53:10.001499 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/296c42dd-3876-4f34-9d1c-f0b1cc1b3303-etc-swift\") pod \"swift-proxy-6f99cfdc45-gkb5v\" (UID: \"296c42dd-3876-4f34-9d1c-f0b1cc1b3303\") " pod="openstack/swift-proxy-6f99cfdc45-gkb5v" Jan 27 14:53:10 crc kubenswrapper[4698]: I0127 14:53:10.001547 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/296c42dd-3876-4f34-9d1c-f0b1cc1b3303-public-tls-certs\") pod \"swift-proxy-6f99cfdc45-gkb5v\" (UID: \"296c42dd-3876-4f34-9d1c-f0b1cc1b3303\") " pod="openstack/swift-proxy-6f99cfdc45-gkb5v" Jan 27 14:53:10 crc kubenswrapper[4698]: I0127 14:53:10.103321 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/296c42dd-3876-4f34-9d1c-f0b1cc1b3303-config-data\") pod \"swift-proxy-6f99cfdc45-gkb5v\" (UID: \"296c42dd-3876-4f34-9d1c-f0b1cc1b3303\") " pod="openstack/swift-proxy-6f99cfdc45-gkb5v" Jan 27 14:53:10 crc kubenswrapper[4698]: I0127 14:53:10.103423 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/296c42dd-3876-4f34-9d1c-f0b1cc1b3303-combined-ca-bundle\") pod \"swift-proxy-6f99cfdc45-gkb5v\" (UID: \"296c42dd-3876-4f34-9d1c-f0b1cc1b3303\") " pod="openstack/swift-proxy-6f99cfdc45-gkb5v" Jan 27 14:53:10 crc kubenswrapper[4698]: I0127 14:53:10.103470 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jlc5t\" (UniqueName: \"kubernetes.io/projected/296c42dd-3876-4f34-9d1c-f0b1cc1b3303-kube-api-access-jlc5t\") pod \"swift-proxy-6f99cfdc45-gkb5v\" (UID: \"296c42dd-3876-4f34-9d1c-f0b1cc1b3303\") " pod="openstack/swift-proxy-6f99cfdc45-gkb5v" Jan 27 14:53:10 crc kubenswrapper[4698]: I0127 14:53:10.103527 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/296c42dd-3876-4f34-9d1c-f0b1cc1b3303-run-httpd\") pod \"swift-proxy-6f99cfdc45-gkb5v\" (UID: \"296c42dd-3876-4f34-9d1c-f0b1cc1b3303\") " pod="openstack/swift-proxy-6f99cfdc45-gkb5v" Jan 27 14:53:10 crc kubenswrapper[4698]: I0127 14:53:10.103556 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/296c42dd-3876-4f34-9d1c-f0b1cc1b3303-internal-tls-certs\") pod \"swift-proxy-6f99cfdc45-gkb5v\" (UID: \"296c42dd-3876-4f34-9d1c-f0b1cc1b3303\") " pod="openstack/swift-proxy-6f99cfdc45-gkb5v" Jan 27 14:53:10 crc kubenswrapper[4698]: I0127 14:53:10.103603 4698 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/296c42dd-3876-4f34-9d1c-f0b1cc1b3303-log-httpd\") pod \"swift-proxy-6f99cfdc45-gkb5v\" (UID: \"296c42dd-3876-4f34-9d1c-f0b1cc1b3303\") " pod="openstack/swift-proxy-6f99cfdc45-gkb5v" Jan 27 14:53:10 crc kubenswrapper[4698]: I0127 14:53:10.103672 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/296c42dd-3876-4f34-9d1c-f0b1cc1b3303-etc-swift\") pod \"swift-proxy-6f99cfdc45-gkb5v\" (UID: \"296c42dd-3876-4f34-9d1c-f0b1cc1b3303\") " pod="openstack/swift-proxy-6f99cfdc45-gkb5v" Jan 27 14:53:10 crc kubenswrapper[4698]: I0127 14:53:10.103726 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/296c42dd-3876-4f34-9d1c-f0b1cc1b3303-public-tls-certs\") pod \"swift-proxy-6f99cfdc45-gkb5v\" (UID: \"296c42dd-3876-4f34-9d1c-f0b1cc1b3303\") " pod="openstack/swift-proxy-6f99cfdc45-gkb5v" Jan 27 14:53:10 crc kubenswrapper[4698]: I0127 14:53:10.104138 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/296c42dd-3876-4f34-9d1c-f0b1cc1b3303-run-httpd\") pod \"swift-proxy-6f99cfdc45-gkb5v\" (UID: \"296c42dd-3876-4f34-9d1c-f0b1cc1b3303\") " pod="openstack/swift-proxy-6f99cfdc45-gkb5v" Jan 27 14:53:10 crc kubenswrapper[4698]: I0127 14:53:10.104493 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/296c42dd-3876-4f34-9d1c-f0b1cc1b3303-log-httpd\") pod \"swift-proxy-6f99cfdc45-gkb5v\" (UID: \"296c42dd-3876-4f34-9d1c-f0b1cc1b3303\") " pod="openstack/swift-proxy-6f99cfdc45-gkb5v" Jan 27 14:53:10 crc kubenswrapper[4698]: I0127 14:53:10.123082 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/296c42dd-3876-4f34-9d1c-f0b1cc1b3303-combined-ca-bundle\") pod \"swift-proxy-6f99cfdc45-gkb5v\" (UID: \"296c42dd-3876-4f34-9d1c-f0b1cc1b3303\") " pod="openstack/swift-proxy-6f99cfdc45-gkb5v" Jan 27 14:53:10 crc kubenswrapper[4698]: I0127 14:53:10.123168 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/296c42dd-3876-4f34-9d1c-f0b1cc1b3303-internal-tls-certs\") pod \"swift-proxy-6f99cfdc45-gkb5v\" (UID: \"296c42dd-3876-4f34-9d1c-f0b1cc1b3303\") " pod="openstack/swift-proxy-6f99cfdc45-gkb5v" Jan 27 14:53:10 crc kubenswrapper[4698]: I0127 14:53:10.127741 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/296c42dd-3876-4f34-9d1c-f0b1cc1b3303-etc-swift\") pod \"swift-proxy-6f99cfdc45-gkb5v\" (UID: \"296c42dd-3876-4f34-9d1c-f0b1cc1b3303\") " pod="openstack/swift-proxy-6f99cfdc45-gkb5v" Jan 27 14:53:10 crc kubenswrapper[4698]: I0127 14:53:10.139024 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/296c42dd-3876-4f34-9d1c-f0b1cc1b3303-public-tls-certs\") pod \"swift-proxy-6f99cfdc45-gkb5v\" (UID: \"296c42dd-3876-4f34-9d1c-f0b1cc1b3303\") " pod="openstack/swift-proxy-6f99cfdc45-gkb5v" Jan 27 14:53:10 crc kubenswrapper[4698]: I0127 14:53:10.139299 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/296c42dd-3876-4f34-9d1c-f0b1cc1b3303-config-data\") pod \"swift-proxy-6f99cfdc45-gkb5v\" (UID: \"296c42dd-3876-4f34-9d1c-f0b1cc1b3303\") " pod="openstack/swift-proxy-6f99cfdc45-gkb5v" Jan 27 14:53:10 crc kubenswrapper[4698]: I0127 14:53:10.140839 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jlc5t\" (UniqueName: \"kubernetes.io/projected/296c42dd-3876-4f34-9d1c-f0b1cc1b3303-kube-api-access-jlc5t\") pod \"swift-proxy-6f99cfdc45-gkb5v\" (UID: \"296c42dd-3876-4f34-9d1c-f0b1cc1b3303\") " pod="openstack/swift-proxy-6f99cfdc45-gkb5v" Jan 27 14:53:10 crc kubenswrapper[4698]: I0127 14:53:10.205368 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-6dd4d766f4-p4fgg" Jan 27 14:53:10 crc kubenswrapper[4698]: I0127 14:53:10.319511 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-6f99cfdc45-gkb5v" Jan 27 14:53:10 crc kubenswrapper[4698]: I0127 14:53:10.406633 4698 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/watcher-decision-engine-0" Jan 27 14:53:10 crc kubenswrapper[4698]: I0127 14:53:10.406718 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-decision-engine-0" Jan 27 14:53:10 crc kubenswrapper[4698]: I0127 14:53:10.407494 4698 scope.go:117] "RemoveContainer" containerID="8da5bc0af3d507c438f5062981c57f9b3bc194f49ed6a075d61a10c7dba95272" Jan 27 14:53:10 crc kubenswrapper[4698]: I0127 14:53:10.426350 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Jan 27 14:53:11 crc kubenswrapper[4698]: I0127 14:53:11.271163 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-6dd4d766f4-p4fgg" Jan 27 14:53:12 crc kubenswrapper[4698]: I0127 14:53:12.077923 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 27 14:53:12 crc kubenswrapper[4698]: I0127 14:53:12.078224 4698 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="aa09210f-2d05-4dc9-bf03-3af614176a09" containerName="ceilometer-central-agent" containerID="cri-o://1ab95197cc99fbdad3e726bc7b972c4c3e64739f16d6449344c8acb9f11caf99" gracePeriod=30 Jan 27 14:53:12 crc kubenswrapper[4698]: I0127 14:53:12.078261 4698 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="aa09210f-2d05-4dc9-bf03-3af614176a09" containerName="proxy-httpd" containerID="cri-o://e180fc33b37136a8beea775412a972cdd73e909916fe36ab7f07411cf2f93635" gracePeriod=30 Jan 27 14:53:12 crc kubenswrapper[4698]: I0127 14:53:12.078311 4698 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="aa09210f-2d05-4dc9-bf03-3af614176a09" containerName="ceilometer-notification-agent" containerID="cri-o://6804ba764d4a6d87d037f1c26220e544d6cfd46db87688ebffb08cf729e0c954" gracePeriod=30 Jan 27 14:53:12 crc kubenswrapper[4698]: I0127 14:53:12.078339 4698 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="aa09210f-2d05-4dc9-bf03-3af614176a09" containerName="sg-core" containerID="cri-o://9939565094b72088e31edaea1834ee5f03f5bc9ad71838ac70b839170591e4d6" gracePeriod=30 Jan 27 14:53:12 crc kubenswrapper[4698]: I0127 14:53:12.311463 4698 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openstack/dnsmasq-dns-757b4bbb85-rxcnp" podUID="eb3c705e-7883-4e67-a66c-2b4120a30543" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.179:5353: connect: connection refused" Jan 27 14:53:12 crc kubenswrapper[4698]: I0127 14:53:12.549181 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-7bcc7f5b5b-nhf4c" Jan 27 14:53:12 crc kubenswrapper[4698]: I0127 14:53:12.630346 4698 generic.go:334] "Generic (PLEG): container finished" podID="aa09210f-2d05-4dc9-bf03-3af614176a09" containerID="e180fc33b37136a8beea775412a972cdd73e909916fe36ab7f07411cf2f93635" exitCode=0 Jan 27 14:53:12 crc kubenswrapper[4698]: I0127 14:53:12.631223 4698 generic.go:334] "Generic (PLEG): container finished" podID="aa09210f-2d05-4dc9-bf03-3af614176a09" containerID="9939565094b72088e31edaea1834ee5f03f5bc9ad71838ac70b839170591e4d6" exitCode=2 Jan 27 14:53:12 crc kubenswrapper[4698]: I0127 14:53:12.630459 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"aa09210f-2d05-4dc9-bf03-3af614176a09","Type":"ContainerDied","Data":"e180fc33b37136a8beea775412a972cdd73e909916fe36ab7f07411cf2f93635"} Jan 27 14:53:12 crc kubenswrapper[4698]: I0127 14:53:12.631298 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"aa09210f-2d05-4dc9-bf03-3af614176a09","Type":"ContainerDied","Data":"9939565094b72088e31edaea1834ee5f03f5bc9ad71838ac70b839170591e4d6"} Jan 27 14:53:12 crc kubenswrapper[4698]: I0127 14:53:12.802333 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-7bcc7f5b5b-nhf4c" Jan 27 14:53:12 crc kubenswrapper[4698]: I0127 14:53:12.895216 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-6dd4d766f4-p4fgg"] Jan 27 14:53:12 crc kubenswrapper[4698]: I0127 14:53:12.895497 4698 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-6dd4d766f4-p4fgg" podUID="ed44350c-d49e-427e-9aaf-b4d3fb49aee4" containerName="barbican-api-log" containerID="cri-o://f733025fdb94056d3759a11f9b172377baf6d7d42a3298eba530b1c095f22557" gracePeriod=30 Jan 27 14:53:12 crc kubenswrapper[4698]: I0127 14:53:12.896172 4698 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-6dd4d766f4-p4fgg" podUID="ed44350c-d49e-427e-9aaf-b4d3fb49aee4" containerName="barbican-api" containerID="cri-o://8ee235ed10c9edc8df18e26672b2303e97d289db6177997a07c52c720d1b7a2a" gracePeriod=30 Jan 27 14:53:12 crc kubenswrapper[4698]: I0127 14:53:12.918436 4698 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-6dd4d766f4-p4fgg" podUID="ed44350c-d49e-427e-9aaf-b4d3fb49aee4" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.180:9311/healthcheck\": EOF" Jan 27 14:53:13 crc kubenswrapper[4698]: I0127 14:53:13.645182 4698 generic.go:334] "Generic (PLEG): container finished" podID="aa09210f-2d05-4dc9-bf03-3af614176a09" containerID="1ab95197cc99fbdad3e726bc7b972c4c3e64739f16d6449344c8acb9f11caf99" exitCode=0 Jan 27 14:53:13 crc kubenswrapper[4698]: I0127 14:53:13.645271 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"aa09210f-2d05-4dc9-bf03-3af614176a09","Type":"ContainerDied","Data":"1ab95197cc99fbdad3e726bc7b972c4c3e64739f16d6449344c8acb9f11caf99"} Jan 27 14:53:13 crc kubenswrapper[4698]: I0127 14:53:13.648354 4698 generic.go:334] "Generic (PLEG): 
container finished" podID="ed44350c-d49e-427e-9aaf-b4d3fb49aee4" containerID="f733025fdb94056d3759a11f9b172377baf6d7d42a3298eba530b1c095f22557" exitCode=143 Jan 27 14:53:13 crc kubenswrapper[4698]: I0127 14:53:13.648445 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6dd4d766f4-p4fgg" event={"ID":"ed44350c-d49e-427e-9aaf-b4d3fb49aee4","Type":"ContainerDied","Data":"f733025fdb94056d3759a11f9b172377baf6d7d42a3298eba530b1c095f22557"} Jan 27 14:53:14 crc kubenswrapper[4698]: I0127 14:53:14.662031 4698 generic.go:334] "Generic (PLEG): container finished" podID="aa09210f-2d05-4dc9-bf03-3af614176a09" containerID="6804ba764d4a6d87d037f1c26220e544d6cfd46db87688ebffb08cf729e0c954" exitCode=0 Jan 27 14:53:14 crc kubenswrapper[4698]: I0127 14:53:14.662107 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"aa09210f-2d05-4dc9-bf03-3af614176a09","Type":"ContainerDied","Data":"6804ba764d4a6d87d037f1c26220e544d6cfd46db87688ebffb08cf729e0c954"} Jan 27 14:53:15 crc kubenswrapper[4698]: I0127 14:53:15.779278 4698 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-6dd4d766f4-p4fgg" podUID="ed44350c-d49e-427e-9aaf-b4d3fb49aee4" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.180:9311/healthcheck\": read tcp 10.217.0.2:39606->10.217.0.180:9311: read: connection reset by peer" Jan 27 14:53:16 crc kubenswrapper[4698]: I0127 14:53:16.695356 4698 generic.go:334] "Generic (PLEG): container finished" podID="ed44350c-d49e-427e-9aaf-b4d3fb49aee4" containerID="8ee235ed10c9edc8df18e26672b2303e97d289db6177997a07c52c720d1b7a2a" exitCode=0 Jan 27 14:53:16 crc kubenswrapper[4698]: I0127 14:53:16.695541 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6dd4d766f4-p4fgg" event={"ID":"ed44350c-d49e-427e-9aaf-b4d3fb49aee4","Type":"ContainerDied","Data":"8ee235ed10c9edc8df18e26672b2303e97d289db6177997a07c52c720d1b7a2a"} Jan 27 14:53:17 crc kubenswrapper[4698]: E0127 14:53:17.008985 4698 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.111:5001/podified-master-centos10/openstack-openstackclient:watcher_latest" Jan 27 14:53:17 crc kubenswrapper[4698]: E0127 14:53:17.009408 4698 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.111:5001/podified-master-centos10/openstack-openstackclient:watcher_latest" Jan 27 14:53:17 crc kubenswrapper[4698]: E0127 14:53:17.009555 4698 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:openstackclient,Image:38.102.83.111:5001/podified-master-centos10/openstack-openstackclient:watcher_latest,Command:[/bin/sleep],Args:[infinity],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n5ffhf9h5h657h546h665h597h5fbh5cbhc5hb9h98hddh547h5b6h7chb5h664h8h64fhf7h5chd4h589hc7h676h585h5b8h5ffh657h584h5b6q,ValueFrom:nil,},EnvVar{Name:OS_CLOUD,Value:default,ValueFrom:nil,},EnvVar{Name:PROMETHEUS_CA_CERT,Value:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,ValueFrom:nil,},EnvVar{Name:PROMETHEUS_HOST,Value:metric-storage-prometheus.openstack.svc,ValueFrom:nil,},EnvVar{Name:PROMETHEUS_PORT,Value:9090,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:openstack-config,ReadOnly:false,MountPath:/home/cloud-admin/.config/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config-secret,ReadOnly:false,MountPath:/home/cloud-admin/.config/openstack/secure.yaml,SubPath:secure.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config-secret,ReadOnly:false,MountPath:/home/cloud-admin/cloudrc,SubPath:cloudrc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zndr9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42401,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*42401,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod openstackclient_openstack(a11616dd-8398-4c71-829f-1a389df9495f): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 27 14:53:17 crc kubenswrapper[4698]: E0127 14:53:17.010808 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"openstackclient\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/openstackclient" podUID="a11616dd-8398-4c71-829f-1a389df9495f" Jan 27 14:53:17 crc kubenswrapper[4698]: I0127 14:53:17.311084 4698 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-757b4bbb85-rxcnp" podUID="eb3c705e-7883-4e67-a66c-2b4120a30543" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.179:5353: connect: connection refused" Jan 27 14:53:17 crc kubenswrapper[4698]: I0127 14:53:17.312770 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-757b4bbb85-rxcnp" Jan 27 14:53:17 crc kubenswrapper[4698]: I0127 14:53:17.643272 4698 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 27 14:53:17 crc kubenswrapper[4698]: I0127 14:53:17.712448 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"aa09210f-2d05-4dc9-bf03-3af614176a09","Type":"ContainerDied","Data":"f4aee7d0377a83a7f0085cf031e2c8a10942389e73769bba15022bcef67cdad4"} Jan 27 14:53:17 crc kubenswrapper[4698]: I0127 14:53:17.712516 4698 scope.go:117] "RemoveContainer" containerID="e180fc33b37136a8beea775412a972cdd73e909916fe36ab7f07411cf2f93635" Jan 27 14:53:17 crc kubenswrapper[4698]: I0127 14:53:17.712461 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 27 14:53:17 crc kubenswrapper[4698]: I0127 14:53:17.733569 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"4129eb47-beba-4bec-8cb2-59818e8908a5","Type":"ContainerStarted","Data":"e1553d2d02cb6a0668c9ce9cccabf0224f2fe83aa565ec195633e99db3313307"} Jan 27 14:53:17 crc kubenswrapper[4698]: E0127 14:53:17.738719 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"openstackclient\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.111:5001/podified-master-centos10/openstack-openstackclient:watcher_latest\\\"\"" pod="openstack/openstackclient" podUID="a11616dd-8398-4c71-829f-1a389df9495f" Jan 27 14:53:17 crc kubenswrapper[4698]: I0127 14:53:17.763855 4698 scope.go:117] "RemoveContainer" containerID="9939565094b72088e31edaea1834ee5f03f5bc9ad71838ac70b839170591e4d6" Jan 27 14:53:17 crc kubenswrapper[4698]: I0127 14:53:17.774825 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/aa09210f-2d05-4dc9-bf03-3af614176a09-sg-core-conf-yaml\") pod \"aa09210f-2d05-4dc9-bf03-3af614176a09\" (UID: \"aa09210f-2d05-4dc9-bf03-3af614176a09\") " Jan 27 14:53:17 crc kubenswrapper[4698]: I0127 14:53:17.774985 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/aa09210f-2d05-4dc9-bf03-3af614176a09-scripts\") pod \"aa09210f-2d05-4dc9-bf03-3af614176a09\" (UID: \"aa09210f-2d05-4dc9-bf03-3af614176a09\") " Jan 27 14:53:17 crc kubenswrapper[4698]: I0127 14:53:17.775037 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aa09210f-2d05-4dc9-bf03-3af614176a09-combined-ca-bundle\") pod \"aa09210f-2d05-4dc9-bf03-3af614176a09\" (UID: \"aa09210f-2d05-4dc9-bf03-3af614176a09\") " Jan 27 14:53:17 crc kubenswrapper[4698]: I0127 14:53:17.775128 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aa09210f-2d05-4dc9-bf03-3af614176a09-config-data\") pod \"aa09210f-2d05-4dc9-bf03-3af614176a09\" (UID: \"aa09210f-2d05-4dc9-bf03-3af614176a09\") " Jan 27 14:53:17 crc kubenswrapper[4698]: I0127 14:53:17.775198 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/aa09210f-2d05-4dc9-bf03-3af614176a09-run-httpd\") pod \"aa09210f-2d05-4dc9-bf03-3af614176a09\" (UID: \"aa09210f-2d05-4dc9-bf03-3af614176a09\") " Jan 27 14:53:17 crc kubenswrapper[4698]: I0127 14:53:17.775273 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/aa09210f-2d05-4dc9-bf03-3af614176a09-log-httpd\") pod \"aa09210f-2d05-4dc9-bf03-3af614176a09\" (UID: \"aa09210f-2d05-4dc9-bf03-3af614176a09\") " Jan 27 14:53:17 crc kubenswrapper[4698]: I0127 14:53:17.775308 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8fvtv\" (UniqueName: \"kubernetes.io/projected/aa09210f-2d05-4dc9-bf03-3af614176a09-kube-api-access-8fvtv\") pod \"aa09210f-2d05-4dc9-bf03-3af614176a09\" (UID: \"aa09210f-2d05-4dc9-bf03-3af614176a09\") " Jan 27 14:53:17 crc kubenswrapper[4698]: I0127 14:53:17.779702 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/aa09210f-2d05-4dc9-bf03-3af614176a09-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "aa09210f-2d05-4dc9-bf03-3af614176a09" (UID: "aa09210f-2d05-4dc9-bf03-3af614176a09"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 14:53:17 crc kubenswrapper[4698]: I0127 14:53:17.796417 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/aa09210f-2d05-4dc9-bf03-3af614176a09-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "aa09210f-2d05-4dc9-bf03-3af614176a09" (UID: "aa09210f-2d05-4dc9-bf03-3af614176a09"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 14:53:17 crc kubenswrapper[4698]: I0127 14:53:17.799419 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aa09210f-2d05-4dc9-bf03-3af614176a09-scripts" (OuterVolumeSpecName: "scripts") pod "aa09210f-2d05-4dc9-bf03-3af614176a09" (UID: "aa09210f-2d05-4dc9-bf03-3af614176a09"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:53:17 crc kubenswrapper[4698]: I0127 14:53:17.799477 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aa09210f-2d05-4dc9-bf03-3af614176a09-kube-api-access-8fvtv" (OuterVolumeSpecName: "kube-api-access-8fvtv") pod "aa09210f-2d05-4dc9-bf03-3af614176a09" (UID: "aa09210f-2d05-4dc9-bf03-3af614176a09"). InnerVolumeSpecName "kube-api-access-8fvtv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:53:17 crc kubenswrapper[4698]: I0127 14:53:17.828300 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aa09210f-2d05-4dc9-bf03-3af614176a09-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "aa09210f-2d05-4dc9-bf03-3af614176a09" (UID: "aa09210f-2d05-4dc9-bf03-3af614176a09"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:53:17 crc kubenswrapper[4698]: I0127 14:53:17.877627 4698 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/aa09210f-2d05-4dc9-bf03-3af614176a09-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 27 14:53:17 crc kubenswrapper[4698]: I0127 14:53:17.877675 4698 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/aa09210f-2d05-4dc9-bf03-3af614176a09-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 14:53:17 crc kubenswrapper[4698]: I0127 14:53:17.877683 4698 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/aa09210f-2d05-4dc9-bf03-3af614176a09-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 27 14:53:17 crc kubenswrapper[4698]: I0127 14:53:17.877692 4698 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/aa09210f-2d05-4dc9-bf03-3af614176a09-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 27 14:53:17 crc kubenswrapper[4698]: I0127 14:53:17.877700 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8fvtv\" (UniqueName: \"kubernetes.io/projected/aa09210f-2d05-4dc9-bf03-3af614176a09-kube-api-access-8fvtv\") on node \"crc\" DevicePath \"\"" Jan 27 14:53:17 crc kubenswrapper[4698]: I0127 14:53:17.989387 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aa09210f-2d05-4dc9-bf03-3af614176a09-config-data" (OuterVolumeSpecName: "config-data") pod "aa09210f-2d05-4dc9-bf03-3af614176a09" (UID: "aa09210f-2d05-4dc9-bf03-3af614176a09"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:53:17 crc kubenswrapper[4698]: I0127 14:53:17.991548 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aa09210f-2d05-4dc9-bf03-3af614176a09-config-data\") pod \"aa09210f-2d05-4dc9-bf03-3af614176a09\" (UID: \"aa09210f-2d05-4dc9-bf03-3af614176a09\") " Jan 27 14:53:17 crc kubenswrapper[4698]: W0127 14:53:17.996125 4698 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/aa09210f-2d05-4dc9-bf03-3af614176a09/volumes/kubernetes.io~secret/config-data Jan 27 14:53:17 crc kubenswrapper[4698]: I0127 14:53:17.996155 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aa09210f-2d05-4dc9-bf03-3af614176a09-config-data" (OuterVolumeSpecName: "config-data") pod "aa09210f-2d05-4dc9-bf03-3af614176a09" (UID: "aa09210f-2d05-4dc9-bf03-3af614176a09"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:53:17 crc kubenswrapper[4698]: I0127 14:53:17.998022 4698 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aa09210f-2d05-4dc9-bf03-3af614176a09-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 14:53:18 crc kubenswrapper[4698]: I0127 14:53:18.011907 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Jan 27 14:53:18 crc kubenswrapper[4698]: I0127 14:53:18.039475 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aa09210f-2d05-4dc9-bf03-3af614176a09-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "aa09210f-2d05-4dc9-bf03-3af614176a09" (UID: "aa09210f-2d05-4dc9-bf03-3af614176a09"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:53:18 crc kubenswrapper[4698]: I0127 14:53:18.056697 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 27 14:53:18 crc kubenswrapper[4698]: I0127 14:53:18.067193 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-757b4bbb85-rxcnp" Jan 27 14:53:18 crc kubenswrapper[4698]: I0127 14:53:18.074160 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-6dd4d766f4-p4fgg" Jan 27 14:53:18 crc kubenswrapper[4698]: I0127 14:53:18.102772 4698 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aa09210f-2d05-4dc9-bf03-3af614176a09-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 14:53:18 crc kubenswrapper[4698]: I0127 14:53:18.108588 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-754ff55b87-tpb84"] Jan 27 14:53:18 crc kubenswrapper[4698]: I0127 14:53:18.140489 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-6f99cfdc45-gkb5v"] Jan 27 14:53:18 crc kubenswrapper[4698]: I0127 14:53:18.155291 4698 scope.go:117] "RemoveContainer" containerID="6804ba764d4a6d87d037f1c26220e544d6cfd46db87688ebffb08cf729e0c954" Jan 27 14:53:18 crc kubenswrapper[4698]: W0127 14:53:18.157999 4698 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod296c42dd_3876_4f34_9d1c_f0b1cc1b3303.slice/crio-2077b1f6f20ab988009225e72d29cc4018cbd907b68123015a437e233f372051 WatchSource:0}: Error finding container 2077b1f6f20ab988009225e72d29cc4018cbd907b68123015a437e233f372051: Status 404 returned error can't find the container with id 2077b1f6f20ab988009225e72d29cc4018cbd907b68123015a437e233f372051 Jan 27 14:53:18 crc kubenswrapper[4698]: W0127 14:53:18.177170 4698 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0a624b91_5853_4b9f_a75c_101d75550a84.slice/crio-14c0ca6312a7059964455b887e2d7c3d515751d00bf928362b946b3050d8396b WatchSource:0}: Error finding container 14c0ca6312a7059964455b887e2d7c3d515751d00bf928362b946b3050d8396b: Status 404 returned error can't find the container with id 14c0ca6312a7059964455b887e2d7c3d515751d00bf928362b946b3050d8396b Jan 27 14:53:18 crc kubenswrapper[4698]: I0127 14:53:18.203534 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: 
\"kubernetes.io/secret/ed44350c-d49e-427e-9aaf-b4d3fb49aee4-config-data-custom\") pod \"ed44350c-d49e-427e-9aaf-b4d3fb49aee4\" (UID: \"ed44350c-d49e-427e-9aaf-b4d3fb49aee4\") " Jan 27 14:53:18 crc kubenswrapper[4698]: I0127 14:53:18.203684 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x597t\" (UniqueName: \"kubernetes.io/projected/eb3c705e-7883-4e67-a66c-2b4120a30543-kube-api-access-x597t\") pod \"eb3c705e-7883-4e67-a66c-2b4120a30543\" (UID: \"eb3c705e-7883-4e67-a66c-2b4120a30543\") " Jan 27 14:53:18 crc kubenswrapper[4698]: I0127 14:53:18.203718 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/eb3c705e-7883-4e67-a66c-2b4120a30543-dns-svc\") pod \"eb3c705e-7883-4e67-a66c-2b4120a30543\" (UID: \"eb3c705e-7883-4e67-a66c-2b4120a30543\") " Jan 27 14:53:18 crc kubenswrapper[4698]: I0127 14:53:18.203803 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tqqgk\" (UniqueName: \"kubernetes.io/projected/ed44350c-d49e-427e-9aaf-b4d3fb49aee4-kube-api-access-tqqgk\") pod \"ed44350c-d49e-427e-9aaf-b4d3fb49aee4\" (UID: \"ed44350c-d49e-427e-9aaf-b4d3fb49aee4\") " Jan 27 14:53:18 crc kubenswrapper[4698]: I0127 14:53:18.203840 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/eb3c705e-7883-4e67-a66c-2b4120a30543-ovsdbserver-sb\") pod \"eb3c705e-7883-4e67-a66c-2b4120a30543\" (UID: \"eb3c705e-7883-4e67-a66c-2b4120a30543\") " Jan 27 14:53:18 crc kubenswrapper[4698]: I0127 14:53:18.203855 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ed44350c-d49e-427e-9aaf-b4d3fb49aee4-config-data\") pod \"ed44350c-d49e-427e-9aaf-b4d3fb49aee4\" (UID: \"ed44350c-d49e-427e-9aaf-b4d3fb49aee4\") " Jan 27 14:53:18 crc kubenswrapper[4698]: I0127 14:53:18.203873 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/eb3c705e-7883-4e67-a66c-2b4120a30543-dns-swift-storage-0\") pod \"eb3c705e-7883-4e67-a66c-2b4120a30543\" (UID: \"eb3c705e-7883-4e67-a66c-2b4120a30543\") " Jan 27 14:53:18 crc kubenswrapper[4698]: I0127 14:53:18.203904 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eb3c705e-7883-4e67-a66c-2b4120a30543-config\") pod \"eb3c705e-7883-4e67-a66c-2b4120a30543\" (UID: \"eb3c705e-7883-4e67-a66c-2b4120a30543\") " Jan 27 14:53:18 crc kubenswrapper[4698]: I0127 14:53:18.203986 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ed44350c-d49e-427e-9aaf-b4d3fb49aee4-combined-ca-bundle\") pod \"ed44350c-d49e-427e-9aaf-b4d3fb49aee4\" (UID: \"ed44350c-d49e-427e-9aaf-b4d3fb49aee4\") " Jan 27 14:53:18 crc kubenswrapper[4698]: I0127 14:53:18.204035 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ed44350c-d49e-427e-9aaf-b4d3fb49aee4-logs\") pod \"ed44350c-d49e-427e-9aaf-b4d3fb49aee4\" (UID: \"ed44350c-d49e-427e-9aaf-b4d3fb49aee4\") " Jan 27 14:53:18 crc kubenswrapper[4698]: I0127 14:53:18.204063 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: 
\"kubernetes.io/configmap/eb3c705e-7883-4e67-a66c-2b4120a30543-ovsdbserver-nb\") pod \"eb3c705e-7883-4e67-a66c-2b4120a30543\" (UID: \"eb3c705e-7883-4e67-a66c-2b4120a30543\") " Jan 27 14:53:18 crc kubenswrapper[4698]: I0127 14:53:18.212229 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ed44350c-d49e-427e-9aaf-b4d3fb49aee4-logs" (OuterVolumeSpecName: "logs") pod "ed44350c-d49e-427e-9aaf-b4d3fb49aee4" (UID: "ed44350c-d49e-427e-9aaf-b4d3fb49aee4"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 14:53:18 crc kubenswrapper[4698]: I0127 14:53:18.215157 4698 scope.go:117] "RemoveContainer" containerID="1ab95197cc99fbdad3e726bc7b972c4c3e64739f16d6449344c8acb9f11caf99" Jan 27 14:53:18 crc kubenswrapper[4698]: I0127 14:53:18.218367 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eb3c705e-7883-4e67-a66c-2b4120a30543-kube-api-access-x597t" (OuterVolumeSpecName: "kube-api-access-x597t") pod "eb3c705e-7883-4e67-a66c-2b4120a30543" (UID: "eb3c705e-7883-4e67-a66c-2b4120a30543"). InnerVolumeSpecName "kube-api-access-x597t". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:53:18 crc kubenswrapper[4698]: I0127 14:53:18.223866 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ed44350c-d49e-427e-9aaf-b4d3fb49aee4-kube-api-access-tqqgk" (OuterVolumeSpecName: "kube-api-access-tqqgk") pod "ed44350c-d49e-427e-9aaf-b4d3fb49aee4" (UID: "ed44350c-d49e-427e-9aaf-b4d3fb49aee4"). InnerVolumeSpecName "kube-api-access-tqqgk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:53:18 crc kubenswrapper[4698]: I0127 14:53:18.242133 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ed44350c-d49e-427e-9aaf-b4d3fb49aee4-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "ed44350c-d49e-427e-9aaf-b4d3fb49aee4" (UID: "ed44350c-d49e-427e-9aaf-b4d3fb49aee4"). InnerVolumeSpecName "config-data-custom". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:53:18 crc kubenswrapper[4698]: I0127 14:53:18.310321 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x597t\" (UniqueName: \"kubernetes.io/projected/eb3c705e-7883-4e67-a66c-2b4120a30543-kube-api-access-x597t\") on node \"crc\" DevicePath \"\"" Jan 27 14:53:18 crc kubenswrapper[4698]: I0127 14:53:18.310671 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tqqgk\" (UniqueName: \"kubernetes.io/projected/ed44350c-d49e-427e-9aaf-b4d3fb49aee4-kube-api-access-tqqgk\") on node \"crc\" DevicePath \"\"" Jan 27 14:53:18 crc kubenswrapper[4698]: I0127 14:53:18.310690 4698 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ed44350c-d49e-427e-9aaf-b4d3fb49aee4-logs\") on node \"crc\" DevicePath \"\"" Jan 27 14:53:18 crc kubenswrapper[4698]: I0127 14:53:18.310704 4698 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ed44350c-d49e-427e-9aaf-b4d3fb49aee4-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 27 14:53:18 crc kubenswrapper[4698]: I0127 14:53:18.337771 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ed44350c-d49e-427e-9aaf-b4d3fb49aee4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ed44350c-d49e-427e-9aaf-b4d3fb49aee4" (UID: "ed44350c-d49e-427e-9aaf-b4d3fb49aee4"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:53:18 crc kubenswrapper[4698]: I0127 14:53:18.370273 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eb3c705e-7883-4e67-a66c-2b4120a30543-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "eb3c705e-7883-4e67-a66c-2b4120a30543" (UID: "eb3c705e-7883-4e67-a66c-2b4120a30543"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:53:18 crc kubenswrapper[4698]: I0127 14:53:18.377457 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eb3c705e-7883-4e67-a66c-2b4120a30543-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "eb3c705e-7883-4e67-a66c-2b4120a30543" (UID: "eb3c705e-7883-4e67-a66c-2b4120a30543"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:53:18 crc kubenswrapper[4698]: I0127 14:53:18.382762 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 27 14:53:18 crc kubenswrapper[4698]: I0127 14:53:18.395836 4698 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 27 14:53:18 crc kubenswrapper[4698]: I0127 14:53:18.398917 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eb3c705e-7883-4e67-a66c-2b4120a30543-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "eb3c705e-7883-4e67-a66c-2b4120a30543" (UID: "eb3c705e-7883-4e67-a66c-2b4120a30543"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:53:18 crc kubenswrapper[4698]: I0127 14:53:18.400908 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eb3c705e-7883-4e67-a66c-2b4120a30543-config" (OuterVolumeSpecName: "config") pod "eb3c705e-7883-4e67-a66c-2b4120a30543" (UID: "eb3c705e-7883-4e67-a66c-2b4120a30543"). 
InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:53:18 crc kubenswrapper[4698]: I0127 14:53:18.403986 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eb3c705e-7883-4e67-a66c-2b4120a30543-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "eb3c705e-7883-4e67-a66c-2b4120a30543" (UID: "eb3c705e-7883-4e67-a66c-2b4120a30543"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:53:18 crc kubenswrapper[4698]: I0127 14:53:18.409874 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 27 14:53:18 crc kubenswrapper[4698]: E0127 14:53:18.410491 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aa09210f-2d05-4dc9-bf03-3af614176a09" containerName="ceilometer-notification-agent" Jan 27 14:53:18 crc kubenswrapper[4698]: I0127 14:53:18.410521 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="aa09210f-2d05-4dc9-bf03-3af614176a09" containerName="ceilometer-notification-agent" Jan 27 14:53:18 crc kubenswrapper[4698]: E0127 14:53:18.410542 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eb3c705e-7883-4e67-a66c-2b4120a30543" containerName="init" Jan 27 14:53:18 crc kubenswrapper[4698]: I0127 14:53:18.410551 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="eb3c705e-7883-4e67-a66c-2b4120a30543" containerName="init" Jan 27 14:53:18 crc kubenswrapper[4698]: E0127 14:53:18.410569 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eb3c705e-7883-4e67-a66c-2b4120a30543" containerName="dnsmasq-dns" Jan 27 14:53:18 crc kubenswrapper[4698]: I0127 14:53:18.410577 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="eb3c705e-7883-4e67-a66c-2b4120a30543" containerName="dnsmasq-dns" Jan 27 14:53:18 crc kubenswrapper[4698]: E0127 14:53:18.410593 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aa09210f-2d05-4dc9-bf03-3af614176a09" containerName="proxy-httpd" Jan 27 14:53:18 crc kubenswrapper[4698]: I0127 14:53:18.410600 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="aa09210f-2d05-4dc9-bf03-3af614176a09" containerName="proxy-httpd" Jan 27 14:53:18 crc kubenswrapper[4698]: E0127 14:53:18.410611 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ed44350c-d49e-427e-9aaf-b4d3fb49aee4" containerName="barbican-api" Jan 27 14:53:18 crc kubenswrapper[4698]: I0127 14:53:18.410618 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="ed44350c-d49e-427e-9aaf-b4d3fb49aee4" containerName="barbican-api" Jan 27 14:53:18 crc kubenswrapper[4698]: E0127 14:53:18.410665 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aa09210f-2d05-4dc9-bf03-3af614176a09" containerName="sg-core" Jan 27 14:53:18 crc kubenswrapper[4698]: I0127 14:53:18.410675 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="aa09210f-2d05-4dc9-bf03-3af614176a09" containerName="sg-core" Jan 27 14:53:18 crc kubenswrapper[4698]: E0127 14:53:18.410690 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aa09210f-2d05-4dc9-bf03-3af614176a09" containerName="ceilometer-central-agent" Jan 27 14:53:18 crc kubenswrapper[4698]: I0127 14:53:18.410697 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="aa09210f-2d05-4dc9-bf03-3af614176a09" containerName="ceilometer-central-agent" Jan 27 14:53:18 crc kubenswrapper[4698]: E0127 14:53:18.410710 4698 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="ed44350c-d49e-427e-9aaf-b4d3fb49aee4" containerName="barbican-api-log" Jan 27 14:53:18 crc kubenswrapper[4698]: I0127 14:53:18.410717 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="ed44350c-d49e-427e-9aaf-b4d3fb49aee4" containerName="barbican-api-log" Jan 27 14:53:18 crc kubenswrapper[4698]: I0127 14:53:18.410951 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="aa09210f-2d05-4dc9-bf03-3af614176a09" containerName="ceilometer-notification-agent" Jan 27 14:53:18 crc kubenswrapper[4698]: I0127 14:53:18.410972 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="ed44350c-d49e-427e-9aaf-b4d3fb49aee4" containerName="barbican-api-log" Jan 27 14:53:18 crc kubenswrapper[4698]: I0127 14:53:18.410988 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="aa09210f-2d05-4dc9-bf03-3af614176a09" containerName="proxy-httpd" Jan 27 14:53:18 crc kubenswrapper[4698]: I0127 14:53:18.411000 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="aa09210f-2d05-4dc9-bf03-3af614176a09" containerName="ceilometer-central-agent" Jan 27 14:53:18 crc kubenswrapper[4698]: I0127 14:53:18.411015 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="eb3c705e-7883-4e67-a66c-2b4120a30543" containerName="dnsmasq-dns" Jan 27 14:53:18 crc kubenswrapper[4698]: I0127 14:53:18.411028 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="aa09210f-2d05-4dc9-bf03-3af614176a09" containerName="sg-core" Jan 27 14:53:18 crc kubenswrapper[4698]: I0127 14:53:18.411037 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="ed44350c-d49e-427e-9aaf-b4d3fb49aee4" containerName="barbican-api" Jan 27 14:53:18 crc kubenswrapper[4698]: I0127 14:53:18.413130 4698 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/eb3c705e-7883-4e67-a66c-2b4120a30543-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 27 14:53:18 crc kubenswrapper[4698]: I0127 14:53:18.413165 4698 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/eb3c705e-7883-4e67-a66c-2b4120a30543-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 27 14:53:18 crc kubenswrapper[4698]: I0127 14:53:18.413177 4698 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/eb3c705e-7883-4e67-a66c-2b4120a30543-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 27 14:53:18 crc kubenswrapper[4698]: I0127 14:53:18.413189 4698 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eb3c705e-7883-4e67-a66c-2b4120a30543-config\") on node \"crc\" DevicePath \"\"" Jan 27 14:53:18 crc kubenswrapper[4698]: I0127 14:53:18.413201 4698 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ed44350c-d49e-427e-9aaf-b4d3fb49aee4-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 14:53:18 crc kubenswrapper[4698]: I0127 14:53:18.413212 4698 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/eb3c705e-7883-4e67-a66c-2b4120a30543-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 27 14:53:18 crc kubenswrapper[4698]: I0127 14:53:18.416279 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 27 14:53:18 crc kubenswrapper[4698]: I0127 14:53:18.419961 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 27 14:53:18 crc kubenswrapper[4698]: I0127 14:53:18.420781 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ed44350c-d49e-427e-9aaf-b4d3fb49aee4-config-data" (OuterVolumeSpecName: "config-data") pod "ed44350c-d49e-427e-9aaf-b4d3fb49aee4" (UID: "ed44350c-d49e-427e-9aaf-b4d3fb49aee4"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:53:18 crc kubenswrapper[4698]: I0127 14:53:18.421064 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 27 14:53:18 crc kubenswrapper[4698]: I0127 14:53:18.429584 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 27 14:53:18 crc kubenswrapper[4698]: I0127 14:53:18.515000 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/879f8b54-c8c9-4cbd-8b15-075571fa4bfb-config-data\") pod \"ceilometer-0\" (UID: \"879f8b54-c8c9-4cbd-8b15-075571fa4bfb\") " pod="openstack/ceilometer-0" Jan 27 14:53:18 crc kubenswrapper[4698]: I0127 14:53:18.515072 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/879f8b54-c8c9-4cbd-8b15-075571fa4bfb-run-httpd\") pod \"ceilometer-0\" (UID: \"879f8b54-c8c9-4cbd-8b15-075571fa4bfb\") " pod="openstack/ceilometer-0" Jan 27 14:53:18 crc kubenswrapper[4698]: I0127 14:53:18.515116 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/879f8b54-c8c9-4cbd-8b15-075571fa4bfb-scripts\") pod \"ceilometer-0\" (UID: \"879f8b54-c8c9-4cbd-8b15-075571fa4bfb\") " pod="openstack/ceilometer-0" Jan 27 14:53:18 crc kubenswrapper[4698]: I0127 14:53:18.515132 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/879f8b54-c8c9-4cbd-8b15-075571fa4bfb-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"879f8b54-c8c9-4cbd-8b15-075571fa4bfb\") " pod="openstack/ceilometer-0" Jan 27 14:53:18 crc kubenswrapper[4698]: I0127 14:53:18.515164 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/879f8b54-c8c9-4cbd-8b15-075571fa4bfb-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"879f8b54-c8c9-4cbd-8b15-075571fa4bfb\") " pod="openstack/ceilometer-0" Jan 27 14:53:18 crc kubenswrapper[4698]: I0127 14:53:18.515242 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/879f8b54-c8c9-4cbd-8b15-075571fa4bfb-log-httpd\") pod \"ceilometer-0\" (UID: \"879f8b54-c8c9-4cbd-8b15-075571fa4bfb\") " pod="openstack/ceilometer-0" Jan 27 14:53:18 crc kubenswrapper[4698]: I0127 14:53:18.515277 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tgr86\" (UniqueName: \"kubernetes.io/projected/879f8b54-c8c9-4cbd-8b15-075571fa4bfb-kube-api-access-tgr86\") pod \"ceilometer-0\" (UID: \"879f8b54-c8c9-4cbd-8b15-075571fa4bfb\") " 
pod="openstack/ceilometer-0" Jan 27 14:53:18 crc kubenswrapper[4698]: I0127 14:53:18.515322 4698 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ed44350c-d49e-427e-9aaf-b4d3fb49aee4-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 14:53:18 crc kubenswrapper[4698]: I0127 14:53:18.616842 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/879f8b54-c8c9-4cbd-8b15-075571fa4bfb-log-httpd\") pod \"ceilometer-0\" (UID: \"879f8b54-c8c9-4cbd-8b15-075571fa4bfb\") " pod="openstack/ceilometer-0" Jan 27 14:53:18 crc kubenswrapper[4698]: I0127 14:53:18.616908 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tgr86\" (UniqueName: \"kubernetes.io/projected/879f8b54-c8c9-4cbd-8b15-075571fa4bfb-kube-api-access-tgr86\") pod \"ceilometer-0\" (UID: \"879f8b54-c8c9-4cbd-8b15-075571fa4bfb\") " pod="openstack/ceilometer-0" Jan 27 14:53:18 crc kubenswrapper[4698]: I0127 14:53:18.616959 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/879f8b54-c8c9-4cbd-8b15-075571fa4bfb-config-data\") pod \"ceilometer-0\" (UID: \"879f8b54-c8c9-4cbd-8b15-075571fa4bfb\") " pod="openstack/ceilometer-0" Jan 27 14:53:18 crc kubenswrapper[4698]: I0127 14:53:18.616986 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/879f8b54-c8c9-4cbd-8b15-075571fa4bfb-run-httpd\") pod \"ceilometer-0\" (UID: \"879f8b54-c8c9-4cbd-8b15-075571fa4bfb\") " pod="openstack/ceilometer-0" Jan 27 14:53:18 crc kubenswrapper[4698]: I0127 14:53:18.617015 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/879f8b54-c8c9-4cbd-8b15-075571fa4bfb-scripts\") pod \"ceilometer-0\" (UID: \"879f8b54-c8c9-4cbd-8b15-075571fa4bfb\") " pod="openstack/ceilometer-0" Jan 27 14:53:18 crc kubenswrapper[4698]: I0127 14:53:18.617031 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/879f8b54-c8c9-4cbd-8b15-075571fa4bfb-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"879f8b54-c8c9-4cbd-8b15-075571fa4bfb\") " pod="openstack/ceilometer-0" Jan 27 14:53:18 crc kubenswrapper[4698]: I0127 14:53:18.617055 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/879f8b54-c8c9-4cbd-8b15-075571fa4bfb-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"879f8b54-c8c9-4cbd-8b15-075571fa4bfb\") " pod="openstack/ceilometer-0" Jan 27 14:53:18 crc kubenswrapper[4698]: I0127 14:53:18.618615 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/879f8b54-c8c9-4cbd-8b15-075571fa4bfb-log-httpd\") pod \"ceilometer-0\" (UID: \"879f8b54-c8c9-4cbd-8b15-075571fa4bfb\") " pod="openstack/ceilometer-0" Jan 27 14:53:18 crc kubenswrapper[4698]: I0127 14:53:18.618743 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/879f8b54-c8c9-4cbd-8b15-075571fa4bfb-run-httpd\") pod \"ceilometer-0\" (UID: \"879f8b54-c8c9-4cbd-8b15-075571fa4bfb\") " pod="openstack/ceilometer-0" Jan 27 14:53:18 crc kubenswrapper[4698]: I0127 14:53:18.622138 4698 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/879f8b54-c8c9-4cbd-8b15-075571fa4bfb-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"879f8b54-c8c9-4cbd-8b15-075571fa4bfb\") " pod="openstack/ceilometer-0" Jan 27 14:53:18 crc kubenswrapper[4698]: I0127 14:53:18.622419 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/879f8b54-c8c9-4cbd-8b15-075571fa4bfb-scripts\") pod \"ceilometer-0\" (UID: \"879f8b54-c8c9-4cbd-8b15-075571fa4bfb\") " pod="openstack/ceilometer-0" Jan 27 14:53:18 crc kubenswrapper[4698]: I0127 14:53:18.622564 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/879f8b54-c8c9-4cbd-8b15-075571fa4bfb-config-data\") pod \"ceilometer-0\" (UID: \"879f8b54-c8c9-4cbd-8b15-075571fa4bfb\") " pod="openstack/ceilometer-0" Jan 27 14:53:18 crc kubenswrapper[4698]: I0127 14:53:18.626007 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/879f8b54-c8c9-4cbd-8b15-075571fa4bfb-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"879f8b54-c8c9-4cbd-8b15-075571fa4bfb\") " pod="openstack/ceilometer-0" Jan 27 14:53:18 crc kubenswrapper[4698]: I0127 14:53:18.640340 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tgr86\" (UniqueName: \"kubernetes.io/projected/879f8b54-c8c9-4cbd-8b15-075571fa4bfb-kube-api-access-tgr86\") pod \"ceilometer-0\" (UID: \"879f8b54-c8c9-4cbd-8b15-075571fa4bfb\") " pod="openstack/ceilometer-0" Jan 27 14:53:18 crc kubenswrapper[4698]: I0127 14:53:18.757263 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 27 14:53:18 crc kubenswrapper[4698]: I0127 14:53:18.760931 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"2be2eddc-2e24-4483-83a7-6a01aaae7f3c","Type":"ContainerStarted","Data":"284cd10d7719279fd7b24ce273afcd70d1b55b33872ebdcdf0de24b3c8105cfa"} Jan 27 14:53:18 crc kubenswrapper[4698]: I0127 14:53:18.764289 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-757b4bbb85-rxcnp" event={"ID":"eb3c705e-7883-4e67-a66c-2b4120a30543","Type":"ContainerDied","Data":"37fce07a312a6f3656c8625c2a612ee7e97ec4c3b49760a04c005122b0a6f764"} Jan 27 14:53:18 crc kubenswrapper[4698]: I0127 14:53:18.764347 4698 scope.go:117] "RemoveContainer" containerID="1a7c5f9e925dc1b3241be99bb2e9f6e1908562dff026a19650c068e0a8b86a16" Jan 27 14:53:18 crc kubenswrapper[4698]: I0127 14:53:18.764512 4698 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-757b4bbb85-rxcnp" Jan 27 14:53:18 crc kubenswrapper[4698]: I0127 14:53:18.778766 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-6f99cfdc45-gkb5v" event={"ID":"296c42dd-3876-4f34-9d1c-f0b1cc1b3303","Type":"ContainerStarted","Data":"1f8863ac5975ee710809b64433dd8a8dd87baf199182c72d89a2d5d0d0c7a2e9"} Jan 27 14:53:18 crc kubenswrapper[4698]: I0127 14:53:18.778809 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-6f99cfdc45-gkb5v" event={"ID":"296c42dd-3876-4f34-9d1c-f0b1cc1b3303","Type":"ContainerStarted","Data":"2077b1f6f20ab988009225e72d29cc4018cbd907b68123015a437e233f372051"} Jan 27 14:53:18 crc kubenswrapper[4698]: I0127 14:53:18.786946 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-754ff55b87-tpb84" event={"ID":"0a624b91-5853-4b9f-a75c-101d75550a84","Type":"ContainerStarted","Data":"a473e2d3968d0af868be37108af3f2df5cc377fccd23de08fd0c1ffc1c36b68d"} Jan 27 14:53:18 crc kubenswrapper[4698]: I0127 14:53:18.787010 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-754ff55b87-tpb84" event={"ID":"0a624b91-5853-4b9f-a75c-101d75550a84","Type":"ContainerStarted","Data":"14c0ca6312a7059964455b887e2d7c3d515751d00bf928362b946b3050d8396b"} Jan 27 14:53:18 crc kubenswrapper[4698]: I0127 14:53:18.790015 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"75bcd64d-b81b-456e-b9e6-1f26a52942d9","Type":"ContainerStarted","Data":"f401704318b52a5a4907886f3f03f76080e5a077b32f77addf450e4fcf73ef37"} Jan 27 14:53:18 crc kubenswrapper[4698]: I0127 14:53:18.793613 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6dd4d766f4-p4fgg" event={"ID":"ed44350c-d49e-427e-9aaf-b4d3fb49aee4","Type":"ContainerDied","Data":"70d0e89ff3068a2e3bf16ade5f620fc44501418f171d82a423897bacbf9fc78f"} Jan 27 14:53:18 crc kubenswrapper[4698]: I0127 14:53:18.793647 4698 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-6dd4d766f4-p4fgg" Jan 27 14:53:18 crc kubenswrapper[4698]: I0127 14:53:18.890983 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-757b4bbb85-rxcnp"] Jan 27 14:53:18 crc kubenswrapper[4698]: I0127 14:53:18.903243 4698 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-757b4bbb85-rxcnp"] Jan 27 14:53:18 crc kubenswrapper[4698]: I0127 14:53:18.912307 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-6dd4d766f4-p4fgg"] Jan 27 14:53:18 crc kubenswrapper[4698]: I0127 14:53:18.923900 4698 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-api-6dd4d766f4-p4fgg"] Jan 27 14:53:19 crc kubenswrapper[4698]: I0127 14:53:19.022958 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="aa09210f-2d05-4dc9-bf03-3af614176a09" path="/var/lib/kubelet/pods/aa09210f-2d05-4dc9-bf03-3af614176a09/volumes" Jan 27 14:53:19 crc kubenswrapper[4698]: I0127 14:53:19.024296 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eb3c705e-7883-4e67-a66c-2b4120a30543" path="/var/lib/kubelet/pods/eb3c705e-7883-4e67-a66c-2b4120a30543/volumes" Jan 27 14:53:19 crc kubenswrapper[4698]: I0127 14:53:19.025020 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ed44350c-d49e-427e-9aaf-b4d3fb49aee4" path="/var/lib/kubelet/pods/ed44350c-d49e-427e-9aaf-b4d3fb49aee4/volumes" Jan 27 14:53:19 crc kubenswrapper[4698]: I0127 14:53:19.067925 4698 scope.go:117] "RemoveContainer" containerID="2c47170d5e5cd77cb80dfc0920394a4bf2565cbc3d06b528001eb01d9d95d50b" Jan 27 14:53:19 crc kubenswrapper[4698]: I0127 14:53:19.209780 4698 scope.go:117] "RemoveContainer" containerID="8ee235ed10c9edc8df18e26672b2303e97d289db6177997a07c52c720d1b7a2a" Jan 27 14:53:19 crc kubenswrapper[4698]: I0127 14:53:19.322147 4698 scope.go:117] "RemoveContainer" containerID="f733025fdb94056d3759a11f9b172377baf6d7d42a3298eba530b1c095f22557" Jan 27 14:53:19 crc kubenswrapper[4698]: I0127 14:53:19.538034 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 27 14:53:19 crc kubenswrapper[4698]: I0127 14:53:19.538279 4698 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/kube-state-metrics-0" podUID="5d70f810-b592-4abf-b587-4ff75b743944" containerName="kube-state-metrics" containerID="cri-o://0726419db0f961bba21296320695af51ff8e1cbd4aa57ac86253c88afbcf1b9f" gracePeriod=30 Jan 27 14:53:19 crc kubenswrapper[4698]: I0127 14:53:19.597845 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 27 14:53:19 crc kubenswrapper[4698]: I0127 14:53:19.844742 4698 generic.go:334] "Generic (PLEG): container finished" podID="0a624b91-5853-4b9f-a75c-101d75550a84" containerID="a473e2d3968d0af868be37108af3f2df5cc377fccd23de08fd0c1ffc1c36b68d" exitCode=0 Jan 27 14:53:19 crc kubenswrapper[4698]: I0127 14:53:19.845527 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-754ff55b87-tpb84" event={"ID":"0a624b91-5853-4b9f-a75c-101d75550a84","Type":"ContainerDied","Data":"a473e2d3968d0af868be37108af3f2df5cc377fccd23de08fd0c1ffc1c36b68d"} Jan 27 14:53:19 crc kubenswrapper[4698]: I0127 14:53:19.852199 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"75bcd64d-b81b-456e-b9e6-1f26a52942d9","Type":"ContainerStarted","Data":"ddf5b58493bca391df1f1efe456a50603ef133484def3124af4f57ff083e873a"} Jan 27 
14:53:19 crc kubenswrapper[4698]: I0127 14:53:19.855830 4698 generic.go:334] "Generic (PLEG): container finished" podID="5d70f810-b592-4abf-b587-4ff75b743944" containerID="0726419db0f961bba21296320695af51ff8e1cbd4aa57ac86253c88afbcf1b9f" exitCode=2 Jan 27 14:53:19 crc kubenswrapper[4698]: I0127 14:53:19.855904 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"5d70f810-b592-4abf-b587-4ff75b743944","Type":"ContainerDied","Data":"0726419db0f961bba21296320695af51ff8e1cbd4aa57ac86253c88afbcf1b9f"} Jan 27 14:53:19 crc kubenswrapper[4698]: I0127 14:53:19.862056 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"879f8b54-c8c9-4cbd-8b15-075571fa4bfb","Type":"ContainerStarted","Data":"97622ace0e032ad8dd91c63a4d60c121801d188b422fed63ccb55844ac7aa8e3"} Jan 27 14:53:19 crc kubenswrapper[4698]: I0127 14:53:19.915963 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-6f99cfdc45-gkb5v" event={"ID":"296c42dd-3876-4f34-9d1c-f0b1cc1b3303","Type":"ContainerStarted","Data":"bba291bb89d98954eab4f935f319ae8383d93a85dd75207ce2113a765cdbb834"} Jan 27 14:53:19 crc kubenswrapper[4698]: I0127 14:53:19.917082 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-6f99cfdc45-gkb5v" Jan 27 14:53:19 crc kubenswrapper[4698]: I0127 14:53:19.917113 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-6f99cfdc45-gkb5v" Jan 27 14:53:19 crc kubenswrapper[4698]: I0127 14:53:19.963276 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-proxy-6f99cfdc45-gkb5v" podStartSLOduration=10.963252156 podStartE2EDuration="10.963252156s" podCreationTimestamp="2026-01-27 14:53:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:53:19.944950694 +0000 UTC m=+1455.621728179" watchObservedRunningTime="2026-01-27 14:53:19.963252156 +0000 UTC m=+1455.640029621" Jan 27 14:53:20 crc kubenswrapper[4698]: I0127 14:53:20.346911 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 27 14:53:20 crc kubenswrapper[4698]: I0127 14:53:20.412948 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-decision-engine-0" Jan 27 14:53:20 crc kubenswrapper[4698]: I0127 14:53:20.470694 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/watcher-decision-engine-0" Jan 27 14:53:20 crc kubenswrapper[4698]: I0127 14:53:20.516028 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t7c48\" (UniqueName: \"kubernetes.io/projected/5d70f810-b592-4abf-b587-4ff75b743944-kube-api-access-t7c48\") pod \"5d70f810-b592-4abf-b587-4ff75b743944\" (UID: \"5d70f810-b592-4abf-b587-4ff75b743944\") " Jan 27 14:53:20 crc kubenswrapper[4698]: I0127 14:53:20.539034 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5d70f810-b592-4abf-b587-4ff75b743944-kube-api-access-t7c48" (OuterVolumeSpecName: "kube-api-access-t7c48") pod "5d70f810-b592-4abf-b587-4ff75b743944" (UID: "5d70f810-b592-4abf-b587-4ff75b743944"). InnerVolumeSpecName "kube-api-access-t7c48". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:53:20 crc kubenswrapper[4698]: I0127 14:53:20.621109 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t7c48\" (UniqueName: \"kubernetes.io/projected/5d70f810-b592-4abf-b587-4ff75b743944-kube-api-access-t7c48\") on node \"crc\" DevicePath \"\"" Jan 27 14:53:20 crc kubenswrapper[4698]: I0127 14:53:20.943963 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"5d70f810-b592-4abf-b587-4ff75b743944","Type":"ContainerDied","Data":"0d24411367c4e28674967153a617fc7d0aa1187a94fbb1fe871c3cea7df3d590"} Jan 27 14:53:20 crc kubenswrapper[4698]: I0127 14:53:20.944332 4698 scope.go:117] "RemoveContainer" containerID="0726419db0f961bba21296320695af51ff8e1cbd4aa57ac86253c88afbcf1b9f" Jan 27 14:53:20 crc kubenswrapper[4698]: I0127 14:53:20.944469 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 27 14:53:20 crc kubenswrapper[4698]: I0127 14:53:20.955224 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"879f8b54-c8c9-4cbd-8b15-075571fa4bfb","Type":"ContainerStarted","Data":"e02ba54c5fb20416d1b43d931282eeb84845c9c93aa7cc4b398e82ee40fab7a2"} Jan 27 14:53:20 crc kubenswrapper[4698]: I0127 14:53:20.968197 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-754ff55b87-tpb84" event={"ID":"0a624b91-5853-4b9f-a75c-101d75550a84","Type":"ContainerStarted","Data":"26e2181c4ff83c04eca62de46e69b59793710f2b4864b77bd2a5a275f98cc3a2"} Jan 27 14:53:20 crc kubenswrapper[4698]: I0127 14:53:20.968793 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-754ff55b87-tpb84" Jan 27 14:53:20 crc kubenswrapper[4698]: I0127 14:53:20.972726 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"75bcd64d-b81b-456e-b9e6-1f26a52942d9","Type":"ContainerStarted","Data":"6f5d08ec4310afd7be081845d272029c667e5697ac873ffdb807b96c99fd59a4"} Jan 27 14:53:20 crc kubenswrapper[4698]: I0127 14:53:20.972816 4698 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="75bcd64d-b81b-456e-b9e6-1f26a52942d9" containerName="cinder-api-log" containerID="cri-o://ddf5b58493bca391df1f1efe456a50603ef133484def3124af4f57ff083e873a" gracePeriod=30 Jan 27 14:53:20 crc kubenswrapper[4698]: I0127 14:53:20.972857 4698 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="75bcd64d-b81b-456e-b9e6-1f26a52942d9" containerName="cinder-api" containerID="cri-o://6f5d08ec4310afd7be081845d272029c667e5697ac873ffdb807b96c99fd59a4" gracePeriod=30 Jan 27 14:53:20 crc kubenswrapper[4698]: I0127 14:53:20.972971 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Jan 27 14:53:20 crc kubenswrapper[4698]: I0127 14:53:20.973146 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-decision-engine-0" Jan 27 14:53:20 crc kubenswrapper[4698]: I0127 14:53:20.985432 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 27 14:53:21 crc kubenswrapper[4698]: I0127 14:53:21.020913 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/watcher-decision-engine-0" Jan 27 14:53:21 crc kubenswrapper[4698]: I0127 14:53:21.020951 4698 kubelet.go:2431] "SyncLoop REMOVE" 
source="api" pods=["openstack/kube-state-metrics-0"] Jan 27 14:53:21 crc kubenswrapper[4698]: I0127 14:53:21.050384 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Jan 27 14:53:21 crc kubenswrapper[4698]: E0127 14:53:21.050834 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5d70f810-b592-4abf-b587-4ff75b743944" containerName="kube-state-metrics" Jan 27 14:53:21 crc kubenswrapper[4698]: I0127 14:53:21.050851 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="5d70f810-b592-4abf-b587-4ff75b743944" containerName="kube-state-metrics" Jan 27 14:53:21 crc kubenswrapper[4698]: I0127 14:53:21.051064 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="5d70f810-b592-4abf-b587-4ff75b743944" containerName="kube-state-metrics" Jan 27 14:53:21 crc kubenswrapper[4698]: I0127 14:53:21.052352 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 27 14:53:21 crc kubenswrapper[4698]: I0127 14:53:21.055822 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-kube-state-metrics-svc" Jan 27 14:53:21 crc kubenswrapper[4698]: I0127 14:53:21.066182 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"kube-state-metrics-tls-config" Jan 27 14:53:21 crc kubenswrapper[4698]: I0127 14:53:21.093668 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 27 14:53:21 crc kubenswrapper[4698]: I0127 14:53:21.108028 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-754ff55b87-tpb84" podStartSLOduration=15.108009582 podStartE2EDuration="15.108009582s" podCreationTimestamp="2026-01-27 14:53:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:53:21.024455023 +0000 UTC m=+1456.701232488" watchObservedRunningTime="2026-01-27 14:53:21.108009582 +0000 UTC m=+1456.784787047" Jan 27 14:53:21 crc kubenswrapper[4698]: I0127 14:53:21.116542 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=14.116522466 podStartE2EDuration="14.116522466s" podCreationTimestamp="2026-01-27 14:53:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:53:21.065924004 +0000 UTC m=+1456.742701479" watchObservedRunningTime="2026-01-27 14:53:21.116522466 +0000 UTC m=+1456.793299941" Jan 27 14:53:21 crc kubenswrapper[4698]: I0127 14:53:21.130615 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/f7a222d8-9b89-4da5-919a-cbe5f3ecfd33-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"f7a222d8-9b89-4da5-919a-cbe5f3ecfd33\") " pod="openstack/kube-state-metrics-0" Jan 27 14:53:21 crc kubenswrapper[4698]: I0127 14:53:21.132432 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/f7a222d8-9b89-4da5-919a-cbe5f3ecfd33-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"f7a222d8-9b89-4da5-919a-cbe5f3ecfd33\") " pod="openstack/kube-state-metrics-0" Jan 27 14:53:21 crc kubenswrapper[4698]: I0127 14:53:21.132557 4698 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mvdk9\" (UniqueName: \"kubernetes.io/projected/f7a222d8-9b89-4da5-919a-cbe5f3ecfd33-kube-api-access-mvdk9\") pod \"kube-state-metrics-0\" (UID: \"f7a222d8-9b89-4da5-919a-cbe5f3ecfd33\") " pod="openstack/kube-state-metrics-0" Jan 27 14:53:21 crc kubenswrapper[4698]: I0127 14:53:21.132617 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f7a222d8-9b89-4da5-919a-cbe5f3ecfd33-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"f7a222d8-9b89-4da5-919a-cbe5f3ecfd33\") " pod="openstack/kube-state-metrics-0" Jan 27 14:53:21 crc kubenswrapper[4698]: I0127 14:53:21.234301 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mvdk9\" (UniqueName: \"kubernetes.io/projected/f7a222d8-9b89-4da5-919a-cbe5f3ecfd33-kube-api-access-mvdk9\") pod \"kube-state-metrics-0\" (UID: \"f7a222d8-9b89-4da5-919a-cbe5f3ecfd33\") " pod="openstack/kube-state-metrics-0" Jan 27 14:53:21 crc kubenswrapper[4698]: I0127 14:53:21.234385 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f7a222d8-9b89-4da5-919a-cbe5f3ecfd33-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"f7a222d8-9b89-4da5-919a-cbe5f3ecfd33\") " pod="openstack/kube-state-metrics-0" Jan 27 14:53:21 crc kubenswrapper[4698]: I0127 14:53:21.234554 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/f7a222d8-9b89-4da5-919a-cbe5f3ecfd33-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"f7a222d8-9b89-4da5-919a-cbe5f3ecfd33\") " pod="openstack/kube-state-metrics-0" Jan 27 14:53:21 crc kubenswrapper[4698]: I0127 14:53:21.234756 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/f7a222d8-9b89-4da5-919a-cbe5f3ecfd33-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"f7a222d8-9b89-4da5-919a-cbe5f3ecfd33\") " pod="openstack/kube-state-metrics-0" Jan 27 14:53:21 crc kubenswrapper[4698]: I0127 14:53:21.241674 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/f7a222d8-9b89-4da5-919a-cbe5f3ecfd33-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"f7a222d8-9b89-4da5-919a-cbe5f3ecfd33\") " pod="openstack/kube-state-metrics-0" Jan 27 14:53:21 crc kubenswrapper[4698]: I0127 14:53:21.242682 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f7a222d8-9b89-4da5-919a-cbe5f3ecfd33-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"f7a222d8-9b89-4da5-919a-cbe5f3ecfd33\") " pod="openstack/kube-state-metrics-0" Jan 27 14:53:21 crc kubenswrapper[4698]: I0127 14:53:21.255663 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mvdk9\" (UniqueName: \"kubernetes.io/projected/f7a222d8-9b89-4da5-919a-cbe5f3ecfd33-kube-api-access-mvdk9\") pod \"kube-state-metrics-0\" (UID: \"f7a222d8-9b89-4da5-919a-cbe5f3ecfd33\") " pod="openstack/kube-state-metrics-0" Jan 27 14:53:21 crc kubenswrapper[4698]: I0127 14:53:21.262984 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/f7a222d8-9b89-4da5-919a-cbe5f3ecfd33-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"f7a222d8-9b89-4da5-919a-cbe5f3ecfd33\") " pod="openstack/kube-state-metrics-0" Jan 27 14:53:21 crc kubenswrapper[4698]: I0127 14:53:21.402358 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 27 14:53:21 crc kubenswrapper[4698]: I0127 14:53:21.985013 4698 generic.go:334] "Generic (PLEG): container finished" podID="75bcd64d-b81b-456e-b9e6-1f26a52942d9" containerID="6f5d08ec4310afd7be081845d272029c667e5697ac873ffdb807b96c99fd59a4" exitCode=0 Jan 27 14:53:21 crc kubenswrapper[4698]: I0127 14:53:21.985274 4698 generic.go:334] "Generic (PLEG): container finished" podID="75bcd64d-b81b-456e-b9e6-1f26a52942d9" containerID="ddf5b58493bca391df1f1efe456a50603ef133484def3124af4f57ff083e873a" exitCode=143 Jan 27 14:53:21 crc kubenswrapper[4698]: I0127 14:53:21.985123 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"75bcd64d-b81b-456e-b9e6-1f26a52942d9","Type":"ContainerDied","Data":"6f5d08ec4310afd7be081845d272029c667e5697ac873ffdb807b96c99fd59a4"} Jan 27 14:53:21 crc kubenswrapper[4698]: I0127 14:53:21.985383 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"75bcd64d-b81b-456e-b9e6-1f26a52942d9","Type":"ContainerDied","Data":"ddf5b58493bca391df1f1efe456a50603ef133484def3124af4f57ff083e873a"} Jan 27 14:53:21 crc kubenswrapper[4698]: I0127 14:53:21.989720 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"2be2eddc-2e24-4483-83a7-6a01aaae7f3c","Type":"ContainerStarted","Data":"a7632f58a9e0889a8b13f3a356007c2cbec1d84053a7bc08de801b897037ddac"} Jan 27 14:53:22 crc kubenswrapper[4698]: I0127 14:53:22.259057 4698 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Jan 27 14:53:22 crc kubenswrapper[4698]: I0127 14:53:22.295576 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-79c59487f6-d4xj7" Jan 27 14:53:22 crc kubenswrapper[4698]: I0127 14:53:22.306403 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-79c59487f6-d4xj7" Jan 27 14:53:22 crc kubenswrapper[4698]: I0127 14:53:22.354834 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lvlzx\" (UniqueName: \"kubernetes.io/projected/75bcd64d-b81b-456e-b9e6-1f26a52942d9-kube-api-access-lvlzx\") pod \"75bcd64d-b81b-456e-b9e6-1f26a52942d9\" (UID: \"75bcd64d-b81b-456e-b9e6-1f26a52942d9\") " Jan 27 14:53:22 crc kubenswrapper[4698]: I0127 14:53:22.354946 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/75bcd64d-b81b-456e-b9e6-1f26a52942d9-config-data\") pod \"75bcd64d-b81b-456e-b9e6-1f26a52942d9\" (UID: \"75bcd64d-b81b-456e-b9e6-1f26a52942d9\") " Jan 27 14:53:22 crc kubenswrapper[4698]: I0127 14:53:22.354977 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/75bcd64d-b81b-456e-b9e6-1f26a52942d9-scripts\") pod \"75bcd64d-b81b-456e-b9e6-1f26a52942d9\" (UID: \"75bcd64d-b81b-456e-b9e6-1f26a52942d9\") " Jan 27 14:53:22 crc kubenswrapper[4698]: I0127 14:53:22.355012 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/75bcd64d-b81b-456e-b9e6-1f26a52942d9-config-data-custom\") pod \"75bcd64d-b81b-456e-b9e6-1f26a52942d9\" (UID: \"75bcd64d-b81b-456e-b9e6-1f26a52942d9\") " Jan 27 14:53:22 crc kubenswrapper[4698]: I0127 14:53:22.355127 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/75bcd64d-b81b-456e-b9e6-1f26a52942d9-logs\") pod \"75bcd64d-b81b-456e-b9e6-1f26a52942d9\" (UID: \"75bcd64d-b81b-456e-b9e6-1f26a52942d9\") " Jan 27 14:53:22 crc kubenswrapper[4698]: I0127 14:53:22.355204 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/75bcd64d-b81b-456e-b9e6-1f26a52942d9-combined-ca-bundle\") pod \"75bcd64d-b81b-456e-b9e6-1f26a52942d9\" (UID: \"75bcd64d-b81b-456e-b9e6-1f26a52942d9\") " Jan 27 14:53:22 crc kubenswrapper[4698]: I0127 14:53:22.355251 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/75bcd64d-b81b-456e-b9e6-1f26a52942d9-etc-machine-id\") pod \"75bcd64d-b81b-456e-b9e6-1f26a52942d9\" (UID: \"75bcd64d-b81b-456e-b9e6-1f26a52942d9\") " Jan 27 14:53:22 crc kubenswrapper[4698]: I0127 14:53:22.356393 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/75bcd64d-b81b-456e-b9e6-1f26a52942d9-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "75bcd64d-b81b-456e-b9e6-1f26a52942d9" (UID: "75bcd64d-b81b-456e-b9e6-1f26a52942d9"). InnerVolumeSpecName "etc-machine-id". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 14:53:22 crc kubenswrapper[4698]: I0127 14:53:22.364389 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/75bcd64d-b81b-456e-b9e6-1f26a52942d9-logs" (OuterVolumeSpecName: "logs") pod "75bcd64d-b81b-456e-b9e6-1f26a52942d9" (UID: "75bcd64d-b81b-456e-b9e6-1f26a52942d9"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 14:53:22 crc kubenswrapper[4698]: I0127 14:53:22.365919 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/75bcd64d-b81b-456e-b9e6-1f26a52942d9-kube-api-access-lvlzx" (OuterVolumeSpecName: "kube-api-access-lvlzx") pod "75bcd64d-b81b-456e-b9e6-1f26a52942d9" (UID: "75bcd64d-b81b-456e-b9e6-1f26a52942d9"). InnerVolumeSpecName "kube-api-access-lvlzx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:53:22 crc kubenswrapper[4698]: I0127 14:53:22.367931 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/75bcd64d-b81b-456e-b9e6-1f26a52942d9-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "75bcd64d-b81b-456e-b9e6-1f26a52942d9" (UID: "75bcd64d-b81b-456e-b9e6-1f26a52942d9"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:53:22 crc kubenswrapper[4698]: I0127 14:53:22.371883 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/75bcd64d-b81b-456e-b9e6-1f26a52942d9-scripts" (OuterVolumeSpecName: "scripts") pod "75bcd64d-b81b-456e-b9e6-1f26a52942d9" (UID: "75bcd64d-b81b-456e-b9e6-1f26a52942d9"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:53:22 crc kubenswrapper[4698]: I0127 14:53:22.395868 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/75bcd64d-b81b-456e-b9e6-1f26a52942d9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "75bcd64d-b81b-456e-b9e6-1f26a52942d9" (UID: "75bcd64d-b81b-456e-b9e6-1f26a52942d9"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:53:22 crc kubenswrapper[4698]: I0127 14:53:22.443808 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/75bcd64d-b81b-456e-b9e6-1f26a52942d9-config-data" (OuterVolumeSpecName: "config-data") pod "75bcd64d-b81b-456e-b9e6-1f26a52942d9" (UID: "75bcd64d-b81b-456e-b9e6-1f26a52942d9"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:53:22 crc kubenswrapper[4698]: I0127 14:53:22.457358 4698 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/75bcd64d-b81b-456e-b9e6-1f26a52942d9-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 14:53:22 crc kubenswrapper[4698]: I0127 14:53:22.457386 4698 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/75bcd64d-b81b-456e-b9e6-1f26a52942d9-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 14:53:22 crc kubenswrapper[4698]: I0127 14:53:22.457397 4698 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/75bcd64d-b81b-456e-b9e6-1f26a52942d9-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 27 14:53:22 crc kubenswrapper[4698]: I0127 14:53:22.457406 4698 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/75bcd64d-b81b-456e-b9e6-1f26a52942d9-logs\") on node \"crc\" DevicePath \"\"" Jan 27 14:53:22 crc kubenswrapper[4698]: I0127 14:53:22.457414 4698 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/75bcd64d-b81b-456e-b9e6-1f26a52942d9-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 14:53:22 crc kubenswrapper[4698]: I0127 14:53:22.457438 4698 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/75bcd64d-b81b-456e-b9e6-1f26a52942d9-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 27 14:53:22 crc kubenswrapper[4698]: I0127 14:53:22.457448 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lvlzx\" (UniqueName: \"kubernetes.io/projected/75bcd64d-b81b-456e-b9e6-1f26a52942d9-kube-api-access-lvlzx\") on node \"crc\" DevicePath \"\"" Jan 27 14:53:22 crc kubenswrapper[4698]: I0127 14:53:22.624209 4698 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-6dd4d766f4-p4fgg" podUID="ed44350c-d49e-427e-9aaf-b4d3fb49aee4" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.180:9311/healthcheck\": dial tcp 10.217.0.180:9311: i/o timeout (Client.Timeout exceeded while awaiting headers)" Jan 27 14:53:22 crc kubenswrapper[4698]: I0127 14:53:22.626034 4698 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-6dd4d766f4-p4fgg" podUID="ed44350c-d49e-427e-9aaf-b4d3fb49aee4" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.180:9311/healthcheck\": dial tcp 10.217.0.180:9311: i/o timeout (Client.Timeout exceeded while awaiting headers)" Jan 27 14:53:22 crc kubenswrapper[4698]: I0127 14:53:22.626530 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 27 14:53:23 crc kubenswrapper[4698]: I0127 14:53:23.013458 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5d70f810-b592-4abf-b587-4ff75b743944" path="/var/lib/kubelet/pods/5d70f810-b592-4abf-b587-4ff75b743944/volumes" Jan 27 14:53:23 crc kubenswrapper[4698]: I0127 14:53:23.022024 4698 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Jan 27 14:53:23 crc kubenswrapper[4698]: I0127 14:53:23.022081 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"75bcd64d-b81b-456e-b9e6-1f26a52942d9","Type":"ContainerDied","Data":"f401704318b52a5a4907886f3f03f76080e5a077b32f77addf450e4fcf73ef37"} Jan 27 14:53:23 crc kubenswrapper[4698]: I0127 14:53:23.022126 4698 scope.go:117] "RemoveContainer" containerID="6f5d08ec4310afd7be081845d272029c667e5697ac873ffdb807b96c99fd59a4" Jan 27 14:53:23 crc kubenswrapper[4698]: I0127 14:53:23.041982 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"879f8b54-c8c9-4cbd-8b15-075571fa4bfb","Type":"ContainerStarted","Data":"16deaa9974830db681609fa953adb8dc360e1daed46981b8bdd43aecc059aa78"} Jan 27 14:53:23 crc kubenswrapper[4698]: I0127 14:53:23.051900 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"f7a222d8-9b89-4da5-919a-cbe5f3ecfd33","Type":"ContainerStarted","Data":"a8179ffd3a4f4aa922b328089752b1f380a0279d43ba81f520c35a3e8fe032c4"} Jan 27 14:53:23 crc kubenswrapper[4698]: I0127 14:53:23.498993 4698 scope.go:117] "RemoveContainer" containerID="ddf5b58493bca391df1f1efe456a50603ef133484def3124af4f57ff083e873a" Jan 27 14:53:24 crc kubenswrapper[4698]: I0127 14:53:24.064047 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"2be2eddc-2e24-4483-83a7-6a01aaae7f3c","Type":"ContainerStarted","Data":"68fb188904ee09926cf389271d3909d3b78c280c84c5163351734183501f9bfc"} Jan 27 14:53:24 crc kubenswrapper[4698]: I0127 14:53:24.968554 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 27 14:53:25 crc kubenswrapper[4698]: I0127 14:53:25.077173 4698 generic.go:334] "Generic (PLEG): container finished" podID="4129eb47-beba-4bec-8cb2-59818e8908a5" containerID="e1553d2d02cb6a0668c9ce9cccabf0224f2fe83aa565ec195633e99db3313307" exitCode=1 Jan 27 14:53:25 crc kubenswrapper[4698]: I0127 14:53:25.077759 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"4129eb47-beba-4bec-8cb2-59818e8908a5","Type":"ContainerDied","Data":"e1553d2d02cb6a0668c9ce9cccabf0224f2fe83aa565ec195633e99db3313307"} Jan 27 14:53:25 crc kubenswrapper[4698]: I0127 14:53:25.077925 4698 scope.go:117] "RemoveContainer" containerID="8da5bc0af3d507c438f5062981c57f9b3bc194f49ed6a075d61a10c7dba95272" Jan 27 14:53:25 crc kubenswrapper[4698]: I0127 14:53:25.078184 4698 scope.go:117] "RemoveContainer" containerID="e1553d2d02cb6a0668c9ce9cccabf0224f2fe83aa565ec195633e99db3313307" Jan 27 14:53:25 crc kubenswrapper[4698]: E0127 14:53:25.078401 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-decision-engine\" with CrashLoopBackOff: \"back-off 20s restarting failed container=watcher-decision-engine pod=watcher-decision-engine-0_openstack(4129eb47-beba-4bec-8cb2-59818e8908a5)\"" pod="openstack/watcher-decision-engine-0" podUID="4129eb47-beba-4bec-8cb2-59818e8908a5" Jan 27 14:53:25 crc kubenswrapper[4698]: I0127 14:53:25.115420 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=17.246621402 podStartE2EDuration="19.115403366s" podCreationTimestamp="2026-01-27 14:53:06 +0000 UTC" firstStartedPulling="2026-01-27 14:53:18.027999441 +0000 UTC m=+1453.704776906" lastFinishedPulling="2026-01-27 
14:53:19.896781405 +0000 UTC m=+1455.573558870" observedRunningTime="2026-01-27 14:53:25.11403861 +0000 UTC m=+1460.790816065" watchObservedRunningTime="2026-01-27 14:53:25.115403366 +0000 UTC m=+1460.792180831" Jan 27 14:53:25 crc kubenswrapper[4698]: I0127 14:53:25.366770 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-6f99cfdc45-gkb5v" Jan 27 14:53:25 crc kubenswrapper[4698]: I0127 14:53:25.383709 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-6f99cfdc45-gkb5v" Jan 27 14:53:27 crc kubenswrapper[4698]: I0127 14:53:27.100423 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"879f8b54-c8c9-4cbd-8b15-075571fa4bfb","Type":"ContainerStarted","Data":"8053b92db1128e6835df685903c148fc0fda200225f265aa297962285ac49f38"} Jan 27 14:53:27 crc kubenswrapper[4698]: I0127 14:53:27.104419 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"f7a222d8-9b89-4da5-919a-cbe5f3ecfd33","Type":"ContainerStarted","Data":"5d3449b187c68251d16aa324d227a626b568433a09e67f885eeb104d89b29f77"} Jan 27 14:53:27 crc kubenswrapper[4698]: I0127 14:53:27.104571 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Jan 27 14:53:27 crc kubenswrapper[4698]: I0127 14:53:27.142450 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=3.853112784 podStartE2EDuration="7.142426338s" podCreationTimestamp="2026-01-27 14:53:20 +0000 UTC" firstStartedPulling="2026-01-27 14:53:22.684609692 +0000 UTC m=+1458.361387157" lastFinishedPulling="2026-01-27 14:53:25.973923246 +0000 UTC m=+1461.650700711" observedRunningTime="2026-01-27 14:53:27.129165119 +0000 UTC m=+1462.805942584" watchObservedRunningTime="2026-01-27 14:53:27.142426338 +0000 UTC m=+1462.819203803" Jan 27 14:53:27 crc kubenswrapper[4698]: I0127 14:53:27.150165 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Jan 27 14:53:27 crc kubenswrapper[4698]: I0127 14:53:27.363163 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Jan 27 14:53:27 crc kubenswrapper[4698]: I0127 14:53:27.410804 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-754ff55b87-tpb84" Jan 27 14:53:27 crc kubenswrapper[4698]: I0127 14:53:27.480645 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-79c8598659-pm5vk"] Jan 27 14:53:27 crc kubenswrapper[4698]: I0127 14:53:27.481457 4698 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-79c8598659-pm5vk" podUID="9ee879ea-5497-4147-ab6d-5e352fda0d9f" containerName="dnsmasq-dns" containerID="cri-o://0cf18963bd797e184b883954078eb8d75002dc4c64f806d8dae9ff5cb2051adc" gracePeriod=10 Jan 27 14:53:28 crc kubenswrapper[4698]: I0127 14:53:28.117048 4698 generic.go:334] "Generic (PLEG): container finished" podID="9ee879ea-5497-4147-ab6d-5e352fda0d9f" containerID="0cf18963bd797e184b883954078eb8d75002dc4c64f806d8dae9ff5cb2051adc" exitCode=0 Jan 27 14:53:28 crc kubenswrapper[4698]: I0127 14:53:28.117138 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-79c8598659-pm5vk" 
event={"ID":"9ee879ea-5497-4147-ab6d-5e352fda0d9f","Type":"ContainerDied","Data":"0cf18963bd797e184b883954078eb8d75002dc4c64f806d8dae9ff5cb2051adc"} Jan 27 14:53:28 crc kubenswrapper[4698]: I0127 14:53:28.175787 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 27 14:53:28 crc kubenswrapper[4698]: I0127 14:53:28.686003 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-79c8598659-pm5vk" Jan 27 14:53:28 crc kubenswrapper[4698]: I0127 14:53:28.789555 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9ee879ea-5497-4147-ab6d-5e352fda0d9f-ovsdbserver-sb\") pod \"9ee879ea-5497-4147-ab6d-5e352fda0d9f\" (UID: \"9ee879ea-5497-4147-ab6d-5e352fda0d9f\") " Jan 27 14:53:28 crc kubenswrapper[4698]: I0127 14:53:28.789710 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9ee879ea-5497-4147-ab6d-5e352fda0d9f-config\") pod \"9ee879ea-5497-4147-ab6d-5e352fda0d9f\" (UID: \"9ee879ea-5497-4147-ab6d-5e352fda0d9f\") " Jan 27 14:53:28 crc kubenswrapper[4698]: I0127 14:53:28.789801 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9ee879ea-5497-4147-ab6d-5e352fda0d9f-dns-svc\") pod \"9ee879ea-5497-4147-ab6d-5e352fda0d9f\" (UID: \"9ee879ea-5497-4147-ab6d-5e352fda0d9f\") " Jan 27 14:53:28 crc kubenswrapper[4698]: I0127 14:53:28.789942 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9ee879ea-5497-4147-ab6d-5e352fda0d9f-dns-swift-storage-0\") pod \"9ee879ea-5497-4147-ab6d-5e352fda0d9f\" (UID: \"9ee879ea-5497-4147-ab6d-5e352fda0d9f\") " Jan 27 14:53:28 crc kubenswrapper[4698]: I0127 14:53:28.790025 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9ee879ea-5497-4147-ab6d-5e352fda0d9f-ovsdbserver-nb\") pod \"9ee879ea-5497-4147-ab6d-5e352fda0d9f\" (UID: \"9ee879ea-5497-4147-ab6d-5e352fda0d9f\") " Jan 27 14:53:28 crc kubenswrapper[4698]: I0127 14:53:28.790121 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sbxl8\" (UniqueName: \"kubernetes.io/projected/9ee879ea-5497-4147-ab6d-5e352fda0d9f-kube-api-access-sbxl8\") pod \"9ee879ea-5497-4147-ab6d-5e352fda0d9f\" (UID: \"9ee879ea-5497-4147-ab6d-5e352fda0d9f\") " Jan 27 14:53:28 crc kubenswrapper[4698]: I0127 14:53:28.807068 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9ee879ea-5497-4147-ab6d-5e352fda0d9f-kube-api-access-sbxl8" (OuterVolumeSpecName: "kube-api-access-sbxl8") pod "9ee879ea-5497-4147-ab6d-5e352fda0d9f" (UID: "9ee879ea-5497-4147-ab6d-5e352fda0d9f"). InnerVolumeSpecName "kube-api-access-sbxl8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:53:28 crc kubenswrapper[4698]: I0127 14:53:28.852323 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9ee879ea-5497-4147-ab6d-5e352fda0d9f-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "9ee879ea-5497-4147-ab6d-5e352fda0d9f" (UID: "9ee879ea-5497-4147-ab6d-5e352fda0d9f"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:53:28 crc kubenswrapper[4698]: I0127 14:53:28.871786 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9ee879ea-5497-4147-ab6d-5e352fda0d9f-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "9ee879ea-5497-4147-ab6d-5e352fda0d9f" (UID: "9ee879ea-5497-4147-ab6d-5e352fda0d9f"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:53:28 crc kubenswrapper[4698]: I0127 14:53:28.876606 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9ee879ea-5497-4147-ab6d-5e352fda0d9f-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "9ee879ea-5497-4147-ab6d-5e352fda0d9f" (UID: "9ee879ea-5497-4147-ab6d-5e352fda0d9f"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:53:28 crc kubenswrapper[4698]: I0127 14:53:28.889841 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9ee879ea-5497-4147-ab6d-5e352fda0d9f-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "9ee879ea-5497-4147-ab6d-5e352fda0d9f" (UID: "9ee879ea-5497-4147-ab6d-5e352fda0d9f"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:53:28 crc kubenswrapper[4698]: I0127 14:53:28.892885 4698 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9ee879ea-5497-4147-ab6d-5e352fda0d9f-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 27 14:53:28 crc kubenswrapper[4698]: I0127 14:53:28.892928 4698 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9ee879ea-5497-4147-ab6d-5e352fda0d9f-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 27 14:53:28 crc kubenswrapper[4698]: I0127 14:53:28.892944 4698 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9ee879ea-5497-4147-ab6d-5e352fda0d9f-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 27 14:53:28 crc kubenswrapper[4698]: I0127 14:53:28.892958 4698 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9ee879ea-5497-4147-ab6d-5e352fda0d9f-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 27 14:53:28 crc kubenswrapper[4698]: I0127 14:53:28.892969 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sbxl8\" (UniqueName: \"kubernetes.io/projected/9ee879ea-5497-4147-ab6d-5e352fda0d9f-kube-api-access-sbxl8\") on node \"crc\" DevicePath \"\"" Jan 27 14:53:28 crc kubenswrapper[4698]: I0127 14:53:28.896217 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9ee879ea-5497-4147-ab6d-5e352fda0d9f-config" (OuterVolumeSpecName: "config") pod "9ee879ea-5497-4147-ab6d-5e352fda0d9f" (UID: "9ee879ea-5497-4147-ab6d-5e352fda0d9f"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:53:28 crc kubenswrapper[4698]: I0127 14:53:28.994582 4698 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9ee879ea-5497-4147-ab6d-5e352fda0d9f-config\") on node \"crc\" DevicePath \"\"" Jan 27 14:53:29 crc kubenswrapper[4698]: I0127 14:53:29.128556 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-79c8598659-pm5vk" event={"ID":"9ee879ea-5497-4147-ab6d-5e352fda0d9f","Type":"ContainerDied","Data":"49070fb6464e3dabb3a8657e5dce704170f52641d47045a617799f32548598e3"} Jan 27 14:53:29 crc kubenswrapper[4698]: I0127 14:53:29.128589 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-79c8598659-pm5vk" Jan 27 14:53:29 crc kubenswrapper[4698]: I0127 14:53:29.128623 4698 scope.go:117] "RemoveContainer" containerID="0cf18963bd797e184b883954078eb8d75002dc4c64f806d8dae9ff5cb2051adc" Jan 27 14:53:29 crc kubenswrapper[4698]: I0127 14:53:29.128965 4698 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="2be2eddc-2e24-4483-83a7-6a01aaae7f3c" containerName="cinder-scheduler" containerID="cri-o://a7632f58a9e0889a8b13f3a356007c2cbec1d84053a7bc08de801b897037ddac" gracePeriod=30 Jan 27 14:53:29 crc kubenswrapper[4698]: I0127 14:53:29.129030 4698 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="2be2eddc-2e24-4483-83a7-6a01aaae7f3c" containerName="probe" containerID="cri-o://68fb188904ee09926cf389271d3909d3b78c280c84c5163351734183501f9bfc" gracePeriod=30 Jan 27 14:53:29 crc kubenswrapper[4698]: I0127 14:53:29.161931 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-79c8598659-pm5vk"] Jan 27 14:53:29 crc kubenswrapper[4698]: I0127 14:53:29.172798 4698 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-79c8598659-pm5vk"] Jan 27 14:53:29 crc kubenswrapper[4698]: I0127 14:53:29.172943 4698 scope.go:117] "RemoveContainer" containerID="da0152041a608c7d64094ceeec76b3e69a115d6aed8fb115eaf0a8b44b3b7819" Jan 27 14:53:30 crc kubenswrapper[4698]: I0127 14:53:30.138576 4698 generic.go:334] "Generic (PLEG): container finished" podID="2be2eddc-2e24-4483-83a7-6a01aaae7f3c" containerID="68fb188904ee09926cf389271d3909d3b78c280c84c5163351734183501f9bfc" exitCode=0 Jan 27 14:53:30 crc kubenswrapper[4698]: I0127 14:53:30.138656 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"2be2eddc-2e24-4483-83a7-6a01aaae7f3c","Type":"ContainerDied","Data":"68fb188904ee09926cf389271d3909d3b78c280c84c5163351734183501f9bfc"} Jan 27 14:53:30 crc kubenswrapper[4698]: I0127 14:53:30.142789 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"879f8b54-c8c9-4cbd-8b15-075571fa4bfb","Type":"ContainerStarted","Data":"0d43e88651461a75fb1662d6be38d49bdaf7671d9169326aa4118b6d5ab5dc0a"} Jan 27 14:53:30 crc kubenswrapper[4698]: I0127 14:53:30.407115 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-decision-engine-0" Jan 27 14:53:30 crc kubenswrapper[4698]: I0127 14:53:30.409032 4698 scope.go:117] "RemoveContainer" containerID="e1553d2d02cb6a0668c9ce9cccabf0224f2fe83aa565ec195633e99db3313307" Jan 27 14:53:30 crc kubenswrapper[4698]: E0127 14:53:30.409801 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"watcher-decision-engine\" with CrashLoopBackOff: \"back-off 20s restarting failed container=watcher-decision-engine pod=watcher-decision-engine-0_openstack(4129eb47-beba-4bec-8cb2-59818e8908a5)\"" pod="openstack/watcher-decision-engine-0" podUID="4129eb47-beba-4bec-8cb2-59818e8908a5" Jan 27 14:53:31 crc kubenswrapper[4698]: I0127 14:53:31.004007 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9ee879ea-5497-4147-ab6d-5e352fda0d9f" path="/var/lib/kubelet/pods/9ee879ea-5497-4147-ab6d-5e352fda0d9f/volumes" Jan 27 14:53:31 crc kubenswrapper[4698]: I0127 14:53:31.164477 4698 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="879f8b54-c8c9-4cbd-8b15-075571fa4bfb" containerName="ceilometer-central-agent" containerID="cri-o://e02ba54c5fb20416d1b43d931282eeb84845c9c93aa7cc4b398e82ee40fab7a2" gracePeriod=30 Jan 27 14:53:31 crc kubenswrapper[4698]: I0127 14:53:31.164517 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 27 14:53:31 crc kubenswrapper[4698]: I0127 14:53:31.164524 4698 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="879f8b54-c8c9-4cbd-8b15-075571fa4bfb" containerName="proxy-httpd" containerID="cri-o://0d43e88651461a75fb1662d6be38d49bdaf7671d9169326aa4118b6d5ab5dc0a" gracePeriod=30 Jan 27 14:53:31 crc kubenswrapper[4698]: I0127 14:53:31.164563 4698 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="879f8b54-c8c9-4cbd-8b15-075571fa4bfb" containerName="sg-core" containerID="cri-o://8053b92db1128e6835df685903c148fc0fda200225f265aa297962285ac49f38" gracePeriod=30 Jan 27 14:53:31 crc kubenswrapper[4698]: I0127 14:53:31.164606 4698 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="879f8b54-c8c9-4cbd-8b15-075571fa4bfb" containerName="ceilometer-notification-agent" containerID="cri-o://16deaa9974830db681609fa953adb8dc360e1daed46981b8bdd43aecc059aa78" gracePeriod=30 Jan 27 14:53:31 crc kubenswrapper[4698]: I0127 14:53:31.191287 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=3.911338269 podStartE2EDuration="13.191262914s" podCreationTimestamp="2026-01-27 14:53:18 +0000 UTC" firstStartedPulling="2026-01-27 14:53:19.892878632 +0000 UTC m=+1455.569656097" lastFinishedPulling="2026-01-27 14:53:29.172803267 +0000 UTC m=+1464.849580742" observedRunningTime="2026-01-27 14:53:31.186886168 +0000 UTC m=+1466.863663633" watchObservedRunningTime="2026-01-27 14:53:31.191262914 +0000 UTC m=+1466.868040379" Jan 27 14:53:31 crc kubenswrapper[4698]: I0127 14:53:31.417009 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Jan 27 14:53:31 crc kubenswrapper[4698]: I0127 14:53:31.984328 4698 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 27 14:53:32 crc kubenswrapper[4698]: I0127 14:53:32.064426 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/2be2eddc-2e24-4483-83a7-6a01aaae7f3c-etc-machine-id\") pod \"2be2eddc-2e24-4483-83a7-6a01aaae7f3c\" (UID: \"2be2eddc-2e24-4483-83a7-6a01aaae7f3c\") " Jan 27 14:53:32 crc kubenswrapper[4698]: I0127 14:53:32.064503 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2be2eddc-2e24-4483-83a7-6a01aaae7f3c-scripts\") pod \"2be2eddc-2e24-4483-83a7-6a01aaae7f3c\" (UID: \"2be2eddc-2e24-4483-83a7-6a01aaae7f3c\") " Jan 27 14:53:32 crc kubenswrapper[4698]: I0127 14:53:32.064578 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2be2eddc-2e24-4483-83a7-6a01aaae7f3c-combined-ca-bundle\") pod \"2be2eddc-2e24-4483-83a7-6a01aaae7f3c\" (UID: \"2be2eddc-2e24-4483-83a7-6a01aaae7f3c\") " Jan 27 14:53:32 crc kubenswrapper[4698]: I0127 14:53:32.064610 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x72wr\" (UniqueName: \"kubernetes.io/projected/2be2eddc-2e24-4483-83a7-6a01aaae7f3c-kube-api-access-x72wr\") pod \"2be2eddc-2e24-4483-83a7-6a01aaae7f3c\" (UID: \"2be2eddc-2e24-4483-83a7-6a01aaae7f3c\") " Jan 27 14:53:32 crc kubenswrapper[4698]: I0127 14:53:32.064737 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2be2eddc-2e24-4483-83a7-6a01aaae7f3c-config-data-custom\") pod \"2be2eddc-2e24-4483-83a7-6a01aaae7f3c\" (UID: \"2be2eddc-2e24-4483-83a7-6a01aaae7f3c\") " Jan 27 14:53:32 crc kubenswrapper[4698]: I0127 14:53:32.064825 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2be2eddc-2e24-4483-83a7-6a01aaae7f3c-config-data\") pod \"2be2eddc-2e24-4483-83a7-6a01aaae7f3c\" (UID: \"2be2eddc-2e24-4483-83a7-6a01aaae7f3c\") " Jan 27 14:53:32 crc kubenswrapper[4698]: I0127 14:53:32.065338 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2be2eddc-2e24-4483-83a7-6a01aaae7f3c-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "2be2eddc-2e24-4483-83a7-6a01aaae7f3c" (UID: "2be2eddc-2e24-4483-83a7-6a01aaae7f3c"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 14:53:32 crc kubenswrapper[4698]: I0127 14:53:32.069796 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2be2eddc-2e24-4483-83a7-6a01aaae7f3c-scripts" (OuterVolumeSpecName: "scripts") pod "2be2eddc-2e24-4483-83a7-6a01aaae7f3c" (UID: "2be2eddc-2e24-4483-83a7-6a01aaae7f3c"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:53:32 crc kubenswrapper[4698]: I0127 14:53:32.070819 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2be2eddc-2e24-4483-83a7-6a01aaae7f3c-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "2be2eddc-2e24-4483-83a7-6a01aaae7f3c" (UID: "2be2eddc-2e24-4483-83a7-6a01aaae7f3c"). InnerVolumeSpecName "config-data-custom". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:53:32 crc kubenswrapper[4698]: I0127 14:53:32.072438 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2be2eddc-2e24-4483-83a7-6a01aaae7f3c-kube-api-access-x72wr" (OuterVolumeSpecName: "kube-api-access-x72wr") pod "2be2eddc-2e24-4483-83a7-6a01aaae7f3c" (UID: "2be2eddc-2e24-4483-83a7-6a01aaae7f3c"). InnerVolumeSpecName "kube-api-access-x72wr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:53:32 crc kubenswrapper[4698]: I0127 14:53:32.116795 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2be2eddc-2e24-4483-83a7-6a01aaae7f3c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2be2eddc-2e24-4483-83a7-6a01aaae7f3c" (UID: "2be2eddc-2e24-4483-83a7-6a01aaae7f3c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:53:32 crc kubenswrapper[4698]: I0127 14:53:32.166871 4698 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2be2eddc-2e24-4483-83a7-6a01aaae7f3c-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 27 14:53:32 crc kubenswrapper[4698]: I0127 14:53:32.166898 4698 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/2be2eddc-2e24-4483-83a7-6a01aaae7f3c-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 27 14:53:32 crc kubenswrapper[4698]: I0127 14:53:32.166908 4698 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2be2eddc-2e24-4483-83a7-6a01aaae7f3c-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 14:53:32 crc kubenswrapper[4698]: I0127 14:53:32.166917 4698 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2be2eddc-2e24-4483-83a7-6a01aaae7f3c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 14:53:32 crc kubenswrapper[4698]: I0127 14:53:32.166927 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x72wr\" (UniqueName: \"kubernetes.io/projected/2be2eddc-2e24-4483-83a7-6a01aaae7f3c-kube-api-access-x72wr\") on node \"crc\" DevicePath \"\"" Jan 27 14:53:32 crc kubenswrapper[4698]: I0127 14:53:32.172009 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2be2eddc-2e24-4483-83a7-6a01aaae7f3c-config-data" (OuterVolumeSpecName: "config-data") pod "2be2eddc-2e24-4483-83a7-6a01aaae7f3c" (UID: "2be2eddc-2e24-4483-83a7-6a01aaae7f3c"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:53:32 crc kubenswrapper[4698]: I0127 14:53:32.177680 4698 generic.go:334] "Generic (PLEG): container finished" podID="2be2eddc-2e24-4483-83a7-6a01aaae7f3c" containerID="a7632f58a9e0889a8b13f3a356007c2cbec1d84053a7bc08de801b897037ddac" exitCode=0 Jan 27 14:53:32 crc kubenswrapper[4698]: I0127 14:53:32.177784 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"2be2eddc-2e24-4483-83a7-6a01aaae7f3c","Type":"ContainerDied","Data":"a7632f58a9e0889a8b13f3a356007c2cbec1d84053a7bc08de801b897037ddac"} Jan 27 14:53:32 crc kubenswrapper[4698]: I0127 14:53:32.177839 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"2be2eddc-2e24-4483-83a7-6a01aaae7f3c","Type":"ContainerDied","Data":"284cd10d7719279fd7b24ce273afcd70d1b55b33872ebdcdf0de24b3c8105cfa"} Jan 27 14:53:32 crc kubenswrapper[4698]: I0127 14:53:32.177859 4698 scope.go:117] "RemoveContainer" containerID="68fb188904ee09926cf389271d3909d3b78c280c84c5163351734183501f9bfc" Jan 27 14:53:32 crc kubenswrapper[4698]: I0127 14:53:32.177873 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 27 14:53:32 crc kubenswrapper[4698]: I0127 14:53:32.191034 4698 generic.go:334] "Generic (PLEG): container finished" podID="879f8b54-c8c9-4cbd-8b15-075571fa4bfb" containerID="0d43e88651461a75fb1662d6be38d49bdaf7671d9169326aa4118b6d5ab5dc0a" exitCode=0 Jan 27 14:53:32 crc kubenswrapper[4698]: I0127 14:53:32.191071 4698 generic.go:334] "Generic (PLEG): container finished" podID="879f8b54-c8c9-4cbd-8b15-075571fa4bfb" containerID="8053b92db1128e6835df685903c148fc0fda200225f265aa297962285ac49f38" exitCode=2 Jan 27 14:53:32 crc kubenswrapper[4698]: I0127 14:53:32.191084 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"879f8b54-c8c9-4cbd-8b15-075571fa4bfb","Type":"ContainerDied","Data":"0d43e88651461a75fb1662d6be38d49bdaf7671d9169326aa4118b6d5ab5dc0a"} Jan 27 14:53:32 crc kubenswrapper[4698]: I0127 14:53:32.191138 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"879f8b54-c8c9-4cbd-8b15-075571fa4bfb","Type":"ContainerDied","Data":"8053b92db1128e6835df685903c148fc0fda200225f265aa297962285ac49f38"} Jan 27 14:53:32 crc kubenswrapper[4698]: I0127 14:53:32.230880 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 27 14:53:32 crc kubenswrapper[4698]: I0127 14:53:32.245891 4698 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 27 14:53:32 crc kubenswrapper[4698]: I0127 14:53:32.250138 4698 scope.go:117] "RemoveContainer" containerID="a7632f58a9e0889a8b13f3a356007c2cbec1d84053a7bc08de801b897037ddac" Jan 27 14:53:32 crc kubenswrapper[4698]: I0127 14:53:32.260809 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Jan 27 14:53:32 crc kubenswrapper[4698]: E0127 14:53:32.261499 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="75bcd64d-b81b-456e-b9e6-1f26a52942d9" containerName="cinder-api-log" Jan 27 14:53:32 crc kubenswrapper[4698]: I0127 14:53:32.261517 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="75bcd64d-b81b-456e-b9e6-1f26a52942d9" containerName="cinder-api-log" Jan 27 14:53:32 crc kubenswrapper[4698]: E0127 14:53:32.261528 4698 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="75bcd64d-b81b-456e-b9e6-1f26a52942d9" containerName="cinder-api" Jan 27 14:53:32 crc kubenswrapper[4698]: I0127 14:53:32.261535 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="75bcd64d-b81b-456e-b9e6-1f26a52942d9" containerName="cinder-api" Jan 27 14:53:32 crc kubenswrapper[4698]: E0127 14:53:32.261551 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2be2eddc-2e24-4483-83a7-6a01aaae7f3c" containerName="cinder-scheduler" Jan 27 14:53:32 crc kubenswrapper[4698]: I0127 14:53:32.261557 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="2be2eddc-2e24-4483-83a7-6a01aaae7f3c" containerName="cinder-scheduler" Jan 27 14:53:32 crc kubenswrapper[4698]: E0127 14:53:32.261580 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9ee879ea-5497-4147-ab6d-5e352fda0d9f" containerName="init" Jan 27 14:53:32 crc kubenswrapper[4698]: I0127 14:53:32.261586 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="9ee879ea-5497-4147-ab6d-5e352fda0d9f" containerName="init" Jan 27 14:53:32 crc kubenswrapper[4698]: E0127 14:53:32.261595 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9ee879ea-5497-4147-ab6d-5e352fda0d9f" containerName="dnsmasq-dns" Jan 27 14:53:32 crc kubenswrapper[4698]: I0127 14:53:32.261601 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="9ee879ea-5497-4147-ab6d-5e352fda0d9f" containerName="dnsmasq-dns" Jan 27 14:53:32 crc kubenswrapper[4698]: E0127 14:53:32.261618 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2be2eddc-2e24-4483-83a7-6a01aaae7f3c" containerName="probe" Jan 27 14:53:32 crc kubenswrapper[4698]: I0127 14:53:32.261624 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="2be2eddc-2e24-4483-83a7-6a01aaae7f3c" containerName="probe" Jan 27 14:53:32 crc kubenswrapper[4698]: I0127 14:53:32.261804 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="2be2eddc-2e24-4483-83a7-6a01aaae7f3c" containerName="probe" Jan 27 14:53:32 crc kubenswrapper[4698]: I0127 14:53:32.261824 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="75bcd64d-b81b-456e-b9e6-1f26a52942d9" containerName="cinder-api-log" Jan 27 14:53:32 crc kubenswrapper[4698]: I0127 14:53:32.261835 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="75bcd64d-b81b-456e-b9e6-1f26a52942d9" containerName="cinder-api" Jan 27 14:53:32 crc kubenswrapper[4698]: I0127 14:53:32.261844 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="9ee879ea-5497-4147-ab6d-5e352fda0d9f" containerName="dnsmasq-dns" Jan 27 14:53:32 crc kubenswrapper[4698]: I0127 14:53:32.261851 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="2be2eddc-2e24-4483-83a7-6a01aaae7f3c" containerName="cinder-scheduler" Jan 27 14:53:32 crc kubenswrapper[4698]: I0127 14:53:32.262888 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 27 14:53:32 crc kubenswrapper[4698]: I0127 14:53:32.265625 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Jan 27 14:53:32 crc kubenswrapper[4698]: I0127 14:53:32.265869 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-zqtt2" Jan 27 14:53:32 crc kubenswrapper[4698]: I0127 14:53:32.265997 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Jan 27 14:53:32 crc kubenswrapper[4698]: I0127 14:53:32.266120 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Jan 27 14:53:32 crc kubenswrapper[4698]: I0127 14:53:32.268339 4698 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2be2eddc-2e24-4483-83a7-6a01aaae7f3c-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 14:53:32 crc kubenswrapper[4698]: I0127 14:53:32.282208 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 27 14:53:32 crc kubenswrapper[4698]: I0127 14:53:32.284144 4698 scope.go:117] "RemoveContainer" containerID="68fb188904ee09926cf389271d3909d3b78c280c84c5163351734183501f9bfc" Jan 27 14:53:32 crc kubenswrapper[4698]: E0127 14:53:32.288905 4698 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"68fb188904ee09926cf389271d3909d3b78c280c84c5163351734183501f9bfc\": container with ID starting with 68fb188904ee09926cf389271d3909d3b78c280c84c5163351734183501f9bfc not found: ID does not exist" containerID="68fb188904ee09926cf389271d3909d3b78c280c84c5163351734183501f9bfc" Jan 27 14:53:32 crc kubenswrapper[4698]: I0127 14:53:32.288953 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"68fb188904ee09926cf389271d3909d3b78c280c84c5163351734183501f9bfc"} err="failed to get container status \"68fb188904ee09926cf389271d3909d3b78c280c84c5163351734183501f9bfc\": rpc error: code = NotFound desc = could not find container \"68fb188904ee09926cf389271d3909d3b78c280c84c5163351734183501f9bfc\": container with ID starting with 68fb188904ee09926cf389271d3909d3b78c280c84c5163351734183501f9bfc not found: ID does not exist" Jan 27 14:53:32 crc kubenswrapper[4698]: I0127 14:53:32.288978 4698 scope.go:117] "RemoveContainer" containerID="a7632f58a9e0889a8b13f3a356007c2cbec1d84053a7bc08de801b897037ddac" Jan 27 14:53:32 crc kubenswrapper[4698]: E0127 14:53:32.290256 4698 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a7632f58a9e0889a8b13f3a356007c2cbec1d84053a7bc08de801b897037ddac\": container with ID starting with a7632f58a9e0889a8b13f3a356007c2cbec1d84053a7bc08de801b897037ddac not found: ID does not exist" containerID="a7632f58a9e0889a8b13f3a356007c2cbec1d84053a7bc08de801b897037ddac" Jan 27 14:53:32 crc kubenswrapper[4698]: I0127 14:53:32.290315 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a7632f58a9e0889a8b13f3a356007c2cbec1d84053a7bc08de801b897037ddac"} err="failed to get container status \"a7632f58a9e0889a8b13f3a356007c2cbec1d84053a7bc08de801b897037ddac\": rpc error: code = NotFound desc = could not find container \"a7632f58a9e0889a8b13f3a356007c2cbec1d84053a7bc08de801b897037ddac\": container with ID starting with 
a7632f58a9e0889a8b13f3a356007c2cbec1d84053a7bc08de801b897037ddac not found: ID does not exist" Jan 27 14:53:32 crc kubenswrapper[4698]: I0127 14:53:32.369728 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h2ftz\" (UniqueName: \"kubernetes.io/projected/b295ba4f-27e2-4785-82ae-f266f9346576-kube-api-access-h2ftz\") pod \"cinder-scheduler-0\" (UID: \"b295ba4f-27e2-4785-82ae-f266f9346576\") " pod="openstack/cinder-scheduler-0" Jan 27 14:53:32 crc kubenswrapper[4698]: I0127 14:53:32.369788 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b295ba4f-27e2-4785-82ae-f266f9346576-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"b295ba4f-27e2-4785-82ae-f266f9346576\") " pod="openstack/cinder-scheduler-0" Jan 27 14:53:32 crc kubenswrapper[4698]: I0127 14:53:32.369828 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b295ba4f-27e2-4785-82ae-f266f9346576-config-data\") pod \"cinder-scheduler-0\" (UID: \"b295ba4f-27e2-4785-82ae-f266f9346576\") " pod="openstack/cinder-scheduler-0" Jan 27 14:53:32 crc kubenswrapper[4698]: I0127 14:53:32.369905 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b295ba4f-27e2-4785-82ae-f266f9346576-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"b295ba4f-27e2-4785-82ae-f266f9346576\") " pod="openstack/cinder-scheduler-0" Jan 27 14:53:32 crc kubenswrapper[4698]: I0127 14:53:32.369960 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/b295ba4f-27e2-4785-82ae-f266f9346576-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"b295ba4f-27e2-4785-82ae-f266f9346576\") " pod="openstack/cinder-scheduler-0" Jan 27 14:53:32 crc kubenswrapper[4698]: I0127 14:53:32.369977 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b295ba4f-27e2-4785-82ae-f266f9346576-scripts\") pod \"cinder-scheduler-0\" (UID: \"b295ba4f-27e2-4785-82ae-f266f9346576\") " pod="openstack/cinder-scheduler-0" Jan 27 14:53:32 crc kubenswrapper[4698]: I0127 14:53:32.471340 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h2ftz\" (UniqueName: \"kubernetes.io/projected/b295ba4f-27e2-4785-82ae-f266f9346576-kube-api-access-h2ftz\") pod \"cinder-scheduler-0\" (UID: \"b295ba4f-27e2-4785-82ae-f266f9346576\") " pod="openstack/cinder-scheduler-0" Jan 27 14:53:32 crc kubenswrapper[4698]: I0127 14:53:32.471393 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b295ba4f-27e2-4785-82ae-f266f9346576-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"b295ba4f-27e2-4785-82ae-f266f9346576\") " pod="openstack/cinder-scheduler-0" Jan 27 14:53:32 crc kubenswrapper[4698]: I0127 14:53:32.471431 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b295ba4f-27e2-4785-82ae-f266f9346576-config-data\") pod \"cinder-scheduler-0\" (UID: \"b295ba4f-27e2-4785-82ae-f266f9346576\") " pod="openstack/cinder-scheduler-0" Jan 27 14:53:32 
crc kubenswrapper[4698]: I0127 14:53:32.471486 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b295ba4f-27e2-4785-82ae-f266f9346576-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"b295ba4f-27e2-4785-82ae-f266f9346576\") " pod="openstack/cinder-scheduler-0" Jan 27 14:53:32 crc kubenswrapper[4698]: I0127 14:53:32.471533 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/b295ba4f-27e2-4785-82ae-f266f9346576-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"b295ba4f-27e2-4785-82ae-f266f9346576\") " pod="openstack/cinder-scheduler-0" Jan 27 14:53:32 crc kubenswrapper[4698]: I0127 14:53:32.471556 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b295ba4f-27e2-4785-82ae-f266f9346576-scripts\") pod \"cinder-scheduler-0\" (UID: \"b295ba4f-27e2-4785-82ae-f266f9346576\") " pod="openstack/cinder-scheduler-0" Jan 27 14:53:32 crc kubenswrapper[4698]: I0127 14:53:32.471990 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/b295ba4f-27e2-4785-82ae-f266f9346576-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"b295ba4f-27e2-4785-82ae-f266f9346576\") " pod="openstack/cinder-scheduler-0" Jan 27 14:53:32 crc kubenswrapper[4698]: I0127 14:53:32.476063 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b295ba4f-27e2-4785-82ae-f266f9346576-scripts\") pod \"cinder-scheduler-0\" (UID: \"b295ba4f-27e2-4785-82ae-f266f9346576\") " pod="openstack/cinder-scheduler-0" Jan 27 14:53:32 crc kubenswrapper[4698]: I0127 14:53:32.476122 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b295ba4f-27e2-4785-82ae-f266f9346576-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"b295ba4f-27e2-4785-82ae-f266f9346576\") " pod="openstack/cinder-scheduler-0" Jan 27 14:53:32 crc kubenswrapper[4698]: I0127 14:53:32.477267 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b295ba4f-27e2-4785-82ae-f266f9346576-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"b295ba4f-27e2-4785-82ae-f266f9346576\") " pod="openstack/cinder-scheduler-0" Jan 27 14:53:32 crc kubenswrapper[4698]: I0127 14:53:32.477690 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b295ba4f-27e2-4785-82ae-f266f9346576-config-data\") pod \"cinder-scheduler-0\" (UID: \"b295ba4f-27e2-4785-82ae-f266f9346576\") " pod="openstack/cinder-scheduler-0" Jan 27 14:53:32 crc kubenswrapper[4698]: I0127 14:53:32.491542 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h2ftz\" (UniqueName: \"kubernetes.io/projected/b295ba4f-27e2-4785-82ae-f266f9346576-kube-api-access-h2ftz\") pod \"cinder-scheduler-0\" (UID: \"b295ba4f-27e2-4785-82ae-f266f9346576\") " pod="openstack/cinder-scheduler-0" Jan 27 14:53:32 crc kubenswrapper[4698]: I0127 14:53:32.590576 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 27 14:53:33 crc kubenswrapper[4698]: I0127 14:53:33.005287 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2be2eddc-2e24-4483-83a7-6a01aaae7f3c" path="/var/lib/kubelet/pods/2be2eddc-2e24-4483-83a7-6a01aaae7f3c/volumes" Jan 27 14:53:33 crc kubenswrapper[4698]: I0127 14:53:33.087564 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 27 14:53:33 crc kubenswrapper[4698]: I0127 14:53:33.202861 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"a11616dd-8398-4c71-829f-1a389df9495f","Type":"ContainerStarted","Data":"30c957b54e769e0378c7024f27b1119d44f5b416d0608c7c8371a43bca155e6f"} Jan 27 14:53:33 crc kubenswrapper[4698]: I0127 14:53:33.206219 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"b295ba4f-27e2-4785-82ae-f266f9346576","Type":"ContainerStarted","Data":"3962fef084ce190c81d7567a527e55c4b130d46ffd4d75849c60d36dc4e57751"} Jan 27 14:53:33 crc kubenswrapper[4698]: I0127 14:53:33.210478 4698 generic.go:334] "Generic (PLEG): container finished" podID="879f8b54-c8c9-4cbd-8b15-075571fa4bfb" containerID="16deaa9974830db681609fa953adb8dc360e1daed46981b8bdd43aecc059aa78" exitCode=0 Jan 27 14:53:33 crc kubenswrapper[4698]: I0127 14:53:33.210528 4698 generic.go:334] "Generic (PLEG): container finished" podID="879f8b54-c8c9-4cbd-8b15-075571fa4bfb" containerID="e02ba54c5fb20416d1b43d931282eeb84845c9c93aa7cc4b398e82ee40fab7a2" exitCode=0 Jan 27 14:53:33 crc kubenswrapper[4698]: I0127 14:53:33.210554 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"879f8b54-c8c9-4cbd-8b15-075571fa4bfb","Type":"ContainerDied","Data":"16deaa9974830db681609fa953adb8dc360e1daed46981b8bdd43aecc059aa78"} Jan 27 14:53:33 crc kubenswrapper[4698]: I0127 14:53:33.210589 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"879f8b54-c8c9-4cbd-8b15-075571fa4bfb","Type":"ContainerDied","Data":"e02ba54c5fb20416d1b43d931282eeb84845c9c93aa7cc4b398e82ee40fab7a2"} Jan 27 14:53:33 crc kubenswrapper[4698]: I0127 14:53:33.225199 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstackclient" podStartSLOduration=2.929673365 podStartE2EDuration="32.225182516s" podCreationTimestamp="2026-01-27 14:53:01 +0000 UTC" firstStartedPulling="2026-01-27 14:53:02.398315957 +0000 UTC m=+1438.075093422" lastFinishedPulling="2026-01-27 14:53:31.693825108 +0000 UTC m=+1467.370602573" observedRunningTime="2026-01-27 14:53:33.220570485 +0000 UTC m=+1468.897347950" watchObservedRunningTime="2026-01-27 14:53:33.225182516 +0000 UTC m=+1468.901959981" Jan 27 14:53:33 crc kubenswrapper[4698]: I0127 14:53:33.240143 4698 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 27 14:53:33 crc kubenswrapper[4698]: I0127 14:53:33.301985 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/879f8b54-c8c9-4cbd-8b15-075571fa4bfb-run-httpd\") pod \"879f8b54-c8c9-4cbd-8b15-075571fa4bfb\" (UID: \"879f8b54-c8c9-4cbd-8b15-075571fa4bfb\") " Jan 27 14:53:33 crc kubenswrapper[4698]: I0127 14:53:33.302043 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tgr86\" (UniqueName: \"kubernetes.io/projected/879f8b54-c8c9-4cbd-8b15-075571fa4bfb-kube-api-access-tgr86\") pod \"879f8b54-c8c9-4cbd-8b15-075571fa4bfb\" (UID: \"879f8b54-c8c9-4cbd-8b15-075571fa4bfb\") " Jan 27 14:53:33 crc kubenswrapper[4698]: I0127 14:53:33.302083 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/879f8b54-c8c9-4cbd-8b15-075571fa4bfb-combined-ca-bundle\") pod \"879f8b54-c8c9-4cbd-8b15-075571fa4bfb\" (UID: \"879f8b54-c8c9-4cbd-8b15-075571fa4bfb\") " Jan 27 14:53:33 crc kubenswrapper[4698]: I0127 14:53:33.302147 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/879f8b54-c8c9-4cbd-8b15-075571fa4bfb-sg-core-conf-yaml\") pod \"879f8b54-c8c9-4cbd-8b15-075571fa4bfb\" (UID: \"879f8b54-c8c9-4cbd-8b15-075571fa4bfb\") " Jan 27 14:53:33 crc kubenswrapper[4698]: I0127 14:53:33.302203 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/879f8b54-c8c9-4cbd-8b15-075571fa4bfb-scripts\") pod \"879f8b54-c8c9-4cbd-8b15-075571fa4bfb\" (UID: \"879f8b54-c8c9-4cbd-8b15-075571fa4bfb\") " Jan 27 14:53:33 crc kubenswrapper[4698]: I0127 14:53:33.302228 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/879f8b54-c8c9-4cbd-8b15-075571fa4bfb-config-data\") pod \"879f8b54-c8c9-4cbd-8b15-075571fa4bfb\" (UID: \"879f8b54-c8c9-4cbd-8b15-075571fa4bfb\") " Jan 27 14:53:33 crc kubenswrapper[4698]: I0127 14:53:33.302316 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/879f8b54-c8c9-4cbd-8b15-075571fa4bfb-log-httpd\") pod \"879f8b54-c8c9-4cbd-8b15-075571fa4bfb\" (UID: \"879f8b54-c8c9-4cbd-8b15-075571fa4bfb\") " Jan 27 14:53:33 crc kubenswrapper[4698]: I0127 14:53:33.302572 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/879f8b54-c8c9-4cbd-8b15-075571fa4bfb-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "879f8b54-c8c9-4cbd-8b15-075571fa4bfb" (UID: "879f8b54-c8c9-4cbd-8b15-075571fa4bfb"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 14:53:33 crc kubenswrapper[4698]: I0127 14:53:33.302807 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/879f8b54-c8c9-4cbd-8b15-075571fa4bfb-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "879f8b54-c8c9-4cbd-8b15-075571fa4bfb" (UID: "879f8b54-c8c9-4cbd-8b15-075571fa4bfb"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 14:53:33 crc kubenswrapper[4698]: I0127 14:53:33.303712 4698 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/879f8b54-c8c9-4cbd-8b15-075571fa4bfb-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 27 14:53:33 crc kubenswrapper[4698]: I0127 14:53:33.303753 4698 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/879f8b54-c8c9-4cbd-8b15-075571fa4bfb-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 27 14:53:33 crc kubenswrapper[4698]: I0127 14:53:33.306877 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/879f8b54-c8c9-4cbd-8b15-075571fa4bfb-kube-api-access-tgr86" (OuterVolumeSpecName: "kube-api-access-tgr86") pod "879f8b54-c8c9-4cbd-8b15-075571fa4bfb" (UID: "879f8b54-c8c9-4cbd-8b15-075571fa4bfb"). InnerVolumeSpecName "kube-api-access-tgr86". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:53:33 crc kubenswrapper[4698]: I0127 14:53:33.306946 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/879f8b54-c8c9-4cbd-8b15-075571fa4bfb-scripts" (OuterVolumeSpecName: "scripts") pod "879f8b54-c8c9-4cbd-8b15-075571fa4bfb" (UID: "879f8b54-c8c9-4cbd-8b15-075571fa4bfb"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:53:33 crc kubenswrapper[4698]: I0127 14:53:33.329814 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/879f8b54-c8c9-4cbd-8b15-075571fa4bfb-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "879f8b54-c8c9-4cbd-8b15-075571fa4bfb" (UID: "879f8b54-c8c9-4cbd-8b15-075571fa4bfb"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:53:33 crc kubenswrapper[4698]: I0127 14:53:33.387819 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/879f8b54-c8c9-4cbd-8b15-075571fa4bfb-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "879f8b54-c8c9-4cbd-8b15-075571fa4bfb" (UID: "879f8b54-c8c9-4cbd-8b15-075571fa4bfb"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:53:33 crc kubenswrapper[4698]: I0127 14:53:33.406212 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tgr86\" (UniqueName: \"kubernetes.io/projected/879f8b54-c8c9-4cbd-8b15-075571fa4bfb-kube-api-access-tgr86\") on node \"crc\" DevicePath \"\"" Jan 27 14:53:33 crc kubenswrapper[4698]: I0127 14:53:33.406414 4698 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/879f8b54-c8c9-4cbd-8b15-075571fa4bfb-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 14:53:33 crc kubenswrapper[4698]: I0127 14:53:33.406424 4698 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/879f8b54-c8c9-4cbd-8b15-075571fa4bfb-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 27 14:53:33 crc kubenswrapper[4698]: I0127 14:53:33.406433 4698 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/879f8b54-c8c9-4cbd-8b15-075571fa4bfb-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 14:53:33 crc kubenswrapper[4698]: I0127 14:53:33.423479 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/879f8b54-c8c9-4cbd-8b15-075571fa4bfb-config-data" (OuterVolumeSpecName: "config-data") pod "879f8b54-c8c9-4cbd-8b15-075571fa4bfb" (UID: "879f8b54-c8c9-4cbd-8b15-075571fa4bfb"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:53:33 crc kubenswrapper[4698]: I0127 14:53:33.507789 4698 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/879f8b54-c8c9-4cbd-8b15-075571fa4bfb-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 14:53:33 crc kubenswrapper[4698]: I0127 14:53:33.539986 4698 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-79c8598659-pm5vk" podUID="9ee879ea-5497-4147-ab6d-5e352fda0d9f" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.165:5353: i/o timeout" Jan 27 14:53:34 crc kubenswrapper[4698]: I0127 14:53:34.231281 4698 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 27 14:53:34 crc kubenswrapper[4698]: I0127 14:53:34.231290 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"879f8b54-c8c9-4cbd-8b15-075571fa4bfb","Type":"ContainerDied","Data":"97622ace0e032ad8dd91c63a4d60c121801d188b422fed63ccb55844ac7aa8e3"} Jan 27 14:53:34 crc kubenswrapper[4698]: I0127 14:53:34.231361 4698 scope.go:117] "RemoveContainer" containerID="0d43e88651461a75fb1662d6be38d49bdaf7671d9169326aa4118b6d5ab5dc0a" Jan 27 14:53:34 crc kubenswrapper[4698]: I0127 14:53:34.236435 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"b295ba4f-27e2-4785-82ae-f266f9346576","Type":"ContainerStarted","Data":"5607cf3c3d3ae5bc7597e6781b745c189a5b2fa21dd9d346de0fdfbf396915b0"} Jan 27 14:53:34 crc kubenswrapper[4698]: I0127 14:53:34.273786 4698 scope.go:117] "RemoveContainer" containerID="8053b92db1128e6835df685903c148fc0fda200225f265aa297962285ac49f38" Jan 27 14:53:34 crc kubenswrapper[4698]: I0127 14:53:34.287036 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 27 14:53:34 crc kubenswrapper[4698]: I0127 14:53:34.310358 4698 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 27 14:53:34 crc kubenswrapper[4698]: I0127 14:53:34.336400 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 27 14:53:34 crc kubenswrapper[4698]: E0127 14:53:34.337288 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="879f8b54-c8c9-4cbd-8b15-075571fa4bfb" containerName="ceilometer-notification-agent" Jan 27 14:53:34 crc kubenswrapper[4698]: I0127 14:53:34.337315 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="879f8b54-c8c9-4cbd-8b15-075571fa4bfb" containerName="ceilometer-notification-agent" Jan 27 14:53:34 crc kubenswrapper[4698]: E0127 14:53:34.337356 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="879f8b54-c8c9-4cbd-8b15-075571fa4bfb" containerName="ceilometer-central-agent" Jan 27 14:53:34 crc kubenswrapper[4698]: I0127 14:53:34.337367 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="879f8b54-c8c9-4cbd-8b15-075571fa4bfb" containerName="ceilometer-central-agent" Jan 27 14:53:34 crc kubenswrapper[4698]: E0127 14:53:34.337390 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="879f8b54-c8c9-4cbd-8b15-075571fa4bfb" containerName="proxy-httpd" Jan 27 14:53:34 crc kubenswrapper[4698]: I0127 14:53:34.337398 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="879f8b54-c8c9-4cbd-8b15-075571fa4bfb" containerName="proxy-httpd" Jan 27 14:53:34 crc kubenswrapper[4698]: E0127 14:53:34.337444 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="879f8b54-c8c9-4cbd-8b15-075571fa4bfb" containerName="sg-core" Jan 27 14:53:34 crc kubenswrapper[4698]: I0127 14:53:34.337458 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="879f8b54-c8c9-4cbd-8b15-075571fa4bfb" containerName="sg-core" Jan 27 14:53:34 crc kubenswrapper[4698]: I0127 14:53:34.337850 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="879f8b54-c8c9-4cbd-8b15-075571fa4bfb" containerName="proxy-httpd" Jan 27 14:53:34 crc kubenswrapper[4698]: I0127 14:53:34.339309 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="879f8b54-c8c9-4cbd-8b15-075571fa4bfb" containerName="ceilometer-central-agent" Jan 27 14:53:34 crc kubenswrapper[4698]: I0127 14:53:34.339349 4698 
memory_manager.go:354] "RemoveStaleState removing state" podUID="879f8b54-c8c9-4cbd-8b15-075571fa4bfb" containerName="sg-core" Jan 27 14:53:34 crc kubenswrapper[4698]: I0127 14:53:34.339377 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="879f8b54-c8c9-4cbd-8b15-075571fa4bfb" containerName="ceilometer-notification-agent" Jan 27 14:53:34 crc kubenswrapper[4698]: I0127 14:53:34.346022 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 27 14:53:34 crc kubenswrapper[4698]: I0127 14:53:34.349316 4698 scope.go:117] "RemoveContainer" containerID="16deaa9974830db681609fa953adb8dc360e1daed46981b8bdd43aecc059aa78" Jan 27 14:53:34 crc kubenswrapper[4698]: I0127 14:53:34.349574 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 27 14:53:34 crc kubenswrapper[4698]: I0127 14:53:34.349706 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Jan 27 14:53:34 crc kubenswrapper[4698]: I0127 14:53:34.350024 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 27 14:53:34 crc kubenswrapper[4698]: I0127 14:53:34.354565 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 27 14:53:34 crc kubenswrapper[4698]: I0127 14:53:34.411888 4698 scope.go:117] "RemoveContainer" containerID="e02ba54c5fb20416d1b43d931282eeb84845c9c93aa7cc4b398e82ee40fab7a2" Jan 27 14:53:34 crc kubenswrapper[4698]: I0127 14:53:34.429018 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/4507e893-5bc9-43a4-8cb6-d31622f201e7-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"4507e893-5bc9-43a4-8cb6-d31622f201e7\") " pod="openstack/ceilometer-0" Jan 27 14:53:34 crc kubenswrapper[4698]: I0127 14:53:34.429390 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/4507e893-5bc9-43a4-8cb6-d31622f201e7-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"4507e893-5bc9-43a4-8cb6-d31622f201e7\") " pod="openstack/ceilometer-0" Jan 27 14:53:34 crc kubenswrapper[4698]: I0127 14:53:34.429599 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4507e893-5bc9-43a4-8cb6-d31622f201e7-run-httpd\") pod \"ceilometer-0\" (UID: \"4507e893-5bc9-43a4-8cb6-d31622f201e7\") " pod="openstack/ceilometer-0" Jan 27 14:53:34 crc kubenswrapper[4698]: I0127 14:53:34.429791 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4507e893-5bc9-43a4-8cb6-d31622f201e7-config-data\") pod \"ceilometer-0\" (UID: \"4507e893-5bc9-43a4-8cb6-d31622f201e7\") " pod="openstack/ceilometer-0" Jan 27 14:53:34 crc kubenswrapper[4698]: I0127 14:53:34.430112 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4507e893-5bc9-43a4-8cb6-d31622f201e7-scripts\") pod \"ceilometer-0\" (UID: \"4507e893-5bc9-43a4-8cb6-d31622f201e7\") " pod="openstack/ceilometer-0" Jan 27 14:53:34 crc kubenswrapper[4698]: I0127 14:53:34.430267 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4507e893-5bc9-43a4-8cb6-d31622f201e7-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"4507e893-5bc9-43a4-8cb6-d31622f201e7\") " pod="openstack/ceilometer-0" Jan 27 14:53:34 crc kubenswrapper[4698]: I0127 14:53:34.430712 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4507e893-5bc9-43a4-8cb6-d31622f201e7-log-httpd\") pod \"ceilometer-0\" (UID: \"4507e893-5bc9-43a4-8cb6-d31622f201e7\") " pod="openstack/ceilometer-0" Jan 27 14:53:34 crc kubenswrapper[4698]: I0127 14:53:34.430959 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8p4m5\" (UniqueName: \"kubernetes.io/projected/4507e893-5bc9-43a4-8cb6-d31622f201e7-kube-api-access-8p4m5\") pod \"ceilometer-0\" (UID: \"4507e893-5bc9-43a4-8cb6-d31622f201e7\") " pod="openstack/ceilometer-0" Jan 27 14:53:34 crc kubenswrapper[4698]: I0127 14:53:34.532783 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8p4m5\" (UniqueName: \"kubernetes.io/projected/4507e893-5bc9-43a4-8cb6-d31622f201e7-kube-api-access-8p4m5\") pod \"ceilometer-0\" (UID: \"4507e893-5bc9-43a4-8cb6-d31622f201e7\") " pod="openstack/ceilometer-0" Jan 27 14:53:34 crc kubenswrapper[4698]: I0127 14:53:34.532851 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/4507e893-5bc9-43a4-8cb6-d31622f201e7-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"4507e893-5bc9-43a4-8cb6-d31622f201e7\") " pod="openstack/ceilometer-0" Jan 27 14:53:34 crc kubenswrapper[4698]: I0127 14:53:34.532876 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/4507e893-5bc9-43a4-8cb6-d31622f201e7-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"4507e893-5bc9-43a4-8cb6-d31622f201e7\") " pod="openstack/ceilometer-0" Jan 27 14:53:34 crc kubenswrapper[4698]: I0127 14:53:34.532923 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4507e893-5bc9-43a4-8cb6-d31622f201e7-run-httpd\") pod \"ceilometer-0\" (UID: \"4507e893-5bc9-43a4-8cb6-d31622f201e7\") " pod="openstack/ceilometer-0" Jan 27 14:53:34 crc kubenswrapper[4698]: I0127 14:53:34.532958 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4507e893-5bc9-43a4-8cb6-d31622f201e7-config-data\") pod \"ceilometer-0\" (UID: \"4507e893-5bc9-43a4-8cb6-d31622f201e7\") " pod="openstack/ceilometer-0" Jan 27 14:53:34 crc kubenswrapper[4698]: I0127 14:53:34.533045 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4507e893-5bc9-43a4-8cb6-d31622f201e7-scripts\") pod \"ceilometer-0\" (UID: \"4507e893-5bc9-43a4-8cb6-d31622f201e7\") " pod="openstack/ceilometer-0" Jan 27 14:53:34 crc kubenswrapper[4698]: I0127 14:53:34.533070 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4507e893-5bc9-43a4-8cb6-d31622f201e7-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"4507e893-5bc9-43a4-8cb6-d31622f201e7\") " pod="openstack/ceilometer-0" Jan 27 14:53:34 crc kubenswrapper[4698]: I0127 14:53:34.533144 4698 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4507e893-5bc9-43a4-8cb6-d31622f201e7-log-httpd\") pod \"ceilometer-0\" (UID: \"4507e893-5bc9-43a4-8cb6-d31622f201e7\") " pod="openstack/ceilometer-0" Jan 27 14:53:34 crc kubenswrapper[4698]: I0127 14:53:34.533722 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4507e893-5bc9-43a4-8cb6-d31622f201e7-log-httpd\") pod \"ceilometer-0\" (UID: \"4507e893-5bc9-43a4-8cb6-d31622f201e7\") " pod="openstack/ceilometer-0" Jan 27 14:53:34 crc kubenswrapper[4698]: I0127 14:53:34.533732 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4507e893-5bc9-43a4-8cb6-d31622f201e7-run-httpd\") pod \"ceilometer-0\" (UID: \"4507e893-5bc9-43a4-8cb6-d31622f201e7\") " pod="openstack/ceilometer-0" Jan 27 14:53:34 crc kubenswrapper[4698]: I0127 14:53:34.541902 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/4507e893-5bc9-43a4-8cb6-d31622f201e7-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"4507e893-5bc9-43a4-8cb6-d31622f201e7\") " pod="openstack/ceilometer-0" Jan 27 14:53:34 crc kubenswrapper[4698]: I0127 14:53:34.542416 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4507e893-5bc9-43a4-8cb6-d31622f201e7-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"4507e893-5bc9-43a4-8cb6-d31622f201e7\") " pod="openstack/ceilometer-0" Jan 27 14:53:34 crc kubenswrapper[4698]: I0127 14:53:34.542488 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4507e893-5bc9-43a4-8cb6-d31622f201e7-config-data\") pod \"ceilometer-0\" (UID: \"4507e893-5bc9-43a4-8cb6-d31622f201e7\") " pod="openstack/ceilometer-0" Jan 27 14:53:34 crc kubenswrapper[4698]: I0127 14:53:34.542438 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/4507e893-5bc9-43a4-8cb6-d31622f201e7-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"4507e893-5bc9-43a4-8cb6-d31622f201e7\") " pod="openstack/ceilometer-0" Jan 27 14:53:34 crc kubenswrapper[4698]: I0127 14:53:34.543037 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4507e893-5bc9-43a4-8cb6-d31622f201e7-scripts\") pod \"ceilometer-0\" (UID: \"4507e893-5bc9-43a4-8cb6-d31622f201e7\") " pod="openstack/ceilometer-0" Jan 27 14:53:34 crc kubenswrapper[4698]: I0127 14:53:34.557378 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8p4m5\" (UniqueName: \"kubernetes.io/projected/4507e893-5bc9-43a4-8cb6-d31622f201e7-kube-api-access-8p4m5\") pod \"ceilometer-0\" (UID: \"4507e893-5bc9-43a4-8cb6-d31622f201e7\") " pod="openstack/ceilometer-0" Jan 27 14:53:34 crc kubenswrapper[4698]: I0127 14:53:34.670790 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 27 14:53:35 crc kubenswrapper[4698]: I0127 14:53:35.005764 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="879f8b54-c8c9-4cbd-8b15-075571fa4bfb" path="/var/lib/kubelet/pods/879f8b54-c8c9-4cbd-8b15-075571fa4bfb/volumes" Jan 27 14:53:35 crc kubenswrapper[4698]: I0127 14:53:35.158858 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 27 14:53:35 crc kubenswrapper[4698]: I0127 14:53:35.250940 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"b295ba4f-27e2-4785-82ae-f266f9346576","Type":"ContainerStarted","Data":"da5da95e85204bb81611d0ff218003ffb5aaf3dfaab2eca418b66ef51579ec4c"} Jan 27 14:53:35 crc kubenswrapper[4698]: I0127 14:53:35.255633 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4507e893-5bc9-43a4-8cb6-d31622f201e7","Type":"ContainerStarted","Data":"5d00e8c808c93d09ca6da01053d8cd1e06a2aa400b1fb7fbc48905abf71aa302"} Jan 27 14:53:35 crc kubenswrapper[4698]: I0127 14:53:35.281050 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=3.281028907 podStartE2EDuration="3.281028907s" podCreationTimestamp="2026-01-27 14:53:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:53:35.276856317 +0000 UTC m=+1470.953633782" watchObservedRunningTime="2026-01-27 14:53:35.281028907 +0000 UTC m=+1470.957806372" Jan 27 14:53:35 crc kubenswrapper[4698]: I0127 14:53:35.421921 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 27 14:53:37 crc kubenswrapper[4698]: I0127 14:53:37.591296 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Jan 27 14:53:37 crc kubenswrapper[4698]: I0127 14:53:37.927711 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-db-create-hsmhb"] Jan 27 14:53:37 crc kubenswrapper[4698]: I0127 14:53:37.929352 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-hsmhb" Jan 27 14:53:37 crc kubenswrapper[4698]: I0127 14:53:37.960338 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-hsmhb"] Jan 27 14:53:38 crc kubenswrapper[4698]: I0127 14:53:38.019642 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/764e47ae-dc1d-47fd-a528-c2c4d6b672b6-operator-scripts\") pod \"nova-api-db-create-hsmhb\" (UID: \"764e47ae-dc1d-47fd-a528-c2c4d6b672b6\") " pod="openstack/nova-api-db-create-hsmhb" Jan 27 14:53:38 crc kubenswrapper[4698]: I0127 14:53:38.019878 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xmgwk\" (UniqueName: \"kubernetes.io/projected/764e47ae-dc1d-47fd-a528-c2c4d6b672b6-kube-api-access-xmgwk\") pod \"nova-api-db-create-hsmhb\" (UID: \"764e47ae-dc1d-47fd-a528-c2c4d6b672b6\") " pod="openstack/nova-api-db-create-hsmhb" Jan 27 14:53:38 crc kubenswrapper[4698]: I0127 14:53:38.124865 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xmgwk\" (UniqueName: \"kubernetes.io/projected/764e47ae-dc1d-47fd-a528-c2c4d6b672b6-kube-api-access-xmgwk\") pod \"nova-api-db-create-hsmhb\" (UID: \"764e47ae-dc1d-47fd-a528-c2c4d6b672b6\") " pod="openstack/nova-api-db-create-hsmhb" Jan 27 14:53:38 crc kubenswrapper[4698]: I0127 14:53:38.125064 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/764e47ae-dc1d-47fd-a528-c2c4d6b672b6-operator-scripts\") pod \"nova-api-db-create-hsmhb\" (UID: \"764e47ae-dc1d-47fd-a528-c2c4d6b672b6\") " pod="openstack/nova-api-db-create-hsmhb" Jan 27 14:53:38 crc kubenswrapper[4698]: I0127 14:53:38.126063 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/764e47ae-dc1d-47fd-a528-c2c4d6b672b6-operator-scripts\") pod \"nova-api-db-create-hsmhb\" (UID: \"764e47ae-dc1d-47fd-a528-c2c4d6b672b6\") " pod="openstack/nova-api-db-create-hsmhb" Jan 27 14:53:38 crc kubenswrapper[4698]: I0127 14:53:38.134683 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-db-create-w54fd"] Jan 27 14:53:38 crc kubenswrapper[4698]: I0127 14:53:38.136311 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-w54fd" Jan 27 14:53:38 crc kubenswrapper[4698]: I0127 14:53:38.151291 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-8b6c-account-create-update-g9bb7"] Jan 27 14:53:38 crc kubenswrapper[4698]: I0127 14:53:38.153000 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-8b6c-account-create-update-g9bb7" Jan 27 14:53:38 crc kubenswrapper[4698]: I0127 14:53:38.157120 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-db-secret" Jan 27 14:53:38 crc kubenswrapper[4698]: I0127 14:53:38.158167 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xmgwk\" (UniqueName: \"kubernetes.io/projected/764e47ae-dc1d-47fd-a528-c2c4d6b672b6-kube-api-access-xmgwk\") pod \"nova-api-db-create-hsmhb\" (UID: \"764e47ae-dc1d-47fd-a528-c2c4d6b672b6\") " pod="openstack/nova-api-db-create-hsmhb" Jan 27 14:53:38 crc kubenswrapper[4698]: I0127 14:53:38.168754 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-8b6c-account-create-update-g9bb7"] Jan 27 14:53:38 crc kubenswrapper[4698]: I0127 14:53:38.178576 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-w54fd"] Jan 27 14:53:38 crc kubenswrapper[4698]: I0127 14:53:38.227440 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dc94b\" (UniqueName: \"kubernetes.io/projected/f0f32c91-43ba-4123-bdd2-ee188ea6b9b1-kube-api-access-dc94b\") pod \"nova-cell0-db-create-w54fd\" (UID: \"f0f32c91-43ba-4123-bdd2-ee188ea6b9b1\") " pod="openstack/nova-cell0-db-create-w54fd" Jan 27 14:53:38 crc kubenswrapper[4698]: I0127 14:53:38.227786 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f0f32c91-43ba-4123-bdd2-ee188ea6b9b1-operator-scripts\") pod \"nova-cell0-db-create-w54fd\" (UID: \"f0f32c91-43ba-4123-bdd2-ee188ea6b9b1\") " pod="openstack/nova-cell0-db-create-w54fd" Jan 27 14:53:38 crc kubenswrapper[4698]: I0127 14:53:38.227965 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vdxmh\" (UniqueName: \"kubernetes.io/projected/009c9cd0-9c21-4d68-b1c0-8041ec2fc475-kube-api-access-vdxmh\") pod \"nova-api-8b6c-account-create-update-g9bb7\" (UID: \"009c9cd0-9c21-4d68-b1c0-8041ec2fc475\") " pod="openstack/nova-api-8b6c-account-create-update-g9bb7" Jan 27 14:53:38 crc kubenswrapper[4698]: I0127 14:53:38.227994 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/009c9cd0-9c21-4d68-b1c0-8041ec2fc475-operator-scripts\") pod \"nova-api-8b6c-account-create-update-g9bb7\" (UID: \"009c9cd0-9c21-4d68-b1c0-8041ec2fc475\") " pod="openstack/nova-api-8b6c-account-create-update-g9bb7" Jan 27 14:53:38 crc kubenswrapper[4698]: I0127 14:53:38.234694 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-db-create-dm2q4"] Jan 27 14:53:38 crc kubenswrapper[4698]: I0127 14:53:38.237040 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-db-create-dm2q4" Jan 27 14:53:38 crc kubenswrapper[4698]: I0127 14:53:38.247674 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-dm2q4"] Jan 27 14:53:38 crc kubenswrapper[4698]: I0127 14:53:38.321960 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4507e893-5bc9-43a4-8cb6-d31622f201e7","Type":"ContainerStarted","Data":"8c1a7ca0e247b522e5cdb3cfdebacae0b0e0525cc8bf2e3b0ce1e8821d1655cd"} Jan 27 14:53:38 crc kubenswrapper[4698]: I0127 14:53:38.322026 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4507e893-5bc9-43a4-8cb6-d31622f201e7","Type":"ContainerStarted","Data":"fa9b2b96b9c86a0ff4340994d597efd65ba978ea17a7e516639a2cc872a3d7c7"} Jan 27 14:53:38 crc kubenswrapper[4698]: I0127 14:53:38.331506 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/62ee2c69-0404-4a33-9a9e-9198c5f6bfa2-operator-scripts\") pod \"nova-cell1-db-create-dm2q4\" (UID: \"62ee2c69-0404-4a33-9a9e-9198c5f6bfa2\") " pod="openstack/nova-cell1-db-create-dm2q4" Jan 27 14:53:38 crc kubenswrapper[4698]: I0127 14:53:38.331663 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f0f32c91-43ba-4123-bdd2-ee188ea6b9b1-operator-scripts\") pod \"nova-cell0-db-create-w54fd\" (UID: \"f0f32c91-43ba-4123-bdd2-ee188ea6b9b1\") " pod="openstack/nova-cell0-db-create-w54fd" Jan 27 14:53:38 crc kubenswrapper[4698]: I0127 14:53:38.331733 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vdxmh\" (UniqueName: \"kubernetes.io/projected/009c9cd0-9c21-4d68-b1c0-8041ec2fc475-kube-api-access-vdxmh\") pod \"nova-api-8b6c-account-create-update-g9bb7\" (UID: \"009c9cd0-9c21-4d68-b1c0-8041ec2fc475\") " pod="openstack/nova-api-8b6c-account-create-update-g9bb7" Jan 27 14:53:38 crc kubenswrapper[4698]: I0127 14:53:38.331756 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/009c9cd0-9c21-4d68-b1c0-8041ec2fc475-operator-scripts\") pod \"nova-api-8b6c-account-create-update-g9bb7\" (UID: \"009c9cd0-9c21-4d68-b1c0-8041ec2fc475\") " pod="openstack/nova-api-8b6c-account-create-update-g9bb7" Jan 27 14:53:38 crc kubenswrapper[4698]: I0127 14:53:38.331815 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nnvjw\" (UniqueName: \"kubernetes.io/projected/62ee2c69-0404-4a33-9a9e-9198c5f6bfa2-kube-api-access-nnvjw\") pod \"nova-cell1-db-create-dm2q4\" (UID: \"62ee2c69-0404-4a33-9a9e-9198c5f6bfa2\") " pod="openstack/nova-cell1-db-create-dm2q4" Jan 27 14:53:38 crc kubenswrapper[4698]: I0127 14:53:38.331871 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dc94b\" (UniqueName: \"kubernetes.io/projected/f0f32c91-43ba-4123-bdd2-ee188ea6b9b1-kube-api-access-dc94b\") pod \"nova-cell0-db-create-w54fd\" (UID: \"f0f32c91-43ba-4123-bdd2-ee188ea6b9b1\") " pod="openstack/nova-cell0-db-create-w54fd" Jan 27 14:53:38 crc kubenswrapper[4698]: I0127 14:53:38.333046 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f0f32c91-43ba-4123-bdd2-ee188ea6b9b1-operator-scripts\") pod 
\"nova-cell0-db-create-w54fd\" (UID: \"f0f32c91-43ba-4123-bdd2-ee188ea6b9b1\") " pod="openstack/nova-cell0-db-create-w54fd" Jan 27 14:53:38 crc kubenswrapper[4698]: I0127 14:53:38.333271 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-bd6a-account-create-update-k79xt"] Jan 27 14:53:38 crc kubenswrapper[4698]: I0127 14:53:38.333475 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/009c9cd0-9c21-4d68-b1c0-8041ec2fc475-operator-scripts\") pod \"nova-api-8b6c-account-create-update-g9bb7\" (UID: \"009c9cd0-9c21-4d68-b1c0-8041ec2fc475\") " pod="openstack/nova-api-8b6c-account-create-update-g9bb7" Jan 27 14:53:38 crc kubenswrapper[4698]: I0127 14:53:38.334666 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-bd6a-account-create-update-k79xt" Jan 27 14:53:38 crc kubenswrapper[4698]: I0127 14:53:38.336953 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-db-secret" Jan 27 14:53:38 crc kubenswrapper[4698]: I0127 14:53:38.354275 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vdxmh\" (UniqueName: \"kubernetes.io/projected/009c9cd0-9c21-4d68-b1c0-8041ec2fc475-kube-api-access-vdxmh\") pod \"nova-api-8b6c-account-create-update-g9bb7\" (UID: \"009c9cd0-9c21-4d68-b1c0-8041ec2fc475\") " pod="openstack/nova-api-8b6c-account-create-update-g9bb7" Jan 27 14:53:38 crc kubenswrapper[4698]: I0127 14:53:38.354297 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-bd6a-account-create-update-k79xt"] Jan 27 14:53:38 crc kubenswrapper[4698]: I0127 14:53:38.368117 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dc94b\" (UniqueName: \"kubernetes.io/projected/f0f32c91-43ba-4123-bdd2-ee188ea6b9b1-kube-api-access-dc94b\") pod \"nova-cell0-db-create-w54fd\" (UID: \"f0f32c91-43ba-4123-bdd2-ee188ea6b9b1\") " pod="openstack/nova-cell0-db-create-w54fd" Jan 27 14:53:38 crc kubenswrapper[4698]: I0127 14:53:38.401514 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-hsmhb" Jan 27 14:53:38 crc kubenswrapper[4698]: I0127 14:53:38.433172 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2nmbn\" (UniqueName: \"kubernetes.io/projected/43111939-6107-4401-b6d6-94265dc21574-kube-api-access-2nmbn\") pod \"nova-cell0-bd6a-account-create-update-k79xt\" (UID: \"43111939-6107-4401-b6d6-94265dc21574\") " pod="openstack/nova-cell0-bd6a-account-create-update-k79xt" Jan 27 14:53:38 crc kubenswrapper[4698]: I0127 14:53:38.433252 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/43111939-6107-4401-b6d6-94265dc21574-operator-scripts\") pod \"nova-cell0-bd6a-account-create-update-k79xt\" (UID: \"43111939-6107-4401-b6d6-94265dc21574\") " pod="openstack/nova-cell0-bd6a-account-create-update-k79xt" Jan 27 14:53:38 crc kubenswrapper[4698]: I0127 14:53:38.433328 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nnvjw\" (UniqueName: \"kubernetes.io/projected/62ee2c69-0404-4a33-9a9e-9198c5f6bfa2-kube-api-access-nnvjw\") pod \"nova-cell1-db-create-dm2q4\" (UID: \"62ee2c69-0404-4a33-9a9e-9198c5f6bfa2\") " pod="openstack/nova-cell1-db-create-dm2q4" Jan 27 14:53:38 crc kubenswrapper[4698]: I0127 14:53:38.433392 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/62ee2c69-0404-4a33-9a9e-9198c5f6bfa2-operator-scripts\") pod \"nova-cell1-db-create-dm2q4\" (UID: \"62ee2c69-0404-4a33-9a9e-9198c5f6bfa2\") " pod="openstack/nova-cell1-db-create-dm2q4" Jan 27 14:53:38 crc kubenswrapper[4698]: I0127 14:53:38.434426 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/62ee2c69-0404-4a33-9a9e-9198c5f6bfa2-operator-scripts\") pod \"nova-cell1-db-create-dm2q4\" (UID: \"62ee2c69-0404-4a33-9a9e-9198c5f6bfa2\") " pod="openstack/nova-cell1-db-create-dm2q4" Jan 27 14:53:38 crc kubenswrapper[4698]: I0127 14:53:38.458037 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nnvjw\" (UniqueName: \"kubernetes.io/projected/62ee2c69-0404-4a33-9a9e-9198c5f6bfa2-kube-api-access-nnvjw\") pod \"nova-cell1-db-create-dm2q4\" (UID: \"62ee2c69-0404-4a33-9a9e-9198c5f6bfa2\") " pod="openstack/nova-cell1-db-create-dm2q4" Jan 27 14:53:38 crc kubenswrapper[4698]: I0127 14:53:38.531034 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-w54fd" Jan 27 14:53:38 crc kubenswrapper[4698]: I0127 14:53:38.534001 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-6246-account-create-update-vh2pj"] Jan 27 14:53:38 crc kubenswrapper[4698]: I0127 14:53:38.536040 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-6246-account-create-update-vh2pj" Jan 27 14:53:38 crc kubenswrapper[4698]: I0127 14:53:38.536775 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2nmbn\" (UniqueName: \"kubernetes.io/projected/43111939-6107-4401-b6d6-94265dc21574-kube-api-access-2nmbn\") pod \"nova-cell0-bd6a-account-create-update-k79xt\" (UID: \"43111939-6107-4401-b6d6-94265dc21574\") " pod="openstack/nova-cell0-bd6a-account-create-update-k79xt" Jan 27 14:53:38 crc kubenswrapper[4698]: I0127 14:53:38.536897 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/43111939-6107-4401-b6d6-94265dc21574-operator-scripts\") pod \"nova-cell0-bd6a-account-create-update-k79xt\" (UID: \"43111939-6107-4401-b6d6-94265dc21574\") " pod="openstack/nova-cell0-bd6a-account-create-update-k79xt" Jan 27 14:53:38 crc kubenswrapper[4698]: I0127 14:53:38.538065 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/43111939-6107-4401-b6d6-94265dc21574-operator-scripts\") pod \"nova-cell0-bd6a-account-create-update-k79xt\" (UID: \"43111939-6107-4401-b6d6-94265dc21574\") " pod="openstack/nova-cell0-bd6a-account-create-update-k79xt" Jan 27 14:53:38 crc kubenswrapper[4698]: I0127 14:53:38.544821 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-db-secret" Jan 27 14:53:38 crc kubenswrapper[4698]: I0127 14:53:38.547358 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-8b6c-account-create-update-g9bb7" Jan 27 14:53:38 crc kubenswrapper[4698]: I0127 14:53:38.566504 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2nmbn\" (UniqueName: \"kubernetes.io/projected/43111939-6107-4401-b6d6-94265dc21574-kube-api-access-2nmbn\") pod \"nova-cell0-bd6a-account-create-update-k79xt\" (UID: \"43111939-6107-4401-b6d6-94265dc21574\") " pod="openstack/nova-cell0-bd6a-account-create-update-k79xt" Jan 27 14:53:38 crc kubenswrapper[4698]: I0127 14:53:38.583519 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-dm2q4" Jan 27 14:53:38 crc kubenswrapper[4698]: I0127 14:53:38.592412 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-6246-account-create-update-vh2pj"] Jan 27 14:53:38 crc kubenswrapper[4698]: I0127 14:53:38.640752 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mrwh4\" (UniqueName: \"kubernetes.io/projected/009fd100-fc78-40e8-8e85-2c2b14b22e9e-kube-api-access-mrwh4\") pod \"nova-cell1-6246-account-create-update-vh2pj\" (UID: \"009fd100-fc78-40e8-8e85-2c2b14b22e9e\") " pod="openstack/nova-cell1-6246-account-create-update-vh2pj" Jan 27 14:53:38 crc kubenswrapper[4698]: I0127 14:53:38.640806 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/009fd100-fc78-40e8-8e85-2c2b14b22e9e-operator-scripts\") pod \"nova-cell1-6246-account-create-update-vh2pj\" (UID: \"009fd100-fc78-40e8-8e85-2c2b14b22e9e\") " pod="openstack/nova-cell1-6246-account-create-update-vh2pj" Jan 27 14:53:38 crc kubenswrapper[4698]: I0127 14:53:38.655008 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-bd6a-account-create-update-k79xt" Jan 27 14:53:38 crc kubenswrapper[4698]: I0127 14:53:38.743365 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mrwh4\" (UniqueName: \"kubernetes.io/projected/009fd100-fc78-40e8-8e85-2c2b14b22e9e-kube-api-access-mrwh4\") pod \"nova-cell1-6246-account-create-update-vh2pj\" (UID: \"009fd100-fc78-40e8-8e85-2c2b14b22e9e\") " pod="openstack/nova-cell1-6246-account-create-update-vh2pj" Jan 27 14:53:38 crc kubenswrapper[4698]: I0127 14:53:38.743420 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/009fd100-fc78-40e8-8e85-2c2b14b22e9e-operator-scripts\") pod \"nova-cell1-6246-account-create-update-vh2pj\" (UID: \"009fd100-fc78-40e8-8e85-2c2b14b22e9e\") " pod="openstack/nova-cell1-6246-account-create-update-vh2pj" Jan 27 14:53:38 crc kubenswrapper[4698]: I0127 14:53:38.744368 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/009fd100-fc78-40e8-8e85-2c2b14b22e9e-operator-scripts\") pod \"nova-cell1-6246-account-create-update-vh2pj\" (UID: \"009fd100-fc78-40e8-8e85-2c2b14b22e9e\") " pod="openstack/nova-cell1-6246-account-create-update-vh2pj" Jan 27 14:53:38 crc kubenswrapper[4698]: I0127 14:53:38.773962 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mrwh4\" (UniqueName: \"kubernetes.io/projected/009fd100-fc78-40e8-8e85-2c2b14b22e9e-kube-api-access-mrwh4\") pod \"nova-cell1-6246-account-create-update-vh2pj\" (UID: \"009fd100-fc78-40e8-8e85-2c2b14b22e9e\") " pod="openstack/nova-cell1-6246-account-create-update-vh2pj" Jan 27 14:53:38 crc kubenswrapper[4698]: I0127 14:53:38.884565 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-6246-account-create-update-vh2pj" Jan 27 14:53:39 crc kubenswrapper[4698]: I0127 14:53:39.018231 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-hsmhb"] Jan 27 14:53:39 crc kubenswrapper[4698]: W0127 14:53:39.071578 4698 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod764e47ae_dc1d_47fd_a528_c2c4d6b672b6.slice/crio-2cc935cedf3eb4625580c2443f21d241e7c50213bb785f797b4c921f9ad68f43 WatchSource:0}: Error finding container 2cc935cedf3eb4625580c2443f21d241e7c50213bb785f797b4c921f9ad68f43: Status 404 returned error can't find the container with id 2cc935cedf3eb4625580c2443f21d241e7c50213bb785f797b4c921f9ad68f43 Jan 27 14:53:39 crc kubenswrapper[4698]: I0127 14:53:39.272423 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-8b6c-account-create-update-g9bb7"] Jan 27 14:53:39 crc kubenswrapper[4698]: I0127 14:53:39.350819 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-8b6c-account-create-update-g9bb7" event={"ID":"009c9cd0-9c21-4d68-b1c0-8041ec2fc475","Type":"ContainerStarted","Data":"e85db3f61de0817032dd2c24a61b066d8456d34dbe65910b6ca09d10ef009a38"} Jan 27 14:53:39 crc kubenswrapper[4698]: I0127 14:53:39.361017 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-hsmhb" event={"ID":"764e47ae-dc1d-47fd-a528-c2c4d6b672b6","Type":"ContainerStarted","Data":"2cc935cedf3eb4625580c2443f21d241e7c50213bb785f797b4c921f9ad68f43"} Jan 27 14:53:39 crc kubenswrapper[4698]: I0127 14:53:39.365714 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-bd6a-account-create-update-k79xt"] Jan 27 14:53:39 crc kubenswrapper[4698]: W0127 14:53:39.372864 4698 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf0f32c91_43ba_4123_bdd2_ee188ea6b9b1.slice/crio-daea7b8f8c802f9361ca378aa2405a8992be858bf330f897f0836f123f48a2e5 WatchSource:0}: Error finding container daea7b8f8c802f9361ca378aa2405a8992be858bf330f897f0836f123f48a2e5: Status 404 returned error can't find the container with id daea7b8f8c802f9361ca378aa2405a8992be858bf330f897f0836f123f48a2e5 Jan 27 14:53:39 crc kubenswrapper[4698]: I0127 14:53:39.380017 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-w54fd"] Jan 27 14:53:39 crc kubenswrapper[4698]: I0127 14:53:39.534686 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-dm2q4"] Jan 27 14:53:39 crc kubenswrapper[4698]: I0127 14:53:39.676780 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-6246-account-create-update-vh2pj"] Jan 27 14:53:40 crc kubenswrapper[4698]: I0127 14:53:40.372194 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-dm2q4" event={"ID":"62ee2c69-0404-4a33-9a9e-9198c5f6bfa2","Type":"ContainerStarted","Data":"a9ae0d3c76bcfaf8208e378d07391f60604654a9ac8d22ca1c1f582c25730434"} Jan 27 14:53:40 crc kubenswrapper[4698]: I0127 14:53:40.372554 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-dm2q4" event={"ID":"62ee2c69-0404-4a33-9a9e-9198c5f6bfa2","Type":"ContainerStarted","Data":"90441d04fed6c7ed999576bc82947510ba3fd4a14402953e002724913d38b8ef"} Jan 27 14:53:40 crc kubenswrapper[4698]: I0127 14:53:40.374705 4698 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openstack/ceilometer-0" event={"ID":"4507e893-5bc9-43a4-8cb6-d31622f201e7","Type":"ContainerStarted","Data":"803bcc8902061b8be6f373cfd4a23c7f25654dd00f4f7b97442402ab10910d08"} Jan 27 14:53:40 crc kubenswrapper[4698]: I0127 14:53:40.377595 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-bd6a-account-create-update-k79xt" event={"ID":"43111939-6107-4401-b6d6-94265dc21574","Type":"ContainerStarted","Data":"cb2a6af22964f39be2be6e60353451c3e17d6edbcf4dbcafd8fac08e4a9011f2"} Jan 27 14:53:40 crc kubenswrapper[4698]: I0127 14:53:40.377654 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-bd6a-account-create-update-k79xt" event={"ID":"43111939-6107-4401-b6d6-94265dc21574","Type":"ContainerStarted","Data":"97fcb677c8f6b752172c381aecea76b214d4d698c6d1c69aff26779310e71e23"} Jan 27 14:53:40 crc kubenswrapper[4698]: I0127 14:53:40.379242 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-w54fd" event={"ID":"f0f32c91-43ba-4123-bdd2-ee188ea6b9b1","Type":"ContainerStarted","Data":"b6b8e0715c07a41963e565990299e55a8c9d2831a103fb08278ca746627a2b52"} Jan 27 14:53:40 crc kubenswrapper[4698]: I0127 14:53:40.379277 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-w54fd" event={"ID":"f0f32c91-43ba-4123-bdd2-ee188ea6b9b1","Type":"ContainerStarted","Data":"daea7b8f8c802f9361ca378aa2405a8992be858bf330f897f0836f123f48a2e5"} Jan 27 14:53:40 crc kubenswrapper[4698]: I0127 14:53:40.381186 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-8b6c-account-create-update-g9bb7" event={"ID":"009c9cd0-9c21-4d68-b1c0-8041ec2fc475","Type":"ContainerStarted","Data":"e4b8822cb5a44281ad0cbb064c96553a13c8d0c5b595f8a4f5fdefd6a848ec65"} Jan 27 14:53:40 crc kubenswrapper[4698]: I0127 14:53:40.383751 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-6246-account-create-update-vh2pj" event={"ID":"009fd100-fc78-40e8-8e85-2c2b14b22e9e","Type":"ContainerStarted","Data":"84188c815bbd762778270062801dd0b0bf4c8fd44ef1c5c487edefa5e4342e0f"} Jan 27 14:53:40 crc kubenswrapper[4698]: I0127 14:53:40.388973 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-hsmhb" event={"ID":"764e47ae-dc1d-47fd-a528-c2c4d6b672b6","Type":"ContainerStarted","Data":"85d889f5b0f3054591c2b7e9106ff936667404824318bebf2f46bee57210ea49"} Jan 27 14:53:40 crc kubenswrapper[4698]: I0127 14:53:40.406823 4698 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/watcher-decision-engine-0" Jan 27 14:53:40 crc kubenswrapper[4698]: I0127 14:53:40.406915 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-decision-engine-0" Jan 27 14:53:40 crc kubenswrapper[4698]: I0127 14:53:40.407568 4698 scope.go:117] "RemoveContainer" containerID="e1553d2d02cb6a0668c9ce9cccabf0224f2fe83aa565ec195633e99db3313307" Jan 27 14:53:40 crc kubenswrapper[4698]: E0127 14:53:40.407864 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-decision-engine\" with CrashLoopBackOff: \"back-off 20s restarting failed container=watcher-decision-engine pod=watcher-decision-engine-0_openstack(4129eb47-beba-4bec-8cb2-59818e8908a5)\"" pod="openstack/watcher-decision-engine-0" podUID="4129eb47-beba-4bec-8cb2-59818e8908a5" Jan 27 14:53:40 crc kubenswrapper[4698]: I0127 14:53:40.415193 4698 pod_startup_latency_tracker.go:104] "Observed pod 
startup duration" pod="openstack/nova-api-db-create-hsmhb" podStartSLOduration=3.415167934 podStartE2EDuration="3.415167934s" podCreationTimestamp="2026-01-27 14:53:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:53:40.411071956 +0000 UTC m=+1476.087849441" watchObservedRunningTime="2026-01-27 14:53:40.415167934 +0000 UTC m=+1476.091945399" Jan 27 14:53:40 crc kubenswrapper[4698]: I0127 14:53:40.416513 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-bd6a-account-create-update-k79xt" podStartSLOduration=2.416508279 podStartE2EDuration="2.416508279s" podCreationTimestamp="2026-01-27 14:53:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:53:40.3964671 +0000 UTC m=+1476.073244585" watchObservedRunningTime="2026-01-27 14:53:40.416508279 +0000 UTC m=+1476.093285744" Jan 27 14:53:40 crc kubenswrapper[4698]: I0127 14:53:40.444528 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-8b6c-account-create-update-g9bb7" podStartSLOduration=2.444502956 podStartE2EDuration="2.444502956s" podCreationTimestamp="2026-01-27 14:53:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:53:40.435525119 +0000 UTC m=+1476.112302584" watchObservedRunningTime="2026-01-27 14:53:40.444502956 +0000 UTC m=+1476.121280421" Jan 27 14:53:41 crc kubenswrapper[4698]: I0127 14:53:41.401503 4698 generic.go:334] "Generic (PLEG): container finished" podID="62ee2c69-0404-4a33-9a9e-9198c5f6bfa2" containerID="a9ae0d3c76bcfaf8208e378d07391f60604654a9ac8d22ca1c1f582c25730434" exitCode=0 Jan 27 14:53:41 crc kubenswrapper[4698]: I0127 14:53:41.401680 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-dm2q4" event={"ID":"62ee2c69-0404-4a33-9a9e-9198c5f6bfa2","Type":"ContainerDied","Data":"a9ae0d3c76bcfaf8208e378d07391f60604654a9ac8d22ca1c1f582c25730434"} Jan 27 14:53:41 crc kubenswrapper[4698]: I0127 14:53:41.407171 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-6246-account-create-update-vh2pj" event={"ID":"009fd100-fc78-40e8-8e85-2c2b14b22e9e","Type":"ContainerStarted","Data":"6046701ae2a416bf70ff30db0bc5f23fd015eaab03590656ae2cbeb34ded580a"} Jan 27 14:53:41 crc kubenswrapper[4698]: I0127 14:53:41.408242 4698 scope.go:117] "RemoveContainer" containerID="e1553d2d02cb6a0668c9ce9cccabf0224f2fe83aa565ec195633e99db3313307" Jan 27 14:53:41 crc kubenswrapper[4698]: E0127 14:53:41.408658 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-decision-engine\" with CrashLoopBackOff: \"back-off 20s restarting failed container=watcher-decision-engine pod=watcher-decision-engine-0_openstack(4129eb47-beba-4bec-8cb2-59818e8908a5)\"" pod="openstack/watcher-decision-engine-0" podUID="4129eb47-beba-4bec-8cb2-59818e8908a5" Jan 27 14:53:41 crc kubenswrapper[4698]: I0127 14:53:41.463912 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-6246-account-create-update-vh2pj" podStartSLOduration=3.463624584 podStartE2EDuration="3.463624584s" podCreationTimestamp="2026-01-27 14:53:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2026-01-27 14:53:41.450936109 +0000 UTC m=+1477.127713575" watchObservedRunningTime="2026-01-27 14:53:41.463624584 +0000 UTC m=+1477.140402059" Jan 27 14:53:41 crc kubenswrapper[4698]: I0127 14:53:41.486914 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-db-create-w54fd" podStartSLOduration=3.486891297 podStartE2EDuration="3.486891297s" podCreationTimestamp="2026-01-27 14:53:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:53:41.468492972 +0000 UTC m=+1477.145270437" watchObservedRunningTime="2026-01-27 14:53:41.486891297 +0000 UTC m=+1477.163668762" Jan 27 14:53:42 crc kubenswrapper[4698]: I0127 14:53:42.419426 4698 generic.go:334] "Generic (PLEG): container finished" podID="009fd100-fc78-40e8-8e85-2c2b14b22e9e" containerID="6046701ae2a416bf70ff30db0bc5f23fd015eaab03590656ae2cbeb34ded580a" exitCode=0 Jan 27 14:53:42 crc kubenswrapper[4698]: I0127 14:53:42.419635 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-6246-account-create-update-vh2pj" event={"ID":"009fd100-fc78-40e8-8e85-2c2b14b22e9e","Type":"ContainerDied","Data":"6046701ae2a416bf70ff30db0bc5f23fd015eaab03590656ae2cbeb34ded580a"} Jan 27 14:53:42 crc kubenswrapper[4698]: I0127 14:53:42.421456 4698 generic.go:334] "Generic (PLEG): container finished" podID="764e47ae-dc1d-47fd-a528-c2c4d6b672b6" containerID="85d889f5b0f3054591c2b7e9106ff936667404824318bebf2f46bee57210ea49" exitCode=0 Jan 27 14:53:42 crc kubenswrapper[4698]: I0127 14:53:42.421533 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-hsmhb" event={"ID":"764e47ae-dc1d-47fd-a528-c2c4d6b672b6","Type":"ContainerDied","Data":"85d889f5b0f3054591c2b7e9106ff936667404824318bebf2f46bee57210ea49"} Jan 27 14:53:42 crc kubenswrapper[4698]: I0127 14:53:42.427173 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4507e893-5bc9-43a4-8cb6-d31622f201e7","Type":"ContainerStarted","Data":"ddb68fd0f13de6ebf6fdd3b98e868a38ef8b8d7567cff49fe19ed55864980e82"} Jan 27 14:53:42 crc kubenswrapper[4698]: I0127 14:53:42.427396 4698 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="4507e893-5bc9-43a4-8cb6-d31622f201e7" containerName="ceilometer-central-agent" containerID="cri-o://fa9b2b96b9c86a0ff4340994d597efd65ba978ea17a7e516639a2cc872a3d7c7" gracePeriod=30 Jan 27 14:53:42 crc kubenswrapper[4698]: I0127 14:53:42.427510 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 27 14:53:42 crc kubenswrapper[4698]: I0127 14:53:42.427521 4698 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="4507e893-5bc9-43a4-8cb6-d31622f201e7" containerName="sg-core" containerID="cri-o://803bcc8902061b8be6f373cfd4a23c7f25654dd00f4f7b97442402ab10910d08" gracePeriod=30 Jan 27 14:53:42 crc kubenswrapper[4698]: I0127 14:53:42.427577 4698 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="4507e893-5bc9-43a4-8cb6-d31622f201e7" containerName="proxy-httpd" containerID="cri-o://ddb68fd0f13de6ebf6fdd3b98e868a38ef8b8d7567cff49fe19ed55864980e82" gracePeriod=30 Jan 27 14:53:42 crc kubenswrapper[4698]: I0127 14:53:42.427622 4698 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" 
podUID="4507e893-5bc9-43a4-8cb6-d31622f201e7" containerName="ceilometer-notification-agent" containerID="cri-o://8c1a7ca0e247b522e5cdb3cfdebacae0b0e0525cc8bf2e3b0ce1e8821d1655cd" gracePeriod=30 Jan 27 14:53:42 crc kubenswrapper[4698]: I0127 14:53:42.433859 4698 generic.go:334] "Generic (PLEG): container finished" podID="f0f32c91-43ba-4123-bdd2-ee188ea6b9b1" containerID="b6b8e0715c07a41963e565990299e55a8c9d2831a103fb08278ca746627a2b52" exitCode=0 Jan 27 14:53:42 crc kubenswrapper[4698]: I0127 14:53:42.433950 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-w54fd" event={"ID":"f0f32c91-43ba-4123-bdd2-ee188ea6b9b1","Type":"ContainerDied","Data":"b6b8e0715c07a41963e565990299e55a8c9d2831a103fb08278ca746627a2b52"} Jan 27 14:53:42 crc kubenswrapper[4698]: I0127 14:53:42.517820 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=1.761476458 podStartE2EDuration="8.517800365s" podCreationTimestamp="2026-01-27 14:53:34 +0000 UTC" firstStartedPulling="2026-01-27 14:53:35.164085587 +0000 UTC m=+1470.840863052" lastFinishedPulling="2026-01-27 14:53:41.920409494 +0000 UTC m=+1477.597186959" observedRunningTime="2026-01-27 14:53:42.511624623 +0000 UTC m=+1478.188402088" watchObservedRunningTime="2026-01-27 14:53:42.517800365 +0000 UTC m=+1478.194577840" Jan 27 14:53:42 crc kubenswrapper[4698]: I0127 14:53:42.761301 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Jan 27 14:53:42 crc kubenswrapper[4698]: I0127 14:53:42.880132 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-dm2q4" Jan 27 14:53:42 crc kubenswrapper[4698]: I0127 14:53:42.938666 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nnvjw\" (UniqueName: \"kubernetes.io/projected/62ee2c69-0404-4a33-9a9e-9198c5f6bfa2-kube-api-access-nnvjw\") pod \"62ee2c69-0404-4a33-9a9e-9198c5f6bfa2\" (UID: \"62ee2c69-0404-4a33-9a9e-9198c5f6bfa2\") " Jan 27 14:53:42 crc kubenswrapper[4698]: I0127 14:53:42.938799 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/62ee2c69-0404-4a33-9a9e-9198c5f6bfa2-operator-scripts\") pod \"62ee2c69-0404-4a33-9a9e-9198c5f6bfa2\" (UID: \"62ee2c69-0404-4a33-9a9e-9198c5f6bfa2\") " Jan 27 14:53:42 crc kubenswrapper[4698]: I0127 14:53:42.939886 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/62ee2c69-0404-4a33-9a9e-9198c5f6bfa2-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "62ee2c69-0404-4a33-9a9e-9198c5f6bfa2" (UID: "62ee2c69-0404-4a33-9a9e-9198c5f6bfa2"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:53:42 crc kubenswrapper[4698]: I0127 14:53:42.940469 4698 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/62ee2c69-0404-4a33-9a9e-9198c5f6bfa2-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 14:53:42 crc kubenswrapper[4698]: I0127 14:53:42.947426 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/62ee2c69-0404-4a33-9a9e-9198c5f6bfa2-kube-api-access-nnvjw" (OuterVolumeSpecName: "kube-api-access-nnvjw") pod "62ee2c69-0404-4a33-9a9e-9198c5f6bfa2" (UID: "62ee2c69-0404-4a33-9a9e-9198c5f6bfa2"). 
InnerVolumeSpecName "kube-api-access-nnvjw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:53:43 crc kubenswrapper[4698]: I0127 14:53:43.043318 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nnvjw\" (UniqueName: \"kubernetes.io/projected/62ee2c69-0404-4a33-9a9e-9198c5f6bfa2-kube-api-access-nnvjw\") on node \"crc\" DevicePath \"\"" Jan 27 14:53:43 crc kubenswrapper[4698]: I0127 14:53:43.445337 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-dm2q4" event={"ID":"62ee2c69-0404-4a33-9a9e-9198c5f6bfa2","Type":"ContainerDied","Data":"90441d04fed6c7ed999576bc82947510ba3fd4a14402953e002724913d38b8ef"} Jan 27 14:53:43 crc kubenswrapper[4698]: I0127 14:53:43.446390 4698 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="90441d04fed6c7ed999576bc82947510ba3fd4a14402953e002724913d38b8ef" Jan 27 14:53:43 crc kubenswrapper[4698]: I0127 14:53:43.445379 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-dm2q4" Jan 27 14:53:43 crc kubenswrapper[4698]: I0127 14:53:43.451330 4698 generic.go:334] "Generic (PLEG): container finished" podID="4507e893-5bc9-43a4-8cb6-d31622f201e7" containerID="803bcc8902061b8be6f373cfd4a23c7f25654dd00f4f7b97442402ab10910d08" exitCode=2 Jan 27 14:53:43 crc kubenswrapper[4698]: I0127 14:53:43.451373 4698 generic.go:334] "Generic (PLEG): container finished" podID="4507e893-5bc9-43a4-8cb6-d31622f201e7" containerID="8c1a7ca0e247b522e5cdb3cfdebacae0b0e0525cc8bf2e3b0ce1e8821d1655cd" exitCode=0 Jan 27 14:53:43 crc kubenswrapper[4698]: I0127 14:53:43.451425 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4507e893-5bc9-43a4-8cb6-d31622f201e7","Type":"ContainerDied","Data":"803bcc8902061b8be6f373cfd4a23c7f25654dd00f4f7b97442402ab10910d08"} Jan 27 14:53:43 crc kubenswrapper[4698]: I0127 14:53:43.451496 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4507e893-5bc9-43a4-8cb6-d31622f201e7","Type":"ContainerDied","Data":"8c1a7ca0e247b522e5cdb3cfdebacae0b0e0525cc8bf2e3b0ce1e8821d1655cd"} Jan 27 14:53:43 crc kubenswrapper[4698]: I0127 14:53:43.453513 4698 generic.go:334] "Generic (PLEG): container finished" podID="43111939-6107-4401-b6d6-94265dc21574" containerID="cb2a6af22964f39be2be6e60353451c3e17d6edbcf4dbcafd8fac08e4a9011f2" exitCode=0 Jan 27 14:53:43 crc kubenswrapper[4698]: I0127 14:53:43.453653 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-bd6a-account-create-update-k79xt" event={"ID":"43111939-6107-4401-b6d6-94265dc21574","Type":"ContainerDied","Data":"cb2a6af22964f39be2be6e60353451c3e17d6edbcf4dbcafd8fac08e4a9011f2"} Jan 27 14:53:43 crc kubenswrapper[4698]: I0127 14:53:43.457086 4698 generic.go:334] "Generic (PLEG): container finished" podID="009c9cd0-9c21-4d68-b1c0-8041ec2fc475" containerID="e4b8822cb5a44281ad0cbb064c96553a13c8d0c5b595f8a4f5fdefd6a848ec65" exitCode=0 Jan 27 14:53:43 crc kubenswrapper[4698]: I0127 14:53:43.457241 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-8b6c-account-create-update-g9bb7" event={"ID":"009c9cd0-9c21-4d68-b1c0-8041ec2fc475","Type":"ContainerDied","Data":"e4b8822cb5a44281ad0cbb064c96553a13c8d0c5b595f8a4f5fdefd6a848ec65"} Jan 27 14:53:43 crc kubenswrapper[4698]: I0127 14:53:43.848672 4698 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-hsmhb" Jan 27 14:53:43 crc kubenswrapper[4698]: I0127 14:53:43.970902 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/764e47ae-dc1d-47fd-a528-c2c4d6b672b6-operator-scripts\") pod \"764e47ae-dc1d-47fd-a528-c2c4d6b672b6\" (UID: \"764e47ae-dc1d-47fd-a528-c2c4d6b672b6\") " Jan 27 14:53:43 crc kubenswrapper[4698]: I0127 14:53:43.970983 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xmgwk\" (UniqueName: \"kubernetes.io/projected/764e47ae-dc1d-47fd-a528-c2c4d6b672b6-kube-api-access-xmgwk\") pod \"764e47ae-dc1d-47fd-a528-c2c4d6b672b6\" (UID: \"764e47ae-dc1d-47fd-a528-c2c4d6b672b6\") " Jan 27 14:53:43 crc kubenswrapper[4698]: I0127 14:53:43.972743 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/764e47ae-dc1d-47fd-a528-c2c4d6b672b6-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "764e47ae-dc1d-47fd-a528-c2c4d6b672b6" (UID: "764e47ae-dc1d-47fd-a528-c2c4d6b672b6"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:53:43 crc kubenswrapper[4698]: I0127 14:53:43.981037 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/764e47ae-dc1d-47fd-a528-c2c4d6b672b6-kube-api-access-xmgwk" (OuterVolumeSpecName: "kube-api-access-xmgwk") pod "764e47ae-dc1d-47fd-a528-c2c4d6b672b6" (UID: "764e47ae-dc1d-47fd-a528-c2c4d6b672b6"). InnerVolumeSpecName "kube-api-access-xmgwk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:53:44 crc kubenswrapper[4698]: I0127 14:53:44.073687 4698 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/764e47ae-dc1d-47fd-a528-c2c4d6b672b6-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 14:53:44 crc kubenswrapper[4698]: I0127 14:53:44.073730 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xmgwk\" (UniqueName: \"kubernetes.io/projected/764e47ae-dc1d-47fd-a528-c2c4d6b672b6-kube-api-access-xmgwk\") on node \"crc\" DevicePath \"\"" Jan 27 14:53:44 crc kubenswrapper[4698]: I0127 14:53:44.083596 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-w54fd" Jan 27 14:53:44 crc kubenswrapper[4698]: I0127 14:53:44.091321 4698 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-6246-account-create-update-vh2pj" Jan 27 14:53:44 crc kubenswrapper[4698]: I0127 14:53:44.175299 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mrwh4\" (UniqueName: \"kubernetes.io/projected/009fd100-fc78-40e8-8e85-2c2b14b22e9e-kube-api-access-mrwh4\") pod \"009fd100-fc78-40e8-8e85-2c2b14b22e9e\" (UID: \"009fd100-fc78-40e8-8e85-2c2b14b22e9e\") " Jan 27 14:53:44 crc kubenswrapper[4698]: I0127 14:53:44.175348 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/009fd100-fc78-40e8-8e85-2c2b14b22e9e-operator-scripts\") pod \"009fd100-fc78-40e8-8e85-2c2b14b22e9e\" (UID: \"009fd100-fc78-40e8-8e85-2c2b14b22e9e\") " Jan 27 14:53:44 crc kubenswrapper[4698]: I0127 14:53:44.175447 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dc94b\" (UniqueName: \"kubernetes.io/projected/f0f32c91-43ba-4123-bdd2-ee188ea6b9b1-kube-api-access-dc94b\") pod \"f0f32c91-43ba-4123-bdd2-ee188ea6b9b1\" (UID: \"f0f32c91-43ba-4123-bdd2-ee188ea6b9b1\") " Jan 27 14:53:44 crc kubenswrapper[4698]: I0127 14:53:44.175626 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f0f32c91-43ba-4123-bdd2-ee188ea6b9b1-operator-scripts\") pod \"f0f32c91-43ba-4123-bdd2-ee188ea6b9b1\" (UID: \"f0f32c91-43ba-4123-bdd2-ee188ea6b9b1\") " Jan 27 14:53:44 crc kubenswrapper[4698]: I0127 14:53:44.176348 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/009fd100-fc78-40e8-8e85-2c2b14b22e9e-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "009fd100-fc78-40e8-8e85-2c2b14b22e9e" (UID: "009fd100-fc78-40e8-8e85-2c2b14b22e9e"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:53:44 crc kubenswrapper[4698]: I0127 14:53:44.176483 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f0f32c91-43ba-4123-bdd2-ee188ea6b9b1-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "f0f32c91-43ba-4123-bdd2-ee188ea6b9b1" (UID: "f0f32c91-43ba-4123-bdd2-ee188ea6b9b1"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:53:44 crc kubenswrapper[4698]: I0127 14:53:44.176770 4698 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f0f32c91-43ba-4123-bdd2-ee188ea6b9b1-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 14:53:44 crc kubenswrapper[4698]: I0127 14:53:44.176858 4698 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/009fd100-fc78-40e8-8e85-2c2b14b22e9e-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 14:53:44 crc kubenswrapper[4698]: I0127 14:53:44.179776 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f0f32c91-43ba-4123-bdd2-ee188ea6b9b1-kube-api-access-dc94b" (OuterVolumeSpecName: "kube-api-access-dc94b") pod "f0f32c91-43ba-4123-bdd2-ee188ea6b9b1" (UID: "f0f32c91-43ba-4123-bdd2-ee188ea6b9b1"). InnerVolumeSpecName "kube-api-access-dc94b". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:53:44 crc kubenswrapper[4698]: I0127 14:53:44.191901 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/009fd100-fc78-40e8-8e85-2c2b14b22e9e-kube-api-access-mrwh4" (OuterVolumeSpecName: "kube-api-access-mrwh4") pod "009fd100-fc78-40e8-8e85-2c2b14b22e9e" (UID: "009fd100-fc78-40e8-8e85-2c2b14b22e9e"). InnerVolumeSpecName "kube-api-access-mrwh4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:53:44 crc kubenswrapper[4698]: I0127 14:53:44.280025 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mrwh4\" (UniqueName: \"kubernetes.io/projected/009fd100-fc78-40e8-8e85-2c2b14b22e9e-kube-api-access-mrwh4\") on node \"crc\" DevicePath \"\"" Jan 27 14:53:44 crc kubenswrapper[4698]: I0127 14:53:44.280071 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dc94b\" (UniqueName: \"kubernetes.io/projected/f0f32c91-43ba-4123-bdd2-ee188ea6b9b1-kube-api-access-dc94b\") on node \"crc\" DevicePath \"\"" Jan 27 14:53:44 crc kubenswrapper[4698]: I0127 14:53:44.475246 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-hsmhb" event={"ID":"764e47ae-dc1d-47fd-a528-c2c4d6b672b6","Type":"ContainerDied","Data":"2cc935cedf3eb4625580c2443f21d241e7c50213bb785f797b4c921f9ad68f43"} Jan 27 14:53:44 crc kubenswrapper[4698]: I0127 14:53:44.475296 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-hsmhb" Jan 27 14:53:44 crc kubenswrapper[4698]: I0127 14:53:44.475329 4698 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2cc935cedf3eb4625580c2443f21d241e7c50213bb785f797b4c921f9ad68f43" Jan 27 14:53:44 crc kubenswrapper[4698]: I0127 14:53:44.477603 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-w54fd" event={"ID":"f0f32c91-43ba-4123-bdd2-ee188ea6b9b1","Type":"ContainerDied","Data":"daea7b8f8c802f9361ca378aa2405a8992be858bf330f897f0836f123f48a2e5"} Jan 27 14:53:44 crc kubenswrapper[4698]: I0127 14:53:44.477661 4698 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="daea7b8f8c802f9361ca378aa2405a8992be858bf330f897f0836f123f48a2e5" Jan 27 14:53:44 crc kubenswrapper[4698]: I0127 14:53:44.477742 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-w54fd" Jan 27 14:53:44 crc kubenswrapper[4698]: I0127 14:53:44.486656 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-6246-account-create-update-vh2pj" Jan 27 14:53:44 crc kubenswrapper[4698]: I0127 14:53:44.486674 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-6246-account-create-update-vh2pj" event={"ID":"009fd100-fc78-40e8-8e85-2c2b14b22e9e","Type":"ContainerDied","Data":"84188c815bbd762778270062801dd0b0bf4c8fd44ef1c5c487edefa5e4342e0f"} Jan 27 14:53:44 crc kubenswrapper[4698]: I0127 14:53:44.486724 4698 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="84188c815bbd762778270062801dd0b0bf4c8fd44ef1c5c487edefa5e4342e0f" Jan 27 14:53:44 crc kubenswrapper[4698]: I0127 14:53:44.850748 4698 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-8b6c-account-create-update-g9bb7" Jan 27 14:53:44 crc kubenswrapper[4698]: I0127 14:53:44.931926 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-bd6a-account-create-update-k79xt" Jan 27 14:53:45 crc kubenswrapper[4698]: I0127 14:53:45.005596 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/43111939-6107-4401-b6d6-94265dc21574-operator-scripts\") pod \"43111939-6107-4401-b6d6-94265dc21574\" (UID: \"43111939-6107-4401-b6d6-94265dc21574\") " Jan 27 14:53:45 crc kubenswrapper[4698]: I0127 14:53:45.005784 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2nmbn\" (UniqueName: \"kubernetes.io/projected/43111939-6107-4401-b6d6-94265dc21574-kube-api-access-2nmbn\") pod \"43111939-6107-4401-b6d6-94265dc21574\" (UID: \"43111939-6107-4401-b6d6-94265dc21574\") " Jan 27 14:53:45 crc kubenswrapper[4698]: I0127 14:53:45.005840 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vdxmh\" (UniqueName: \"kubernetes.io/projected/009c9cd0-9c21-4d68-b1c0-8041ec2fc475-kube-api-access-vdxmh\") pod \"009c9cd0-9c21-4d68-b1c0-8041ec2fc475\" (UID: \"009c9cd0-9c21-4d68-b1c0-8041ec2fc475\") " Jan 27 14:53:45 crc kubenswrapper[4698]: I0127 14:53:45.005881 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/009c9cd0-9c21-4d68-b1c0-8041ec2fc475-operator-scripts\") pod \"009c9cd0-9c21-4d68-b1c0-8041ec2fc475\" (UID: \"009c9cd0-9c21-4d68-b1c0-8041ec2fc475\") " Jan 27 14:53:45 crc kubenswrapper[4698]: I0127 14:53:45.006372 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43111939-6107-4401-b6d6-94265dc21574-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "43111939-6107-4401-b6d6-94265dc21574" (UID: "43111939-6107-4401-b6d6-94265dc21574"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:53:45 crc kubenswrapper[4698]: I0127 14:53:45.007109 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/009c9cd0-9c21-4d68-b1c0-8041ec2fc475-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "009c9cd0-9c21-4d68-b1c0-8041ec2fc475" (UID: "009c9cd0-9c21-4d68-b1c0-8041ec2fc475"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:53:45 crc kubenswrapper[4698]: I0127 14:53:45.011007 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43111939-6107-4401-b6d6-94265dc21574-kube-api-access-2nmbn" (OuterVolumeSpecName: "kube-api-access-2nmbn") pod "43111939-6107-4401-b6d6-94265dc21574" (UID: "43111939-6107-4401-b6d6-94265dc21574"). InnerVolumeSpecName "kube-api-access-2nmbn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:53:45 crc kubenswrapper[4698]: I0127 14:53:45.017220 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/009c9cd0-9c21-4d68-b1c0-8041ec2fc475-kube-api-access-vdxmh" (OuterVolumeSpecName: "kube-api-access-vdxmh") pod "009c9cd0-9c21-4d68-b1c0-8041ec2fc475" (UID: "009c9cd0-9c21-4d68-b1c0-8041ec2fc475"). InnerVolumeSpecName "kube-api-access-vdxmh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:53:45 crc kubenswrapper[4698]: I0127 14:53:45.110536 4698 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/43111939-6107-4401-b6d6-94265dc21574-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 14:53:45 crc kubenswrapper[4698]: I0127 14:53:45.110869 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2nmbn\" (UniqueName: \"kubernetes.io/projected/43111939-6107-4401-b6d6-94265dc21574-kube-api-access-2nmbn\") on node \"crc\" DevicePath \"\"" Jan 27 14:53:45 crc kubenswrapper[4698]: I0127 14:53:45.110885 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vdxmh\" (UniqueName: \"kubernetes.io/projected/009c9cd0-9c21-4d68-b1c0-8041ec2fc475-kube-api-access-vdxmh\") on node \"crc\" DevicePath \"\"" Jan 27 14:53:45 crc kubenswrapper[4698]: I0127 14:53:45.110916 4698 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/009c9cd0-9c21-4d68-b1c0-8041ec2fc475-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 14:53:45 crc kubenswrapper[4698]: I0127 14:53:45.497615 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-8b6c-account-create-update-g9bb7" Jan 27 14:53:45 crc kubenswrapper[4698]: I0127 14:53:45.497580 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-8b6c-account-create-update-g9bb7" event={"ID":"009c9cd0-9c21-4d68-b1c0-8041ec2fc475","Type":"ContainerDied","Data":"e85db3f61de0817032dd2c24a61b066d8456d34dbe65910b6ca09d10ef009a38"} Jan 27 14:53:45 crc kubenswrapper[4698]: I0127 14:53:45.498594 4698 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e85db3f61de0817032dd2c24a61b066d8456d34dbe65910b6ca09d10ef009a38" Jan 27 14:53:45 crc kubenswrapper[4698]: I0127 14:53:45.501309 4698 generic.go:334] "Generic (PLEG): container finished" podID="4507e893-5bc9-43a4-8cb6-d31622f201e7" containerID="fa9b2b96b9c86a0ff4340994d597efd65ba978ea17a7e516639a2cc872a3d7c7" exitCode=0 Jan 27 14:53:45 crc kubenswrapper[4698]: I0127 14:53:45.501395 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4507e893-5bc9-43a4-8cb6-d31622f201e7","Type":"ContainerDied","Data":"fa9b2b96b9c86a0ff4340994d597efd65ba978ea17a7e516639a2cc872a3d7c7"} Jan 27 14:53:45 crc kubenswrapper[4698]: I0127 14:53:45.503284 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-bd6a-account-create-update-k79xt" event={"ID":"43111939-6107-4401-b6d6-94265dc21574","Type":"ContainerDied","Data":"97fcb677c8f6b752172c381aecea76b214d4d698c6d1c69aff26779310e71e23"} Jan 27 14:53:45 crc kubenswrapper[4698]: I0127 14:53:45.503325 4698 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-bd6a-account-create-update-k79xt" Jan 27 14:53:45 crc kubenswrapper[4698]: I0127 14:53:45.503339 4698 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="97fcb677c8f6b752172c381aecea76b214d4d698c6d1c69aff26779310e71e23" Jan 27 14:53:48 crc kubenswrapper[4698]: I0127 14:53:48.762384 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-db-sync-x8m8s"] Jan 27 14:53:48 crc kubenswrapper[4698]: E0127 14:53:48.763279 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="009fd100-fc78-40e8-8e85-2c2b14b22e9e" containerName="mariadb-account-create-update" Jan 27 14:53:48 crc kubenswrapper[4698]: I0127 14:53:48.763294 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="009fd100-fc78-40e8-8e85-2c2b14b22e9e" containerName="mariadb-account-create-update" Jan 27 14:53:48 crc kubenswrapper[4698]: E0127 14:53:48.763320 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="009c9cd0-9c21-4d68-b1c0-8041ec2fc475" containerName="mariadb-account-create-update" Jan 27 14:53:48 crc kubenswrapper[4698]: I0127 14:53:48.763329 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="009c9cd0-9c21-4d68-b1c0-8041ec2fc475" containerName="mariadb-account-create-update" Jan 27 14:53:48 crc kubenswrapper[4698]: E0127 14:53:48.763353 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="764e47ae-dc1d-47fd-a528-c2c4d6b672b6" containerName="mariadb-database-create" Jan 27 14:53:48 crc kubenswrapper[4698]: I0127 14:53:48.763361 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="764e47ae-dc1d-47fd-a528-c2c4d6b672b6" containerName="mariadb-database-create" Jan 27 14:53:48 crc kubenswrapper[4698]: E0127 14:53:48.763371 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f0f32c91-43ba-4123-bdd2-ee188ea6b9b1" containerName="mariadb-database-create" Jan 27 14:53:48 crc kubenswrapper[4698]: I0127 14:53:48.763378 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="f0f32c91-43ba-4123-bdd2-ee188ea6b9b1" containerName="mariadb-database-create" Jan 27 14:53:48 crc kubenswrapper[4698]: E0127 14:53:48.763399 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="43111939-6107-4401-b6d6-94265dc21574" containerName="mariadb-account-create-update" Jan 27 14:53:48 crc kubenswrapper[4698]: I0127 14:53:48.763407 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="43111939-6107-4401-b6d6-94265dc21574" containerName="mariadb-account-create-update" Jan 27 14:53:48 crc kubenswrapper[4698]: E0127 14:53:48.763419 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="62ee2c69-0404-4a33-9a9e-9198c5f6bfa2" containerName="mariadb-database-create" Jan 27 14:53:48 crc kubenswrapper[4698]: I0127 14:53:48.763426 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="62ee2c69-0404-4a33-9a9e-9198c5f6bfa2" containerName="mariadb-database-create" Jan 27 14:53:48 crc kubenswrapper[4698]: I0127 14:53:48.763701 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="009c9cd0-9c21-4d68-b1c0-8041ec2fc475" containerName="mariadb-account-create-update" Jan 27 14:53:48 crc kubenswrapper[4698]: I0127 14:53:48.763719 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="62ee2c69-0404-4a33-9a9e-9198c5f6bfa2" containerName="mariadb-database-create" Jan 27 14:53:48 crc kubenswrapper[4698]: I0127 14:53:48.763737 4698 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="43111939-6107-4401-b6d6-94265dc21574" containerName="mariadb-account-create-update" Jan 27 14:53:48 crc kubenswrapper[4698]: I0127 14:53:48.763748 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="764e47ae-dc1d-47fd-a528-c2c4d6b672b6" containerName="mariadb-database-create" Jan 27 14:53:48 crc kubenswrapper[4698]: I0127 14:53:48.763760 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="f0f32c91-43ba-4123-bdd2-ee188ea6b9b1" containerName="mariadb-database-create" Jan 27 14:53:48 crc kubenswrapper[4698]: I0127 14:53:48.763772 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="009fd100-fc78-40e8-8e85-2c2b14b22e9e" containerName="mariadb-account-create-update" Jan 27 14:53:48 crc kubenswrapper[4698]: I0127 14:53:48.764507 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-x8m8s" Jan 27 14:53:48 crc kubenswrapper[4698]: I0127 14:53:48.767423 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-scripts" Jan 27 14:53:48 crc kubenswrapper[4698]: I0127 14:53:48.767769 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Jan 27 14:53:48 crc kubenswrapper[4698]: I0127 14:53:48.768559 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-8hxxg" Jan 27 14:53:48 crc kubenswrapper[4698]: I0127 14:53:48.774153 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-x8m8s"] Jan 27 14:53:48 crc kubenswrapper[4698]: I0127 14:53:48.908969 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9a268a3b-da75-4a08-a9a3-b097f2066a27-scripts\") pod \"nova-cell0-conductor-db-sync-x8m8s\" (UID: \"9a268a3b-da75-4a08-a9a3-b097f2066a27\") " pod="openstack/nova-cell0-conductor-db-sync-x8m8s" Jan 27 14:53:48 crc kubenswrapper[4698]: I0127 14:53:48.909032 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wpffl\" (UniqueName: \"kubernetes.io/projected/9a268a3b-da75-4a08-a9a3-b097f2066a27-kube-api-access-wpffl\") pod \"nova-cell0-conductor-db-sync-x8m8s\" (UID: \"9a268a3b-da75-4a08-a9a3-b097f2066a27\") " pod="openstack/nova-cell0-conductor-db-sync-x8m8s" Jan 27 14:53:48 crc kubenswrapper[4698]: I0127 14:53:48.909118 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9a268a3b-da75-4a08-a9a3-b097f2066a27-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-x8m8s\" (UID: \"9a268a3b-da75-4a08-a9a3-b097f2066a27\") " pod="openstack/nova-cell0-conductor-db-sync-x8m8s" Jan 27 14:53:48 crc kubenswrapper[4698]: I0127 14:53:48.909136 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9a268a3b-da75-4a08-a9a3-b097f2066a27-config-data\") pod \"nova-cell0-conductor-db-sync-x8m8s\" (UID: \"9a268a3b-da75-4a08-a9a3-b097f2066a27\") " pod="openstack/nova-cell0-conductor-db-sync-x8m8s" Jan 27 14:53:49 crc kubenswrapper[4698]: I0127 14:53:49.011551 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9a268a3b-da75-4a08-a9a3-b097f2066a27-scripts\") pod 
\"nova-cell0-conductor-db-sync-x8m8s\" (UID: \"9a268a3b-da75-4a08-a9a3-b097f2066a27\") " pod="openstack/nova-cell0-conductor-db-sync-x8m8s" Jan 27 14:53:49 crc kubenswrapper[4698]: I0127 14:53:49.011942 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wpffl\" (UniqueName: \"kubernetes.io/projected/9a268a3b-da75-4a08-a9a3-b097f2066a27-kube-api-access-wpffl\") pod \"nova-cell0-conductor-db-sync-x8m8s\" (UID: \"9a268a3b-da75-4a08-a9a3-b097f2066a27\") " pod="openstack/nova-cell0-conductor-db-sync-x8m8s" Jan 27 14:53:49 crc kubenswrapper[4698]: I0127 14:53:49.012290 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9a268a3b-da75-4a08-a9a3-b097f2066a27-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-x8m8s\" (UID: \"9a268a3b-da75-4a08-a9a3-b097f2066a27\") " pod="openstack/nova-cell0-conductor-db-sync-x8m8s" Jan 27 14:53:49 crc kubenswrapper[4698]: I0127 14:53:49.012448 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9a268a3b-da75-4a08-a9a3-b097f2066a27-config-data\") pod \"nova-cell0-conductor-db-sync-x8m8s\" (UID: \"9a268a3b-da75-4a08-a9a3-b097f2066a27\") " pod="openstack/nova-cell0-conductor-db-sync-x8m8s" Jan 27 14:53:49 crc kubenswrapper[4698]: I0127 14:53:49.019421 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9a268a3b-da75-4a08-a9a3-b097f2066a27-scripts\") pod \"nova-cell0-conductor-db-sync-x8m8s\" (UID: \"9a268a3b-da75-4a08-a9a3-b097f2066a27\") " pod="openstack/nova-cell0-conductor-db-sync-x8m8s" Jan 27 14:53:49 crc kubenswrapper[4698]: I0127 14:53:49.019518 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9a268a3b-da75-4a08-a9a3-b097f2066a27-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-x8m8s\" (UID: \"9a268a3b-da75-4a08-a9a3-b097f2066a27\") " pod="openstack/nova-cell0-conductor-db-sync-x8m8s" Jan 27 14:53:49 crc kubenswrapper[4698]: I0127 14:53:49.023359 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9a268a3b-da75-4a08-a9a3-b097f2066a27-config-data\") pod \"nova-cell0-conductor-db-sync-x8m8s\" (UID: \"9a268a3b-da75-4a08-a9a3-b097f2066a27\") " pod="openstack/nova-cell0-conductor-db-sync-x8m8s" Jan 27 14:53:49 crc kubenswrapper[4698]: I0127 14:53:49.049731 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wpffl\" (UniqueName: \"kubernetes.io/projected/9a268a3b-da75-4a08-a9a3-b097f2066a27-kube-api-access-wpffl\") pod \"nova-cell0-conductor-db-sync-x8m8s\" (UID: \"9a268a3b-da75-4a08-a9a3-b097f2066a27\") " pod="openstack/nova-cell0-conductor-db-sync-x8m8s" Jan 27 14:53:49 crc kubenswrapper[4698]: I0127 14:53:49.089785 4698 util.go:30] "No sandbox for pod can be found. 
Jan 27 14:53:49 crc kubenswrapper[4698]: I0127 14:53:49.548157 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-x8m8s"]
Jan 27 14:53:50 crc kubenswrapper[4698]: I0127 14:53:50.552779 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-x8m8s" event={"ID":"9a268a3b-da75-4a08-a9a3-b097f2066a27","Type":"ContainerStarted","Data":"506084962b08a4bae14c524f6ed39129e2d9a7134a5a6e1a6dba43a63eced799"}
Jan 27 14:53:53 crc kubenswrapper[4698]: I0127 14:53:53.073116 4698 pod_container_manager_linux.go:210] "Failed to delete cgroup paths" cgroupName=["kubepods","besteffort","pod75bcd64d-b81b-456e-b9e6-1f26a52942d9"] err="unable to destroy cgroup paths for cgroup [kubepods besteffort pod75bcd64d-b81b-456e-b9e6-1f26a52942d9] : Timed out while waiting for systemd to remove kubepods-besteffort-pod75bcd64d_b81b_456e_b9e6_1f26a52942d9.slice"
Jan 27 14:53:53 crc kubenswrapper[4698]: E0127 14:53:53.073448 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to delete cgroup paths for [kubepods besteffort pod75bcd64d-b81b-456e-b9e6-1f26a52942d9] : unable to destroy cgroup paths for cgroup [kubepods besteffort pod75bcd64d-b81b-456e-b9e6-1f26a52942d9] : Timed out while waiting for systemd to remove kubepods-besteffort-pod75bcd64d_b81b_456e_b9e6_1f26a52942d9.slice" pod="openstack/cinder-api-0" podUID="75bcd64d-b81b-456e-b9e6-1f26a52942d9"
Jan 27 14:53:53 crc kubenswrapper[4698]: I0127 14:53:53.591806 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0"
Jan 27 14:53:53 crc kubenswrapper[4698]: I0127 14:53:53.620790 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"]
Jan 27 14:53:53 crc kubenswrapper[4698]: I0127 14:53:53.645805 4698 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-api-0"]
Jan 27 14:53:53 crc kubenswrapper[4698]: I0127 14:53:53.668565 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"]
Jan 27 14:53:53 crc kubenswrapper[4698]: I0127 14:53:53.670719 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0"
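
The "Failed to delete cgroup paths" entry above is the kubelet timing out while waiting for systemd to remove the old cinder-api-0 pod's slice unit. The slice name is derived mechanically from the pod UID, which makes it easy to check by hand whether the unit actually lingered. A small sketch of that naming rule (the guaranteed-QoS form is an assumption from the upstream naming scheme; only the besteffort form appears in this log):

    # Reconstruct the systemd slice name the kubelet waits on, as seen in the
    # error above: dashes in the pod UID become underscores, wrapped in a
    # QoS-class prefix.
    def pod_slice_name(pod_uid: str, qos_class: str = "besteffort") -> str:
        uid = pod_uid.replace("-", "_")
        if qos_class == "guaranteed":
            # Assumption: guaranteed pods sit directly under kubepods.slice.
            return f"kubepods-pod{uid}.slice"
        return f"kubepods-{qos_class}-pod{uid}.slice"

    assert (pod_slice_name("75bcd64d-b81b-456e-b9e6-1f26a52942d9")
            == "kubepods-besteffort-pod75bcd64d_b81b_456e_b9e6_1f26a52942d9.slice")
    # On the node, e.g.: systemctl status kubepods-besteffort-pod75bcd64d_b81b_456e_b9e6_1f26a52942d9.slice
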
Jan 27 14:53:53 crc kubenswrapper[4698]: I0127 14:53:53.674613 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-internal-svc"
Jan 27 14:53:53 crc kubenswrapper[4698]: I0127 14:53:53.674807 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-public-svc"
Jan 27 14:53:53 crc kubenswrapper[4698]: I0127 14:53:53.674868 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data"
Jan 27 14:53:53 crc kubenswrapper[4698]: I0127 14:53:53.693738 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"]
Jan 27 14:53:53 crc kubenswrapper[4698]: I0127 14:53:53.819443 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gnl25\" (UniqueName: \"kubernetes.io/projected/2b66b3ef-b534-4fc7-ab88-7d6d6d971f26-kube-api-access-gnl25\") pod \"cinder-api-0\" (UID: \"2b66b3ef-b534-4fc7-ab88-7d6d6d971f26\") " pod="openstack/cinder-api-0"
Jan 27 14:53:53 crc kubenswrapper[4698]: I0127 14:53:53.819500 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2b66b3ef-b534-4fc7-ab88-7d6d6d971f26-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"2b66b3ef-b534-4fc7-ab88-7d6d6d971f26\") " pod="openstack/cinder-api-0"
Jan 27 14:53:53 crc kubenswrapper[4698]: I0127 14:53:53.819601 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2b66b3ef-b534-4fc7-ab88-7d6d6d971f26-public-tls-certs\") pod \"cinder-api-0\" (UID: \"2b66b3ef-b534-4fc7-ab88-7d6d6d971f26\") " pod="openstack/cinder-api-0"
Jan 27 14:53:53 crc kubenswrapper[4698]: I0127 14:53:53.819683 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/2b66b3ef-b534-4fc7-ab88-7d6d6d971f26-etc-machine-id\") pod \"cinder-api-0\" (UID: \"2b66b3ef-b534-4fc7-ab88-7d6d6d971f26\") " pod="openstack/cinder-api-0"
Jan 27 14:53:53 crc kubenswrapper[4698]: I0127 14:53:53.819728 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2b66b3ef-b534-4fc7-ab88-7d6d6d971f26-config-data\") pod \"cinder-api-0\" (UID: \"2b66b3ef-b534-4fc7-ab88-7d6d6d971f26\") " pod="openstack/cinder-api-0"
Jan 27 14:53:53 crc kubenswrapper[4698]: I0127 14:53:53.819751 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2b66b3ef-b534-4fc7-ab88-7d6d6d971f26-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"2b66b3ef-b534-4fc7-ab88-7d6d6d971f26\") " pod="openstack/cinder-api-0"
Jan 27 14:53:53 crc kubenswrapper[4698]: I0127 14:53:53.819787 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2b66b3ef-b534-4fc7-ab88-7d6d6d971f26-config-data-custom\") pod \"cinder-api-0\" (UID: \"2b66b3ef-b534-4fc7-ab88-7d6d6d971f26\") " pod="openstack/cinder-api-0"
Jan 27 14:53:53 crc kubenswrapper[4698]: I0127 14:53:53.819843 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/2b66b3ef-b534-4fc7-ab88-7d6d6d971f26-scripts\") pod \"cinder-api-0\" (UID: \"2b66b3ef-b534-4fc7-ab88-7d6d6d971f26\") " pod="openstack/cinder-api-0" Jan 27 14:53:53 crc kubenswrapper[4698]: I0127 14:53:53.819867 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2b66b3ef-b534-4fc7-ab88-7d6d6d971f26-logs\") pod \"cinder-api-0\" (UID: \"2b66b3ef-b534-4fc7-ab88-7d6d6d971f26\") " pod="openstack/cinder-api-0" Jan 27 14:53:53 crc kubenswrapper[4698]: I0127 14:53:53.922408 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2b66b3ef-b534-4fc7-ab88-7d6d6d971f26-public-tls-certs\") pod \"cinder-api-0\" (UID: \"2b66b3ef-b534-4fc7-ab88-7d6d6d971f26\") " pod="openstack/cinder-api-0" Jan 27 14:53:53 crc kubenswrapper[4698]: I0127 14:53:53.922534 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/2b66b3ef-b534-4fc7-ab88-7d6d6d971f26-etc-machine-id\") pod \"cinder-api-0\" (UID: \"2b66b3ef-b534-4fc7-ab88-7d6d6d971f26\") " pod="openstack/cinder-api-0" Jan 27 14:53:53 crc kubenswrapper[4698]: I0127 14:53:53.922582 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2b66b3ef-b534-4fc7-ab88-7d6d6d971f26-config-data\") pod \"cinder-api-0\" (UID: \"2b66b3ef-b534-4fc7-ab88-7d6d6d971f26\") " pod="openstack/cinder-api-0" Jan 27 14:53:53 crc kubenswrapper[4698]: I0127 14:53:53.922604 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2b66b3ef-b534-4fc7-ab88-7d6d6d971f26-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"2b66b3ef-b534-4fc7-ab88-7d6d6d971f26\") " pod="openstack/cinder-api-0" Jan 27 14:53:53 crc kubenswrapper[4698]: I0127 14:53:53.922661 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2b66b3ef-b534-4fc7-ab88-7d6d6d971f26-config-data-custom\") pod \"cinder-api-0\" (UID: \"2b66b3ef-b534-4fc7-ab88-7d6d6d971f26\") " pod="openstack/cinder-api-0" Jan 27 14:53:53 crc kubenswrapper[4698]: I0127 14:53:53.922719 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2b66b3ef-b534-4fc7-ab88-7d6d6d971f26-scripts\") pod \"cinder-api-0\" (UID: \"2b66b3ef-b534-4fc7-ab88-7d6d6d971f26\") " pod="openstack/cinder-api-0" Jan 27 14:53:53 crc kubenswrapper[4698]: I0127 14:53:53.922743 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2b66b3ef-b534-4fc7-ab88-7d6d6d971f26-logs\") pod \"cinder-api-0\" (UID: \"2b66b3ef-b534-4fc7-ab88-7d6d6d971f26\") " pod="openstack/cinder-api-0" Jan 27 14:53:53 crc kubenswrapper[4698]: I0127 14:53:53.922834 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gnl25\" (UniqueName: \"kubernetes.io/projected/2b66b3ef-b534-4fc7-ab88-7d6d6d971f26-kube-api-access-gnl25\") pod \"cinder-api-0\" (UID: \"2b66b3ef-b534-4fc7-ab88-7d6d6d971f26\") " pod="openstack/cinder-api-0" Jan 27 14:53:53 crc kubenswrapper[4698]: I0127 14:53:53.922868 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/2b66b3ef-b534-4fc7-ab88-7d6d6d971f26-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"2b66b3ef-b534-4fc7-ab88-7d6d6d971f26\") " pod="openstack/cinder-api-0" Jan 27 14:53:53 crc kubenswrapper[4698]: I0127 14:53:53.923439 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/2b66b3ef-b534-4fc7-ab88-7d6d6d971f26-etc-machine-id\") pod \"cinder-api-0\" (UID: \"2b66b3ef-b534-4fc7-ab88-7d6d6d971f26\") " pod="openstack/cinder-api-0" Jan 27 14:53:53 crc kubenswrapper[4698]: I0127 14:53:53.924329 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2b66b3ef-b534-4fc7-ab88-7d6d6d971f26-logs\") pod \"cinder-api-0\" (UID: \"2b66b3ef-b534-4fc7-ab88-7d6d6d971f26\") " pod="openstack/cinder-api-0" Jan 27 14:53:53 crc kubenswrapper[4698]: I0127 14:53:53.933976 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2b66b3ef-b534-4fc7-ab88-7d6d6d971f26-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"2b66b3ef-b534-4fc7-ab88-7d6d6d971f26\") " pod="openstack/cinder-api-0" Jan 27 14:53:53 crc kubenswrapper[4698]: I0127 14:53:53.934034 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2b66b3ef-b534-4fc7-ab88-7d6d6d971f26-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"2b66b3ef-b534-4fc7-ab88-7d6d6d971f26\") " pod="openstack/cinder-api-0" Jan 27 14:53:53 crc kubenswrapper[4698]: I0127 14:53:53.935440 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2b66b3ef-b534-4fc7-ab88-7d6d6d971f26-config-data-custom\") pod \"cinder-api-0\" (UID: \"2b66b3ef-b534-4fc7-ab88-7d6d6d971f26\") " pod="openstack/cinder-api-0" Jan 27 14:53:53 crc kubenswrapper[4698]: I0127 14:53:53.935557 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2b66b3ef-b534-4fc7-ab88-7d6d6d971f26-scripts\") pod \"cinder-api-0\" (UID: \"2b66b3ef-b534-4fc7-ab88-7d6d6d971f26\") " pod="openstack/cinder-api-0" Jan 27 14:53:53 crc kubenswrapper[4698]: I0127 14:53:53.939626 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2b66b3ef-b534-4fc7-ab88-7d6d6d971f26-public-tls-certs\") pod \"cinder-api-0\" (UID: \"2b66b3ef-b534-4fc7-ab88-7d6d6d971f26\") " pod="openstack/cinder-api-0" Jan 27 14:53:53 crc kubenswrapper[4698]: I0127 14:53:53.943757 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2b66b3ef-b534-4fc7-ab88-7d6d6d971f26-config-data\") pod \"cinder-api-0\" (UID: \"2b66b3ef-b534-4fc7-ab88-7d6d6d971f26\") " pod="openstack/cinder-api-0" Jan 27 14:53:53 crc kubenswrapper[4698]: I0127 14:53:53.945216 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gnl25\" (UniqueName: \"kubernetes.io/projected/2b66b3ef-b534-4fc7-ab88-7d6d6d971f26-kube-api-access-gnl25\") pod \"cinder-api-0\" (UID: \"2b66b3ef-b534-4fc7-ab88-7d6d6d971f26\") " pod="openstack/cinder-api-0" Jan 27 14:53:54 crc kubenswrapper[4698]: I0127 14:53:54.000268 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0"
Jan 27 14:53:54 crc kubenswrapper[4698]: I0127 14:53:54.456035 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"]
Jan 27 14:53:54 crc kubenswrapper[4698]: I0127 14:53:54.604439 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"2b66b3ef-b534-4fc7-ab88-7d6d6d971f26","Type":"ContainerStarted","Data":"6adda2e871268c2791411b0f6ec3bac5a1741a45df2da8eb4dcd4a69db4892fc"}
Jan 27 14:53:55 crc kubenswrapper[4698]: I0127 14:53:55.022537 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="75bcd64d-b81b-456e-b9e6-1f26a52942d9" path="/var/lib/kubelet/pods/75bcd64d-b81b-456e-b9e6-1f26a52942d9/volumes"
Jan 27 14:53:55 crc kubenswrapper[4698]: I0127 14:53:55.029488 4698 scope.go:117] "RemoveContainer" containerID="e1553d2d02cb6a0668c9ce9cccabf0224f2fe83aa565ec195633e99db3313307"
Jan 27 14:53:55 crc kubenswrapper[4698]: I0127 14:53:55.616001 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"2b66b3ef-b534-4fc7-ab88-7d6d6d971f26","Type":"ContainerStarted","Data":"1e1f6a6ce3d9400f1f3b6d86fb06b2f6dfa05b738ed57eb469ecaf01567ba57b"}
Jan 27 14:53:56 crc kubenswrapper[4698]: I0127 14:53:56.627283 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"4129eb47-beba-4bec-8cb2-59818e8908a5","Type":"ContainerStarted","Data":"d3a9953d8401c424c58552b0b4f6574236a5890e4ac37880b086ac4ad400f15a"}
Jan 27 14:54:00 crc kubenswrapper[4698]: I0127 14:54:00.407202 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-decision-engine-0"
Jan 27 14:54:00 crc kubenswrapper[4698]: I0127 14:54:00.443441 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/watcher-decision-engine-0"
Jan 27 14:54:00 crc kubenswrapper[4698]: I0127 14:54:00.667605 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-decision-engine-0"
Jan 27 14:54:00 crc kubenswrapper[4698]: I0127 14:54:00.696436 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/watcher-decision-engine-0"
Jan 27 14:54:04 crc kubenswrapper[4698]: I0127 14:54:04.677774 4698 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="4507e893-5bc9-43a4-8cb6-d31622f201e7" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503"
Jan 27 14:54:12 crc kubenswrapper[4698]: I0127 14:54:12.787360 4698 generic.go:334] "Generic (PLEG): container finished" podID="4507e893-5bc9-43a4-8cb6-d31622f201e7" containerID="ddb68fd0f13de6ebf6fdd3b98e868a38ef8b8d7567cff49fe19ed55864980e82" exitCode=137
Jan 27 14:54:12 crc kubenswrapper[4698]: I0127 14:54:12.787428 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4507e893-5bc9-43a4-8cb6-d31622f201e7","Type":"ContainerDied","Data":"ddb68fd0f13de6ebf6fdd3b98e868a38ef8b8d7567cff49fe19ed55864980e82"}
Jan 27 14:54:13 crc kubenswrapper[4698]: I0127 14:54:13.804157 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"2b66b3ef-b534-4fc7-ab88-7d6d6d971f26","Type":"ContainerStarted","Data":"44807c0849e9a04b5776feec7a7199c4f019fc285dad8c96240482b07c507ff0"}
Jan 27 14:54:16 crc kubenswrapper[4698]: I0127 14:54:16.005396 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
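
The exitCode=137 above follows the usual 128+signal convention: the proxy-httpd container kept failing its readiness probe and was ultimately killed with SIGKILL (137 = 128+9) when its termination grace period expired; the exitCode=143 that appears further down is SIGTERM (128+15), i.e. a graceful stop. A tiny decoder, purely as an illustration:

    import signal

    # Decode the exitCode values reported in "Generic (PLEG): container finished"
    # entries: values above 128 mean the process died on signal (code - 128).
    def describe_exit(code: int) -> str:
        if code > 128:
            try:
                return f"{code} (killed by {signal.Signals(code - 128).name})"
            except ValueError:
                return f"{code} (unknown signal {code - 128})"
        return f"{code} (exited on its own)"

    print(describe_exit(137))  # 137 (killed by SIGKILL)
    print(describe_exit(143))  # 143 (killed by SIGTERM)
    print(describe_exit(0))    # 0 (exited on its own)
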
Jan 27 14:54:16 crc kubenswrapper[4698]: I0127 14:54:16.166218 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8p4m5\" (UniqueName: \"kubernetes.io/projected/4507e893-5bc9-43a4-8cb6-d31622f201e7-kube-api-access-8p4m5\") pod \"4507e893-5bc9-43a4-8cb6-d31622f201e7\" (UID: \"4507e893-5bc9-43a4-8cb6-d31622f201e7\") "
Jan 27 14:54:16 crc kubenswrapper[4698]: I0127 14:54:16.166375 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4507e893-5bc9-43a4-8cb6-d31622f201e7-run-httpd\") pod \"4507e893-5bc9-43a4-8cb6-d31622f201e7\" (UID: \"4507e893-5bc9-43a4-8cb6-d31622f201e7\") "
Jan 27 14:54:16 crc kubenswrapper[4698]: I0127 14:54:16.166418 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4507e893-5bc9-43a4-8cb6-d31622f201e7-combined-ca-bundle\") pod \"4507e893-5bc9-43a4-8cb6-d31622f201e7\" (UID: \"4507e893-5bc9-43a4-8cb6-d31622f201e7\") "
Jan 27 14:54:16 crc kubenswrapper[4698]: I0127 14:54:16.166499 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4507e893-5bc9-43a4-8cb6-d31622f201e7-log-httpd\") pod \"4507e893-5bc9-43a4-8cb6-d31622f201e7\" (UID: \"4507e893-5bc9-43a4-8cb6-d31622f201e7\") "
Jan 27 14:54:16 crc kubenswrapper[4698]: I0127 14:54:16.166568 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4507e893-5bc9-43a4-8cb6-d31622f201e7-config-data\") pod \"4507e893-5bc9-43a4-8cb6-d31622f201e7\" (UID: \"4507e893-5bc9-43a4-8cb6-d31622f201e7\") "
Jan 27 14:54:16 crc kubenswrapper[4698]: I0127 14:54:16.166696 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/4507e893-5bc9-43a4-8cb6-d31622f201e7-ceilometer-tls-certs\") pod \"4507e893-5bc9-43a4-8cb6-d31622f201e7\" (UID: \"4507e893-5bc9-43a4-8cb6-d31622f201e7\") "
Jan 27 14:54:16 crc kubenswrapper[4698]: I0127 14:54:16.166764 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/4507e893-5bc9-43a4-8cb6-d31622f201e7-sg-core-conf-yaml\") pod \"4507e893-5bc9-43a4-8cb6-d31622f201e7\" (UID: \"4507e893-5bc9-43a4-8cb6-d31622f201e7\") "
Jan 27 14:54:16 crc kubenswrapper[4698]: I0127 14:54:16.166822 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4507e893-5bc9-43a4-8cb6-d31622f201e7-scripts\") pod \"4507e893-5bc9-43a4-8cb6-d31622f201e7\" (UID: \"4507e893-5bc9-43a4-8cb6-d31622f201e7\") "
Jan 27 14:54:16 crc kubenswrapper[4698]: I0127 14:54:16.166982 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4507e893-5bc9-43a4-8cb6-d31622f201e7-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "4507e893-5bc9-43a4-8cb6-d31622f201e7" (UID: "4507e893-5bc9-43a4-8cb6-d31622f201e7"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 14:54:16 crc kubenswrapper[4698]: I0127 14:54:16.167509 4698 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4507e893-5bc9-43a4-8cb6-d31622f201e7-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 27 14:54:16 crc kubenswrapper[4698]: I0127 14:54:16.167875 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4507e893-5bc9-43a4-8cb6-d31622f201e7-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "4507e893-5bc9-43a4-8cb6-d31622f201e7" (UID: "4507e893-5bc9-43a4-8cb6-d31622f201e7"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 14:54:16 crc kubenswrapper[4698]: I0127 14:54:16.172690 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4507e893-5bc9-43a4-8cb6-d31622f201e7-kube-api-access-8p4m5" (OuterVolumeSpecName: "kube-api-access-8p4m5") pod "4507e893-5bc9-43a4-8cb6-d31622f201e7" (UID: "4507e893-5bc9-43a4-8cb6-d31622f201e7"). InnerVolumeSpecName "kube-api-access-8p4m5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:54:16 crc kubenswrapper[4698]: I0127 14:54:16.177019 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4507e893-5bc9-43a4-8cb6-d31622f201e7-scripts" (OuterVolumeSpecName: "scripts") pod "4507e893-5bc9-43a4-8cb6-d31622f201e7" (UID: "4507e893-5bc9-43a4-8cb6-d31622f201e7"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:54:16 crc kubenswrapper[4698]: I0127 14:54:16.220942 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4507e893-5bc9-43a4-8cb6-d31622f201e7-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "4507e893-5bc9-43a4-8cb6-d31622f201e7" (UID: "4507e893-5bc9-43a4-8cb6-d31622f201e7"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:54:16 crc kubenswrapper[4698]: I0127 14:54:16.241747 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4507e893-5bc9-43a4-8cb6-d31622f201e7-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "4507e893-5bc9-43a4-8cb6-d31622f201e7" (UID: "4507e893-5bc9-43a4-8cb6-d31622f201e7"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:54:16 crc kubenswrapper[4698]: I0127 14:54:16.251261 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4507e893-5bc9-43a4-8cb6-d31622f201e7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4507e893-5bc9-43a4-8cb6-d31622f201e7" (UID: "4507e893-5bc9-43a4-8cb6-d31622f201e7"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:54:16 crc kubenswrapper[4698]: I0127 14:54:16.271323 4698 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4507e893-5bc9-43a4-8cb6-d31622f201e7-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 14:54:16 crc kubenswrapper[4698]: I0127 14:54:16.271388 4698 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/4507e893-5bc9-43a4-8cb6-d31622f201e7-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 27 14:54:16 crc kubenswrapper[4698]: I0127 14:54:16.271404 4698 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/4507e893-5bc9-43a4-8cb6-d31622f201e7-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 27 14:54:16 crc kubenswrapper[4698]: I0127 14:54:16.271418 4698 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4507e893-5bc9-43a4-8cb6-d31622f201e7-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 14:54:16 crc kubenswrapper[4698]: I0127 14:54:16.271458 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8p4m5\" (UniqueName: \"kubernetes.io/projected/4507e893-5bc9-43a4-8cb6-d31622f201e7-kube-api-access-8p4m5\") on node \"crc\" DevicePath \"\"" Jan 27 14:54:16 crc kubenswrapper[4698]: I0127 14:54:16.271472 4698 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4507e893-5bc9-43a4-8cb6-d31622f201e7-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 27 14:54:16 crc kubenswrapper[4698]: I0127 14:54:16.288872 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4507e893-5bc9-43a4-8cb6-d31622f201e7-config-data" (OuterVolumeSpecName: "config-data") pod "4507e893-5bc9-43a4-8cb6-d31622f201e7" (UID: "4507e893-5bc9-43a4-8cb6-d31622f201e7"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:54:16 crc kubenswrapper[4698]: I0127 14:54:16.373397 4698 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4507e893-5bc9-43a4-8cb6-d31622f201e7-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 14:54:16 crc kubenswrapper[4698]: I0127 14:54:16.836615 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4507e893-5bc9-43a4-8cb6-d31622f201e7","Type":"ContainerDied","Data":"5d00e8c808c93d09ca6da01053d8cd1e06a2aa400b1fb7fbc48905abf71aa302"} Jan 27 14:54:16 crc kubenswrapper[4698]: I0127 14:54:16.836725 4698 util.go:48] "No ready sandbox for pod can be found. 
Jan 27 14:54:16 crc kubenswrapper[4698]: I0127 14:54:16.837304 4698 scope.go:117] "RemoveContainer" containerID="ddb68fd0f13de6ebf6fdd3b98e868a38ef8b8d7567cff49fe19ed55864980e82"
Jan 27 14:54:16 crc kubenswrapper[4698]: I0127 14:54:16.862486 4698 scope.go:117] "RemoveContainer" containerID="803bcc8902061b8be6f373cfd4a23c7f25654dd00f4f7b97442402ab10910d08"
Jan 27 14:54:16 crc kubenswrapper[4698]: I0127 14:54:16.878358 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Jan 27 14:54:16 crc kubenswrapper[4698]: I0127 14:54:16.888002 4698 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"]
Jan 27 14:54:16 crc kubenswrapper[4698]: I0127 14:54:16.892085 4698 scope.go:117] "RemoveContainer" containerID="8c1a7ca0e247b522e5cdb3cfdebacae0b0e0525cc8bf2e3b0ce1e8821d1655cd"
Jan 27 14:54:16 crc kubenswrapper[4698]: I0127 14:54:16.906372 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"]
Jan 27 14:54:16 crc kubenswrapper[4698]: E0127 14:54:16.906761 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4507e893-5bc9-43a4-8cb6-d31622f201e7" containerName="proxy-httpd"
Jan 27 14:54:16 crc kubenswrapper[4698]: I0127 14:54:16.906798 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="4507e893-5bc9-43a4-8cb6-d31622f201e7" containerName="proxy-httpd"
Jan 27 14:54:16 crc kubenswrapper[4698]: E0127 14:54:16.906826 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4507e893-5bc9-43a4-8cb6-d31622f201e7" containerName="ceilometer-notification-agent"
Jan 27 14:54:16 crc kubenswrapper[4698]: I0127 14:54:16.906834 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="4507e893-5bc9-43a4-8cb6-d31622f201e7" containerName="ceilometer-notification-agent"
Jan 27 14:54:16 crc kubenswrapper[4698]: E0127 14:54:16.906844 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4507e893-5bc9-43a4-8cb6-d31622f201e7" containerName="ceilometer-central-agent"
Jan 27 14:54:16 crc kubenswrapper[4698]: I0127 14:54:16.906850 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="4507e893-5bc9-43a4-8cb6-d31622f201e7" containerName="ceilometer-central-agent"
Jan 27 14:54:16 crc kubenswrapper[4698]: E0127 14:54:16.906870 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4507e893-5bc9-43a4-8cb6-d31622f201e7" containerName="sg-core"
Jan 27 14:54:16 crc kubenswrapper[4698]: I0127 14:54:16.906877 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="4507e893-5bc9-43a4-8cb6-d31622f201e7" containerName="sg-core"
Jan 27 14:54:16 crc kubenswrapper[4698]: I0127 14:54:16.907056 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="4507e893-5bc9-43a4-8cb6-d31622f201e7" containerName="sg-core"
Jan 27 14:54:16 crc kubenswrapper[4698]: I0127 14:54:16.907075 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="4507e893-5bc9-43a4-8cb6-d31622f201e7" containerName="ceilometer-central-agent"
Jan 27 14:54:16 crc kubenswrapper[4698]: I0127 14:54:16.907088 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="4507e893-5bc9-43a4-8cb6-d31622f201e7" containerName="proxy-httpd"
Jan 27 14:54:16 crc kubenswrapper[4698]: I0127 14:54:16.907099 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="4507e893-5bc9-43a4-8cb6-d31622f201e7" containerName="ceilometer-notification-agent"
Jan 27 14:54:16 crc kubenswrapper[4698]: I0127 14:54:16.908708 4698 util.go:30] "No
sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 27 14:54:16 crc kubenswrapper[4698]: I0127 14:54:16.913481 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Jan 27 14:54:16 crc kubenswrapper[4698]: I0127 14:54:16.913710 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 27 14:54:16 crc kubenswrapper[4698]: I0127 14:54:16.914060 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 27 14:54:16 crc kubenswrapper[4698]: I0127 14:54:16.955530 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 27 14:54:16 crc kubenswrapper[4698]: I0127 14:54:16.959693 4698 scope.go:117] "RemoveContainer" containerID="fa9b2b96b9c86a0ff4340994d597efd65ba978ea17a7e516639a2cc872a3d7c7" Jan 27 14:54:17 crc kubenswrapper[4698]: I0127 14:54:17.007583 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4507e893-5bc9-43a4-8cb6-d31622f201e7" path="/var/lib/kubelet/pods/4507e893-5bc9-43a4-8cb6-d31622f201e7/volumes" Jan 27 14:54:17 crc kubenswrapper[4698]: I0127 14:54:17.092407 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ac731464-5c12-4dd8-81b2-011463c78945-scripts\") pod \"ceilometer-0\" (UID: \"ac731464-5c12-4dd8-81b2-011463c78945\") " pod="openstack/ceilometer-0" Jan 27 14:54:17 crc kubenswrapper[4698]: I0127 14:54:17.092536 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ac731464-5c12-4dd8-81b2-011463c78945-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"ac731464-5c12-4dd8-81b2-011463c78945\") " pod="openstack/ceilometer-0" Jan 27 14:54:17 crc kubenswrapper[4698]: I0127 14:54:17.092733 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ac731464-5c12-4dd8-81b2-011463c78945-log-httpd\") pod \"ceilometer-0\" (UID: \"ac731464-5c12-4dd8-81b2-011463c78945\") " pod="openstack/ceilometer-0" Jan 27 14:54:17 crc kubenswrapper[4698]: I0127 14:54:17.092904 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/ac731464-5c12-4dd8-81b2-011463c78945-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"ac731464-5c12-4dd8-81b2-011463c78945\") " pod="openstack/ceilometer-0" Jan 27 14:54:17 crc kubenswrapper[4698]: I0127 14:54:17.092979 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ac731464-5c12-4dd8-81b2-011463c78945-config-data\") pod \"ceilometer-0\" (UID: \"ac731464-5c12-4dd8-81b2-011463c78945\") " pod="openstack/ceilometer-0" Jan 27 14:54:17 crc kubenswrapper[4698]: I0127 14:54:17.093028 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ac731464-5c12-4dd8-81b2-011463c78945-run-httpd\") pod \"ceilometer-0\" (UID: \"ac731464-5c12-4dd8-81b2-011463c78945\") " pod="openstack/ceilometer-0" Jan 27 14:54:17 crc kubenswrapper[4698]: I0127 14:54:17.093070 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ac731464-5c12-4dd8-81b2-011463c78945-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"ac731464-5c12-4dd8-81b2-011463c78945\") " pod="openstack/ceilometer-0" Jan 27 14:54:17 crc kubenswrapper[4698]: I0127 14:54:17.093224 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s6rcw\" (UniqueName: \"kubernetes.io/projected/ac731464-5c12-4dd8-81b2-011463c78945-kube-api-access-s6rcw\") pod \"ceilometer-0\" (UID: \"ac731464-5c12-4dd8-81b2-011463c78945\") " pod="openstack/ceilometer-0" Jan 27 14:54:17 crc kubenswrapper[4698]: I0127 14:54:17.195725 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ac731464-5c12-4dd8-81b2-011463c78945-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"ac731464-5c12-4dd8-81b2-011463c78945\") " pod="openstack/ceilometer-0" Jan 27 14:54:17 crc kubenswrapper[4698]: I0127 14:54:17.195865 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ac731464-5c12-4dd8-81b2-011463c78945-log-httpd\") pod \"ceilometer-0\" (UID: \"ac731464-5c12-4dd8-81b2-011463c78945\") " pod="openstack/ceilometer-0" Jan 27 14:54:17 crc kubenswrapper[4698]: I0127 14:54:17.195953 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/ac731464-5c12-4dd8-81b2-011463c78945-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"ac731464-5c12-4dd8-81b2-011463c78945\") " pod="openstack/ceilometer-0" Jan 27 14:54:17 crc kubenswrapper[4698]: I0127 14:54:17.195997 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ac731464-5c12-4dd8-81b2-011463c78945-config-data\") pod \"ceilometer-0\" (UID: \"ac731464-5c12-4dd8-81b2-011463c78945\") " pod="openstack/ceilometer-0" Jan 27 14:54:17 crc kubenswrapper[4698]: I0127 14:54:17.196031 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ac731464-5c12-4dd8-81b2-011463c78945-run-httpd\") pod \"ceilometer-0\" (UID: \"ac731464-5c12-4dd8-81b2-011463c78945\") " pod="openstack/ceilometer-0" Jan 27 14:54:17 crc kubenswrapper[4698]: I0127 14:54:17.196077 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ac731464-5c12-4dd8-81b2-011463c78945-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"ac731464-5c12-4dd8-81b2-011463c78945\") " pod="openstack/ceilometer-0" Jan 27 14:54:17 crc kubenswrapper[4698]: I0127 14:54:17.196165 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s6rcw\" (UniqueName: \"kubernetes.io/projected/ac731464-5c12-4dd8-81b2-011463c78945-kube-api-access-s6rcw\") pod \"ceilometer-0\" (UID: \"ac731464-5c12-4dd8-81b2-011463c78945\") " pod="openstack/ceilometer-0" Jan 27 14:54:17 crc kubenswrapper[4698]: I0127 14:54:17.196270 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ac731464-5c12-4dd8-81b2-011463c78945-scripts\") pod \"ceilometer-0\" (UID: \"ac731464-5c12-4dd8-81b2-011463c78945\") " pod="openstack/ceilometer-0" Jan 27 14:54:17 crc kubenswrapper[4698]: I0127 14:54:17.197216 4698 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ac731464-5c12-4dd8-81b2-011463c78945-run-httpd\") pod \"ceilometer-0\" (UID: \"ac731464-5c12-4dd8-81b2-011463c78945\") " pod="openstack/ceilometer-0" Jan 27 14:54:17 crc kubenswrapper[4698]: I0127 14:54:17.197531 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ac731464-5c12-4dd8-81b2-011463c78945-log-httpd\") pod \"ceilometer-0\" (UID: \"ac731464-5c12-4dd8-81b2-011463c78945\") " pod="openstack/ceilometer-0" Jan 27 14:54:17 crc kubenswrapper[4698]: I0127 14:54:17.201846 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ac731464-5c12-4dd8-81b2-011463c78945-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"ac731464-5c12-4dd8-81b2-011463c78945\") " pod="openstack/ceilometer-0" Jan 27 14:54:17 crc kubenswrapper[4698]: I0127 14:54:17.202602 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ac731464-5c12-4dd8-81b2-011463c78945-scripts\") pod \"ceilometer-0\" (UID: \"ac731464-5c12-4dd8-81b2-011463c78945\") " pod="openstack/ceilometer-0" Jan 27 14:54:17 crc kubenswrapper[4698]: I0127 14:54:17.202629 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ac731464-5c12-4dd8-81b2-011463c78945-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"ac731464-5c12-4dd8-81b2-011463c78945\") " pod="openstack/ceilometer-0" Jan 27 14:54:17 crc kubenswrapper[4698]: I0127 14:54:17.203253 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/ac731464-5c12-4dd8-81b2-011463c78945-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"ac731464-5c12-4dd8-81b2-011463c78945\") " pod="openstack/ceilometer-0" Jan 27 14:54:17 crc kubenswrapper[4698]: I0127 14:54:17.204576 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ac731464-5c12-4dd8-81b2-011463c78945-config-data\") pod \"ceilometer-0\" (UID: \"ac731464-5c12-4dd8-81b2-011463c78945\") " pod="openstack/ceilometer-0" Jan 27 14:54:17 crc kubenswrapper[4698]: I0127 14:54:17.217702 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s6rcw\" (UniqueName: \"kubernetes.io/projected/ac731464-5c12-4dd8-81b2-011463c78945-kube-api-access-s6rcw\") pod \"ceilometer-0\" (UID: \"ac731464-5c12-4dd8-81b2-011463c78945\") " pod="openstack/ceilometer-0" Jan 27 14:54:17 crc kubenswrapper[4698]: I0127 14:54:17.257815 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0"
Jan 27 14:54:17 crc kubenswrapper[4698]: E0127 14:54:17.450322 4698 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.111:5001/podified-master-centos10/openstack-nova-conductor:watcher_latest"
Jan 27 14:54:17 crc kubenswrapper[4698]: E0127 14:54:17.450701 4698 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.111:5001/podified-master-centos10/openstack-nova-conductor:watcher_latest"
Jan 27 14:54:17 crc kubenswrapper[4698]: E0127 14:54:17.450843 4698 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:nova-cell0-conductor-db-sync,Image:38.102.83.111:5001/podified-master-centos10/openstack-nova-conductor:watcher_latest,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CELL_NAME,Value:cell0,ValueFrom:nil,},EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:false,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:false,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/var/lib/kolla/config_files/config.json,SubPath:nova-conductor-dbsync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wpffl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42436,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod nova-cell0-conductor-db-sync-x8m8s_openstack(9a268a3b-da75-4a08-a9a3-b097f2066a27): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Jan 27 14:54:17 crc kubenswrapper[4698]: E0127 14:54:17.452199 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nova-cell0-conductor-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/nova-cell0-conductor-db-sync-x8m8s" podUID="9a268a3b-da75-4a08-a9a3-b097f2066a27"
Jan 27 14:54:17 crc kubenswrapper[4698]: I0127 14:54:17.782276 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Jan 27 14:54:17 crc kubenswrapper[4698]: I0127 14:54:17.848978 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ac731464-5c12-4dd8-81b2-011463c78945","Type":"ContainerStarted","Data":"fff044136ac7549a1425696ad631e3d081cade07bd85574ac94599fd65bc8057"}
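
The four E-level entries above are a single failed image pull reported at successive layers of the kubelet: the CRI call (log.go), the image manager (kuberuntime_image.go), the unhandled container-start error carrying the full Container spec (kuberuntime_manager.go), and finally the pod worker marking the sync attempt failed with ErrImagePull. The kubelet then retries with backoff and reports ImagePullBackOff between attempts, visible a few entries later. A sketch (an assumed helper, matching the quoting of the "Failed to pull image" lines above) that tallies such failures per image reference:

    import re
    from collections import Counter

    # Count "Failed to pull image" entries per image ref, to see which
    # registries or tags are flapping in a captured journal.
    PULL_FAIL = re.compile(r'"Failed to pull image" err="([^"]+)" image="([^"]+)"')

    def failed_pulls(lines):
        fails = Counter()
        for line in lines:
            if (m := PULL_FAIL.search(line)):
                fails[m.group(2)] += 1     # group(1) holds the rpc error text
        return fails
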
Jan 27 14:54:17 crc kubenswrapper[4698]: I0127 14:54:17.851073 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0"
Jan 27 14:54:17 crc kubenswrapper[4698]: E0127 14:54:17.853604 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nova-cell0-conductor-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.111:5001/podified-master-centos10/openstack-nova-conductor:watcher_latest\\\"\"" pod="openstack/nova-cell0-conductor-db-sync-x8m8s" podUID="9a268a3b-da75-4a08-a9a3-b097f2066a27"
Jan 27 14:54:17 crc kubenswrapper[4698]: I0127 14:54:17.900616 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=24.900597035 podStartE2EDuration="24.900597035s" podCreationTimestamp="2026-01-27 14:53:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:54:17.875925016 +0000 UTC m=+1513.552702501" watchObservedRunningTime="2026-01-27 14:54:17.900597035 +0000 UTC m=+1513.577374500"
Jan 27 14:54:18 crc kubenswrapper[4698]: I0127 14:54:18.870090 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ac731464-5c12-4dd8-81b2-011463c78945","Type":"ContainerStarted","Data":"c40e7ffe7d02b89f1cb9b4b3cb981c379719f73d33f5733cfd8bf8440fd3e88c"}
Jan 27 14:54:18 crc kubenswrapper[4698]: I0127 14:54:18.870663 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ac731464-5c12-4dd8-81b2-011463c78945","Type":"ContainerStarted","Data":"d135360ba59b9db02a9b725bd9d1375367a9fdde4a506a22f41c92715a90eea9"}
Jan 27 14:54:19 crc kubenswrapper[4698]: I0127 14:54:19.651242 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Jan 27 14:54:19 crc kubenswrapper[4698]: I0127 14:54:19.882721 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ac731464-5c12-4dd8-81b2-011463c78945","Type":"ContainerStarted","Data":"0f8af07917b9b7cbb1731913ec10910b586814e8eb6b8c6ca216a2a0a625b4d0"}
Jan 27 14:54:21 crc kubenswrapper[4698]: I0127 14:54:21.915025 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ac731464-5c12-4dd8-81b2-011463c78945","Type":"ContainerStarted","Data":"bb921dbd95bef7095b6e74171b60ea601d8550846f0e8b3b8d0a354e01678adc"}
Jan 27 14:54:21 crc kubenswrapper[4698]: I0127 14:54:21.915627 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0"
Jan 27 14:54:21 crc kubenswrapper[4698]: I0127 14:54:21.915398 4698 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="ac731464-5c12-4dd8-81b2-011463c78945" containerName="proxy-httpd" containerID="cri-o://bb921dbd95bef7095b6e74171b60ea601d8550846f0e8b3b8d0a354e01678adc" gracePeriod=30
Jan 27 14:54:21 crc kubenswrapper[4698]: I0127 14:54:21.915386 4698 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="ac731464-5c12-4dd8-81b2-011463c78945" containerName="sg-core" containerID="cri-o://0f8af07917b9b7cbb1731913ec10910b586814e8eb6b8c6ca216a2a0a625b4d0" gracePeriod=30
containerID="cri-o://0f8af07917b9b7cbb1731913ec10910b586814e8eb6b8c6ca216a2a0a625b4d0" gracePeriod=30 Jan 27 14:54:21 crc kubenswrapper[4698]: I0127 14:54:21.915485 4698 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="ac731464-5c12-4dd8-81b2-011463c78945" containerName="ceilometer-notification-agent" containerID="cri-o://c40e7ffe7d02b89f1cb9b4b3cb981c379719f73d33f5733cfd8bf8440fd3e88c" gracePeriod=30 Jan 27 14:54:21 crc kubenswrapper[4698]: I0127 14:54:21.915330 4698 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="ac731464-5c12-4dd8-81b2-011463c78945" containerName="ceilometer-central-agent" containerID="cri-o://d135360ba59b9db02a9b725bd9d1375367a9fdde4a506a22f41c92715a90eea9" gracePeriod=30 Jan 27 14:54:21 crc kubenswrapper[4698]: I0127 14:54:21.951701 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.897152759 podStartE2EDuration="5.95167885s" podCreationTimestamp="2026-01-27 14:54:16 +0000 UTC" firstStartedPulling="2026-01-27 14:54:17.777425921 +0000 UTC m=+1513.454203386" lastFinishedPulling="2026-01-27 14:54:20.831952012 +0000 UTC m=+1516.508729477" observedRunningTime="2026-01-27 14:54:21.940593448 +0000 UTC m=+1517.617370913" watchObservedRunningTime="2026-01-27 14:54:21.95167885 +0000 UTC m=+1517.628456315" Jan 27 14:54:22 crc kubenswrapper[4698]: I0127 14:54:22.548432 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0" Jan 27 14:54:22 crc kubenswrapper[4698]: I0127 14:54:22.926920 4698 generic.go:334] "Generic (PLEG): container finished" podID="ac731464-5c12-4dd8-81b2-011463c78945" containerID="bb921dbd95bef7095b6e74171b60ea601d8550846f0e8b3b8d0a354e01678adc" exitCode=0 Jan 27 14:54:22 crc kubenswrapper[4698]: I0127 14:54:22.926950 4698 generic.go:334] "Generic (PLEG): container finished" podID="ac731464-5c12-4dd8-81b2-011463c78945" containerID="0f8af07917b9b7cbb1731913ec10910b586814e8eb6b8c6ca216a2a0a625b4d0" exitCode=2 Jan 27 14:54:22 crc kubenswrapper[4698]: I0127 14:54:22.926958 4698 generic.go:334] "Generic (PLEG): container finished" podID="ac731464-5c12-4dd8-81b2-011463c78945" containerID="c40e7ffe7d02b89f1cb9b4b3cb981c379719f73d33f5733cfd8bf8440fd3e88c" exitCode=0 Jan 27 14:54:22 crc kubenswrapper[4698]: I0127 14:54:22.926978 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ac731464-5c12-4dd8-81b2-011463c78945","Type":"ContainerDied","Data":"bb921dbd95bef7095b6e74171b60ea601d8550846f0e8b3b8d0a354e01678adc"} Jan 27 14:54:22 crc kubenswrapper[4698]: I0127 14:54:22.927003 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ac731464-5c12-4dd8-81b2-011463c78945","Type":"ContainerDied","Data":"0f8af07917b9b7cbb1731913ec10910b586814e8eb6b8c6ca216a2a0a625b4d0"} Jan 27 14:54:22 crc kubenswrapper[4698]: I0127 14:54:22.927014 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ac731464-5c12-4dd8-81b2-011463c78945","Type":"ContainerDied","Data":"c40e7ffe7d02b89f1cb9b4b3cb981c379719f73d33f5733cfd8bf8440fd3e88c"} Jan 27 14:54:26 crc kubenswrapper[4698]: I0127 14:54:26.034923 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 27 14:54:26 crc kubenswrapper[4698]: I0127 14:54:26.036063 4698 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openstack/glance-default-external-api-0" podUID="1c98c523-0dc6-487f-a279-47837df87b61" containerName="glance-log" containerID="cri-o://f475ddfd2dced71257c2298994783a72cc02be3fe5f424507a519767978147be" gracePeriod=30 Jan 27 14:54:26 crc kubenswrapper[4698]: I0127 14:54:26.036604 4698 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="1c98c523-0dc6-487f-a279-47837df87b61" containerName="glance-httpd" containerID="cri-o://12a57213ca737ce5381887b439702260fc2192b36b4211eace579e843e648443" gracePeriod=30 Jan 27 14:54:26 crc kubenswrapper[4698]: I0127 14:54:26.989396 4698 generic.go:334] "Generic (PLEG): container finished" podID="992034d3-1c4d-4e83-9641-12543dd3df24" containerID="123d4d06a0f8addc043f78310758be0fb0de464dcf972f4437ef480c85eff7a4" exitCode=0 Jan 27 14:54:26 crc kubenswrapper[4698]: I0127 14:54:26.989694 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-vp87x" event={"ID":"992034d3-1c4d-4e83-9641-12543dd3df24","Type":"ContainerDied","Data":"123d4d06a0f8addc043f78310758be0fb0de464dcf972f4437ef480c85eff7a4"} Jan 27 14:54:26 crc kubenswrapper[4698]: I0127 14:54:26.993940 4698 generic.go:334] "Generic (PLEG): container finished" podID="1c98c523-0dc6-487f-a279-47837df87b61" containerID="f475ddfd2dced71257c2298994783a72cc02be3fe5f424507a519767978147be" exitCode=143 Jan 27 14:54:27 crc kubenswrapper[4698]: I0127 14:54:27.007309 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"1c98c523-0dc6-487f-a279-47837df87b61","Type":"ContainerDied","Data":"f475ddfd2dced71257c2298994783a72cc02be3fe5f424507a519767978147be"} Jan 27 14:54:28 crc kubenswrapper[4698]: I0127 14:54:28.015805 4698 generic.go:334] "Generic (PLEG): container finished" podID="1c98c523-0dc6-487f-a279-47837df87b61" containerID="12a57213ca737ce5381887b439702260fc2192b36b4211eace579e843e648443" exitCode=0 Jan 27 14:54:28 crc kubenswrapper[4698]: I0127 14:54:28.015930 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"1c98c523-0dc6-487f-a279-47837df87b61","Type":"ContainerDied","Data":"12a57213ca737ce5381887b439702260fc2192b36b4211eace579e843e648443"} Jan 27 14:54:28 crc kubenswrapper[4698]: I0127 14:54:28.397724 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 27 14:54:28 crc kubenswrapper[4698]: I0127 14:54:28.398303 4698 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="803378a1-8dbd-4540-8e07-8b9d6fc29c6b" containerName="glance-log" containerID="cri-o://4cbd31462283703c3ca2ab8011b320af50638594665ca991f17b3cc1e3f582b5" gracePeriod=30 Jan 27 14:54:28 crc kubenswrapper[4698]: I0127 14:54:28.398863 4698 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="803378a1-8dbd-4540-8e07-8b9d6fc29c6b" containerName="glance-httpd" containerID="cri-o://89350b3dedb5252f10ece7b4309f2e1e45131485a2dac9829d72c744e659f269" gracePeriod=30 Jan 27 14:54:28 crc kubenswrapper[4698]: I0127 14:54:28.428462 4698 util.go:48] "No ready sandbox for pod can be found. 
Jan 27 14:54:28 crc kubenswrapper[4698]: I0127 14:54:28.576461 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"1c98c523-0dc6-487f-a279-47837df87b61\" (UID: \"1c98c523-0dc6-487f-a279-47837df87b61\") "
Jan 27 14:54:28 crc kubenswrapper[4698]: I0127 14:54:28.576839 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1c98c523-0dc6-487f-a279-47837df87b61-scripts\") pod \"1c98c523-0dc6-487f-a279-47837df87b61\" (UID: \"1c98c523-0dc6-487f-a279-47837df87b61\") "
Jan 27 14:54:28 crc kubenswrapper[4698]: I0127 14:54:28.576897 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1c98c523-0dc6-487f-a279-47837df87b61-public-tls-certs\") pod \"1c98c523-0dc6-487f-a279-47837df87b61\" (UID: \"1c98c523-0dc6-487f-a279-47837df87b61\") "
Jan 27 14:54:28 crc kubenswrapper[4698]: I0127 14:54:28.576980 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6865d\" (UniqueName: \"kubernetes.io/projected/1c98c523-0dc6-487f-a279-47837df87b61-kube-api-access-6865d\") pod \"1c98c523-0dc6-487f-a279-47837df87b61\" (UID: \"1c98c523-0dc6-487f-a279-47837df87b61\") "
Jan 27 14:54:28 crc kubenswrapper[4698]: I0127 14:54:28.577038 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1c98c523-0dc6-487f-a279-47837df87b61-logs\") pod \"1c98c523-0dc6-487f-a279-47837df87b61\" (UID: \"1c98c523-0dc6-487f-a279-47837df87b61\") "
Jan 27 14:54:28 crc kubenswrapper[4698]: I0127 14:54:28.577098 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1c98c523-0dc6-487f-a279-47837df87b61-combined-ca-bundle\") pod \"1c98c523-0dc6-487f-a279-47837df87b61\" (UID: \"1c98c523-0dc6-487f-a279-47837df87b61\") "
Jan 27 14:54:28 crc kubenswrapper[4698]: I0127 14:54:28.577152 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1c98c523-0dc6-487f-a279-47837df87b61-config-data\") pod \"1c98c523-0dc6-487f-a279-47837df87b61\" (UID: \"1c98c523-0dc6-487f-a279-47837df87b61\") "
Jan 27 14:54:28 crc kubenswrapper[4698]: I0127 14:54:28.577242 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/1c98c523-0dc6-487f-a279-47837df87b61-httpd-run\") pod \"1c98c523-0dc6-487f-a279-47837df87b61\" (UID: \"1c98c523-0dc6-487f-a279-47837df87b61\") "
Jan 27 14:54:28 crc kubenswrapper[4698]: I0127 14:54:28.578663 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1c98c523-0dc6-487f-a279-47837df87b61-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "1c98c523-0dc6-487f-a279-47837df87b61" (UID: "1c98c523-0dc6-487f-a279-47837df87b61"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 27 14:54:28 crc kubenswrapper[4698]: I0127 14:54:28.579070 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1c98c523-0dc6-487f-a279-47837df87b61-logs" (OuterVolumeSpecName: "logs") pod "1c98c523-0dc6-487f-a279-47837df87b61" (UID: "1c98c523-0dc6-487f-a279-47837df87b61"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 27 14:54:28 crc kubenswrapper[4698]: I0127 14:54:28.604176 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage03-crc" (OuterVolumeSpecName: "glance") pod "1c98c523-0dc6-487f-a279-47837df87b61" (UID: "1c98c523-0dc6-487f-a279-47837df87b61"). InnerVolumeSpecName "local-storage03-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue ""
Jan 27 14:54:28 crc kubenswrapper[4698]: I0127 14:54:28.629013 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1c98c523-0dc6-487f-a279-47837df87b61-scripts" (OuterVolumeSpecName: "scripts") pod "1c98c523-0dc6-487f-a279-47837df87b61" (UID: "1c98c523-0dc6-487f-a279-47837df87b61"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 14:54:28 crc kubenswrapper[4698]: I0127 14:54:28.629201 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1c98c523-0dc6-487f-a279-47837df87b61-kube-api-access-6865d" (OuterVolumeSpecName: "kube-api-access-6865d") pod "1c98c523-0dc6-487f-a279-47837df87b61" (UID: "1c98c523-0dc6-487f-a279-47837df87b61"). InnerVolumeSpecName "kube-api-access-6865d". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 14:54:28 crc kubenswrapper[4698]: I0127 14:54:28.662963 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1c98c523-0dc6-487f-a279-47837df87b61-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1c98c523-0dc6-487f-a279-47837df87b61" (UID: "1c98c523-0dc6-487f-a279-47837df87b61"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 14:54:28 crc kubenswrapper[4698]: I0127 14:54:28.700503 4698 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1c98c523-0dc6-487f-a279-47837df87b61-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 27 14:54:28 crc kubenswrapper[4698]: I0127 14:54:28.700551 4698 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/1c98c523-0dc6-487f-a279-47837df87b61-httpd-run\") on node \"crc\" DevicePath \"\""
Jan 27 14:54:28 crc kubenswrapper[4698]: I0127 14:54:28.700584 4698 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") on node \"crc\" "
Jan 27 14:54:28 crc kubenswrapper[4698]: I0127 14:54:28.700598 4698 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1c98c523-0dc6-487f-a279-47837df87b61-scripts\") on node \"crc\" DevicePath \"\""
Jan 27 14:54:28 crc kubenswrapper[4698]: I0127 14:54:28.700611 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6865d\" (UniqueName: \"kubernetes.io/projected/1c98c523-0dc6-487f-a279-47837df87b61-kube-api-access-6865d\") on node \"crc\" DevicePath \"\""
Jan 27 14:54:28 crc kubenswrapper[4698]: I0127 14:54:28.700626 4698 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1c98c523-0dc6-487f-a279-47837df87b61-logs\") on node \"crc\" DevicePath \"\""
Jan 27 14:54:28 crc kubenswrapper[4698]: I0127 14:54:28.701154 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-vp87x"
Jan 27 14:54:28 crc kubenswrapper[4698]: I0127 14:54:28.744282 4698 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage03-crc" (UniqueName: "kubernetes.io/local-volume/local-storage03-crc") on node "crc"
Jan 27 14:54:28 crc kubenswrapper[4698]: I0127 14:54:28.753514 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1c98c523-0dc6-487f-a279-47837df87b61-config-data" (OuterVolumeSpecName: "config-data") pod "1c98c523-0dc6-487f-a279-47837df87b61" (UID: "1c98c523-0dc6-487f-a279-47837df87b61"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 14:54:28 crc kubenswrapper[4698]: I0127 14:54:28.797688 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1c98c523-0dc6-487f-a279-47837df87b61-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "1c98c523-0dc6-487f-a279-47837df87b61" (UID: "1c98c523-0dc6-487f-a279-47837df87b61"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:54:28 crc kubenswrapper[4698]: I0127 14:54:28.803133 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g92x6\" (UniqueName: \"kubernetes.io/projected/992034d3-1c4d-4e83-9641-12543dd3df24-kube-api-access-g92x6\") pod \"992034d3-1c4d-4e83-9641-12543dd3df24\" (UID: \"992034d3-1c4d-4e83-9641-12543dd3df24\") " Jan 27 14:54:28 crc kubenswrapper[4698]: I0127 14:54:28.803517 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/992034d3-1c4d-4e83-9641-12543dd3df24-combined-ca-bundle\") pod \"992034d3-1c4d-4e83-9641-12543dd3df24\" (UID: \"992034d3-1c4d-4e83-9641-12543dd3df24\") " Jan 27 14:54:28 crc kubenswrapper[4698]: I0127 14:54:28.803674 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/992034d3-1c4d-4e83-9641-12543dd3df24-config\") pod \"992034d3-1c4d-4e83-9641-12543dd3df24\" (UID: \"992034d3-1c4d-4e83-9641-12543dd3df24\") " Jan 27 14:54:28 crc kubenswrapper[4698]: I0127 14:54:28.804241 4698 reconciler_common.go:293] "Volume detached for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") on node \"crc\" DevicePath \"\"" Jan 27 14:54:28 crc kubenswrapper[4698]: I0127 14:54:28.804262 4698 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1c98c523-0dc6-487f-a279-47837df87b61-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 27 14:54:28 crc kubenswrapper[4698]: I0127 14:54:28.804275 4698 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1c98c523-0dc6-487f-a279-47837df87b61-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 14:54:28 crc kubenswrapper[4698]: I0127 14:54:28.821450 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/992034d3-1c4d-4e83-9641-12543dd3df24-kube-api-access-g92x6" (OuterVolumeSpecName: "kube-api-access-g92x6") pod "992034d3-1c4d-4e83-9641-12543dd3df24" (UID: "992034d3-1c4d-4e83-9641-12543dd3df24"). InnerVolumeSpecName "kube-api-access-g92x6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:54:28 crc kubenswrapper[4698]: I0127 14:54:28.862036 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/992034d3-1c4d-4e83-9641-12543dd3df24-config" (OuterVolumeSpecName: "config") pod "992034d3-1c4d-4e83-9641-12543dd3df24" (UID: "992034d3-1c4d-4e83-9641-12543dd3df24"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:54:28 crc kubenswrapper[4698]: I0127 14:54:28.888714 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/992034d3-1c4d-4e83-9641-12543dd3df24-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "992034d3-1c4d-4e83-9641-12543dd3df24" (UID: "992034d3-1c4d-4e83-9641-12543dd3df24"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:54:28 crc kubenswrapper[4698]: I0127 14:54:28.905726 4698 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/992034d3-1c4d-4e83-9641-12543dd3df24-config\") on node \"crc\" DevicePath \"\"" Jan 27 14:54:28 crc kubenswrapper[4698]: I0127 14:54:28.905774 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g92x6\" (UniqueName: \"kubernetes.io/projected/992034d3-1c4d-4e83-9641-12543dd3df24-kube-api-access-g92x6\") on node \"crc\" DevicePath \"\"" Jan 27 14:54:28 crc kubenswrapper[4698]: I0127 14:54:28.905791 4698 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/992034d3-1c4d-4e83-9641-12543dd3df24-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 14:54:29 crc kubenswrapper[4698]: I0127 14:54:29.034547 4698 generic.go:334] "Generic (PLEG): container finished" podID="803378a1-8dbd-4540-8e07-8b9d6fc29c6b" containerID="4cbd31462283703c3ca2ab8011b320af50638594665ca991f17b3cc1e3f582b5" exitCode=143 Jan 27 14:54:29 crc kubenswrapper[4698]: I0127 14:54:29.034646 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"803378a1-8dbd-4540-8e07-8b9d6fc29c6b","Type":"ContainerDied","Data":"4cbd31462283703c3ca2ab8011b320af50638594665ca991f17b3cc1e3f582b5"} Jan 27 14:54:29 crc kubenswrapper[4698]: I0127 14:54:29.048570 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-vp87x" event={"ID":"992034d3-1c4d-4e83-9641-12543dd3df24","Type":"ContainerDied","Data":"55ca7234159003c7a98ff70dcb13f8d4efff74ff4ead693f0becdbea6f6bf53c"} Jan 27 14:54:29 crc kubenswrapper[4698]: I0127 14:54:29.048622 4698 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="55ca7234159003c7a98ff70dcb13f8d4efff74ff4ead693f0becdbea6f6bf53c" Jan 27 14:54:29 crc kubenswrapper[4698]: I0127 14:54:29.048711 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-vp87x" Jan 27 14:54:29 crc kubenswrapper[4698]: I0127 14:54:29.067231 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"1c98c523-0dc6-487f-a279-47837df87b61","Type":"ContainerDied","Data":"0b14c322ec290ef2f535f17e195aa4767fef98eeb591ac098f93e527686d1cf8"} Jan 27 14:54:29 crc kubenswrapper[4698]: I0127 14:54:29.067290 4698 scope.go:117] "RemoveContainer" containerID="12a57213ca737ce5381887b439702260fc2192b36b4211eace579e843e648443" Jan 27 14:54:29 crc kubenswrapper[4698]: I0127 14:54:29.067440 4698 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 27 14:54:29 crc kubenswrapper[4698]: I0127 14:54:29.109613 4698 scope.go:117] "RemoveContainer" containerID="f475ddfd2dced71257c2298994783a72cc02be3fe5f424507a519767978147be" Jan 27 14:54:29 crc kubenswrapper[4698]: I0127 14:54:29.115608 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 27 14:54:29 crc kubenswrapper[4698]: I0127 14:54:29.127713 4698 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 27 14:54:29 crc kubenswrapper[4698]: I0127 14:54:29.137382 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Jan 27 14:54:29 crc kubenswrapper[4698]: E0127 14:54:29.137929 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="992034d3-1c4d-4e83-9641-12543dd3df24" containerName="neutron-db-sync" Jan 27 14:54:29 crc kubenswrapper[4698]: I0127 14:54:29.137956 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="992034d3-1c4d-4e83-9641-12543dd3df24" containerName="neutron-db-sync" Jan 27 14:54:29 crc kubenswrapper[4698]: E0127 14:54:29.137980 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1c98c523-0dc6-487f-a279-47837df87b61" containerName="glance-httpd" Jan 27 14:54:29 crc kubenswrapper[4698]: I0127 14:54:29.137988 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="1c98c523-0dc6-487f-a279-47837df87b61" containerName="glance-httpd" Jan 27 14:54:29 crc kubenswrapper[4698]: E0127 14:54:29.138015 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1c98c523-0dc6-487f-a279-47837df87b61" containerName="glance-log" Jan 27 14:54:29 crc kubenswrapper[4698]: I0127 14:54:29.138025 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="1c98c523-0dc6-487f-a279-47837df87b61" containerName="glance-log" Jan 27 14:54:29 crc kubenswrapper[4698]: I0127 14:54:29.138239 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="1c98c523-0dc6-487f-a279-47837df87b61" containerName="glance-log" Jan 27 14:54:29 crc kubenswrapper[4698]: I0127 14:54:29.138270 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="992034d3-1c4d-4e83-9641-12543dd3df24" containerName="neutron-db-sync" Jan 27 14:54:29 crc kubenswrapper[4698]: I0127 14:54:29.138305 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="1c98c523-0dc6-487f-a279-47837df87b61" containerName="glance-httpd" Jan 27 14:54:29 crc kubenswrapper[4698]: I0127 14:54:29.141371 4698 util.go:30] "No sandbox for pod can be found. 
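Editor's note: when the replacement glance-default-external-api-0 is ADDed, the CPU and memory managers first purge per-container assignments left behind by the deleted pod UIDs; the E-prefixed RemoveStaleState lines are this cleanup, not failures. Roughly the idea, sketched over a plain map rather than the managers' checkpointed state:

package main

import "fmt"

// assignments stands in for resource-manager state keyed by
// podUID/container; removeStale purges entries for pods that no
// longer exist, like the RemoveStaleState pass above.
type assignments map[string]map[string]string // podUID -> container -> cpuset

func (a assignments) removeStale(activePods map[string]bool) {
	for podUID, containers := range a {
		if activePods[podUID] {
			continue
		}
		for name := range containers {
			fmt.Printf("RemoveStaleState: removing container podUID=%q containerName=%q\n", podUID, name)
		}
		delete(a, podUID)
	}
}

func main() {
	a := assignments{
		"1c98c523-0dc6-487f-a279-47837df87b61": {"glance-httpd": "0-1", "glance-log": "2"},
		"19f81344-f620-4556-a605-8b6d26805b77": {"glance-httpd": "0-1"},
	}
	a.removeStale(map[string]bool{"19f81344-f620-4556-a605-8b6d26805b77": true})
}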
Jan 27 14:54:29 crc kubenswrapper[4698]: I0127 14:54:29.144421 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data"
Jan 27 14:54:29 crc kubenswrapper[4698]: I0127 14:54:29.145038 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc"
Jan 27 14:54:29 crc kubenswrapper[4698]: I0127 14:54:29.172204 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"]
Jan 27 14:54:29 crc kubenswrapper[4698]: I0127 14:54:29.318486 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mwsx9\" (UniqueName: \"kubernetes.io/projected/19f81344-f620-4556-a605-8b6d26805b77-kube-api-access-mwsx9\") pod \"glance-default-external-api-0\" (UID: \"19f81344-f620-4556-a605-8b6d26805b77\") " pod="openstack/glance-default-external-api-0"
Jan 27 14:54:29 crc kubenswrapper[4698]: I0127 14:54:29.318551 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/19f81344-f620-4556-a605-8b6d26805b77-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"19f81344-f620-4556-a605-8b6d26805b77\") " pod="openstack/glance-default-external-api-0"
Jan 27 14:54:29 crc kubenswrapper[4698]: I0127 14:54:29.318589 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-external-api-0\" (UID: \"19f81344-f620-4556-a605-8b6d26805b77\") " pod="openstack/glance-default-external-api-0"
Jan 27 14:54:29 crc kubenswrapper[4698]: I0127 14:54:29.318710 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/19f81344-f620-4556-a605-8b6d26805b77-logs\") pod \"glance-default-external-api-0\" (UID: \"19f81344-f620-4556-a605-8b6d26805b77\") " pod="openstack/glance-default-external-api-0"
Jan 27 14:54:29 crc kubenswrapper[4698]: I0127 14:54:29.318749 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/19f81344-f620-4556-a605-8b6d26805b77-config-data\") pod \"glance-default-external-api-0\" (UID: \"19f81344-f620-4556-a605-8b6d26805b77\") " pod="openstack/glance-default-external-api-0"
Jan 27 14:54:29 crc kubenswrapper[4698]: I0127 14:54:29.318794 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/19f81344-f620-4556-a605-8b6d26805b77-scripts\") pod \"glance-default-external-api-0\" (UID: \"19f81344-f620-4556-a605-8b6d26805b77\") " pod="openstack/glance-default-external-api-0"
Jan 27 14:54:29 crc kubenswrapper[4698]: I0127 14:54:29.318896 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/19f81344-f620-4556-a605-8b6d26805b77-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"19f81344-f620-4556-a605-8b6d26805b77\") " pod="openstack/glance-default-external-api-0"
Jan 27 14:54:29 crc kubenswrapper[4698]: I0127 14:54:29.318958 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/19f81344-f620-4556-a605-8b6d26805b77-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"19f81344-f620-4556-a605-8b6d26805b77\") " pod="openstack/glance-default-external-api-0"
Jan 27 14:54:29 crc kubenswrapper[4698]: I0127 14:54:29.411720 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-579477885f-87tbw"]
Jan 27 14:54:29 crc kubenswrapper[4698]: I0127 14:54:29.414059 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-579477885f-87tbw"
Jan 27 14:54:29 crc kubenswrapper[4698]: I0127 14:54:29.426429 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mwsx9\" (UniqueName: \"kubernetes.io/projected/19f81344-f620-4556-a605-8b6d26805b77-kube-api-access-mwsx9\") pod \"glance-default-external-api-0\" (UID: \"19f81344-f620-4556-a605-8b6d26805b77\") " pod="openstack/glance-default-external-api-0"
Jan 27 14:54:29 crc kubenswrapper[4698]: I0127 14:54:29.426498 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/19f81344-f620-4556-a605-8b6d26805b77-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"19f81344-f620-4556-a605-8b6d26805b77\") " pod="openstack/glance-default-external-api-0"
Jan 27 14:54:29 crc kubenswrapper[4698]: I0127 14:54:29.426540 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-external-api-0\" (UID: \"19f81344-f620-4556-a605-8b6d26805b77\") " pod="openstack/glance-default-external-api-0"
Jan 27 14:54:29 crc kubenswrapper[4698]: I0127 14:54:29.426736 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/19f81344-f620-4556-a605-8b6d26805b77-logs\") pod \"glance-default-external-api-0\" (UID: \"19f81344-f620-4556-a605-8b6d26805b77\") " pod="openstack/glance-default-external-api-0"
Jan 27 14:54:29 crc kubenswrapper[4698]: I0127 14:54:29.426777 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/19f81344-f620-4556-a605-8b6d26805b77-config-data\") pod \"glance-default-external-api-0\" (UID: \"19f81344-f620-4556-a605-8b6d26805b77\") " pod="openstack/glance-default-external-api-0"
Jan 27 14:54:29 crc kubenswrapper[4698]: I0127 14:54:29.426827 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/19f81344-f620-4556-a605-8b6d26805b77-scripts\") pod \"glance-default-external-api-0\" (UID: \"19f81344-f620-4556-a605-8b6d26805b77\") " pod="openstack/glance-default-external-api-0"
Jan 27 14:54:29 crc kubenswrapper[4698]: I0127 14:54:29.426941 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/19f81344-f620-4556-a605-8b6d26805b77-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"19f81344-f620-4556-a605-8b6d26805b77\") " pod="openstack/glance-default-external-api-0"
Jan 27 14:54:29 crc kubenswrapper[4698]: I0127 14:54:29.427000 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/19f81344-f620-4556-a605-8b6d26805b77-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"19f81344-f620-4556-a605-8b6d26805b77\") " pod="openstack/glance-default-external-api-0"
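Editor's note: bringing the new pod's volumes up mirrors teardown in reverse: VerifyControllerAttachedVolume, then MountVolume, which for the local PV does a device mount (note the "/mnt/openstack/pv03" path logged below) before the per-pod SetUp; secrets, configmaps and empty-dirs skip the device step. A condensed sketch of that ordering (mountVolume is an invented helper; the plugin names and path are the real ones from the surrounding entries):

package main

import "fmt"

// mountVolume sketches the operationExecutor ordering seen in the
// log: attach check, optional device mount, then per-pod SetUp.
func mountVolume(name, plugin, devicePath string) {
	fmt.Printf("VerifyControllerAttachedVolume started for %q\n", name)
	if devicePath != "" { // only device-mountable plugins, e.g. local-volume
		fmt.Printf("MountVolume.MountDevice succeeded for %q at %q\n", name, devicePath)
	}
	fmt.Printf("MountVolume.SetUp succeeded for %q (plugin %s)\n", name, plugin)
}

func main() {
	mountVolume("local-storage03-crc", "kubernetes.io/local-volume", "/mnt/openstack/pv03")
	mountVolume("config-data", "kubernetes.io/secret", "")
	mountVolume("logs", "kubernetes.io/empty-dir", "")
}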
\"glance-default-external-api-0\" (UID: \"19f81344-f620-4556-a605-8b6d26805b77\") " pod="openstack/glance-default-external-api-0" Jan 27 14:54:29 crc kubenswrapper[4698]: I0127 14:54:29.427435 4698 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-external-api-0\" (UID: \"19f81344-f620-4556-a605-8b6d26805b77\") device mount path \"/mnt/openstack/pv03\"" pod="openstack/glance-default-external-api-0" Jan 27 14:54:29 crc kubenswrapper[4698]: I0127 14:54:29.430350 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/19f81344-f620-4556-a605-8b6d26805b77-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"19f81344-f620-4556-a605-8b6d26805b77\") " pod="openstack/glance-default-external-api-0" Jan 27 14:54:29 crc kubenswrapper[4698]: I0127 14:54:29.431870 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-579477885f-87tbw"] Jan 27 14:54:29 crc kubenswrapper[4698]: I0127 14:54:29.432300 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/19f81344-f620-4556-a605-8b6d26805b77-logs\") pod \"glance-default-external-api-0\" (UID: \"19f81344-f620-4556-a605-8b6d26805b77\") " pod="openstack/glance-default-external-api-0" Jan 27 14:54:29 crc kubenswrapper[4698]: I0127 14:54:29.445317 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/19f81344-f620-4556-a605-8b6d26805b77-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"19f81344-f620-4556-a605-8b6d26805b77\") " pod="openstack/glance-default-external-api-0" Jan 27 14:54:29 crc kubenswrapper[4698]: I0127 14:54:29.451290 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/19f81344-f620-4556-a605-8b6d26805b77-scripts\") pod \"glance-default-external-api-0\" (UID: \"19f81344-f620-4556-a605-8b6d26805b77\") " pod="openstack/glance-default-external-api-0" Jan 27 14:54:29 crc kubenswrapper[4698]: I0127 14:54:29.452975 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/19f81344-f620-4556-a605-8b6d26805b77-config-data\") pod \"glance-default-external-api-0\" (UID: \"19f81344-f620-4556-a605-8b6d26805b77\") " pod="openstack/glance-default-external-api-0" Jan 27 14:54:29 crc kubenswrapper[4698]: I0127 14:54:29.459815 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-6cf989b698-8jzjn"] Jan 27 14:54:29 crc kubenswrapper[4698]: I0127 14:54:29.462064 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-6cf989b698-8jzjn" Jan 27 14:54:29 crc kubenswrapper[4698]: I0127 14:54:29.482234 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/19f81344-f620-4556-a605-8b6d26805b77-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"19f81344-f620-4556-a605-8b6d26805b77\") " pod="openstack/glance-default-external-api-0" Jan 27 14:54:29 crc kubenswrapper[4698]: I0127 14:54:29.482615 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Jan 27 14:54:29 crc kubenswrapper[4698]: I0127 14:54:29.482779 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Jan 27 14:54:29 crc kubenswrapper[4698]: I0127 14:54:29.483530 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-qqt82" Jan 27 14:54:29 crc kubenswrapper[4698]: I0127 14:54:29.483732 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-ovndbs" Jan 27 14:54:29 crc kubenswrapper[4698]: I0127 14:54:29.488538 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mwsx9\" (UniqueName: \"kubernetes.io/projected/19f81344-f620-4556-a605-8b6d26805b77-kube-api-access-mwsx9\") pod \"glance-default-external-api-0\" (UID: \"19f81344-f620-4556-a605-8b6d26805b77\") " pod="openstack/glance-default-external-api-0" Jan 27 14:54:29 crc kubenswrapper[4698]: I0127 14:54:29.498393 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-6cf989b698-8jzjn"] Jan 27 14:54:29 crc kubenswrapper[4698]: I0127 14:54:29.522550 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-external-api-0\" (UID: \"19f81344-f620-4556-a605-8b6d26805b77\") " pod="openstack/glance-default-external-api-0" Jan 27 14:54:29 crc kubenswrapper[4698]: I0127 14:54:29.531305 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5d434aa2-b7eb-424d-930a-25be01006019-ovsdbserver-sb\") pod \"dnsmasq-dns-579477885f-87tbw\" (UID: \"5d434aa2-b7eb-424d-930a-25be01006019\") " pod="openstack/dnsmasq-dns-579477885f-87tbw" Jan 27 14:54:29 crc kubenswrapper[4698]: I0127 14:54:29.531368 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xh9qp\" (UniqueName: \"kubernetes.io/projected/555e2415-b86d-425b-a345-2a2e8c9ef212-kube-api-access-xh9qp\") pod \"neutron-6cf989b698-8jzjn\" (UID: \"555e2415-b86d-425b-a345-2a2e8c9ef212\") " pod="openstack/neutron-6cf989b698-8jzjn" Jan 27 14:54:29 crc kubenswrapper[4698]: I0127 14:54:29.531410 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5d434aa2-b7eb-424d-930a-25be01006019-config\") pod \"dnsmasq-dns-579477885f-87tbw\" (UID: \"5d434aa2-b7eb-424d-930a-25be01006019\") " pod="openstack/dnsmasq-dns-579477885f-87tbw" Jan 27 14:54:29 crc kubenswrapper[4698]: I0127 14:54:29.533392 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5d434aa2-b7eb-424d-930a-25be01006019-ovsdbserver-nb\") pod 
\"dnsmasq-dns-579477885f-87tbw\" (UID: \"5d434aa2-b7eb-424d-930a-25be01006019\") " pod="openstack/dnsmasq-dns-579477885f-87tbw" Jan 27 14:54:29 crc kubenswrapper[4698]: I0127 14:54:29.533528 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/555e2415-b86d-425b-a345-2a2e8c9ef212-combined-ca-bundle\") pod \"neutron-6cf989b698-8jzjn\" (UID: \"555e2415-b86d-425b-a345-2a2e8c9ef212\") " pod="openstack/neutron-6cf989b698-8jzjn" Jan 27 14:54:29 crc kubenswrapper[4698]: I0127 14:54:29.533561 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ltnm5\" (UniqueName: \"kubernetes.io/projected/5d434aa2-b7eb-424d-930a-25be01006019-kube-api-access-ltnm5\") pod \"dnsmasq-dns-579477885f-87tbw\" (UID: \"5d434aa2-b7eb-424d-930a-25be01006019\") " pod="openstack/dnsmasq-dns-579477885f-87tbw" Jan 27 14:54:29 crc kubenswrapper[4698]: I0127 14:54:29.533621 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/5d434aa2-b7eb-424d-930a-25be01006019-dns-swift-storage-0\") pod \"dnsmasq-dns-579477885f-87tbw\" (UID: \"5d434aa2-b7eb-424d-930a-25be01006019\") " pod="openstack/dnsmasq-dns-579477885f-87tbw" Jan 27 14:54:29 crc kubenswrapper[4698]: I0127 14:54:29.534186 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/555e2415-b86d-425b-a345-2a2e8c9ef212-httpd-config\") pod \"neutron-6cf989b698-8jzjn\" (UID: \"555e2415-b86d-425b-a345-2a2e8c9ef212\") " pod="openstack/neutron-6cf989b698-8jzjn" Jan 27 14:54:29 crc kubenswrapper[4698]: I0127 14:54:29.534246 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/555e2415-b86d-425b-a345-2a2e8c9ef212-config\") pod \"neutron-6cf989b698-8jzjn\" (UID: \"555e2415-b86d-425b-a345-2a2e8c9ef212\") " pod="openstack/neutron-6cf989b698-8jzjn" Jan 27 14:54:29 crc kubenswrapper[4698]: I0127 14:54:29.534282 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/555e2415-b86d-425b-a345-2a2e8c9ef212-ovndb-tls-certs\") pod \"neutron-6cf989b698-8jzjn\" (UID: \"555e2415-b86d-425b-a345-2a2e8c9ef212\") " pod="openstack/neutron-6cf989b698-8jzjn" Jan 27 14:54:29 crc kubenswrapper[4698]: I0127 14:54:29.534431 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5d434aa2-b7eb-424d-930a-25be01006019-dns-svc\") pod \"dnsmasq-dns-579477885f-87tbw\" (UID: \"5d434aa2-b7eb-424d-930a-25be01006019\") " pod="openstack/dnsmasq-dns-579477885f-87tbw" Jan 27 14:54:29 crc kubenswrapper[4698]: I0127 14:54:29.636479 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5d434aa2-b7eb-424d-930a-25be01006019-dns-svc\") pod \"dnsmasq-dns-579477885f-87tbw\" (UID: \"5d434aa2-b7eb-424d-930a-25be01006019\") " pod="openstack/dnsmasq-dns-579477885f-87tbw" Jan 27 14:54:29 crc kubenswrapper[4698]: I0127 14:54:29.636551 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: 
\"kubernetes.io/configmap/5d434aa2-b7eb-424d-930a-25be01006019-ovsdbserver-sb\") pod \"dnsmasq-dns-579477885f-87tbw\" (UID: \"5d434aa2-b7eb-424d-930a-25be01006019\") " pod="openstack/dnsmasq-dns-579477885f-87tbw" Jan 27 14:54:29 crc kubenswrapper[4698]: I0127 14:54:29.636584 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xh9qp\" (UniqueName: \"kubernetes.io/projected/555e2415-b86d-425b-a345-2a2e8c9ef212-kube-api-access-xh9qp\") pod \"neutron-6cf989b698-8jzjn\" (UID: \"555e2415-b86d-425b-a345-2a2e8c9ef212\") " pod="openstack/neutron-6cf989b698-8jzjn" Jan 27 14:54:29 crc kubenswrapper[4698]: I0127 14:54:29.636623 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5d434aa2-b7eb-424d-930a-25be01006019-config\") pod \"dnsmasq-dns-579477885f-87tbw\" (UID: \"5d434aa2-b7eb-424d-930a-25be01006019\") " pod="openstack/dnsmasq-dns-579477885f-87tbw" Jan 27 14:54:29 crc kubenswrapper[4698]: I0127 14:54:29.636689 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5d434aa2-b7eb-424d-930a-25be01006019-ovsdbserver-nb\") pod \"dnsmasq-dns-579477885f-87tbw\" (UID: \"5d434aa2-b7eb-424d-930a-25be01006019\") " pod="openstack/dnsmasq-dns-579477885f-87tbw" Jan 27 14:54:29 crc kubenswrapper[4698]: I0127 14:54:29.636751 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/555e2415-b86d-425b-a345-2a2e8c9ef212-combined-ca-bundle\") pod \"neutron-6cf989b698-8jzjn\" (UID: \"555e2415-b86d-425b-a345-2a2e8c9ef212\") " pod="openstack/neutron-6cf989b698-8jzjn" Jan 27 14:54:29 crc kubenswrapper[4698]: I0127 14:54:29.636770 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ltnm5\" (UniqueName: \"kubernetes.io/projected/5d434aa2-b7eb-424d-930a-25be01006019-kube-api-access-ltnm5\") pod \"dnsmasq-dns-579477885f-87tbw\" (UID: \"5d434aa2-b7eb-424d-930a-25be01006019\") " pod="openstack/dnsmasq-dns-579477885f-87tbw" Jan 27 14:54:29 crc kubenswrapper[4698]: I0127 14:54:29.636810 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/5d434aa2-b7eb-424d-930a-25be01006019-dns-swift-storage-0\") pod \"dnsmasq-dns-579477885f-87tbw\" (UID: \"5d434aa2-b7eb-424d-930a-25be01006019\") " pod="openstack/dnsmasq-dns-579477885f-87tbw" Jan 27 14:54:29 crc kubenswrapper[4698]: I0127 14:54:29.636834 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/555e2415-b86d-425b-a345-2a2e8c9ef212-httpd-config\") pod \"neutron-6cf989b698-8jzjn\" (UID: \"555e2415-b86d-425b-a345-2a2e8c9ef212\") " pod="openstack/neutron-6cf989b698-8jzjn" Jan 27 14:54:29 crc kubenswrapper[4698]: I0127 14:54:29.636860 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/555e2415-b86d-425b-a345-2a2e8c9ef212-config\") pod \"neutron-6cf989b698-8jzjn\" (UID: \"555e2415-b86d-425b-a345-2a2e8c9ef212\") " pod="openstack/neutron-6cf989b698-8jzjn" Jan 27 14:54:29 crc kubenswrapper[4698]: I0127 14:54:29.636888 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/555e2415-b86d-425b-a345-2a2e8c9ef212-ovndb-tls-certs\") pod 
\"neutron-6cf989b698-8jzjn\" (UID: \"555e2415-b86d-425b-a345-2a2e8c9ef212\") " pod="openstack/neutron-6cf989b698-8jzjn" Jan 27 14:54:29 crc kubenswrapper[4698]: I0127 14:54:29.637846 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5d434aa2-b7eb-424d-930a-25be01006019-dns-svc\") pod \"dnsmasq-dns-579477885f-87tbw\" (UID: \"5d434aa2-b7eb-424d-930a-25be01006019\") " pod="openstack/dnsmasq-dns-579477885f-87tbw" Jan 27 14:54:29 crc kubenswrapper[4698]: I0127 14:54:29.637876 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5d434aa2-b7eb-424d-930a-25be01006019-config\") pod \"dnsmasq-dns-579477885f-87tbw\" (UID: \"5d434aa2-b7eb-424d-930a-25be01006019\") " pod="openstack/dnsmasq-dns-579477885f-87tbw" Jan 27 14:54:29 crc kubenswrapper[4698]: I0127 14:54:29.638473 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5d434aa2-b7eb-424d-930a-25be01006019-ovsdbserver-sb\") pod \"dnsmasq-dns-579477885f-87tbw\" (UID: \"5d434aa2-b7eb-424d-930a-25be01006019\") " pod="openstack/dnsmasq-dns-579477885f-87tbw" Jan 27 14:54:29 crc kubenswrapper[4698]: I0127 14:54:29.638771 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5d434aa2-b7eb-424d-930a-25be01006019-ovsdbserver-nb\") pod \"dnsmasq-dns-579477885f-87tbw\" (UID: \"5d434aa2-b7eb-424d-930a-25be01006019\") " pod="openstack/dnsmasq-dns-579477885f-87tbw" Jan 27 14:54:29 crc kubenswrapper[4698]: I0127 14:54:29.641303 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/5d434aa2-b7eb-424d-930a-25be01006019-dns-swift-storage-0\") pod \"dnsmasq-dns-579477885f-87tbw\" (UID: \"5d434aa2-b7eb-424d-930a-25be01006019\") " pod="openstack/dnsmasq-dns-579477885f-87tbw" Jan 27 14:54:29 crc kubenswrapper[4698]: I0127 14:54:29.643630 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/555e2415-b86d-425b-a345-2a2e8c9ef212-httpd-config\") pod \"neutron-6cf989b698-8jzjn\" (UID: \"555e2415-b86d-425b-a345-2a2e8c9ef212\") " pod="openstack/neutron-6cf989b698-8jzjn" Jan 27 14:54:29 crc kubenswrapper[4698]: I0127 14:54:29.644286 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/555e2415-b86d-425b-a345-2a2e8c9ef212-combined-ca-bundle\") pod \"neutron-6cf989b698-8jzjn\" (UID: \"555e2415-b86d-425b-a345-2a2e8c9ef212\") " pod="openstack/neutron-6cf989b698-8jzjn" Jan 27 14:54:29 crc kubenswrapper[4698]: I0127 14:54:29.644918 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/555e2415-b86d-425b-a345-2a2e8c9ef212-config\") pod \"neutron-6cf989b698-8jzjn\" (UID: \"555e2415-b86d-425b-a345-2a2e8c9ef212\") " pod="openstack/neutron-6cf989b698-8jzjn" Jan 27 14:54:29 crc kubenswrapper[4698]: I0127 14:54:29.649570 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/555e2415-b86d-425b-a345-2a2e8c9ef212-ovndb-tls-certs\") pod \"neutron-6cf989b698-8jzjn\" (UID: \"555e2415-b86d-425b-a345-2a2e8c9ef212\") " pod="openstack/neutron-6cf989b698-8jzjn" Jan 27 14:54:29 crc kubenswrapper[4698]: I0127 14:54:29.664824 4698 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xh9qp\" (UniqueName: \"kubernetes.io/projected/555e2415-b86d-425b-a345-2a2e8c9ef212-kube-api-access-xh9qp\") pod \"neutron-6cf989b698-8jzjn\" (UID: \"555e2415-b86d-425b-a345-2a2e8c9ef212\") " pod="openstack/neutron-6cf989b698-8jzjn" Jan 27 14:54:29 crc kubenswrapper[4698]: I0127 14:54:29.670630 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ltnm5\" (UniqueName: \"kubernetes.io/projected/5d434aa2-b7eb-424d-930a-25be01006019-kube-api-access-ltnm5\") pod \"dnsmasq-dns-579477885f-87tbw\" (UID: \"5d434aa2-b7eb-424d-930a-25be01006019\") " pod="openstack/dnsmasq-dns-579477885f-87tbw" Jan 27 14:54:29 crc kubenswrapper[4698]: I0127 14:54:29.747102 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-579477885f-87tbw" Jan 27 14:54:29 crc kubenswrapper[4698]: I0127 14:54:29.763829 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 27 14:54:29 crc kubenswrapper[4698]: I0127 14:54:29.834090 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-6cf989b698-8jzjn" Jan 27 14:54:30 crc kubenswrapper[4698]: I0127 14:54:30.729012 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-579477885f-87tbw"] Jan 27 14:54:30 crc kubenswrapper[4698]: I0127 14:54:30.763359 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 27 14:54:31 crc kubenswrapper[4698]: I0127 14:54:31.046250 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1c98c523-0dc6-487f-a279-47837df87b61" path="/var/lib/kubelet/pods/1c98c523-0dc6-487f-a279-47837df87b61/volumes" Jan 27 14:54:31 crc kubenswrapper[4698]: I0127 14:54:31.047670 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-6cf989b698-8jzjn"] Jan 27 14:54:31 crc kubenswrapper[4698]: W0127 14:54:31.109847 4698 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod555e2415_b86d_425b_a345_2a2e8c9ef212.slice/crio-aa11c386b4c8af38f5688d1e7c515cd38d332eacd65ae92a34487e8454dc066e WatchSource:0}: Error finding container aa11c386b4c8af38f5688d1e7c515cd38d332eacd65ae92a34487e8454dc066e: Status 404 returned error can't find the container with id aa11c386b4c8af38f5688d1e7c515cd38d332eacd65ae92a34487e8454dc066e Jan 27 14:54:31 crc kubenswrapper[4698]: I0127 14:54:31.121594 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-579477885f-87tbw" event={"ID":"5d434aa2-b7eb-424d-930a-25be01006019","Type":"ContainerStarted","Data":"bc5439765d0c463b88edfef42904ceac0ab614fd5177e99e53e71541cfee79e1"} Jan 27 14:54:31 crc kubenswrapper[4698]: I0127 14:54:31.125201 4698 generic.go:334] "Generic (PLEG): container finished" podID="803378a1-8dbd-4540-8e07-8b9d6fc29c6b" containerID="89350b3dedb5252f10ece7b4309f2e1e45131485a2dac9829d72c744e659f269" exitCode=0 Jan 27 14:54:31 crc kubenswrapper[4698]: I0127 14:54:31.125310 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"803378a1-8dbd-4540-8e07-8b9d6fc29c6b","Type":"ContainerDied","Data":"89350b3dedb5252f10ece7b4309f2e1e45131485a2dac9829d72c744e659f269"} Jan 27 14:54:31 crc kubenswrapper[4698]: I0127 14:54:31.125339 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
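Editor's note: every kubenswrapper payload after the systemd prefix uses klog's header (severity letter, MMDD date, timestamp, PID, file:line, then the message), which is what makes events like the cadvisor 404 watch warning above greppable. A small parser for that header; the regexp is this sketch's own, the layout is klog's documented format:

package main

import (
	"fmt"
	"regexp"
)

// klogHeader matches klog's native line header:
// Lmmdd hh:mm:ss.uuuuuu threadid file:line] msg
var klogHeader = regexp.MustCompile(
	`^([IWEF])(\d{4}) (\d{2}:\d{2}:\d{2}\.\d+)\s+(\d+)\s+([\w.]+:\d+)\] (.*)$`)

func main() {
	line := `W0127 14:54:31.109847 4698 manager.go:1169] Failed to process watch event`
	if m := klogHeader.FindStringSubmatch(line); m != nil {
		fmt.Printf("severity=%s date=%s time=%s pid=%s src=%s msg=%q\n",
			m[1], m[2], m[3], m[4], m[5], m[6])
	}
}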
pod="openstack/glance-default-internal-api-0" event={"ID":"803378a1-8dbd-4540-8e07-8b9d6fc29c6b","Type":"ContainerDied","Data":"ff3a58ca7cb001f14463c65798e61aef3a2184b8258dfd78cc9e3ae37d01b336"} Jan 27 14:54:31 crc kubenswrapper[4698]: I0127 14:54:31.125349 4698 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ff3a58ca7cb001f14463c65798e61aef3a2184b8258dfd78cc9e3ae37d01b336" Jan 27 14:54:31 crc kubenswrapper[4698]: I0127 14:54:31.132949 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"19f81344-f620-4556-a605-8b6d26805b77","Type":"ContainerStarted","Data":"1f2be1ff3cdf56d5cf4ba02c30c3bb3372198c2fa6dbccb85cbc00df826c7332"} Jan 27 14:54:31 crc kubenswrapper[4698]: I0127 14:54:31.212148 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 27 14:54:31 crc kubenswrapper[4698]: I0127 14:54:31.308269 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/803378a1-8dbd-4540-8e07-8b9d6fc29c6b-internal-tls-certs\") pod \"803378a1-8dbd-4540-8e07-8b9d6fc29c6b\" (UID: \"803378a1-8dbd-4540-8e07-8b9d6fc29c6b\") " Jan 27 14:54:31 crc kubenswrapper[4698]: I0127 14:54:31.308725 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/803378a1-8dbd-4540-8e07-8b9d6fc29c6b-logs\") pod \"803378a1-8dbd-4540-8e07-8b9d6fc29c6b\" (UID: \"803378a1-8dbd-4540-8e07-8b9d6fc29c6b\") " Jan 27 14:54:31 crc kubenswrapper[4698]: I0127 14:54:31.308899 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/803378a1-8dbd-4540-8e07-8b9d6fc29c6b-scripts\") pod \"803378a1-8dbd-4540-8e07-8b9d6fc29c6b\" (UID: \"803378a1-8dbd-4540-8e07-8b9d6fc29c6b\") " Jan 27 14:54:31 crc kubenswrapper[4698]: I0127 14:54:31.308959 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"803378a1-8dbd-4540-8e07-8b9d6fc29c6b\" (UID: \"803378a1-8dbd-4540-8e07-8b9d6fc29c6b\") " Jan 27 14:54:31 crc kubenswrapper[4698]: I0127 14:54:31.309016 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/803378a1-8dbd-4540-8e07-8b9d6fc29c6b-combined-ca-bundle\") pod \"803378a1-8dbd-4540-8e07-8b9d6fc29c6b\" (UID: \"803378a1-8dbd-4540-8e07-8b9d6fc29c6b\") " Jan 27 14:54:31 crc kubenswrapper[4698]: I0127 14:54:31.309053 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/803378a1-8dbd-4540-8e07-8b9d6fc29c6b-httpd-run\") pod \"803378a1-8dbd-4540-8e07-8b9d6fc29c6b\" (UID: \"803378a1-8dbd-4540-8e07-8b9d6fc29c6b\") " Jan 27 14:54:31 crc kubenswrapper[4698]: I0127 14:54:31.309077 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/803378a1-8dbd-4540-8e07-8b9d6fc29c6b-config-data\") pod \"803378a1-8dbd-4540-8e07-8b9d6fc29c6b\" (UID: \"803378a1-8dbd-4540-8e07-8b9d6fc29c6b\") " Jan 27 14:54:31 crc kubenswrapper[4698]: I0127 14:54:31.309101 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jvlzn\" (UniqueName: 
\"kubernetes.io/projected/803378a1-8dbd-4540-8e07-8b9d6fc29c6b-kube-api-access-jvlzn\") pod \"803378a1-8dbd-4540-8e07-8b9d6fc29c6b\" (UID: \"803378a1-8dbd-4540-8e07-8b9d6fc29c6b\") " Jan 27 14:54:31 crc kubenswrapper[4698]: I0127 14:54:31.312038 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/803378a1-8dbd-4540-8e07-8b9d6fc29c6b-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "803378a1-8dbd-4540-8e07-8b9d6fc29c6b" (UID: "803378a1-8dbd-4540-8e07-8b9d6fc29c6b"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 14:54:31 crc kubenswrapper[4698]: I0127 14:54:31.312757 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/803378a1-8dbd-4540-8e07-8b9d6fc29c6b-logs" (OuterVolumeSpecName: "logs") pod "803378a1-8dbd-4540-8e07-8b9d6fc29c6b" (UID: "803378a1-8dbd-4540-8e07-8b9d6fc29c6b"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 14:54:31 crc kubenswrapper[4698]: I0127 14:54:31.329070 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/803378a1-8dbd-4540-8e07-8b9d6fc29c6b-kube-api-access-jvlzn" (OuterVolumeSpecName: "kube-api-access-jvlzn") pod "803378a1-8dbd-4540-8e07-8b9d6fc29c6b" (UID: "803378a1-8dbd-4540-8e07-8b9d6fc29c6b"). InnerVolumeSpecName "kube-api-access-jvlzn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:54:31 crc kubenswrapper[4698]: I0127 14:54:31.330955 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage06-crc" (OuterVolumeSpecName: "glance") pod "803378a1-8dbd-4540-8e07-8b9d6fc29c6b" (UID: "803378a1-8dbd-4540-8e07-8b9d6fc29c6b"). InnerVolumeSpecName "local-storage06-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 27 14:54:31 crc kubenswrapper[4698]: I0127 14:54:31.345081 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/803378a1-8dbd-4540-8e07-8b9d6fc29c6b-scripts" (OuterVolumeSpecName: "scripts") pod "803378a1-8dbd-4540-8e07-8b9d6fc29c6b" (UID: "803378a1-8dbd-4540-8e07-8b9d6fc29c6b"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:54:31 crc kubenswrapper[4698]: I0127 14:54:31.413254 4698 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/803378a1-8dbd-4540-8e07-8b9d6fc29c6b-logs\") on node \"crc\" DevicePath \"\"" Jan 27 14:54:31 crc kubenswrapper[4698]: I0127 14:54:31.413295 4698 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/803378a1-8dbd-4540-8e07-8b9d6fc29c6b-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 14:54:31 crc kubenswrapper[4698]: I0127 14:54:31.413323 4698 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") on node \"crc\" " Jan 27 14:54:31 crc kubenswrapper[4698]: I0127 14:54:31.413335 4698 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/803378a1-8dbd-4540-8e07-8b9d6fc29c6b-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 27 14:54:31 crc kubenswrapper[4698]: I0127 14:54:31.413348 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jvlzn\" (UniqueName: \"kubernetes.io/projected/803378a1-8dbd-4540-8e07-8b9d6fc29c6b-kube-api-access-jvlzn\") on node \"crc\" DevicePath \"\"" Jan 27 14:54:31 crc kubenswrapper[4698]: I0127 14:54:31.438821 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/803378a1-8dbd-4540-8e07-8b9d6fc29c6b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "803378a1-8dbd-4540-8e07-8b9d6fc29c6b" (UID: "803378a1-8dbd-4540-8e07-8b9d6fc29c6b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:54:31 crc kubenswrapper[4698]: I0127 14:54:31.517008 4698 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/803378a1-8dbd-4540-8e07-8b9d6fc29c6b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 14:54:31 crc kubenswrapper[4698]: I0127 14:54:31.527242 4698 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage06-crc" (UniqueName: "kubernetes.io/local-volume/local-storage06-crc") on node "crc" Jan 27 14:54:31 crc kubenswrapper[4698]: I0127 14:54:31.556970 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/803378a1-8dbd-4540-8e07-8b9d6fc29c6b-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "803378a1-8dbd-4540-8e07-8b9d6fc29c6b" (UID: "803378a1-8dbd-4540-8e07-8b9d6fc29c6b"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:54:31 crc kubenswrapper[4698]: I0127 14:54:31.572736 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/803378a1-8dbd-4540-8e07-8b9d6fc29c6b-config-data" (OuterVolumeSpecName: "config-data") pod "803378a1-8dbd-4540-8e07-8b9d6fc29c6b" (UID: "803378a1-8dbd-4540-8e07-8b9d6fc29c6b"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:54:31 crc kubenswrapper[4698]: I0127 14:54:31.622075 4698 reconciler_common.go:293] "Volume detached for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") on node \"crc\" DevicePath \"\"" Jan 27 14:54:31 crc kubenswrapper[4698]: I0127 14:54:31.622330 4698 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/803378a1-8dbd-4540-8e07-8b9d6fc29c6b-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 14:54:31 crc kubenswrapper[4698]: I0127 14:54:31.622349 4698 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/803378a1-8dbd-4540-8e07-8b9d6fc29c6b-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 27 14:54:32 crc kubenswrapper[4698]: I0127 14:54:32.153803 4698 generic.go:334] "Generic (PLEG): container finished" podID="5d434aa2-b7eb-424d-930a-25be01006019" containerID="ca6eb8fb804eb01ac9beab50ca6c50c0ea410fa35f6b0ec6786bfb8094cbd96c" exitCode=0 Jan 27 14:54:32 crc kubenswrapper[4698]: I0127 14:54:32.154146 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-579477885f-87tbw" event={"ID":"5d434aa2-b7eb-424d-930a-25be01006019","Type":"ContainerDied","Data":"ca6eb8fb804eb01ac9beab50ca6c50c0ea410fa35f6b0ec6786bfb8094cbd96c"} Jan 27 14:54:32 crc kubenswrapper[4698]: I0127 14:54:32.168010 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6cf989b698-8jzjn" event={"ID":"555e2415-b86d-425b-a345-2a2e8c9ef212","Type":"ContainerStarted","Data":"05e9d9477f958662385830ec9572998a8aaa1c565f451afd2377ba72b6a49f9e"} Jan 27 14:54:32 crc kubenswrapper[4698]: I0127 14:54:32.168064 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6cf989b698-8jzjn" event={"ID":"555e2415-b86d-425b-a345-2a2e8c9ef212","Type":"ContainerStarted","Data":"15495173a7c1edcc88a6d5dd16282d769282e8450ab0c58c553a6c6c0375c499"} Jan 27 14:54:32 crc kubenswrapper[4698]: I0127 14:54:32.168078 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6cf989b698-8jzjn" event={"ID":"555e2415-b86d-425b-a345-2a2e8c9ef212","Type":"ContainerStarted","Data":"aa11c386b4c8af38f5688d1e7c515cd38d332eacd65ae92a34487e8454dc066e"} Jan 27 14:54:32 crc kubenswrapper[4698]: I0127 14:54:32.168434 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-6cf989b698-8jzjn" Jan 27 14:54:32 crc kubenswrapper[4698]: I0127 14:54:32.174177 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"19f81344-f620-4556-a605-8b6d26805b77","Type":"ContainerStarted","Data":"f8da84ea3c21f91aa07f742dad696f6c3cf3846ac1aa61a96f55e104a5e987d1"} Jan 27 14:54:32 crc kubenswrapper[4698]: I0127 14:54:32.176812 4698 generic.go:334] "Generic (PLEG): container finished" podID="ac731464-5c12-4dd8-81b2-011463c78945" containerID="d135360ba59b9db02a9b725bd9d1375367a9fdde4a506a22f41c92715a90eea9" exitCode=0 Jan 27 14:54:32 crc kubenswrapper[4698]: I0127 14:54:32.177045 4698 util.go:48] "No ready sandbox for pod can be found. 
Jan 27 14:54:32 crc kubenswrapper[4698]: I0127 14:54:32.179551 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ac731464-5c12-4dd8-81b2-011463c78945","Type":"ContainerDied","Data":"d135360ba59b9db02a9b725bd9d1375367a9fdde4a506a22f41c92715a90eea9"}
Jan 27 14:54:32 crc kubenswrapper[4698]: I0127 14:54:32.179614 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ac731464-5c12-4dd8-81b2-011463c78945","Type":"ContainerDied","Data":"fff044136ac7549a1425696ad631e3d081cade07bd85574ac94599fd65bc8057"}
Jan 27 14:54:32 crc kubenswrapper[4698]: I0127 14:54:32.179663 4698 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fff044136ac7549a1425696ad631e3d081cade07bd85574ac94599fd65bc8057"
Jan 27 14:54:32 crc kubenswrapper[4698]: I0127 14:54:32.243769 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-6cf989b698-8jzjn" podStartSLOduration=3.243747769 podStartE2EDuration="3.243747769s" podCreationTimestamp="2026-01-27 14:54:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:54:32.226719701 +0000 UTC m=+1527.903497166" watchObservedRunningTime="2026-01-27 14:54:32.243747769 +0000 UTC m=+1527.920525234"
Jan 27 14:54:32 crc kubenswrapper[4698]: I0127 14:54:32.300991 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"]
Jan 27 14:54:32 crc kubenswrapper[4698]: I0127 14:54:32.305783 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Jan 27 14:54:32 crc kubenswrapper[4698]: I0127 14:54:32.311746 4698 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"]
Jan 27 14:54:32 crc kubenswrapper[4698]: I0127 14:54:32.338880 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"]
Jan 27 14:54:32 crc kubenswrapper[4698]: E0127 14:54:32.339361 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ac731464-5c12-4dd8-81b2-011463c78945" containerName="sg-core"
Jan 27 14:54:32 crc kubenswrapper[4698]: I0127 14:54:32.339378 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="ac731464-5c12-4dd8-81b2-011463c78945" containerName="sg-core"
Jan 27 14:54:32 crc kubenswrapper[4698]: E0127 14:54:32.339400 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="803378a1-8dbd-4540-8e07-8b9d6fc29c6b" containerName="glance-log"
Jan 27 14:54:32 crc kubenswrapper[4698]: I0127 14:54:32.339408 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="803378a1-8dbd-4540-8e07-8b9d6fc29c6b" containerName="glance-log"
Jan 27 14:54:32 crc kubenswrapper[4698]: E0127 14:54:32.339421 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ac731464-5c12-4dd8-81b2-011463c78945" containerName="proxy-httpd"
Jan 27 14:54:32 crc kubenswrapper[4698]: I0127 14:54:32.339429 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="ac731464-5c12-4dd8-81b2-011463c78945" containerName="proxy-httpd"
Jan 27 14:54:32 crc kubenswrapper[4698]: E0127 14:54:32.339446 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ac731464-5c12-4dd8-81b2-011463c78945" containerName="ceilometer-notification-agent"
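Editor's note: the two "Observed pod startup duration" entries encode two metrics. podStartE2EDuration is watchObservedRunningTime minus podCreationTimestamp, and podStartSLOduration subtracts the image-pull window (lastFinishedPulling minus firstStartedPulling): ceilometer-0 earlier spent about 3.05 s pulling (5.95 - 2.90), while neutron's zero-valued pull timestamps leave both durations equal at 3.24 s. The arithmetic, reconstructed from the logged values:

package main

import (
	"fmt"
	"time"
)

// sloDuration reconstructs podStartSLOduration from the logged
// timestamps: end-to-end startup minus image-pull time, where zero
// pull timestamps mean nothing was pulled.
func sloDuration(created, firstPull, lastPull, running time.Time) time.Duration {
	e2e := running.Sub(created)
	if firstPull.IsZero() || lastPull.IsZero() {
		return e2e
	}
	return e2e - lastPull.Sub(firstPull)
}

func main() {
	created, _ := time.Parse(time.RFC3339, "2026-01-27T14:54:16Z")
	firstPull, _ := time.Parse(time.RFC3339Nano, "2026-01-27T14:54:17.777425921Z")
	lastPull, _ := time.Parse(time.RFC3339Nano, "2026-01-27T14:54:20.831952012Z")
	running, _ := time.Parse(time.RFC3339Nano, "2026-01-27T14:54:21.95167885Z")
	fmt.Println("podStartE2EDuration:", running.Sub(created))                         // 5.95167885s
	fmt.Println("podStartSLOduration:", sloDuration(created, firstPull, lastPull, running)) // 2.897152759s
}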
4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="ac731464-5c12-4dd8-81b2-011463c78945" containerName="ceilometer-notification-agent" Jan 27 14:54:32 crc kubenswrapper[4698]: E0127 14:54:32.339474 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ac731464-5c12-4dd8-81b2-011463c78945" containerName="ceilometer-central-agent" Jan 27 14:54:32 crc kubenswrapper[4698]: I0127 14:54:32.339481 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="ac731464-5c12-4dd8-81b2-011463c78945" containerName="ceilometer-central-agent" Jan 27 14:54:32 crc kubenswrapper[4698]: E0127 14:54:32.339504 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="803378a1-8dbd-4540-8e07-8b9d6fc29c6b" containerName="glance-httpd" Jan 27 14:54:32 crc kubenswrapper[4698]: I0127 14:54:32.339512 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="803378a1-8dbd-4540-8e07-8b9d6fc29c6b" containerName="glance-httpd" Jan 27 14:54:32 crc kubenswrapper[4698]: I0127 14:54:32.339760 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="803378a1-8dbd-4540-8e07-8b9d6fc29c6b" containerName="glance-log" Jan 27 14:54:32 crc kubenswrapper[4698]: I0127 14:54:32.339789 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="ac731464-5c12-4dd8-81b2-011463c78945" containerName="proxy-httpd" Jan 27 14:54:32 crc kubenswrapper[4698]: I0127 14:54:32.339801 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="ac731464-5c12-4dd8-81b2-011463c78945" containerName="ceilometer-central-agent" Jan 27 14:54:32 crc kubenswrapper[4698]: I0127 14:54:32.339814 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="803378a1-8dbd-4540-8e07-8b9d6fc29c6b" containerName="glance-httpd" Jan 27 14:54:32 crc kubenswrapper[4698]: I0127 14:54:32.339831 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="ac731464-5c12-4dd8-81b2-011463c78945" containerName="ceilometer-notification-agent" Jan 27 14:54:32 crc kubenswrapper[4698]: I0127 14:54:32.339843 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="ac731464-5c12-4dd8-81b2-011463c78945" containerName="sg-core" Jan 27 14:54:32 crc kubenswrapper[4698]: I0127 14:54:32.343065 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 27 14:54:32 crc kubenswrapper[4698]: I0127 14:54:32.347854 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Jan 27 14:54:32 crc kubenswrapper[4698]: I0127 14:54:32.347856 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Jan 27 14:54:32 crc kubenswrapper[4698]: I0127 14:54:32.437723 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 27 14:54:32 crc kubenswrapper[4698]: I0127 14:54:32.449095 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ac731464-5c12-4dd8-81b2-011463c78945-config-data\") pod \"ac731464-5c12-4dd8-81b2-011463c78945\" (UID: \"ac731464-5c12-4dd8-81b2-011463c78945\") " Jan 27 14:54:32 crc kubenswrapper[4698]: I0127 14:54:32.449320 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ac731464-5c12-4dd8-81b2-011463c78945-scripts\") pod \"ac731464-5c12-4dd8-81b2-011463c78945\" (UID: \"ac731464-5c12-4dd8-81b2-011463c78945\") " Jan 27 14:54:32 crc kubenswrapper[4698]: I0127 14:54:32.449353 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/ac731464-5c12-4dd8-81b2-011463c78945-ceilometer-tls-certs\") pod \"ac731464-5c12-4dd8-81b2-011463c78945\" (UID: \"ac731464-5c12-4dd8-81b2-011463c78945\") " Jan 27 14:54:32 crc kubenswrapper[4698]: I0127 14:54:32.449412 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ac731464-5c12-4dd8-81b2-011463c78945-run-httpd\") pod \"ac731464-5c12-4dd8-81b2-011463c78945\" (UID: \"ac731464-5c12-4dd8-81b2-011463c78945\") " Jan 27 14:54:32 crc kubenswrapper[4698]: I0127 14:54:32.449447 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ac731464-5c12-4dd8-81b2-011463c78945-log-httpd\") pod \"ac731464-5c12-4dd8-81b2-011463c78945\" (UID: \"ac731464-5c12-4dd8-81b2-011463c78945\") " Jan 27 14:54:32 crc kubenswrapper[4698]: I0127 14:54:32.449467 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s6rcw\" (UniqueName: \"kubernetes.io/projected/ac731464-5c12-4dd8-81b2-011463c78945-kube-api-access-s6rcw\") pod \"ac731464-5c12-4dd8-81b2-011463c78945\" (UID: \"ac731464-5c12-4dd8-81b2-011463c78945\") " Jan 27 14:54:32 crc kubenswrapper[4698]: I0127 14:54:32.449483 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ac731464-5c12-4dd8-81b2-011463c78945-sg-core-conf-yaml\") pod \"ac731464-5c12-4dd8-81b2-011463c78945\" (UID: \"ac731464-5c12-4dd8-81b2-011463c78945\") " Jan 27 14:54:32 crc kubenswrapper[4698]: I0127 14:54:32.449529 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ac731464-5c12-4dd8-81b2-011463c78945-combined-ca-bundle\") pod \"ac731464-5c12-4dd8-81b2-011463c78945\" (UID: \"ac731464-5c12-4dd8-81b2-011463c78945\") " Jan 27 14:54:32 crc kubenswrapper[4698]: I0127 14:54:32.449783 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/c04ef4e5-d2d2-490a-a9b4-4c4a2ad10eab-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"c04ef4e5-d2d2-490a-a9b4-4c4a2ad10eab\") " pod="openstack/glance-default-internal-api-0" Jan 27 14:54:32 crc kubenswrapper[4698]: I0127 14:54:32.449846 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4hhpf\" (UniqueName: \"kubernetes.io/projected/c04ef4e5-d2d2-490a-a9b4-4c4a2ad10eab-kube-api-access-4hhpf\") pod \"glance-default-internal-api-0\" (UID: \"c04ef4e5-d2d2-490a-a9b4-4c4a2ad10eab\") " pod="openstack/glance-default-internal-api-0" Jan 27 14:54:32 crc kubenswrapper[4698]: I0127 14:54:32.449884 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c04ef4e5-d2d2-490a-a9b4-4c4a2ad10eab-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"c04ef4e5-d2d2-490a-a9b4-4c4a2ad10eab\") " pod="openstack/glance-default-internal-api-0" Jan 27 14:54:32 crc kubenswrapper[4698]: I0127 14:54:32.449931 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c04ef4e5-d2d2-490a-a9b4-4c4a2ad10eab-config-data\") pod \"glance-default-internal-api-0\" (UID: \"c04ef4e5-d2d2-490a-a9b4-4c4a2ad10eab\") " pod="openstack/glance-default-internal-api-0" Jan 27 14:54:32 crc kubenswrapper[4698]: I0127 14:54:32.449977 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c04ef4e5-d2d2-490a-a9b4-4c4a2ad10eab-scripts\") pod \"glance-default-internal-api-0\" (UID: \"c04ef4e5-d2d2-490a-a9b4-4c4a2ad10eab\") " pod="openstack/glance-default-internal-api-0" Jan 27 14:54:32 crc kubenswrapper[4698]: I0127 14:54:32.450008 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c04ef4e5-d2d2-490a-a9b4-4c4a2ad10eab-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"c04ef4e5-d2d2-490a-a9b4-4c4a2ad10eab\") " pod="openstack/glance-default-internal-api-0" Jan 27 14:54:32 crc kubenswrapper[4698]: I0127 14:54:32.450048 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c04ef4e5-d2d2-490a-a9b4-4c4a2ad10eab-logs\") pod \"glance-default-internal-api-0\" (UID: \"c04ef4e5-d2d2-490a-a9b4-4c4a2ad10eab\") " pod="openstack/glance-default-internal-api-0" Jan 27 14:54:32 crc kubenswrapper[4698]: I0127 14:54:32.450068 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-internal-api-0\" (UID: \"c04ef4e5-d2d2-490a-a9b4-4c4a2ad10eab\") " pod="openstack/glance-default-internal-api-0" Jan 27 14:54:32 crc kubenswrapper[4698]: I0127 14:54:32.451824 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ac731464-5c12-4dd8-81b2-011463c78945-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "ac731464-5c12-4dd8-81b2-011463c78945" (UID: "ac731464-5c12-4dd8-81b2-011463c78945"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 14:54:32 crc kubenswrapper[4698]: I0127 14:54:32.459781 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ac731464-5c12-4dd8-81b2-011463c78945-kube-api-access-s6rcw" (OuterVolumeSpecName: "kube-api-access-s6rcw") pod "ac731464-5c12-4dd8-81b2-011463c78945" (UID: "ac731464-5c12-4dd8-81b2-011463c78945"). InnerVolumeSpecName "kube-api-access-s6rcw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:54:32 crc kubenswrapper[4698]: I0127 14:54:32.461906 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ac731464-5c12-4dd8-81b2-011463c78945-scripts" (OuterVolumeSpecName: "scripts") pod "ac731464-5c12-4dd8-81b2-011463c78945" (UID: "ac731464-5c12-4dd8-81b2-011463c78945"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:54:32 crc kubenswrapper[4698]: I0127 14:54:32.463324 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ac731464-5c12-4dd8-81b2-011463c78945-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "ac731464-5c12-4dd8-81b2-011463c78945" (UID: "ac731464-5c12-4dd8-81b2-011463c78945"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 14:54:32 crc kubenswrapper[4698]: I0127 14:54:32.515199 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ac731464-5c12-4dd8-81b2-011463c78945-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "ac731464-5c12-4dd8-81b2-011463c78945" (UID: "ac731464-5c12-4dd8-81b2-011463c78945"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:54:32 crc kubenswrapper[4698]: I0127 14:54:32.554304 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/c04ef4e5-d2d2-490a-a9b4-4c4a2ad10eab-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"c04ef4e5-d2d2-490a-a9b4-4c4a2ad10eab\") " pod="openstack/glance-default-internal-api-0" Jan 27 14:54:32 crc kubenswrapper[4698]: I0127 14:54:32.554510 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4hhpf\" (UniqueName: \"kubernetes.io/projected/c04ef4e5-d2d2-490a-a9b4-4c4a2ad10eab-kube-api-access-4hhpf\") pod \"glance-default-internal-api-0\" (UID: \"c04ef4e5-d2d2-490a-a9b4-4c4a2ad10eab\") " pod="openstack/glance-default-internal-api-0" Jan 27 14:54:32 crc kubenswrapper[4698]: I0127 14:54:32.554564 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c04ef4e5-d2d2-490a-a9b4-4c4a2ad10eab-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"c04ef4e5-d2d2-490a-a9b4-4c4a2ad10eab\") " pod="openstack/glance-default-internal-api-0" Jan 27 14:54:32 crc kubenswrapper[4698]: I0127 14:54:32.554674 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c04ef4e5-d2d2-490a-a9b4-4c4a2ad10eab-config-data\") pod \"glance-default-internal-api-0\" (UID: \"c04ef4e5-d2d2-490a-a9b4-4c4a2ad10eab\") " pod="openstack/glance-default-internal-api-0" Jan 27 14:54:32 crc kubenswrapper[4698]: I0127 14:54:32.555105 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/c04ef4e5-d2d2-490a-a9b4-4c4a2ad10eab-scripts\") pod \"glance-default-internal-api-0\" (UID: \"c04ef4e5-d2d2-490a-a9b4-4c4a2ad10eab\") " pod="openstack/glance-default-internal-api-0" Jan 27 14:54:32 crc kubenswrapper[4698]: I0127 14:54:32.555187 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c04ef4e5-d2d2-490a-a9b4-4c4a2ad10eab-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"c04ef4e5-d2d2-490a-a9b4-4c4a2ad10eab\") " pod="openstack/glance-default-internal-api-0" Jan 27 14:54:32 crc kubenswrapper[4698]: I0127 14:54:32.555303 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c04ef4e5-d2d2-490a-a9b4-4c4a2ad10eab-logs\") pod \"glance-default-internal-api-0\" (UID: \"c04ef4e5-d2d2-490a-a9b4-4c4a2ad10eab\") " pod="openstack/glance-default-internal-api-0" Jan 27 14:54:32 crc kubenswrapper[4698]: I0127 14:54:32.555358 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-internal-api-0\" (UID: \"c04ef4e5-d2d2-490a-a9b4-4c4a2ad10eab\") " pod="openstack/glance-default-internal-api-0" Jan 27 14:54:32 crc kubenswrapper[4698]: I0127 14:54:32.558835 4698 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ac731464-5c12-4dd8-81b2-011463c78945-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 27 14:54:32 crc kubenswrapper[4698]: I0127 14:54:32.559330 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/c04ef4e5-d2d2-490a-a9b4-4c4a2ad10eab-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"c04ef4e5-d2d2-490a-a9b4-4c4a2ad10eab\") " pod="openstack/glance-default-internal-api-0" Jan 27 14:54:32 crc kubenswrapper[4698]: I0127 14:54:32.559746 4698 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-internal-api-0\" (UID: \"c04ef4e5-d2d2-490a-a9b4-4c4a2ad10eab\") device mount path \"/mnt/openstack/pv06\"" pod="openstack/glance-default-internal-api-0" Jan 27 14:54:32 crc kubenswrapper[4698]: I0127 14:54:32.559815 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s6rcw\" (UniqueName: \"kubernetes.io/projected/ac731464-5c12-4dd8-81b2-011463c78945-kube-api-access-s6rcw\") on node \"crc\" DevicePath \"\"" Jan 27 14:54:32 crc kubenswrapper[4698]: I0127 14:54:32.560032 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c04ef4e5-d2d2-490a-a9b4-4c4a2ad10eab-logs\") pod \"glance-default-internal-api-0\" (UID: \"c04ef4e5-d2d2-490a-a9b4-4c4a2ad10eab\") " pod="openstack/glance-default-internal-api-0" Jan 27 14:54:32 crc kubenswrapper[4698]: I0127 14:54:32.560075 4698 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ac731464-5c12-4dd8-81b2-011463c78945-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 27 14:54:32 crc kubenswrapper[4698]: I0127 14:54:32.560098 4698 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ac731464-5c12-4dd8-81b2-011463c78945-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 
14:54:32 crc kubenswrapper[4698]: I0127 14:54:32.560113 4698 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ac731464-5c12-4dd8-81b2-011463c78945-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 27 14:54:32 crc kubenswrapper[4698]: I0127 14:54:32.585426 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c04ef4e5-d2d2-490a-a9b4-4c4a2ad10eab-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"c04ef4e5-d2d2-490a-a9b4-4c4a2ad10eab\") " pod="openstack/glance-default-internal-api-0" Jan 27 14:54:32 crc kubenswrapper[4698]: I0127 14:54:32.586109 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c04ef4e5-d2d2-490a-a9b4-4c4a2ad10eab-scripts\") pod \"glance-default-internal-api-0\" (UID: \"c04ef4e5-d2d2-490a-a9b4-4c4a2ad10eab\") " pod="openstack/glance-default-internal-api-0" Jan 27 14:54:32 crc kubenswrapper[4698]: I0127 14:54:32.586602 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ac731464-5c12-4dd8-81b2-011463c78945-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ac731464-5c12-4dd8-81b2-011463c78945" (UID: "ac731464-5c12-4dd8-81b2-011463c78945"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:54:32 crc kubenswrapper[4698]: I0127 14:54:32.589267 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c04ef4e5-d2d2-490a-a9b4-4c4a2ad10eab-config-data\") pod \"glance-default-internal-api-0\" (UID: \"c04ef4e5-d2d2-490a-a9b4-4c4a2ad10eab\") " pod="openstack/glance-default-internal-api-0" Jan 27 14:54:32 crc kubenswrapper[4698]: I0127 14:54:32.595504 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4hhpf\" (UniqueName: \"kubernetes.io/projected/c04ef4e5-d2d2-490a-a9b4-4c4a2ad10eab-kube-api-access-4hhpf\") pod \"glance-default-internal-api-0\" (UID: \"c04ef4e5-d2d2-490a-a9b4-4c4a2ad10eab\") " pod="openstack/glance-default-internal-api-0" Jan 27 14:54:32 crc kubenswrapper[4698]: I0127 14:54:32.595606 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c04ef4e5-d2d2-490a-a9b4-4c4a2ad10eab-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"c04ef4e5-d2d2-490a-a9b4-4c4a2ad10eab\") " pod="openstack/glance-default-internal-api-0" Jan 27 14:54:32 crc kubenswrapper[4698]: I0127 14:54:32.622049 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ac731464-5c12-4dd8-81b2-011463c78945-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "ac731464-5c12-4dd8-81b2-011463c78945" (UID: "ac731464-5c12-4dd8-81b2-011463c78945"). InnerVolumeSpecName "ceilometer-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:54:32 crc kubenswrapper[4698]: I0127 14:54:32.662115 4698 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ac731464-5c12-4dd8-81b2-011463c78945-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 14:54:32 crc kubenswrapper[4698]: I0127 14:54:32.662159 4698 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/ac731464-5c12-4dd8-81b2-011463c78945-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 27 14:54:32 crc kubenswrapper[4698]: I0127 14:54:32.687719 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-internal-api-0\" (UID: \"c04ef4e5-d2d2-490a-a9b4-4c4a2ad10eab\") " pod="openstack/glance-default-internal-api-0" Jan 27 14:54:32 crc kubenswrapper[4698]: I0127 14:54:32.822890 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ac731464-5c12-4dd8-81b2-011463c78945-config-data" (OuterVolumeSpecName: "config-data") pod "ac731464-5c12-4dd8-81b2-011463c78945" (UID: "ac731464-5c12-4dd8-81b2-011463c78945"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:54:32 crc kubenswrapper[4698]: I0127 14:54:32.867130 4698 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ac731464-5c12-4dd8-81b2-011463c78945-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 14:54:32 crc kubenswrapper[4698]: I0127 14:54:32.967505 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 27 14:54:33 crc kubenswrapper[4698]: I0127 14:54:33.022272 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="803378a1-8dbd-4540-8e07-8b9d6fc29c6b" path="/var/lib/kubelet/pods/803378a1-8dbd-4540-8e07-8b9d6fc29c6b/volumes" Jan 27 14:54:33 crc kubenswrapper[4698]: I0127 14:54:33.206786 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-579477885f-87tbw" event={"ID":"5d434aa2-b7eb-424d-930a-25be01006019","Type":"ContainerStarted","Data":"750fcb73cb35140556f9fb8158b8fdf9210bf36bc6358e595baad4cc4d8a6683"} Jan 27 14:54:33 crc kubenswrapper[4698]: I0127 14:54:33.207255 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-579477885f-87tbw" Jan 27 14:54:33 crc kubenswrapper[4698]: I0127 14:54:33.222148 4698 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 27 14:54:33 crc kubenswrapper[4698]: I0127 14:54:33.223176 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-x8m8s" event={"ID":"9a268a3b-da75-4a08-a9a3-b097f2066a27","Type":"ContainerStarted","Data":"157b60d7a17461e1c42c81e8fd2f48839e2e141d834579077f1f226aff7c96da"} Jan 27 14:54:33 crc kubenswrapper[4698]: I0127 14:54:33.246451 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-579477885f-87tbw" podStartSLOduration=4.246433404 podStartE2EDuration="4.246433404s" podCreationTimestamp="2026-01-27 14:54:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:54:33.238852855 +0000 UTC m=+1528.915630330" watchObservedRunningTime="2026-01-27 14:54:33.246433404 +0000 UTC m=+1528.923210869" Jan 27 14:54:33 crc kubenswrapper[4698]: I0127 14:54:33.290369 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-db-sync-x8m8s" podStartSLOduration=2.728320648 podStartE2EDuration="45.290340561s" podCreationTimestamp="2026-01-27 14:53:48 +0000 UTC" firstStartedPulling="2026-01-27 14:53:49.554550657 +0000 UTC m=+1485.231328122" lastFinishedPulling="2026-01-27 14:54:32.11657058 +0000 UTC m=+1527.793348035" observedRunningTime="2026-01-27 14:54:33.260431783 +0000 UTC m=+1528.937209248" watchObservedRunningTime="2026-01-27 14:54:33.290340561 +0000 UTC m=+1528.967118026" Jan 27 14:54:33 crc kubenswrapper[4698]: I0127 14:54:33.319664 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 27 14:54:33 crc kubenswrapper[4698]: I0127 14:54:33.349717 4698 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 27 14:54:33 crc kubenswrapper[4698]: I0127 14:54:33.374709 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 27 14:54:33 crc kubenswrapper[4698]: I0127 14:54:33.377851 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 27 14:54:33 crc kubenswrapper[4698]: I0127 14:54:33.385778 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 27 14:54:33 crc kubenswrapper[4698]: I0127 14:54:33.391118 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 27 14:54:33 crc kubenswrapper[4698]: I0127 14:54:33.391842 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Jan 27 14:54:33 crc kubenswrapper[4698]: I0127 14:54:33.392130 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 27 14:54:33 crc kubenswrapper[4698]: I0127 14:54:33.438734 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-57bdd5f-5p47q"] Jan 27 14:54:33 crc kubenswrapper[4698]: I0127 14:54:33.441301 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-57bdd5f-5p47q" Jan 27 14:54:33 crc kubenswrapper[4698]: I0127 14:54:33.448121 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-internal-svc" Jan 27 14:54:33 crc kubenswrapper[4698]: I0127 14:54:33.448956 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-public-svc" Jan 27 14:54:33 crc kubenswrapper[4698]: I0127 14:54:33.492108 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c64faec6-26c1-4556-bcfb-707840ac0863-public-tls-certs\") pod \"neutron-57bdd5f-5p47q\" (UID: \"c64faec6-26c1-4556-bcfb-707840ac0863\") " pod="openstack/neutron-57bdd5f-5p47q" Jan 27 14:54:33 crc kubenswrapper[4698]: I0127 14:54:33.492224 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/8a12bad2-2df8-499d-aca4-3fe31bba54df-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"8a12bad2-2df8-499d-aca4-3fe31bba54df\") " pod="openstack/ceilometer-0" Jan 27 14:54:33 crc kubenswrapper[4698]: I0127 14:54:33.492313 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/c64faec6-26c1-4556-bcfb-707840ac0863-config\") pod \"neutron-57bdd5f-5p47q\" (UID: \"c64faec6-26c1-4556-bcfb-707840ac0863\") " pod="openstack/neutron-57bdd5f-5p47q" Jan 27 14:54:33 crc kubenswrapper[4698]: I0127 14:54:33.492340 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/c64faec6-26c1-4556-bcfb-707840ac0863-httpd-config\") pod \"neutron-57bdd5f-5p47q\" (UID: \"c64faec6-26c1-4556-bcfb-707840ac0863\") " pod="openstack/neutron-57bdd5f-5p47q" Jan 27 14:54:33 crc kubenswrapper[4698]: I0127 14:54:33.492374 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ncqjh\" (UniqueName: \"kubernetes.io/projected/8a12bad2-2df8-499d-aca4-3fe31bba54df-kube-api-access-ncqjh\") pod \"ceilometer-0\" (UID: \"8a12bad2-2df8-499d-aca4-3fe31bba54df\") " pod="openstack/ceilometer-0" Jan 27 14:54:33 crc kubenswrapper[4698]: I0127 14:54:33.492482 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8a12bad2-2df8-499d-aca4-3fe31bba54df-scripts\") pod \"ceilometer-0\" (UID: \"8a12bad2-2df8-499d-aca4-3fe31bba54df\") " pod="openstack/ceilometer-0" Jan 27 14:54:33 crc kubenswrapper[4698]: I0127 14:54:33.492528 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/8a12bad2-2df8-499d-aca4-3fe31bba54df-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"8a12bad2-2df8-499d-aca4-3fe31bba54df\") " pod="openstack/ceilometer-0" Jan 27 14:54:33 crc kubenswrapper[4698]: I0127 14:54:33.492583 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c64faec6-26c1-4556-bcfb-707840ac0863-internal-tls-certs\") pod \"neutron-57bdd5f-5p47q\" (UID: \"c64faec6-26c1-4556-bcfb-707840ac0863\") " pod="openstack/neutron-57bdd5f-5p47q" Jan 27 14:54:33 crc kubenswrapper[4698]: I0127 14:54:33.492663 4698 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8a12bad2-2df8-499d-aca4-3fe31bba54df-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"8a12bad2-2df8-499d-aca4-3fe31bba54df\") " pod="openstack/ceilometer-0" Jan 27 14:54:33 crc kubenswrapper[4698]: I0127 14:54:33.492700 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8a12bad2-2df8-499d-aca4-3fe31bba54df-run-httpd\") pod \"ceilometer-0\" (UID: \"8a12bad2-2df8-499d-aca4-3fe31bba54df\") " pod="openstack/ceilometer-0" Jan 27 14:54:33 crc kubenswrapper[4698]: I0127 14:54:33.492768 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8a12bad2-2df8-499d-aca4-3fe31bba54df-log-httpd\") pod \"ceilometer-0\" (UID: \"8a12bad2-2df8-499d-aca4-3fe31bba54df\") " pod="openstack/ceilometer-0" Jan 27 14:54:33 crc kubenswrapper[4698]: I0127 14:54:33.492996 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8a12bad2-2df8-499d-aca4-3fe31bba54df-config-data\") pod \"ceilometer-0\" (UID: \"8a12bad2-2df8-499d-aca4-3fe31bba54df\") " pod="openstack/ceilometer-0" Jan 27 14:54:33 crc kubenswrapper[4698]: I0127 14:54:33.493080 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c64faec6-26c1-4556-bcfb-707840ac0863-combined-ca-bundle\") pod \"neutron-57bdd5f-5p47q\" (UID: \"c64faec6-26c1-4556-bcfb-707840ac0863\") " pod="openstack/neutron-57bdd5f-5p47q" Jan 27 14:54:33 crc kubenswrapper[4698]: I0127 14:54:33.493132 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/c64faec6-26c1-4556-bcfb-707840ac0863-ovndb-tls-certs\") pod \"neutron-57bdd5f-5p47q\" (UID: \"c64faec6-26c1-4556-bcfb-707840ac0863\") " pod="openstack/neutron-57bdd5f-5p47q" Jan 27 14:54:33 crc kubenswrapper[4698]: I0127 14:54:33.493185 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dkhpq\" (UniqueName: \"kubernetes.io/projected/c64faec6-26c1-4556-bcfb-707840ac0863-kube-api-access-dkhpq\") pod \"neutron-57bdd5f-5p47q\" (UID: \"c64faec6-26c1-4556-bcfb-707840ac0863\") " pod="openstack/neutron-57bdd5f-5p47q" Jan 27 14:54:33 crc kubenswrapper[4698]: I0127 14:54:33.525089 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-57bdd5f-5p47q"] Jan 27 14:54:33 crc kubenswrapper[4698]: I0127 14:54:33.600703 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c64faec6-26c1-4556-bcfb-707840ac0863-public-tls-certs\") pod \"neutron-57bdd5f-5p47q\" (UID: \"c64faec6-26c1-4556-bcfb-707840ac0863\") " pod="openstack/neutron-57bdd5f-5p47q" Jan 27 14:54:33 crc kubenswrapper[4698]: I0127 14:54:33.601930 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/8a12bad2-2df8-499d-aca4-3fe31bba54df-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"8a12bad2-2df8-499d-aca4-3fe31bba54df\") " pod="openstack/ceilometer-0" Jan 27 14:54:33 crc kubenswrapper[4698]: 
I0127 14:54:33.602015 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/c64faec6-26c1-4556-bcfb-707840ac0863-config\") pod \"neutron-57bdd5f-5p47q\" (UID: \"c64faec6-26c1-4556-bcfb-707840ac0863\") " pod="openstack/neutron-57bdd5f-5p47q" Jan 27 14:54:33 crc kubenswrapper[4698]: I0127 14:54:33.602031 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/c64faec6-26c1-4556-bcfb-707840ac0863-httpd-config\") pod \"neutron-57bdd5f-5p47q\" (UID: \"c64faec6-26c1-4556-bcfb-707840ac0863\") " pod="openstack/neutron-57bdd5f-5p47q" Jan 27 14:54:33 crc kubenswrapper[4698]: I0127 14:54:33.602055 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ncqjh\" (UniqueName: \"kubernetes.io/projected/8a12bad2-2df8-499d-aca4-3fe31bba54df-kube-api-access-ncqjh\") pod \"ceilometer-0\" (UID: \"8a12bad2-2df8-499d-aca4-3fe31bba54df\") " pod="openstack/ceilometer-0" Jan 27 14:54:33 crc kubenswrapper[4698]: I0127 14:54:33.602081 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8a12bad2-2df8-499d-aca4-3fe31bba54df-scripts\") pod \"ceilometer-0\" (UID: \"8a12bad2-2df8-499d-aca4-3fe31bba54df\") " pod="openstack/ceilometer-0" Jan 27 14:54:33 crc kubenswrapper[4698]: I0127 14:54:33.602107 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/8a12bad2-2df8-499d-aca4-3fe31bba54df-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"8a12bad2-2df8-499d-aca4-3fe31bba54df\") " pod="openstack/ceilometer-0" Jan 27 14:54:33 crc kubenswrapper[4698]: I0127 14:54:33.602151 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c64faec6-26c1-4556-bcfb-707840ac0863-internal-tls-certs\") pod \"neutron-57bdd5f-5p47q\" (UID: \"c64faec6-26c1-4556-bcfb-707840ac0863\") " pod="openstack/neutron-57bdd5f-5p47q" Jan 27 14:54:33 crc kubenswrapper[4698]: I0127 14:54:33.602206 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8a12bad2-2df8-499d-aca4-3fe31bba54df-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"8a12bad2-2df8-499d-aca4-3fe31bba54df\") " pod="openstack/ceilometer-0" Jan 27 14:54:33 crc kubenswrapper[4698]: I0127 14:54:33.602229 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8a12bad2-2df8-499d-aca4-3fe31bba54df-run-httpd\") pod \"ceilometer-0\" (UID: \"8a12bad2-2df8-499d-aca4-3fe31bba54df\") " pod="openstack/ceilometer-0" Jan 27 14:54:33 crc kubenswrapper[4698]: I0127 14:54:33.602279 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8a12bad2-2df8-499d-aca4-3fe31bba54df-log-httpd\") pod \"ceilometer-0\" (UID: \"8a12bad2-2df8-499d-aca4-3fe31bba54df\") " pod="openstack/ceilometer-0" Jan 27 14:54:33 crc kubenswrapper[4698]: I0127 14:54:33.602313 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8a12bad2-2df8-499d-aca4-3fe31bba54df-config-data\") pod \"ceilometer-0\" (UID: \"8a12bad2-2df8-499d-aca4-3fe31bba54df\") " pod="openstack/ceilometer-0" Jan 27 14:54:33 
crc kubenswrapper[4698]: I0127 14:54:33.602376 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c64faec6-26c1-4556-bcfb-707840ac0863-combined-ca-bundle\") pod \"neutron-57bdd5f-5p47q\" (UID: \"c64faec6-26c1-4556-bcfb-707840ac0863\") " pod="openstack/neutron-57bdd5f-5p47q" Jan 27 14:54:33 crc kubenswrapper[4698]: I0127 14:54:33.602410 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/c64faec6-26c1-4556-bcfb-707840ac0863-ovndb-tls-certs\") pod \"neutron-57bdd5f-5p47q\" (UID: \"c64faec6-26c1-4556-bcfb-707840ac0863\") " pod="openstack/neutron-57bdd5f-5p47q" Jan 27 14:54:33 crc kubenswrapper[4698]: I0127 14:54:33.602450 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dkhpq\" (UniqueName: \"kubernetes.io/projected/c64faec6-26c1-4556-bcfb-707840ac0863-kube-api-access-dkhpq\") pod \"neutron-57bdd5f-5p47q\" (UID: \"c64faec6-26c1-4556-bcfb-707840ac0863\") " pod="openstack/neutron-57bdd5f-5p47q" Jan 27 14:54:33 crc kubenswrapper[4698]: I0127 14:54:33.604155 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8a12bad2-2df8-499d-aca4-3fe31bba54df-log-httpd\") pod \"ceilometer-0\" (UID: \"8a12bad2-2df8-499d-aca4-3fe31bba54df\") " pod="openstack/ceilometer-0" Jan 27 14:54:33 crc kubenswrapper[4698]: I0127 14:54:33.604280 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8a12bad2-2df8-499d-aca4-3fe31bba54df-run-httpd\") pod \"ceilometer-0\" (UID: \"8a12bad2-2df8-499d-aca4-3fe31bba54df\") " pod="openstack/ceilometer-0" Jan 27 14:54:33 crc kubenswrapper[4698]: I0127 14:54:33.609580 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c64faec6-26c1-4556-bcfb-707840ac0863-public-tls-certs\") pod \"neutron-57bdd5f-5p47q\" (UID: \"c64faec6-26c1-4556-bcfb-707840ac0863\") " pod="openstack/neutron-57bdd5f-5p47q" Jan 27 14:54:33 crc kubenswrapper[4698]: I0127 14:54:33.609634 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8a12bad2-2df8-499d-aca4-3fe31bba54df-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"8a12bad2-2df8-499d-aca4-3fe31bba54df\") " pod="openstack/ceilometer-0" Jan 27 14:54:33 crc kubenswrapper[4698]: I0127 14:54:33.612725 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/8a12bad2-2df8-499d-aca4-3fe31bba54df-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"8a12bad2-2df8-499d-aca4-3fe31bba54df\") " pod="openstack/ceilometer-0" Jan 27 14:54:33 crc kubenswrapper[4698]: I0127 14:54:33.622223 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/c64faec6-26c1-4556-bcfb-707840ac0863-httpd-config\") pod \"neutron-57bdd5f-5p47q\" (UID: \"c64faec6-26c1-4556-bcfb-707840ac0863\") " pod="openstack/neutron-57bdd5f-5p47q" Jan 27 14:54:33 crc kubenswrapper[4698]: I0127 14:54:33.625341 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c64faec6-26c1-4556-bcfb-707840ac0863-internal-tls-certs\") pod \"neutron-57bdd5f-5p47q\" (UID: 
\"c64faec6-26c1-4556-bcfb-707840ac0863\") " pod="openstack/neutron-57bdd5f-5p47q" Jan 27 14:54:33 crc kubenswrapper[4698]: I0127 14:54:33.626155 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c64faec6-26c1-4556-bcfb-707840ac0863-combined-ca-bundle\") pod \"neutron-57bdd5f-5p47q\" (UID: \"c64faec6-26c1-4556-bcfb-707840ac0863\") " pod="openstack/neutron-57bdd5f-5p47q" Jan 27 14:54:33 crc kubenswrapper[4698]: I0127 14:54:33.626602 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/c64faec6-26c1-4556-bcfb-707840ac0863-ovndb-tls-certs\") pod \"neutron-57bdd5f-5p47q\" (UID: \"c64faec6-26c1-4556-bcfb-707840ac0863\") " pod="openstack/neutron-57bdd5f-5p47q" Jan 27 14:54:33 crc kubenswrapper[4698]: I0127 14:54:33.627446 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/c64faec6-26c1-4556-bcfb-707840ac0863-config\") pod \"neutron-57bdd5f-5p47q\" (UID: \"c64faec6-26c1-4556-bcfb-707840ac0863\") " pod="openstack/neutron-57bdd5f-5p47q" Jan 27 14:54:33 crc kubenswrapper[4698]: I0127 14:54:33.629239 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ncqjh\" (UniqueName: \"kubernetes.io/projected/8a12bad2-2df8-499d-aca4-3fe31bba54df-kube-api-access-ncqjh\") pod \"ceilometer-0\" (UID: \"8a12bad2-2df8-499d-aca4-3fe31bba54df\") " pod="openstack/ceilometer-0" Jan 27 14:54:33 crc kubenswrapper[4698]: I0127 14:54:33.631235 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/8a12bad2-2df8-499d-aca4-3fe31bba54df-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"8a12bad2-2df8-499d-aca4-3fe31bba54df\") " pod="openstack/ceilometer-0" Jan 27 14:54:33 crc kubenswrapper[4698]: I0127 14:54:33.635391 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8a12bad2-2df8-499d-aca4-3fe31bba54df-config-data\") pod \"ceilometer-0\" (UID: \"8a12bad2-2df8-499d-aca4-3fe31bba54df\") " pod="openstack/ceilometer-0" Jan 27 14:54:33 crc kubenswrapper[4698]: I0127 14:54:33.636616 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dkhpq\" (UniqueName: \"kubernetes.io/projected/c64faec6-26c1-4556-bcfb-707840ac0863-kube-api-access-dkhpq\") pod \"neutron-57bdd5f-5p47q\" (UID: \"c64faec6-26c1-4556-bcfb-707840ac0863\") " pod="openstack/neutron-57bdd5f-5p47q" Jan 27 14:54:33 crc kubenswrapper[4698]: I0127 14:54:33.641969 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8a12bad2-2df8-499d-aca4-3fe31bba54df-scripts\") pod \"ceilometer-0\" (UID: \"8a12bad2-2df8-499d-aca4-3fe31bba54df\") " pod="openstack/ceilometer-0" Jan 27 14:54:33 crc kubenswrapper[4698]: I0127 14:54:33.740303 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 27 14:54:33 crc kubenswrapper[4698]: I0127 14:54:33.811756 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-57bdd5f-5p47q" Jan 27 14:54:33 crc kubenswrapper[4698]: I0127 14:54:33.915952 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 27 14:54:34 crc kubenswrapper[4698]: I0127 14:54:34.254764 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"19f81344-f620-4556-a605-8b6d26805b77","Type":"ContainerStarted","Data":"2447c9a4762c4ebd6dfdf755d83b8ee139d187b0bd75338eae791457c3ae0b87"} Jan 27 14:54:34 crc kubenswrapper[4698]: I0127 14:54:34.262269 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"c04ef4e5-d2d2-490a-a9b4-4c4a2ad10eab","Type":"ContainerStarted","Data":"51e40c1a0550c351fd57cf5821a745bb4d078bfa3c287a1b323f2f4b940a30b0"} Jan 27 14:54:34 crc kubenswrapper[4698]: I0127 14:54:34.296026 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=5.295998145 podStartE2EDuration="5.295998145s" podCreationTimestamp="2026-01-27 14:54:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:54:34.290704535 +0000 UTC m=+1529.967482040" watchObservedRunningTime="2026-01-27 14:54:34.295998145 +0000 UTC m=+1529.972775610" Jan 27 14:54:34 crc kubenswrapper[4698]: I0127 14:54:34.603189 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-57bdd5f-5p47q"] Jan 27 14:54:34 crc kubenswrapper[4698]: I0127 14:54:34.700738 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 27 14:54:34 crc kubenswrapper[4698]: I0127 14:54:34.879757 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 27 14:54:35 crc kubenswrapper[4698]: I0127 14:54:35.011066 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ac731464-5c12-4dd8-81b2-011463c78945" path="/var/lib/kubelet/pods/ac731464-5c12-4dd8-81b2-011463c78945/volumes" Jan 27 14:54:35 crc kubenswrapper[4698]: I0127 14:54:35.303048 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8a12bad2-2df8-499d-aca4-3fe31bba54df","Type":"ContainerStarted","Data":"7dfae6fbb03a9fefda0f3dc725657d833b8b9e201a37f9c71168ff2a2d437228"} Jan 27 14:54:35 crc kubenswrapper[4698]: I0127 14:54:35.311783 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"c04ef4e5-d2d2-490a-a9b4-4c4a2ad10eab","Type":"ContainerStarted","Data":"bc8f92d9328f34d490d0918f01e1c6ad4aa2c5626ab6b591495c81a97e993fbc"} Jan 27 14:54:35 crc kubenswrapper[4698]: I0127 14:54:35.313742 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-57bdd5f-5p47q" event={"ID":"c64faec6-26c1-4556-bcfb-707840ac0863","Type":"ContainerStarted","Data":"4e044f141881ac312cc25547491f2ee2a2ed38ad4bc1604871378df14fe9515c"} Jan 27 14:54:35 crc kubenswrapper[4698]: I0127 14:54:35.313790 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-57bdd5f-5p47q" event={"ID":"c64faec6-26c1-4556-bcfb-707840ac0863","Type":"ContainerStarted","Data":"1b20ed1cf5e19d07717a46886e1b7382d12b98e91294afd36da47280bcc086e8"} Jan 27 14:54:36 crc kubenswrapper[4698]: I0127 14:54:36.328618 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"8a12bad2-2df8-499d-aca4-3fe31bba54df","Type":"ContainerStarted","Data":"f9e830e5c9360de44bc97a7421764d0d19259224cae02617098de03b0541134d"} Jan 27 14:54:36 crc kubenswrapper[4698]: I0127 14:54:36.335939 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"c04ef4e5-d2d2-490a-a9b4-4c4a2ad10eab","Type":"ContainerStarted","Data":"0950b96658d4b54e825d3c52a5a2bc14939ed5faed72a18a1e23f280ba645226"} Jan 27 14:54:36 crc kubenswrapper[4698]: I0127 14:54:36.339421 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-57bdd5f-5p47q" event={"ID":"c64faec6-26c1-4556-bcfb-707840ac0863","Type":"ContainerStarted","Data":"e700724f658f0fc9ccc165b010ca6359e3f23c20b5408c000d64960b413c317a"} Jan 27 14:54:36 crc kubenswrapper[4698]: I0127 14:54:36.339716 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-57bdd5f-5p47q" Jan 27 14:54:36 crc kubenswrapper[4698]: I0127 14:54:36.410755 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-57bdd5f-5p47q" podStartSLOduration=3.410731936 podStartE2EDuration="3.410731936s" podCreationTimestamp="2026-01-27 14:54:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:54:36.361258393 +0000 UTC m=+1532.038035858" watchObservedRunningTime="2026-01-27 14:54:36.410731936 +0000 UTC m=+1532.087509401" Jan 27 14:54:37 crc kubenswrapper[4698]: I0127 14:54:37.086702 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/watcher-decision-engine-0"] Jan 27 14:54:37 crc kubenswrapper[4698]: I0127 14:54:37.087788 4698 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/watcher-decision-engine-0" podUID="4129eb47-beba-4bec-8cb2-59818e8908a5" containerName="watcher-decision-engine" containerID="cri-o://d3a9953d8401c424c58552b0b4f6574236a5890e4ac37880b086ac4ad400f15a" gracePeriod=30 Jan 27 14:54:37 crc kubenswrapper[4698]: I0127 14:54:37.352724 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8a12bad2-2df8-499d-aca4-3fe31bba54df","Type":"ContainerStarted","Data":"5669c282c54ee801e52b3459eb39f0e5fed7fb7de9b2df44b12b7a7b2ede144f"} Jan 27 14:54:37 crc kubenswrapper[4698]: I0127 14:54:37.378721 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=5.378700467 podStartE2EDuration="5.378700467s" podCreationTimestamp="2026-01-27 14:54:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:54:37.377561727 +0000 UTC m=+1533.054339212" watchObservedRunningTime="2026-01-27 14:54:37.378700467 +0000 UTC m=+1533.055477942" Jan 27 14:54:38 crc kubenswrapper[4698]: I0127 14:54:38.365912 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8a12bad2-2df8-499d-aca4-3fe31bba54df","Type":"ContainerStarted","Data":"e07f6828cd3448f7684ab42d6002538e9e91b86f00f355c82c704fc918ed9c68"} Jan 27 14:54:39 crc kubenswrapper[4698]: I0127 14:54:39.379307 4698 generic.go:334] "Generic (PLEG): container finished" podID="4129eb47-beba-4bec-8cb2-59818e8908a5" containerID="d3a9953d8401c424c58552b0b4f6574236a5890e4ac37880b086ac4ad400f15a" exitCode=0 Jan 27 14:54:39 crc kubenswrapper[4698]: I0127 14:54:39.379379 4698 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"4129eb47-beba-4bec-8cb2-59818e8908a5","Type":"ContainerDied","Data":"d3a9953d8401c424c58552b0b4f6574236a5890e4ac37880b086ac4ad400f15a"} Jan 27 14:54:39 crc kubenswrapper[4698]: I0127 14:54:39.379711 4698 scope.go:117] "RemoveContainer" containerID="e1553d2d02cb6a0668c9ce9cccabf0224f2fe83aa565ec195633e99db3313307" Jan 27 14:54:39 crc kubenswrapper[4698]: I0127 14:54:39.750888 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-579477885f-87tbw" Jan 27 14:54:39 crc kubenswrapper[4698]: I0127 14:54:39.765717 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Jan 27 14:54:39 crc kubenswrapper[4698]: I0127 14:54:39.766020 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Jan 27 14:54:39 crc kubenswrapper[4698]: I0127 14:54:39.860857 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-754ff55b87-tpb84"] Jan 27 14:54:39 crc kubenswrapper[4698]: I0127 14:54:39.861175 4698 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-754ff55b87-tpb84" podUID="0a624b91-5853-4b9f-a75c-101d75550a84" containerName="dnsmasq-dns" containerID="cri-o://26e2181c4ff83c04eca62de46e69b59793710f2b4864b77bd2a5a275f98cc3a2" gracePeriod=10 Jan 27 14:54:39 crc kubenswrapper[4698]: I0127 14:54:39.976181 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Jan 27 14:54:40 crc kubenswrapper[4698]: I0127 14:54:40.019599 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Jan 27 14:54:40 crc kubenswrapper[4698]: I0127 14:54:40.280856 4698 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-decision-engine-0"
Jan 27 14:54:40 crc kubenswrapper[4698]: E0127 14:54:40.339289 4698 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0a624b91_5853_4b9f_a75c_101d75550a84.slice/crio-26e2181c4ff83c04eca62de46e69b59793710f2b4864b77bd2a5a275f98cc3a2.scope\": RecentStats: unable to find data in memory cache]"
Jan 27 14:54:40 crc kubenswrapper[4698]: I0127 14:54:40.362419 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4129eb47-beba-4bec-8cb2-59818e8908a5-config-data\") pod \"4129eb47-beba-4bec-8cb2-59818e8908a5\" (UID: \"4129eb47-beba-4bec-8cb2-59818e8908a5\") "
Jan 27 14:54:40 crc kubenswrapper[4698]: I0127 14:54:40.362504 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/4129eb47-beba-4bec-8cb2-59818e8908a5-custom-prometheus-ca\") pod \"4129eb47-beba-4bec-8cb2-59818e8908a5\" (UID: \"4129eb47-beba-4bec-8cb2-59818e8908a5\") "
Jan 27 14:54:40 crc kubenswrapper[4698]: I0127 14:54:40.362558 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4129eb47-beba-4bec-8cb2-59818e8908a5-combined-ca-bundle\") pod \"4129eb47-beba-4bec-8cb2-59818e8908a5\" (UID: \"4129eb47-beba-4bec-8cb2-59818e8908a5\") "
Jan 27 14:54:40 crc kubenswrapper[4698]: I0127 14:54:40.362585 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7dngt\" (UniqueName: \"kubernetes.io/projected/4129eb47-beba-4bec-8cb2-59818e8908a5-kube-api-access-7dngt\") pod \"4129eb47-beba-4bec-8cb2-59818e8908a5\" (UID: \"4129eb47-beba-4bec-8cb2-59818e8908a5\") "
Jan 27 14:54:40 crc kubenswrapper[4698]: I0127 14:54:40.362792 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4129eb47-beba-4bec-8cb2-59818e8908a5-logs\") pod \"4129eb47-beba-4bec-8cb2-59818e8908a5\" (UID: \"4129eb47-beba-4bec-8cb2-59818e8908a5\") "
Jan 27 14:54:40 crc kubenswrapper[4698]: I0127 14:54:40.363548 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4129eb47-beba-4bec-8cb2-59818e8908a5-logs" (OuterVolumeSpecName: "logs") pod "4129eb47-beba-4bec-8cb2-59818e8908a5" (UID: "4129eb47-beba-4bec-8cb2-59818e8908a5"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 27 14:54:40 crc kubenswrapper[4698]: I0127 14:54:40.395883 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4129eb47-beba-4bec-8cb2-59818e8908a5-kube-api-access-7dngt" (OuterVolumeSpecName: "kube-api-access-7dngt") pod "4129eb47-beba-4bec-8cb2-59818e8908a5" (UID: "4129eb47-beba-4bec-8cb2-59818e8908a5"). InnerVolumeSpecName "kube-api-access-7dngt". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 14:54:40 crc kubenswrapper[4698]: I0127 14:54:40.413227 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4129eb47-beba-4bec-8cb2-59818e8908a5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4129eb47-beba-4bec-8cb2-59818e8908a5" (UID: "4129eb47-beba-4bec-8cb2-59818e8908a5"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 14:54:40 crc kubenswrapper[4698]: I0127 14:54:40.437187 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"4129eb47-beba-4bec-8cb2-59818e8908a5","Type":"ContainerDied","Data":"febed9f710b1af49f4e1169ba30db878ee2b78ad32a40f722e4c0e0ad74c0cb0"}
Jan 27 14:54:40 crc kubenswrapper[4698]: I0127 14:54:40.437245 4698 scope.go:117] "RemoveContainer" containerID="d3a9953d8401c424c58552b0b4f6574236a5890e4ac37880b086ac4ad400f15a"
Jan 27 14:54:40 crc kubenswrapper[4698]: I0127 14:54:40.437283 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-decision-engine-0"
Jan 27 14:54:40 crc kubenswrapper[4698]: I0127 14:54:40.445013 4698 generic.go:334] "Generic (PLEG): container finished" podID="0a624b91-5853-4b9f-a75c-101d75550a84" containerID="26e2181c4ff83c04eca62de46e69b59793710f2b4864b77bd2a5a275f98cc3a2" exitCode=0
Jan 27 14:54:40 crc kubenswrapper[4698]: I0127 14:54:40.446323 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-754ff55b87-tpb84" event={"ID":"0a624b91-5853-4b9f-a75c-101d75550a84","Type":"ContainerDied","Data":"26e2181c4ff83c04eca62de46e69b59793710f2b4864b77bd2a5a275f98cc3a2"}
Jan 27 14:54:40 crc kubenswrapper[4698]: I0127 14:54:40.446359 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0"
Jan 27 14:54:40 crc kubenswrapper[4698]: I0127 14:54:40.447286 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4129eb47-beba-4bec-8cb2-59818e8908a5-custom-prometheus-ca" (OuterVolumeSpecName: "custom-prometheus-ca") pod "4129eb47-beba-4bec-8cb2-59818e8908a5" (UID: "4129eb47-beba-4bec-8cb2-59818e8908a5"). InnerVolumeSpecName "custom-prometheus-ca". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 14:54:40 crc kubenswrapper[4698]: I0127 14:54:40.447370 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0"
Jan 27 14:54:40 crc kubenswrapper[4698]: I0127 14:54:40.471519 4698 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4129eb47-beba-4bec-8cb2-59818e8908a5-logs\") on node \"crc\" DevicePath \"\""
Jan 27 14:54:40 crc kubenswrapper[4698]: I0127 14:54:40.471557 4698 reconciler_common.go:293] "Volume detached for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/4129eb47-beba-4bec-8cb2-59818e8908a5-custom-prometheus-ca\") on node \"crc\" DevicePath \"\""
Jan 27 14:54:40 crc kubenswrapper[4698]: I0127 14:54:40.471570 4698 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4129eb47-beba-4bec-8cb2-59818e8908a5-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 27 14:54:40 crc kubenswrapper[4698]: I0127 14:54:40.471582 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7dngt\" (UniqueName: \"kubernetes.io/projected/4129eb47-beba-4bec-8cb2-59818e8908a5-kube-api-access-7dngt\") on node \"crc\" DevicePath \"\""
Jan 27 14:54:40 crc kubenswrapper[4698]: I0127 14:54:40.501821 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4129eb47-beba-4bec-8cb2-59818e8908a5-config-data" (OuterVolumeSpecName: "config-data") pod "4129eb47-beba-4bec-8cb2-59818e8908a5" (UID: "4129eb47-beba-4bec-8cb2-59818e8908a5"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 14:54:40 crc kubenswrapper[4698]: I0127 14:54:40.573736 4698 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4129eb47-beba-4bec-8cb2-59818e8908a5-config-data\") on node \"crc\" DevicePath \"\""
Jan 27 14:54:40 crc kubenswrapper[4698]: I0127 14:54:40.800392 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/watcher-decision-engine-0"]
Jan 27 14:54:40 crc kubenswrapper[4698]: I0127 14:54:40.816345 4698 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/watcher-decision-engine-0"]
Jan 27 14:54:40 crc kubenswrapper[4698]: I0127 14:54:40.834542 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/watcher-decision-engine-0"]
Jan 27 14:54:40 crc kubenswrapper[4698]: E0127 14:54:40.835393 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4129eb47-beba-4bec-8cb2-59818e8908a5" containerName="watcher-decision-engine"
Jan 27 14:54:40 crc kubenswrapper[4698]: I0127 14:54:40.835415 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="4129eb47-beba-4bec-8cb2-59818e8908a5" containerName="watcher-decision-engine"
Jan 27 14:54:40 crc kubenswrapper[4698]: E0127 14:54:40.835439 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4129eb47-beba-4bec-8cb2-59818e8908a5" containerName="watcher-decision-engine"
Jan 27 14:54:40 crc kubenswrapper[4698]: I0127 14:54:40.835447 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="4129eb47-beba-4bec-8cb2-59818e8908a5" containerName="watcher-decision-engine"
Jan 27 14:54:40 crc kubenswrapper[4698]: E0127 14:54:40.835464 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4129eb47-beba-4bec-8cb2-59818e8908a5" containerName="watcher-decision-engine"
Jan 27 14:54:40 crc kubenswrapper[4698]: I0127 14:54:40.835472 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="4129eb47-beba-4bec-8cb2-59818e8908a5" containerName="watcher-decision-engine"
Jan 27 14:54:40 crc kubenswrapper[4698]: E0127 14:54:40.835492 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4129eb47-beba-4bec-8cb2-59818e8908a5" containerName="watcher-decision-engine"
Jan 27 14:54:40 crc kubenswrapper[4698]: I0127 14:54:40.835500 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="4129eb47-beba-4bec-8cb2-59818e8908a5" containerName="watcher-decision-engine"
Jan 27 14:54:40 crc kubenswrapper[4698]: I0127 14:54:40.835749 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="4129eb47-beba-4bec-8cb2-59818e8908a5" containerName="watcher-decision-engine"
Jan 27 14:54:40 crc kubenswrapper[4698]: I0127 14:54:40.835769 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="4129eb47-beba-4bec-8cb2-59818e8908a5" containerName="watcher-decision-engine"
Jan 27 14:54:40 crc kubenswrapper[4698]: I0127 14:54:40.835787 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="4129eb47-beba-4bec-8cb2-59818e8908a5" containerName="watcher-decision-engine"
Jan 27 14:54:40 crc kubenswrapper[4698]: I0127 14:54:40.836619 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-decision-engine-0"
Jan 27 14:54:40 crc kubenswrapper[4698]: I0127 14:54:40.840479 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-decision-engine-config-data"
Jan 27 14:54:40 crc kubenswrapper[4698]: I0127 14:54:40.845166 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-decision-engine-0"]
Jan 27 14:54:40 crc kubenswrapper[4698]: I0127 14:54:40.945487 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-754ff55b87-tpb84"
Jan 27 14:54:40 crc kubenswrapper[4698]: I0127 14:54:40.986963 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/72e26469-ae9a-4fd9-b7ee-bfaaa48b4554-logs\") pod \"watcher-decision-engine-0\" (UID: \"72e26469-ae9a-4fd9-b7ee-bfaaa48b4554\") " pod="openstack/watcher-decision-engine-0"
Jan 27 14:54:40 crc kubenswrapper[4698]: I0127 14:54:40.987146 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/72e26469-ae9a-4fd9-b7ee-bfaaa48b4554-custom-prometheus-ca\") pod \"watcher-decision-engine-0\" (UID: \"72e26469-ae9a-4fd9-b7ee-bfaaa48b4554\") " pod="openstack/watcher-decision-engine-0"
Jan 27 14:54:40 crc kubenswrapper[4698]: I0127 14:54:40.987177 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-46pq8\" (UniqueName: \"kubernetes.io/projected/72e26469-ae9a-4fd9-b7ee-bfaaa48b4554-kube-api-access-46pq8\") pod \"watcher-decision-engine-0\" (UID: \"72e26469-ae9a-4fd9-b7ee-bfaaa48b4554\") " pod="openstack/watcher-decision-engine-0"
Jan 27 14:54:40 crc kubenswrapper[4698]: I0127 14:54:40.987235 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/72e26469-ae9a-4fd9-b7ee-bfaaa48b4554-combined-ca-bundle\") pod \"watcher-decision-engine-0\" (UID: \"72e26469-ae9a-4fd9-b7ee-bfaaa48b4554\") " pod="openstack/watcher-decision-engine-0"
Jan 27 14:54:40 crc kubenswrapper[4698]: I0127 14:54:40.987309 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/72e26469-ae9a-4fd9-b7ee-bfaaa48b4554-config-data\") pod \"watcher-decision-engine-0\" (UID: \"72e26469-ae9a-4fd9-b7ee-bfaaa48b4554\") " pod="openstack/watcher-decision-engine-0"
Jan 27 14:54:41 crc kubenswrapper[4698]: I0127 14:54:41.016188 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4129eb47-beba-4bec-8cb2-59818e8908a5" path="/var/lib/kubelet/pods/4129eb47-beba-4bec-8cb2-59818e8908a5/volumes"
Jan 27 14:54:41 crc kubenswrapper[4698]: I0127 14:54:41.088946 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0a624b91-5853-4b9f-a75c-101d75550a84-dns-svc\") pod \"0a624b91-5853-4b9f-a75c-101d75550a84\" (UID: \"0a624b91-5853-4b9f-a75c-101d75550a84\") "
Jan 27 14:54:41 crc kubenswrapper[4698]: I0127 14:54:41.089020 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0a624b91-5853-4b9f-a75c-101d75550a84-config\") pod \"0a624b91-5853-4b9f-a75c-101d75550a84\" (UID: \"0a624b91-5853-4b9f-a75c-101d75550a84\") "
Jan 27 14:54:41 crc kubenswrapper[4698]: I0127 14:54:41.089055 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0a624b91-5853-4b9f-a75c-101d75550a84-ovsdbserver-nb\") pod \"0a624b91-5853-4b9f-a75c-101d75550a84\" (UID: \"0a624b91-5853-4b9f-a75c-101d75550a84\") "
Jan 27 14:54:41 crc kubenswrapper[4698]: I0127 14:54:41.089135 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x49tr\" (UniqueName: \"kubernetes.io/projected/0a624b91-5853-4b9f-a75c-101d75550a84-kube-api-access-x49tr\") pod \"0a624b91-5853-4b9f-a75c-101d75550a84\" (UID: \"0a624b91-5853-4b9f-a75c-101d75550a84\") "
Jan 27 14:54:41 crc kubenswrapper[4698]: I0127 14:54:41.089161 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0a624b91-5853-4b9f-a75c-101d75550a84-ovsdbserver-sb\") pod \"0a624b91-5853-4b9f-a75c-101d75550a84\" (UID: \"0a624b91-5853-4b9f-a75c-101d75550a84\") "
Jan 27 14:54:41 crc kubenswrapper[4698]: I0127 14:54:41.089334 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/0a624b91-5853-4b9f-a75c-101d75550a84-dns-swift-storage-0\") pod \"0a624b91-5853-4b9f-a75c-101d75550a84\" (UID: \"0a624b91-5853-4b9f-a75c-101d75550a84\") "
Jan 27 14:54:41 crc kubenswrapper[4698]: I0127 14:54:41.089767 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/72e26469-ae9a-4fd9-b7ee-bfaaa48b4554-combined-ca-bundle\") pod \"watcher-decision-engine-0\" (UID: \"72e26469-ae9a-4fd9-b7ee-bfaaa48b4554\") " pod="openstack/watcher-decision-engine-0"
Jan 27 14:54:41 crc kubenswrapper[4698]: I0127 14:54:41.089914 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/72e26469-ae9a-4fd9-b7ee-bfaaa48b4554-config-data\") pod \"watcher-decision-engine-0\" (UID: \"72e26469-ae9a-4fd9-b7ee-bfaaa48b4554\") " pod="openstack/watcher-decision-engine-0"
Jan 27 14:54:41 crc kubenswrapper[4698]: I0127 14:54:41.090004 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/72e26469-ae9a-4fd9-b7ee-bfaaa48b4554-logs\") pod \"watcher-decision-engine-0\" (UID: \"72e26469-ae9a-4fd9-b7ee-bfaaa48b4554\") " pod="openstack/watcher-decision-engine-0"
Jan 27 14:54:41 crc kubenswrapper[4698]: I0127 14:54:41.090202 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/72e26469-ae9a-4fd9-b7ee-bfaaa48b4554-custom-prometheus-ca\") pod \"watcher-decision-engine-0\" (UID: \"72e26469-ae9a-4fd9-b7ee-bfaaa48b4554\") " pod="openstack/watcher-decision-engine-0"
Jan 27 14:54:41 crc kubenswrapper[4698]: I0127 14:54:41.090242 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-46pq8\" (UniqueName: \"kubernetes.io/projected/72e26469-ae9a-4fd9-b7ee-bfaaa48b4554-kube-api-access-46pq8\") pod \"watcher-decision-engine-0\" (UID: \"72e26469-ae9a-4fd9-b7ee-bfaaa48b4554\") " pod="openstack/watcher-decision-engine-0"
Jan 27 14:54:41 crc kubenswrapper[4698]: I0127 14:54:41.091787 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/72e26469-ae9a-4fd9-b7ee-bfaaa48b4554-logs\") pod \"watcher-decision-engine-0\" (UID: \"72e26469-ae9a-4fd9-b7ee-bfaaa48b4554\") " pod="openstack/watcher-decision-engine-0"
Jan 27 14:54:41 crc kubenswrapper[4698]: I0127 14:54:41.097758 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/72e26469-ae9a-4fd9-b7ee-bfaaa48b4554-custom-prometheus-ca\") pod \"watcher-decision-engine-0\" (UID: \"72e26469-ae9a-4fd9-b7ee-bfaaa48b4554\") " pod="openstack/watcher-decision-engine-0"
Jan 27 14:54:41 crc kubenswrapper[4698]: I0127 14:54:41.099360 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/72e26469-ae9a-4fd9-b7ee-bfaaa48b4554-config-data\") pod \"watcher-decision-engine-0\" (UID: \"72e26469-ae9a-4fd9-b7ee-bfaaa48b4554\") " pod="openstack/watcher-decision-engine-0"
Jan 27 14:54:41 crc kubenswrapper[4698]: I0127 14:54:41.106115 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0a624b91-5853-4b9f-a75c-101d75550a84-kube-api-access-x49tr" (OuterVolumeSpecName: "kube-api-access-x49tr") pod "0a624b91-5853-4b9f-a75c-101d75550a84" (UID: "0a624b91-5853-4b9f-a75c-101d75550a84"). InnerVolumeSpecName "kube-api-access-x49tr". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 14:54:41 crc kubenswrapper[4698]: I0127 14:54:41.131045 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/72e26469-ae9a-4fd9-b7ee-bfaaa48b4554-combined-ca-bundle\") pod \"watcher-decision-engine-0\" (UID: \"72e26469-ae9a-4fd9-b7ee-bfaaa48b4554\") " pod="openstack/watcher-decision-engine-0"
Jan 27 14:54:41 crc kubenswrapper[4698]: I0127 14:54:41.150259 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-46pq8\" (UniqueName: \"kubernetes.io/projected/72e26469-ae9a-4fd9-b7ee-bfaaa48b4554-kube-api-access-46pq8\") pod \"watcher-decision-engine-0\" (UID: \"72e26469-ae9a-4fd9-b7ee-bfaaa48b4554\") " pod="openstack/watcher-decision-engine-0"
Jan 27 14:54:41 crc kubenswrapper[4698]: I0127 14:54:41.192528 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x49tr\" (UniqueName: \"kubernetes.io/projected/0a624b91-5853-4b9f-a75c-101d75550a84-kube-api-access-x49tr\") on node \"crc\" DevicePath \"\""
Jan 27 14:54:41 crc kubenswrapper[4698]: I0127 14:54:41.194007 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-decision-engine-0"
Jan 27 14:54:41 crc kubenswrapper[4698]: I0127 14:54:41.213276 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0a624b91-5853-4b9f-a75c-101d75550a84-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "0a624b91-5853-4b9f-a75c-101d75550a84" (UID: "0a624b91-5853-4b9f-a75c-101d75550a84"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 14:54:41 crc kubenswrapper[4698]: I0127 14:54:41.213348 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0a624b91-5853-4b9f-a75c-101d75550a84-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "0a624b91-5853-4b9f-a75c-101d75550a84" (UID: "0a624b91-5853-4b9f-a75c-101d75550a84"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 14:54:41 crc kubenswrapper[4698]: I0127 14:54:41.236413 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0a624b91-5853-4b9f-a75c-101d75550a84-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "0a624b91-5853-4b9f-a75c-101d75550a84" (UID: "0a624b91-5853-4b9f-a75c-101d75550a84"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 14:54:41 crc kubenswrapper[4698]: I0127 14:54:41.242613 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0a624b91-5853-4b9f-a75c-101d75550a84-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "0a624b91-5853-4b9f-a75c-101d75550a84" (UID: "0a624b91-5853-4b9f-a75c-101d75550a84"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 14:54:41 crc kubenswrapper[4698]: I0127 14:54:41.286425 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0a624b91-5853-4b9f-a75c-101d75550a84-config" (OuterVolumeSpecName: "config") pod "0a624b91-5853-4b9f-a75c-101d75550a84" (UID: "0a624b91-5853-4b9f-a75c-101d75550a84"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 14:54:41 crc kubenswrapper[4698]: I0127 14:54:41.295112 4698 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0a624b91-5853-4b9f-a75c-101d75550a84-dns-svc\") on node \"crc\" DevicePath \"\""
Jan 27 14:54:41 crc kubenswrapper[4698]: I0127 14:54:41.295167 4698 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0a624b91-5853-4b9f-a75c-101d75550a84-config\") on node \"crc\" DevicePath \"\""
Jan 27 14:54:41 crc kubenswrapper[4698]: I0127 14:54:41.295181 4698 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0a624b91-5853-4b9f-a75c-101d75550a84-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Jan 27 14:54:41 crc kubenswrapper[4698]: I0127 14:54:41.295194 4698 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0a624b91-5853-4b9f-a75c-101d75550a84-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Jan 27 14:54:41 crc kubenswrapper[4698]: I0127 14:54:41.295208 4698 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/0a624b91-5853-4b9f-a75c-101d75550a84-dns-swift-storage-0\") on node \"crc\" DevicePath \"\""
Jan 27 14:54:41 crc kubenswrapper[4698]: I0127 14:54:41.473630 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-754ff55b87-tpb84" event={"ID":"0a624b91-5853-4b9f-a75c-101d75550a84","Type":"ContainerDied","Data":"14c0ca6312a7059964455b887e2d7c3d515751d00bf928362b946b3050d8396b"}
Jan 27 14:54:41 crc kubenswrapper[4698]: I0127 14:54:41.473686 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-754ff55b87-tpb84"
Jan 27 14:54:41 crc kubenswrapper[4698]: I0127 14:54:41.473707 4698 scope.go:117] "RemoveContainer" containerID="26e2181c4ff83c04eca62de46e69b59793710f2b4864b77bd2a5a275f98cc3a2"
Jan 27 14:54:41 crc kubenswrapper[4698]: I0127 14:54:41.484247 4698 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="8a12bad2-2df8-499d-aca4-3fe31bba54df" containerName="ceilometer-central-agent" containerID="cri-o://f9e830e5c9360de44bc97a7421764d0d19259224cae02617098de03b0541134d" gracePeriod=30
Jan 27 14:54:41 crc kubenswrapper[4698]: I0127 14:54:41.485067 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8a12bad2-2df8-499d-aca4-3fe31bba54df","Type":"ContainerStarted","Data":"75a4d73aabfcc279825b9e005ea2b1a79bfe8448e44b4a069eda5f4758a78578"}
Jan 27 14:54:41 crc kubenswrapper[4698]: I0127 14:54:41.485139 4698 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="8a12bad2-2df8-499d-aca4-3fe31bba54df" containerName="proxy-httpd" containerID="cri-o://75a4d73aabfcc279825b9e005ea2b1a79bfe8448e44b4a069eda5f4758a78578" gracePeriod=30
Jan 27 14:54:41 crc kubenswrapper[4698]: I0127 14:54:41.485292 4698 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="8a12bad2-2df8-499d-aca4-3fe31bba54df" containerName="ceilometer-notification-agent" containerID="cri-o://5669c282c54ee801e52b3459eb39f0e5fed7fb7de9b2df44b12b7a7b2ede144f" gracePeriod=30
Jan 27 14:54:41 crc kubenswrapper[4698]: I0127 14:54:41.485344 4698 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="8a12bad2-2df8-499d-aca4-3fe31bba54df" containerName="sg-core" containerID="cri-o://e07f6828cd3448f7684ab42d6002538e9e91b86f00f355c82c704fc918ed9c68" gracePeriod=30
Jan 27 14:54:41 crc kubenswrapper[4698]: I0127 14:54:41.486092 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0"
Jan 27 14:54:41 crc kubenswrapper[4698]: I0127 14:54:41.545455 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.606363015 podStartE2EDuration="8.545426118s" podCreationTimestamp="2026-01-27 14:54:33 +0000 UTC" firstStartedPulling="2026-01-27 14:54:34.735186741 +0000 UTC m=+1530.411964206" lastFinishedPulling="2026-01-27 14:54:40.674249844 +0000 UTC m=+1536.351027309" observedRunningTime="2026-01-27 14:54:41.521104356 +0000 UTC m=+1537.197881831" watchObservedRunningTime="2026-01-27 14:54:41.545426118 +0000 UTC m=+1537.222203583"
Jan 27 14:54:41 crc kubenswrapper[4698]: I0127 14:54:41.564719 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-754ff55b87-tpb84"]
Jan 27 14:54:41 crc kubenswrapper[4698]: I0127 14:54:41.582756 4698 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-754ff55b87-tpb84"]
Jan 27 14:54:41 crc kubenswrapper[4698]: I0127 14:54:41.600746 4698 scope.go:117] "RemoveContainer" containerID="a473e2d3968d0af868be37108af3f2df5cc377fccd23de08fd0c1ffc1c36b68d"
Jan 27 14:54:41 crc kubenswrapper[4698]: I0127 14:54:41.768598 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-decision-engine-0"]
Jan 27 14:54:41 crc kubenswrapper[4698]: W0127 14:54:41.803130 4698 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod72e26469_ae9a_4fd9_b7ee_bfaaa48b4554.slice/crio-4fe896c9bffdc2a30bcd48a6094e738827a7c67a18cea50f2220afb986c8a8a1 WatchSource:0}: Error finding container 4fe896c9bffdc2a30bcd48a6094e738827a7c67a18cea50f2220afb986c8a8a1: Status 404 returned error can't find the container with id 4fe896c9bffdc2a30bcd48a6094e738827a7c67a18cea50f2220afb986c8a8a1
Jan 27 14:54:42 crc kubenswrapper[4698]: I0127 14:54:42.497393 4698 generic.go:334] "Generic (PLEG): container finished" podID="8a12bad2-2df8-499d-aca4-3fe31bba54df" containerID="75a4d73aabfcc279825b9e005ea2b1a79bfe8448e44b4a069eda5f4758a78578" exitCode=0
Jan 27 14:54:42 crc kubenswrapper[4698]: I0127 14:54:42.497727 4698 generic.go:334] "Generic (PLEG): container finished" podID="8a12bad2-2df8-499d-aca4-3fe31bba54df" containerID="e07f6828cd3448f7684ab42d6002538e9e91b86f00f355c82c704fc918ed9c68" exitCode=2
Jan 27 14:54:42 crc kubenswrapper[4698]: I0127 14:54:42.497748 4698 generic.go:334] "Generic (PLEG): container finished" podID="8a12bad2-2df8-499d-aca4-3fe31bba54df" containerID="5669c282c54ee801e52b3459eb39f0e5fed7fb7de9b2df44b12b7a7b2ede144f" exitCode=0
Jan 27 14:54:42 crc kubenswrapper[4698]: I0127 14:54:42.497456 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8a12bad2-2df8-499d-aca4-3fe31bba54df","Type":"ContainerDied","Data":"75a4d73aabfcc279825b9e005ea2b1a79bfe8448e44b4a069eda5f4758a78578"}
Jan 27 14:54:42 crc kubenswrapper[4698]: I0127 14:54:42.498017 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8a12bad2-2df8-499d-aca4-3fe31bba54df","Type":"ContainerDied","Data":"e07f6828cd3448f7684ab42d6002538e9e91b86f00f355c82c704fc918ed9c68"}
Jan 27 14:54:42 crc kubenswrapper[4698]: I0127 14:54:42.498066 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8a12bad2-2df8-499d-aca4-3fe31bba54df","Type":"ContainerDied","Data":"5669c282c54ee801e52b3459eb39f0e5fed7fb7de9b2df44b12b7a7b2ede144f"}
Jan 27 14:54:42 crc kubenswrapper[4698]: I0127 14:54:42.501629 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"72e26469-ae9a-4fd9-b7ee-bfaaa48b4554","Type":"ContainerStarted","Data":"4fe896c9bffdc2a30bcd48a6094e738827a7c67a18cea50f2220afb986c8a8a1"}
Jan 27 14:54:42 crc kubenswrapper[4698]: I0127 14:54:42.501702 4698 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jan 27 14:54:42 crc kubenswrapper[4698]: I0127 14:54:42.501722 4698 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jan 27 14:54:42 crc kubenswrapper[4698]: I0127 14:54:42.968646 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0"
Jan 27 14:54:42 crc kubenswrapper[4698]: I0127 14:54:42.968708 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0"
Jan 27 14:54:43 crc kubenswrapper[4698]: I0127 14:54:43.007036 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0a624b91-5853-4b9f-a75c-101d75550a84" path="/var/lib/kubelet/pods/0a624b91-5853-4b9f-a75c-101d75550a84/volumes"
Jan 27 14:54:43 crc kubenswrapper[4698]: I0127 14:54:43.010826 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0"
Jan 27 14:54:43 crc kubenswrapper[4698]: I0127 14:54:43.017301 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0"
Jan 27 14:54:43 crc kubenswrapper[4698]: I0127 14:54:43.514294 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"72e26469-ae9a-4fd9-b7ee-bfaaa48b4554","Type":"ContainerStarted","Data":"7d5f3e573493e58dda55a3eaf6a2e5f3ee9e8f30737230de208253821b6df46e"}
Jan 27 14:54:43 crc kubenswrapper[4698]: I0127 14:54:43.515489 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0"
Jan 27 14:54:43 crc kubenswrapper[4698]: I0127 14:54:43.515542 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0"
Jan 27 14:54:44 crc kubenswrapper[4698]: I0127 14:54:44.092433 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0"
Jan 27 14:54:44 crc kubenswrapper[4698]: I0127 14:54:44.092935 4698 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jan 27 14:54:44 crc kubenswrapper[4698]: I0127 14:54:44.095162 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0"
Jan 27 14:54:44 crc kubenswrapper[4698]: I0127 14:54:44.124285 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/watcher-decision-engine-0" podStartSLOduration=4.12426358 podStartE2EDuration="4.12426358s" podCreationTimestamp="2026-01-27 14:54:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:54:43.541153874 +0000 UTC m=+1539.217931349" watchObservedRunningTime="2026-01-27 14:54:44.12426358 +0000 UTC m=+1539.801041045"
Jan 27 14:54:45 crc kubenswrapper[4698]: I0127 14:54:45.574746 4698 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jan 27 14:54:45 crc kubenswrapper[4698]: I0127 14:54:45.574780 4698 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jan 27 14:54:46 crc kubenswrapper[4698]: I0127 14:54:46.206054 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0"
Jan 27 14:54:46 crc kubenswrapper[4698]: I0127 14:54:46.285373 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0"
Jan 27 14:54:50 crc kubenswrapper[4698]: E0127 14:54:50.596730 4698 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8a12bad2_2df8_499d_aca4_3fe31bba54df.slice/crio-f9e830e5c9360de44bc97a7421764d0d19259224cae02617098de03b0541134d.scope\": RecentStats: unable to find data in memory cache]"
Jan 27 14:54:50 crc kubenswrapper[4698]: I0127 14:54:50.639546 4698 generic.go:334] "Generic (PLEG): container finished" podID="8a12bad2-2df8-499d-aca4-3fe31bba54df" containerID="f9e830e5c9360de44bc97a7421764d0d19259224cae02617098de03b0541134d" exitCode=0
Jan 27 14:54:50 crc kubenswrapper[4698]: I0127 14:54:50.639611 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8a12bad2-2df8-499d-aca4-3fe31bba54df","Type":"ContainerDied","Data":"f9e830e5c9360de44bc97a7421764d0d19259224cae02617098de03b0541134d"}
Jan 27 14:54:51 crc kubenswrapper[4698]: I0127 14:54:51.194850 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-decision-engine-0"
Jan 27 14:54:51 crc kubenswrapper[4698]: I0127 14:54:51.250819 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/watcher-decision-engine-0"
Jan 27 14:54:51 crc kubenswrapper[4698]: I0127 14:54:51.534027 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Jan 27 14:54:51 crc kubenswrapper[4698]: I0127 14:54:51.652517 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8a12bad2-2df8-499d-aca4-3fe31bba54df","Type":"ContainerDied","Data":"7dfae6fbb03a9fefda0f3dc725657d833b8b9e201a37f9c71168ff2a2d437228"}
Jan 27 14:54:51 crc kubenswrapper[4698]: I0127 14:54:51.652923 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-decision-engine-0"
Jan 27 14:54:51 crc kubenswrapper[4698]: I0127 14:54:51.652956 4698 scope.go:117] "RemoveContainer" containerID="75a4d73aabfcc279825b9e005ea2b1a79bfe8448e44b4a069eda5f4758a78578"
Jan 27 14:54:51 crc kubenswrapper[4698]: I0127 14:54:51.652584 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Jan 27 14:54:51 crc kubenswrapper[4698]: I0127 14:54:51.688201 4698 scope.go:117] "RemoveContainer" containerID="e07f6828cd3448f7684ab42d6002538e9e91b86f00f355c82c704fc918ed9c68"
Jan 27 14:54:51 crc kubenswrapper[4698]: I0127 14:54:51.689145 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/watcher-decision-engine-0"
Jan 27 14:54:51 crc kubenswrapper[4698]: I0127 14:54:51.710484 4698 scope.go:117] "RemoveContainer" containerID="5669c282c54ee801e52b3459eb39f0e5fed7fb7de9b2df44b12b7a7b2ede144f"
Jan 27 14:54:51 crc kubenswrapper[4698]: I0127 14:54:51.714461 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ncqjh\" (UniqueName: \"kubernetes.io/projected/8a12bad2-2df8-499d-aca4-3fe31bba54df-kube-api-access-ncqjh\") pod \"8a12bad2-2df8-499d-aca4-3fe31bba54df\" (UID: \"8a12bad2-2df8-499d-aca4-3fe31bba54df\") "
Jan 27 14:54:51 crc kubenswrapper[4698]: I0127 14:54:51.714562 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8a12bad2-2df8-499d-aca4-3fe31bba54df-log-httpd\") pod \"8a12bad2-2df8-499d-aca4-3fe31bba54df\" (UID: \"8a12bad2-2df8-499d-aca4-3fe31bba54df\") "
Jan 27 14:54:51 crc kubenswrapper[4698]: I0127 14:54:51.714597 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8a12bad2-2df8-499d-aca4-3fe31bba54df-run-httpd\") pod \"8a12bad2-2df8-499d-aca4-3fe31bba54df\" (UID: \"8a12bad2-2df8-499d-aca4-3fe31bba54df\") "
Jan 27 14:54:51 crc kubenswrapper[4698]: I0127 14:54:51.714702 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/8a12bad2-2df8-499d-aca4-3fe31bba54df-sg-core-conf-yaml\") pod \"8a12bad2-2df8-499d-aca4-3fe31bba54df\" (UID: \"8a12bad2-2df8-499d-aca4-3fe31bba54df\") "
Jan 27 14:54:51 crc kubenswrapper[4698]: I0127 14:54:51.714723 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8a12bad2-2df8-499d-aca4-3fe31bba54df-scripts\") pod \"8a12bad2-2df8-499d-aca4-3fe31bba54df\" (UID: \"8a12bad2-2df8-499d-aca4-3fe31bba54df\") "
Jan 27 14:54:51 crc kubenswrapper[4698]: I0127 14:54:51.714760 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8a12bad2-2df8-499d-aca4-3fe31bba54df-config-data\") pod \"8a12bad2-2df8-499d-aca4-3fe31bba54df\" (UID: \"8a12bad2-2df8-499d-aca4-3fe31bba54df\") "
Jan 27 14:54:51 crc kubenswrapper[4698]: I0127 14:54:51.714800 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/8a12bad2-2df8-499d-aca4-3fe31bba54df-ceilometer-tls-certs\") pod \"8a12bad2-2df8-499d-aca4-3fe31bba54df\" (UID: \"8a12bad2-2df8-499d-aca4-3fe31bba54df\") "
Jan 27 14:54:51 crc kubenswrapper[4698]: I0127 14:54:51.714852 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8a12bad2-2df8-499d-aca4-3fe31bba54df-combined-ca-bundle\") pod \"8a12bad2-2df8-499d-aca4-3fe31bba54df\" (UID: \"8a12bad2-2df8-499d-aca4-3fe31bba54df\") "
Jan 27 14:54:51 crc kubenswrapper[4698]: I0127 14:54:51.716823 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8a12bad2-2df8-499d-aca4-3fe31bba54df-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "8a12bad2-2df8-499d-aca4-3fe31bba54df" (UID: "8a12bad2-2df8-499d-aca4-3fe31bba54df"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 27 14:54:51 crc kubenswrapper[4698]: I0127 14:54:51.718168 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8a12bad2-2df8-499d-aca4-3fe31bba54df-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "8a12bad2-2df8-499d-aca4-3fe31bba54df" (UID: "8a12bad2-2df8-499d-aca4-3fe31bba54df"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 27 14:54:51 crc kubenswrapper[4698]: I0127 14:54:51.722983 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8a12bad2-2df8-499d-aca4-3fe31bba54df-kube-api-access-ncqjh" (OuterVolumeSpecName: "kube-api-access-ncqjh") pod "8a12bad2-2df8-499d-aca4-3fe31bba54df" (UID: "8a12bad2-2df8-499d-aca4-3fe31bba54df"). InnerVolumeSpecName "kube-api-access-ncqjh". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 14:54:51 crc kubenswrapper[4698]: I0127 14:54:51.724498 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8a12bad2-2df8-499d-aca4-3fe31bba54df-scripts" (OuterVolumeSpecName: "scripts") pod "8a12bad2-2df8-499d-aca4-3fe31bba54df" (UID: "8a12bad2-2df8-499d-aca4-3fe31bba54df"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 14:54:51 crc kubenswrapper[4698]: I0127 14:54:51.763632 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8a12bad2-2df8-499d-aca4-3fe31bba54df-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "8a12bad2-2df8-499d-aca4-3fe31bba54df" (UID: "8a12bad2-2df8-499d-aca4-3fe31bba54df"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 14:54:51 crc kubenswrapper[4698]: I0127 14:54:51.777931 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8a12bad2-2df8-499d-aca4-3fe31bba54df-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "8a12bad2-2df8-499d-aca4-3fe31bba54df" (UID: "8a12bad2-2df8-499d-aca4-3fe31bba54df"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 14:54:51 crc kubenswrapper[4698]: I0127 14:54:51.811724 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8a12bad2-2df8-499d-aca4-3fe31bba54df-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8a12bad2-2df8-499d-aca4-3fe31bba54df" (UID: "8a12bad2-2df8-499d-aca4-3fe31bba54df"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 14:54:51 crc kubenswrapper[4698]: I0127 14:54:51.817866 4698 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8a12bad2-2df8-499d-aca4-3fe31bba54df-run-httpd\") on node \"crc\" DevicePath \"\""
Jan 27 14:54:51 crc kubenswrapper[4698]: I0127 14:54:51.817894 4698 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/8a12bad2-2df8-499d-aca4-3fe31bba54df-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\""
Jan 27 14:54:51 crc kubenswrapper[4698]: I0127 14:54:51.817903 4698 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8a12bad2-2df8-499d-aca4-3fe31bba54df-scripts\") on node \"crc\" DevicePath \"\""
Jan 27 14:54:51 crc kubenswrapper[4698]: I0127 14:54:51.817913 4698 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/8a12bad2-2df8-499d-aca4-3fe31bba54df-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\""
Jan 27 14:54:51 crc kubenswrapper[4698]: I0127 14:54:51.817921 4698 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8a12bad2-2df8-499d-aca4-3fe31bba54df-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 27 14:54:51 crc kubenswrapper[4698]: I0127 14:54:51.817929 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ncqjh\" (UniqueName: \"kubernetes.io/projected/8a12bad2-2df8-499d-aca4-3fe31bba54df-kube-api-access-ncqjh\") on node \"crc\" DevicePath \"\""
Jan 27 14:54:51 crc kubenswrapper[4698]: I0127 14:54:51.817936 4698 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8a12bad2-2df8-499d-aca4-3fe31bba54df-log-httpd\") on node \"crc\" DevicePath \"\""
Jan 27 14:54:51 crc kubenswrapper[4698]: I0127 14:54:51.858178 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8a12bad2-2df8-499d-aca4-3fe31bba54df-config-data" (OuterVolumeSpecName: "config-data") pod "8a12bad2-2df8-499d-aca4-3fe31bba54df" (UID: "8a12bad2-2df8-499d-aca4-3fe31bba54df"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 14:54:51 crc kubenswrapper[4698]: I0127 14:54:51.919841 4698 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8a12bad2-2df8-499d-aca4-3fe31bba54df-config-data\") on node \"crc\" DevicePath \"\""
Jan 27 14:54:51 crc kubenswrapper[4698]: I0127 14:54:51.958757 4698 scope.go:117] "RemoveContainer" containerID="f9e830e5c9360de44bc97a7421764d0d19259224cae02617098de03b0541134d"
Jan 27 14:54:51 crc kubenswrapper[4698]: I0127 14:54:51.994089 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Jan 27 14:54:52 crc kubenswrapper[4698]: I0127 14:54:52.005485 4698 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"]
Jan 27 14:54:52 crc kubenswrapper[4698]: I0127 14:54:52.027617 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"]
Jan 27 14:54:52 crc kubenswrapper[4698]: E0127 14:54:52.028282 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8a12bad2-2df8-499d-aca4-3fe31bba54df" containerName="proxy-httpd"
Jan 27 14:54:52 crc kubenswrapper[4698]: I0127 14:54:52.028318 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="8a12bad2-2df8-499d-aca4-3fe31bba54df" containerName="proxy-httpd"
Jan 27 14:54:52 crc kubenswrapper[4698]: E0127 14:54:52.028347 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8a12bad2-2df8-499d-aca4-3fe31bba54df" containerName="ceilometer-central-agent"
Jan 27 14:54:52 crc kubenswrapper[4698]: I0127 14:54:52.028358 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="8a12bad2-2df8-499d-aca4-3fe31bba54df" containerName="ceilometer-central-agent"
Jan 27 14:54:52 crc kubenswrapper[4698]: E0127 14:54:52.028376 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0a624b91-5853-4b9f-a75c-101d75550a84" containerName="init"
Jan 27 14:54:52 crc kubenswrapper[4698]: I0127 14:54:52.028385 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="0a624b91-5853-4b9f-a75c-101d75550a84" containerName="init"
Jan 27 14:54:52 crc kubenswrapper[4698]: E0127 14:54:52.028405 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8a12bad2-2df8-499d-aca4-3fe31bba54df" containerName="ceilometer-notification-agent"
Jan 27 14:54:52 crc kubenswrapper[4698]: I0127 14:54:52.028411 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="8a12bad2-2df8-499d-aca4-3fe31bba54df" containerName="ceilometer-notification-agent"
Jan 27 14:54:52 crc kubenswrapper[4698]: E0127 14:54:52.028426 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8a12bad2-2df8-499d-aca4-3fe31bba54df" containerName="sg-core"
Jan 27 14:54:52 crc kubenswrapper[4698]: I0127 14:54:52.028432 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="8a12bad2-2df8-499d-aca4-3fe31bba54df" containerName="sg-core"
Jan 27 14:54:52 crc kubenswrapper[4698]: E0127 14:54:52.028456 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0a624b91-5853-4b9f-a75c-101d75550a84" containerName="dnsmasq-dns"
Jan 27 14:54:52 crc kubenswrapper[4698]: I0127 14:54:52.028463 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="0a624b91-5853-4b9f-a75c-101d75550a84" containerName="dnsmasq-dns"
Jan 27 14:54:52 crc kubenswrapper[4698]: I0127 14:54:52.028687 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="8a12bad2-2df8-499d-aca4-3fe31bba54df" containerName="ceilometer-central-agent"
Jan 27 14:54:52 crc kubenswrapper[4698]: I0127 14:54:52.028706 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="8a12bad2-2df8-499d-aca4-3fe31bba54df" containerName="ceilometer-notification-agent"
Jan 27 14:54:52 crc kubenswrapper[4698]: I0127 14:54:52.028717 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="8a12bad2-2df8-499d-aca4-3fe31bba54df" containerName="proxy-httpd"
Jan 27 14:54:52 crc kubenswrapper[4698]: I0127 14:54:52.028733 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="0a624b91-5853-4b9f-a75c-101d75550a84" containerName="dnsmasq-dns"
Jan 27 14:54:52 crc kubenswrapper[4698]: I0127 14:54:52.028746 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="8a12bad2-2df8-499d-aca4-3fe31bba54df" containerName="sg-core"
Jan 27 14:54:52 crc kubenswrapper[4698]: I0127 14:54:52.028753 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="4129eb47-beba-4bec-8cb2-59818e8908a5" containerName="watcher-decision-engine"
Jan 27 14:54:52 crc kubenswrapper[4698]: I0127 14:54:52.031081 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Jan 27 14:54:52 crc kubenswrapper[4698]: I0127 14:54:52.038206 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts"
Jan 27 14:54:52 crc kubenswrapper[4698]: I0127 14:54:52.038483 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc"
Jan 27 14:54:52 crc kubenswrapper[4698]: I0127 14:54:52.038611 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data"
Jan 27 14:54:52 crc kubenswrapper[4698]: I0127 14:54:52.041301 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Jan 27 14:54:52 crc kubenswrapper[4698]: I0127 14:54:52.123233 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c631146d-5feb-4fae-905a-56cafc1b88de-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"c631146d-5feb-4fae-905a-56cafc1b88de\") " pod="openstack/ceilometer-0"
Jan 27 14:54:52 crc kubenswrapper[4698]: I0127 14:54:52.123304 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c631146d-5feb-4fae-905a-56cafc1b88de-log-httpd\") pod \"ceilometer-0\" (UID: \"c631146d-5feb-4fae-905a-56cafc1b88de\") " pod="openstack/ceilometer-0"
Jan 27 14:54:52 crc kubenswrapper[4698]: I0127 14:54:52.123342 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tbz55\" (UniqueName: \"kubernetes.io/projected/c631146d-5feb-4fae-905a-56cafc1b88de-kube-api-access-tbz55\") pod \"ceilometer-0\" (UID: \"c631146d-5feb-4fae-905a-56cafc1b88de\") " pod="openstack/ceilometer-0"
Jan 27 14:54:52 crc kubenswrapper[4698]: I0127 14:54:52.123436 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c631146d-5feb-4fae-905a-56cafc1b88de-config-data\") pod \"ceilometer-0\" (UID: \"c631146d-5feb-4fae-905a-56cafc1b88de\") " pod="openstack/ceilometer-0"
Jan 27 14:54:52 crc kubenswrapper[4698]: I0127 14:54:52.123470 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c631146d-5feb-4fae-905a-56cafc1b88de-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"c631146d-5feb-4fae-905a-56cafc1b88de\") " pod="openstack/ceilometer-0"
Jan 27 14:54:52 crc kubenswrapper[4698]: I0127 14:54:52.123744 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c631146d-5feb-4fae-905a-56cafc1b88de-scripts\") pod \"ceilometer-0\" (UID: \"c631146d-5feb-4fae-905a-56cafc1b88de\") " pod="openstack/ceilometer-0"
Jan 27 14:54:52 crc kubenswrapper[4698]: I0127 14:54:52.123807 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/c631146d-5feb-4fae-905a-56cafc1b88de-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"c631146d-5feb-4fae-905a-56cafc1b88de\") " pod="openstack/ceilometer-0"
Jan 27 14:54:52 crc kubenswrapper[4698]: I0127 14:54:52.123902 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c631146d-5feb-4fae-905a-56cafc1b88de-run-httpd\") pod \"ceilometer-0\" (UID: \"c631146d-5feb-4fae-905a-56cafc1b88de\") " pod="openstack/ceilometer-0"
Jan 27 14:54:52 crc kubenswrapper[4698]: I0127 14:54:52.226538 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c631146d-5feb-4fae-905a-56cafc1b88de-run-httpd\") pod \"ceilometer-0\" (UID: \"c631146d-5feb-4fae-905a-56cafc1b88de\") " pod="openstack/ceilometer-0"
Jan 27 14:54:52 crc kubenswrapper[4698]: I0127 14:54:52.226651 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c631146d-5feb-4fae-905a-56cafc1b88de-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"c631146d-5feb-4fae-905a-56cafc1b88de\") " pod="openstack/ceilometer-0"
Jan 27 14:54:52 crc kubenswrapper[4698]: I0127 14:54:52.226684 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c631146d-5feb-4fae-905a-56cafc1b88de-log-httpd\") pod \"ceilometer-0\" (UID: \"c631146d-5feb-4fae-905a-56cafc1b88de\") " pod="openstack/ceilometer-0"
Jan 27 14:54:52 crc kubenswrapper[4698]: I0127 14:54:52.226719 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tbz55\" (UniqueName: \"kubernetes.io/projected/c631146d-5feb-4fae-905a-56cafc1b88de-kube-api-access-tbz55\") pod \"ceilometer-0\" (UID: \"c631146d-5feb-4fae-905a-56cafc1b88de\") " pod="openstack/ceilometer-0"
Jan 27 14:54:52 crc kubenswrapper[4698]: I0127 14:54:52.226763 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c631146d-5feb-4fae-905a-56cafc1b88de-config-data\") pod \"ceilometer-0\" (UID: \"c631146d-5feb-4fae-905a-56cafc1b88de\") " pod="openstack/ceilometer-0"
Jan 27 14:54:52 crc kubenswrapper[4698]: I0127 14:54:52.226788 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c631146d-5feb-4fae-905a-56cafc1b88de-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"c631146d-5feb-4fae-905a-56cafc1b88de\") " pod="openstack/ceilometer-0"
Jan 27 14:54:52 crc kubenswrapper[4698]: I0127 14:54:52.226859 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c631146d-5feb-4fae-905a-56cafc1b88de-scripts\") pod \"ceilometer-0\" (UID: \"c631146d-5feb-4fae-905a-56cafc1b88de\") " pod="openstack/ceilometer-0"
Jan 27 14:54:52 crc kubenswrapper[4698]: I0127 14:54:52.226873 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/c631146d-5feb-4fae-905a-56cafc1b88de-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"c631146d-5feb-4fae-905a-56cafc1b88de\") " pod="openstack/ceilometer-0"
Jan 27 14:54:52 crc kubenswrapper[4698]: I0127 14:54:52.227114 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c631146d-5feb-4fae-905a-56cafc1b88de-run-httpd\") pod \"ceilometer-0\" (UID: \"c631146d-5feb-4fae-905a-56cafc1b88de\") " pod="openstack/ceilometer-0"
Jan 27 14:54:52 crc kubenswrapper[4698]: I0127 14:54:52.228121 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c631146d-5feb-4fae-905a-56cafc1b88de-log-httpd\") pod \"ceilometer-0\" (UID: \"c631146d-5feb-4fae-905a-56cafc1b88de\") " pod="openstack/ceilometer-0"
Jan 27 14:54:52 crc kubenswrapper[4698]: I0127 14:54:52.231110 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c631146d-5feb-4fae-905a-56cafc1b88de-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"c631146d-5feb-4fae-905a-56cafc1b88de\") " pod="openstack/ceilometer-0"
Jan 27 14:54:52 crc kubenswrapper[4698]: I0127 14:54:52.231152 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/c631146d-5feb-4fae-905a-56cafc1b88de-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"c631146d-5feb-4fae-905a-56cafc1b88de\") " pod="openstack/ceilometer-0"
Jan 27 14:54:52 crc kubenswrapper[4698]: I0127 14:54:52.232004 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c631146d-5feb-4fae-905a-56cafc1b88de-config-data\") pod \"ceilometer-0\" (UID: \"c631146d-5feb-4fae-905a-56cafc1b88de\") " pod="openstack/ceilometer-0"
Jan 27 14:54:52 crc kubenswrapper[4698]: I0127 14:54:52.232833 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c631146d-5feb-4fae-905a-56cafc1b88de-scripts\") pod \"ceilometer-0\" (UID: \"c631146d-5feb-4fae-905a-56cafc1b88de\") " pod="openstack/ceilometer-0"
Jan 27 14:54:52 crc kubenswrapper[4698]: I0127 14:54:52.235800 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c631146d-5feb-4fae-905a-56cafc1b88de-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"c631146d-5feb-4fae-905a-56cafc1b88de\") " pod="openstack/ceilometer-0"
Jan 27 14:54:52 crc kubenswrapper[4698]: I0127 14:54:52.245927 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tbz55\" (UniqueName: \"kubernetes.io/projected/c631146d-5feb-4fae-905a-56cafc1b88de-kube-api-access-tbz55\") pod \"ceilometer-0\" (UID: \"c631146d-5feb-4fae-905a-56cafc1b88de\") " pod="openstack/ceilometer-0"
Jan 27 14:54:52 crc kubenswrapper[4698]: I0127 14:54:52.406886 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Jan 27 14:54:52 crc kubenswrapper[4698]: W0127 14:54:52.906383 4698 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc631146d_5feb_4fae_905a_56cafc1b88de.slice/crio-d4d53712092089588c4d9bd64708b1a57ef4020451d01302c4cdaf6f774e0a2e WatchSource:0}: Error finding container d4d53712092089588c4d9bd64708b1a57ef4020451d01302c4cdaf6f774e0a2e: Status 404 returned error can't find the container with id d4d53712092089588c4d9bd64708b1a57ef4020451d01302c4cdaf6f774e0a2e
Jan 27 14:54:52 crc kubenswrapper[4698]: I0127 14:54:52.907491 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Jan 27 14:54:53 crc kubenswrapper[4698]: I0127 14:54:53.005567 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8a12bad2-2df8-499d-aca4-3fe31bba54df" path="/var/lib/kubelet/pods/8a12bad2-2df8-499d-aca4-3fe31bba54df/volumes"
Jan 27 14:54:53 crc kubenswrapper[4698]: I0127 14:54:53.675386 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c631146d-5feb-4fae-905a-56cafc1b88de","Type":"ContainerStarted","Data":"d4d53712092089588c4d9bd64708b1a57ef4020451d01302c4cdaf6f774e0a2e"}
Jan 27 14:54:54 crc kubenswrapper[4698]: I0127 14:54:54.688991 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c631146d-5feb-4fae-905a-56cafc1b88de","Type":"ContainerStarted","Data":"fa69a5bf5042c8d35723908d8fee1138abf3dbd6f0d5a35a99b63d54275ca5ba"}
Jan 27 14:54:56 crc kubenswrapper[4698]: I0127 14:54:56.708567 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c631146d-5feb-4fae-905a-56cafc1b88de","Type":"ContainerStarted","Data":"1d5ada5c7f8e44a89fd796019d6c5738d39449774fbd806240065d7e577b955b"}
Jan 27 14:54:57 crc kubenswrapper[4698]: I0127 14:54:57.451669 4698 patch_prober.go:28] interesting pod/machine-config-daemon-ndrd6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 27 14:54:57 crc kubenswrapper[4698]: I0127 14:54:57.451737 4698 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ndrd6" podUID="3e403fc5-7005-474c-8c75-b7906b481677" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 27 14:54:59 crc kubenswrapper[4698]: I0127 14:54:59.840373 4698 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/neutron-6cf989b698-8jzjn" podUID="555e2415-b86d-425b-a345-2a2e8c9ef212" containerName="neutron-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503"
Jan 27 14:54:59 crc kubenswrapper[4698]: I0127 14:54:59.840571 4698 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/neutron-6cf989b698-8jzjn" podUID="555e2415-b86d-425b-a345-2a2e8c9ef212" containerName="neutron-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503"
Jan 27 14:54:59 crc kubenswrapper[4698]: I0127 14:54:59.841994 4698 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/neutron-6cf989b698-8jzjn" podUID="555e2415-b86d-425b-a345-2a2e8c9ef212" containerName="neutron-api" probeResult="failure" output="HTTP probe failed with statuscode: 503"
Jan 27 14:55:02 crc kubenswrapper[4698]: I0127 14:55:02.783481 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c631146d-5feb-4fae-905a-56cafc1b88de","Type":"ContainerStarted","Data":"ce1abc710ecfe8f6dfe33f022805bc933ca5c9db5c3958ea6eb6e46720835842"}
Jan 27 14:55:03 crc kubenswrapper[4698]: I0127 14:55:03.827654 4698 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/neutron-57bdd5f-5p47q" podUID="c64faec6-26c1-4556-bcfb-707840ac0863" containerName="neutron-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503"
Jan 27 14:55:03 crc kubenswrapper[4698]: I0127 14:55:03.832867 4698 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/neutron-57bdd5f-5p47q" podUID="c64faec6-26c1-4556-bcfb-707840ac0863" containerName="neutron-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503"
Jan 27 14:55:03 crc kubenswrapper[4698]: I0127 14:55:03.832970 4698 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/neutron-57bdd5f-5p47q" podUID="c64faec6-26c1-4556-bcfb-707840ac0863" containerName="neutron-api" probeResult="failure" output="HTTP probe failed with statuscode: 503"
Jan 27 14:55:06 crc kubenswrapper[4698]: I0127 14:55:06.133954 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Jan 27 14:55:06 crc kubenswrapper[4698]: I0127 14:55:06.827813 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c631146d-5feb-4fae-905a-56cafc1b88de","Type":"ContainerStarted","Data":"c53082c0fe65a9c4187732127e901b6a66c4f49a88c3f3d835f3f0a1b80971ea"}
Jan 27 14:55:06 crc kubenswrapper[4698]: I0127 14:55:06.828013 4698 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="c631146d-5feb-4fae-905a-56cafc1b88de" containerName="ceilometer-central-agent" containerID="cri-o://fa69a5bf5042c8d35723908d8fee1138abf3dbd6f0d5a35a99b63d54275ca5ba" gracePeriod=30
Jan 27 14:55:06 crc kubenswrapper[4698]: I0127 14:55:06.828087 4698 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="c631146d-5feb-4fae-905a-56cafc1b88de" containerName="proxy-httpd" containerID="cri-o://c53082c0fe65a9c4187732127e901b6a66c4f49a88c3f3d835f3f0a1b80971ea" gracePeriod=30
Jan 27 14:55:06 crc kubenswrapper[4698]: I0127 14:55:06.828101 4698 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="c631146d-5feb-4fae-905a-56cafc1b88de" containerName="sg-core" containerID="cri-o://ce1abc710ecfe8f6dfe33f022805bc933ca5c9db5c3958ea6eb6e46720835842" gracePeriod=30
Jan 27 14:55:06 crc kubenswrapper[4698]: I0127 14:55:06.828112 4698 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="c631146d-5feb-4fae-905a-56cafc1b88de" containerName="ceilometer-notification-agent" containerID="cri-o://1d5ada5c7f8e44a89fd796019d6c5738d39449774fbd806240065d7e577b955b" gracePeriod=30
Jan 27 14:55:06 crc kubenswrapper[4698]: I0127 14:55:06.828352 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0"
Jan 27 14:55:06 crc kubenswrapper[4698]: I0127 14:55:06.858074 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.201535714 podStartE2EDuration="14.85805631s" podCreationTimestamp="2026-01-27 14:54:52 +0000 UTC" firstStartedPulling="2026-01-27 14:54:52.90917782 +0000 UTC m=+1548.585955285" lastFinishedPulling="2026-01-27 14:55:05.565698416 +0000 UTC m=+1561.242475881" observedRunningTime="2026-01-27 14:55:06.851448827 +0000 UTC m=+1562.528226302" watchObservedRunningTime="2026-01-27 14:55:06.85805631 +0000 UTC m=+1562.534833765"
Jan 27 14:55:07 crc kubenswrapper[4698]: I0127 14:55:07.838485 4698 generic.go:334] "Generic (PLEG): container finished" podID="c631146d-5feb-4fae-905a-56cafc1b88de" containerID="ce1abc710ecfe8f6dfe33f022805bc933ca5c9db5c3958ea6eb6e46720835842" exitCode=2
Jan 27 14:55:07 crc kubenswrapper[4698]: I0127 14:55:07.838568 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c631146d-5feb-4fae-905a-56cafc1b88de","Type":"ContainerDied","Data":"ce1abc710ecfe8f6dfe33f022805bc933ca5c9db5c3958ea6eb6e46720835842"}
Jan 27 14:55:08 crc kubenswrapper[4698]: I0127 14:55:08.851913 4698 generic.go:334] "Generic (PLEG): container finished" podID="c631146d-5feb-4fae-905a-56cafc1b88de" containerID="1d5ada5c7f8e44a89fd796019d6c5738d39449774fbd806240065d7e577b955b" exitCode=0
Jan 27 14:55:08 crc kubenswrapper[4698]: I0127 14:55:08.851965 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c631146d-5feb-4fae-905a-56cafc1b88de","Type":"ContainerDied","Data":"1d5ada5c7f8e44a89fd796019d6c5738d39449774fbd806240065d7e577b955b"}
Jan 27 14:55:09 crc kubenswrapper[4698]: I0127 14:55:09.863999 4698 generic.go:334] "Generic (PLEG): container finished" podID="c631146d-5feb-4fae-905a-56cafc1b88de" containerID="fa69a5bf5042c8d35723908d8fee1138abf3dbd6f0d5a35a99b63d54275ca5ba" exitCode=0
Jan 27 14:55:09 crc kubenswrapper[4698]: I0127 14:55:09.864037 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c631146d-5feb-4fae-905a-56cafc1b88de","Type":"ContainerDied","Data":"fa69a5bf5042c8d35723908d8fee1138abf3dbd6f0d5a35a99b63d54275ca5ba"}
Jan 27 14:55:22 crc kubenswrapper[4698]: I0127 14:55:22.426690 4698 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="c631146d-5feb-4fae-905a-56cafc1b88de" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503"
Jan 27 14:55:27 crc kubenswrapper[4698]: I0127 14:55:27.452053 4698 patch_prober.go:28] interesting pod/machine-config-daemon-ndrd6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 27 14:55:27 crc kubenswrapper[4698]: I0127 14:55:27.454158 4698 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ndrd6" podUID="3e403fc5-7005-474c-8c75-b7906b481677" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 27 14:55:29 crc kubenswrapper[4698]: I0127 14:55:29.842851 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-6cf989b698-8jzjn"
Jan 27 14:55:33 crc kubenswrapper[4698]: I0127 14:55:33.827563 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-57bdd5f-5p47q"
Jan 27 14:55:33 crc kubenswrapper[4698]: I0127 14:55:33.891873 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-6cf989b698-8jzjn"]
Jan 27 14:55:33 crc kubenswrapper[4698]: I0127 14:55:33.892151 4698 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-6cf989b698-8jzjn" podUID="555e2415-b86d-425b-a345-2a2e8c9ef212" containerName="neutron-api" containerID="cri-o://15495173a7c1edcc88a6d5dd16282d769282e8450ab0c58c553a6c6c0375c499" gracePeriod=30
Jan 27 14:55:33 crc kubenswrapper[4698]: I0127 14:55:33.895702 4698 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-6cf989b698-8jzjn" podUID="555e2415-b86d-425b-a345-2a2e8c9ef212" containerName="neutron-httpd" containerID="cri-o://05e9d9477f958662385830ec9572998a8aaa1c565f451afd2377ba72b6a49f9e" gracePeriod=30
Jan 27 14:55:34 crc kubenswrapper[4698]: I0127 14:55:34.151781 4698 generic.go:334] "Generic (PLEG): container finished" podID="555e2415-b86d-425b-a345-2a2e8c9ef212" containerID="05e9d9477f958662385830ec9572998a8aaa1c565f451afd2377ba72b6a49f9e" exitCode=0
Jan 27 14:55:34 crc kubenswrapper[4698]: I0127 14:55:34.152060 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6cf989b698-8jzjn" event={"ID":"555e2415-b86d-425b-a345-2a2e8c9ef212","Type":"ContainerDied","Data":"05e9d9477f958662385830ec9572998a8aaa1c565f451afd2377ba72b6a49f9e"}
Jan 27 14:55:37 crc kubenswrapper[4698]: I0127 14:55:37.179784 4698 generic.go:334] "Generic (PLEG): container finished" podID="c631146d-5feb-4fae-905a-56cafc1b88de" containerID="c53082c0fe65a9c4187732127e901b6a66c4f49a88c3f3d835f3f0a1b80971ea" exitCode=137
Jan 27 14:55:37 crc kubenswrapper[4698]: I0127 14:55:37.179836 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c631146d-5feb-4fae-905a-56cafc1b88de","Type":"ContainerDied","Data":"c53082c0fe65a9c4187732127e901b6a66c4f49a88c3f3d835f3f0a1b80971ea"}
Jan 27 14:55:37 crc kubenswrapper[4698]: I0127 14:55:37.880192 4698 util.go:48] "No ready sandbox for pod can be found.
Need to start a new one" pod="openstack/ceilometer-0" Jan 27 14:55:37 crc kubenswrapper[4698]: I0127 14:55:37.945690 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c631146d-5feb-4fae-905a-56cafc1b88de-run-httpd\") pod \"c631146d-5feb-4fae-905a-56cafc1b88de\" (UID: \"c631146d-5feb-4fae-905a-56cafc1b88de\") " Jan 27 14:55:37 crc kubenswrapper[4698]: I0127 14:55:37.945853 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c631146d-5feb-4fae-905a-56cafc1b88de-combined-ca-bundle\") pod \"c631146d-5feb-4fae-905a-56cafc1b88de\" (UID: \"c631146d-5feb-4fae-905a-56cafc1b88de\") " Jan 27 14:55:37 crc kubenswrapper[4698]: I0127 14:55:37.945930 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c631146d-5feb-4fae-905a-56cafc1b88de-sg-core-conf-yaml\") pod \"c631146d-5feb-4fae-905a-56cafc1b88de\" (UID: \"c631146d-5feb-4fae-905a-56cafc1b88de\") " Jan 27 14:55:37 crc kubenswrapper[4698]: I0127 14:55:37.945997 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c631146d-5feb-4fae-905a-56cafc1b88de-log-httpd\") pod \"c631146d-5feb-4fae-905a-56cafc1b88de\" (UID: \"c631146d-5feb-4fae-905a-56cafc1b88de\") " Jan 27 14:55:37 crc kubenswrapper[4698]: I0127 14:55:37.946065 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/c631146d-5feb-4fae-905a-56cafc1b88de-ceilometer-tls-certs\") pod \"c631146d-5feb-4fae-905a-56cafc1b88de\" (UID: \"c631146d-5feb-4fae-905a-56cafc1b88de\") " Jan 27 14:55:37 crc kubenswrapper[4698]: I0127 14:55:37.946104 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tbz55\" (UniqueName: \"kubernetes.io/projected/c631146d-5feb-4fae-905a-56cafc1b88de-kube-api-access-tbz55\") pod \"c631146d-5feb-4fae-905a-56cafc1b88de\" (UID: \"c631146d-5feb-4fae-905a-56cafc1b88de\") " Jan 27 14:55:37 crc kubenswrapper[4698]: I0127 14:55:37.946309 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c631146d-5feb-4fae-905a-56cafc1b88de-config-data\") pod \"c631146d-5feb-4fae-905a-56cafc1b88de\" (UID: \"c631146d-5feb-4fae-905a-56cafc1b88de\") " Jan 27 14:55:37 crc kubenswrapper[4698]: I0127 14:55:37.946335 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c631146d-5feb-4fae-905a-56cafc1b88de-scripts\") pod \"c631146d-5feb-4fae-905a-56cafc1b88de\" (UID: \"c631146d-5feb-4fae-905a-56cafc1b88de\") " Jan 27 14:55:37 crc kubenswrapper[4698]: I0127 14:55:37.946505 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c631146d-5feb-4fae-905a-56cafc1b88de-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "c631146d-5feb-4fae-905a-56cafc1b88de" (UID: "c631146d-5feb-4fae-905a-56cafc1b88de"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 14:55:37 crc kubenswrapper[4698]: I0127 14:55:37.946854 4698 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c631146d-5feb-4fae-905a-56cafc1b88de-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 27 14:55:37 crc kubenswrapper[4698]: I0127 14:55:37.947195 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c631146d-5feb-4fae-905a-56cafc1b88de-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "c631146d-5feb-4fae-905a-56cafc1b88de" (UID: "c631146d-5feb-4fae-905a-56cafc1b88de"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 14:55:37 crc kubenswrapper[4698]: I0127 14:55:37.952367 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c631146d-5feb-4fae-905a-56cafc1b88de-kube-api-access-tbz55" (OuterVolumeSpecName: "kube-api-access-tbz55") pod "c631146d-5feb-4fae-905a-56cafc1b88de" (UID: "c631146d-5feb-4fae-905a-56cafc1b88de"). InnerVolumeSpecName "kube-api-access-tbz55". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:55:37 crc kubenswrapper[4698]: I0127 14:55:37.953987 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c631146d-5feb-4fae-905a-56cafc1b88de-scripts" (OuterVolumeSpecName: "scripts") pod "c631146d-5feb-4fae-905a-56cafc1b88de" (UID: "c631146d-5feb-4fae-905a-56cafc1b88de"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:55:37 crc kubenswrapper[4698]: I0127 14:55:37.988054 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c631146d-5feb-4fae-905a-56cafc1b88de-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "c631146d-5feb-4fae-905a-56cafc1b88de" (UID: "c631146d-5feb-4fae-905a-56cafc1b88de"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:55:38 crc kubenswrapper[4698]: I0127 14:55:38.029614 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c631146d-5feb-4fae-905a-56cafc1b88de-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "c631146d-5feb-4fae-905a-56cafc1b88de" (UID: "c631146d-5feb-4fae-905a-56cafc1b88de"). InnerVolumeSpecName "ceilometer-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:55:38 crc kubenswrapper[4698]: I0127 14:55:38.049962 4698 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c631146d-5feb-4fae-905a-56cafc1b88de-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 14:55:38 crc kubenswrapper[4698]: I0127 14:55:38.050012 4698 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c631146d-5feb-4fae-905a-56cafc1b88de-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 27 14:55:38 crc kubenswrapper[4698]: I0127 14:55:38.050025 4698 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c631146d-5feb-4fae-905a-56cafc1b88de-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 27 14:55:38 crc kubenswrapper[4698]: I0127 14:55:38.050037 4698 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/c631146d-5feb-4fae-905a-56cafc1b88de-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 27 14:55:38 crc kubenswrapper[4698]: I0127 14:55:38.050050 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tbz55\" (UniqueName: \"kubernetes.io/projected/c631146d-5feb-4fae-905a-56cafc1b88de-kube-api-access-tbz55\") on node \"crc\" DevicePath \"\"" Jan 27 14:55:38 crc kubenswrapper[4698]: I0127 14:55:38.071901 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c631146d-5feb-4fae-905a-56cafc1b88de-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c631146d-5feb-4fae-905a-56cafc1b88de" (UID: "c631146d-5feb-4fae-905a-56cafc1b88de"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:55:38 crc kubenswrapper[4698]: I0127 14:55:38.088208 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c631146d-5feb-4fae-905a-56cafc1b88de-config-data" (OuterVolumeSpecName: "config-data") pod "c631146d-5feb-4fae-905a-56cafc1b88de" (UID: "c631146d-5feb-4fae-905a-56cafc1b88de"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:55:38 crc kubenswrapper[4698]: I0127 14:55:38.152341 4698 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c631146d-5feb-4fae-905a-56cafc1b88de-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 14:55:38 crc kubenswrapper[4698]: I0127 14:55:38.152411 4698 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c631146d-5feb-4fae-905a-56cafc1b88de-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 14:55:38 crc kubenswrapper[4698]: I0127 14:55:38.206297 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c631146d-5feb-4fae-905a-56cafc1b88de","Type":"ContainerDied","Data":"d4d53712092089588c4d9bd64708b1a57ef4020451d01302c4cdaf6f774e0a2e"} Jan 27 14:55:38 crc kubenswrapper[4698]: I0127 14:55:38.206361 4698 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 27 14:55:38 crc kubenswrapper[4698]: I0127 14:55:38.206370 4698 scope.go:117] "RemoveContainer" containerID="c53082c0fe65a9c4187732127e901b6a66c4f49a88c3f3d835f3f0a1b80971ea" Jan 27 14:55:38 crc kubenswrapper[4698]: I0127 14:55:38.244099 4698 scope.go:117] "RemoveContainer" containerID="ce1abc710ecfe8f6dfe33f022805bc933ca5c9db5c3958ea6eb6e46720835842" Jan 27 14:55:38 crc kubenswrapper[4698]: I0127 14:55:38.251558 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 27 14:55:38 crc kubenswrapper[4698]: I0127 14:55:38.263330 4698 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 27 14:55:38 crc kubenswrapper[4698]: I0127 14:55:38.286119 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 27 14:55:38 crc kubenswrapper[4698]: E0127 14:55:38.286977 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c631146d-5feb-4fae-905a-56cafc1b88de" containerName="proxy-httpd" Jan 27 14:55:38 crc kubenswrapper[4698]: I0127 14:55:38.287007 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="c631146d-5feb-4fae-905a-56cafc1b88de" containerName="proxy-httpd" Jan 27 14:55:38 crc kubenswrapper[4698]: E0127 14:55:38.287032 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c631146d-5feb-4fae-905a-56cafc1b88de" containerName="ceilometer-notification-agent" Jan 27 14:55:38 crc kubenswrapper[4698]: I0127 14:55:38.287041 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="c631146d-5feb-4fae-905a-56cafc1b88de" containerName="ceilometer-notification-agent" Jan 27 14:55:38 crc kubenswrapper[4698]: E0127 14:55:38.287059 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c631146d-5feb-4fae-905a-56cafc1b88de" containerName="sg-core" Jan 27 14:55:38 crc kubenswrapper[4698]: I0127 14:55:38.287067 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="c631146d-5feb-4fae-905a-56cafc1b88de" containerName="sg-core" Jan 27 14:55:38 crc kubenswrapper[4698]: E0127 14:55:38.287089 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c631146d-5feb-4fae-905a-56cafc1b88de" containerName="ceilometer-central-agent" Jan 27 14:55:38 crc kubenswrapper[4698]: I0127 14:55:38.287096 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="c631146d-5feb-4fae-905a-56cafc1b88de" containerName="ceilometer-central-agent" Jan 27 14:55:38 crc kubenswrapper[4698]: I0127 14:55:38.287340 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="c631146d-5feb-4fae-905a-56cafc1b88de" containerName="ceilometer-notification-agent" Jan 27 14:55:38 crc kubenswrapper[4698]: I0127 14:55:38.287372 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="c631146d-5feb-4fae-905a-56cafc1b88de" containerName="ceilometer-central-agent" Jan 27 14:55:38 crc kubenswrapper[4698]: I0127 14:55:38.287552 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="c631146d-5feb-4fae-905a-56cafc1b88de" containerName="sg-core" Jan 27 14:55:38 crc kubenswrapper[4698]: I0127 14:55:38.287581 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="c631146d-5feb-4fae-905a-56cafc1b88de" containerName="proxy-httpd" Jan 27 14:55:38 crc kubenswrapper[4698]: I0127 14:55:38.290006 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 27 14:55:38 crc kubenswrapper[4698]: I0127 14:55:38.296563 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 27 14:55:38 crc kubenswrapper[4698]: I0127 14:55:38.296724 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Jan 27 14:55:38 crc kubenswrapper[4698]: I0127 14:55:38.296905 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 27 14:55:38 crc kubenswrapper[4698]: I0127 14:55:38.298927 4698 scope.go:117] "RemoveContainer" containerID="1d5ada5c7f8e44a89fd796019d6c5738d39449774fbd806240065d7e577b955b" Jan 27 14:55:38 crc kubenswrapper[4698]: I0127 14:55:38.332593 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 27 14:55:38 crc kubenswrapper[4698]: I0127 14:55:38.353435 4698 scope.go:117] "RemoveContainer" containerID="fa69a5bf5042c8d35723908d8fee1138abf3dbd6f0d5a35a99b63d54275ca5ba" Jan 27 14:55:38 crc kubenswrapper[4698]: I0127 14:55:38.356038 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/505ff24d-a299-4255-8d5a-9b52ff443b07-run-httpd\") pod \"ceilometer-0\" (UID: \"505ff24d-a299-4255-8d5a-9b52ff443b07\") " pod="openstack/ceilometer-0" Jan 27 14:55:38 crc kubenswrapper[4698]: I0127 14:55:38.356095 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/505ff24d-a299-4255-8d5a-9b52ff443b07-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"505ff24d-a299-4255-8d5a-9b52ff443b07\") " pod="openstack/ceilometer-0" Jan 27 14:55:38 crc kubenswrapper[4698]: I0127 14:55:38.356171 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/505ff24d-a299-4255-8d5a-9b52ff443b07-log-httpd\") pod \"ceilometer-0\" (UID: \"505ff24d-a299-4255-8d5a-9b52ff443b07\") " pod="openstack/ceilometer-0" Jan 27 14:55:38 crc kubenswrapper[4698]: I0127 14:55:38.356277 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/505ff24d-a299-4255-8d5a-9b52ff443b07-scripts\") pod \"ceilometer-0\" (UID: \"505ff24d-a299-4255-8d5a-9b52ff443b07\") " pod="openstack/ceilometer-0" Jan 27 14:55:38 crc kubenswrapper[4698]: I0127 14:55:38.356357 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/505ff24d-a299-4255-8d5a-9b52ff443b07-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"505ff24d-a299-4255-8d5a-9b52ff443b07\") " pod="openstack/ceilometer-0" Jan 27 14:55:38 crc kubenswrapper[4698]: I0127 14:55:38.356407 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/505ff24d-a299-4255-8d5a-9b52ff443b07-config-data\") pod \"ceilometer-0\" (UID: \"505ff24d-a299-4255-8d5a-9b52ff443b07\") " pod="openstack/ceilometer-0" Jan 27 14:55:38 crc kubenswrapper[4698]: I0127 14:55:38.356431 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wh775\" (UniqueName: 
\"kubernetes.io/projected/505ff24d-a299-4255-8d5a-9b52ff443b07-kube-api-access-wh775\") pod \"ceilometer-0\" (UID: \"505ff24d-a299-4255-8d5a-9b52ff443b07\") " pod="openstack/ceilometer-0" Jan 27 14:55:38 crc kubenswrapper[4698]: I0127 14:55:38.356505 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/505ff24d-a299-4255-8d5a-9b52ff443b07-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"505ff24d-a299-4255-8d5a-9b52ff443b07\") " pod="openstack/ceilometer-0" Jan 27 14:55:38 crc kubenswrapper[4698]: I0127 14:55:38.458410 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/505ff24d-a299-4255-8d5a-9b52ff443b07-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"505ff24d-a299-4255-8d5a-9b52ff443b07\") " pod="openstack/ceilometer-0" Jan 27 14:55:38 crc kubenswrapper[4698]: I0127 14:55:38.458483 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/505ff24d-a299-4255-8d5a-9b52ff443b07-config-data\") pod \"ceilometer-0\" (UID: \"505ff24d-a299-4255-8d5a-9b52ff443b07\") " pod="openstack/ceilometer-0" Jan 27 14:55:38 crc kubenswrapper[4698]: I0127 14:55:38.458509 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wh775\" (UniqueName: \"kubernetes.io/projected/505ff24d-a299-4255-8d5a-9b52ff443b07-kube-api-access-wh775\") pod \"ceilometer-0\" (UID: \"505ff24d-a299-4255-8d5a-9b52ff443b07\") " pod="openstack/ceilometer-0" Jan 27 14:55:38 crc kubenswrapper[4698]: I0127 14:55:38.458538 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/505ff24d-a299-4255-8d5a-9b52ff443b07-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"505ff24d-a299-4255-8d5a-9b52ff443b07\") " pod="openstack/ceilometer-0" Jan 27 14:55:38 crc kubenswrapper[4698]: I0127 14:55:38.458593 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/505ff24d-a299-4255-8d5a-9b52ff443b07-run-httpd\") pod \"ceilometer-0\" (UID: \"505ff24d-a299-4255-8d5a-9b52ff443b07\") " pod="openstack/ceilometer-0" Jan 27 14:55:38 crc kubenswrapper[4698]: I0127 14:55:38.458621 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/505ff24d-a299-4255-8d5a-9b52ff443b07-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"505ff24d-a299-4255-8d5a-9b52ff443b07\") " pod="openstack/ceilometer-0" Jan 27 14:55:38 crc kubenswrapper[4698]: I0127 14:55:38.458686 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/505ff24d-a299-4255-8d5a-9b52ff443b07-log-httpd\") pod \"ceilometer-0\" (UID: \"505ff24d-a299-4255-8d5a-9b52ff443b07\") " pod="openstack/ceilometer-0" Jan 27 14:55:38 crc kubenswrapper[4698]: I0127 14:55:38.458734 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/505ff24d-a299-4255-8d5a-9b52ff443b07-scripts\") pod \"ceilometer-0\" (UID: \"505ff24d-a299-4255-8d5a-9b52ff443b07\") " pod="openstack/ceilometer-0" Jan 27 14:55:38 crc kubenswrapper[4698]: I0127 14:55:38.460181 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/505ff24d-a299-4255-8d5a-9b52ff443b07-log-httpd\") pod \"ceilometer-0\" (UID: \"505ff24d-a299-4255-8d5a-9b52ff443b07\") " pod="openstack/ceilometer-0" Jan 27 14:55:38 crc kubenswrapper[4698]: I0127 14:55:38.460468 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/505ff24d-a299-4255-8d5a-9b52ff443b07-run-httpd\") pod \"ceilometer-0\" (UID: \"505ff24d-a299-4255-8d5a-9b52ff443b07\") " pod="openstack/ceilometer-0" Jan 27 14:55:38 crc kubenswrapper[4698]: I0127 14:55:38.464492 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/505ff24d-a299-4255-8d5a-9b52ff443b07-scripts\") pod \"ceilometer-0\" (UID: \"505ff24d-a299-4255-8d5a-9b52ff443b07\") " pod="openstack/ceilometer-0" Jan 27 14:55:38 crc kubenswrapper[4698]: I0127 14:55:38.465219 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/505ff24d-a299-4255-8d5a-9b52ff443b07-config-data\") pod \"ceilometer-0\" (UID: \"505ff24d-a299-4255-8d5a-9b52ff443b07\") " pod="openstack/ceilometer-0" Jan 27 14:55:38 crc kubenswrapper[4698]: I0127 14:55:38.477578 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/505ff24d-a299-4255-8d5a-9b52ff443b07-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"505ff24d-a299-4255-8d5a-9b52ff443b07\") " pod="openstack/ceilometer-0" Jan 27 14:55:38 crc kubenswrapper[4698]: I0127 14:55:38.482302 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/505ff24d-a299-4255-8d5a-9b52ff443b07-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"505ff24d-a299-4255-8d5a-9b52ff443b07\") " pod="openstack/ceilometer-0" Jan 27 14:55:38 crc kubenswrapper[4698]: I0127 14:55:38.482746 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/505ff24d-a299-4255-8d5a-9b52ff443b07-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"505ff24d-a299-4255-8d5a-9b52ff443b07\") " pod="openstack/ceilometer-0" Jan 27 14:55:38 crc kubenswrapper[4698]: I0127 14:55:38.486316 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wh775\" (UniqueName: \"kubernetes.io/projected/505ff24d-a299-4255-8d5a-9b52ff443b07-kube-api-access-wh775\") pod \"ceilometer-0\" (UID: \"505ff24d-a299-4255-8d5a-9b52ff443b07\") " pod="openstack/ceilometer-0" Jan 27 14:55:38 crc kubenswrapper[4698]: I0127 14:55:38.636951 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 27 14:55:39 crc kubenswrapper[4698]: I0127 14:55:39.005748 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c631146d-5feb-4fae-905a-56cafc1b88de" path="/var/lib/kubelet/pods/c631146d-5feb-4fae-905a-56cafc1b88de/volumes" Jan 27 14:55:39 crc kubenswrapper[4698]: I0127 14:55:39.160662 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 27 14:55:39 crc kubenswrapper[4698]: W0127 14:55:39.171131 4698 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod505ff24d_a299_4255_8d5a_9b52ff443b07.slice/crio-6af873d864b983b2c93f61ec532fba1a4dbaaf6a088f8b5b826d0258f12999ae WatchSource:0}: Error finding container 6af873d864b983b2c93f61ec532fba1a4dbaaf6a088f8b5b826d0258f12999ae: Status 404 returned error can't find the container with id 6af873d864b983b2c93f61ec532fba1a4dbaaf6a088f8b5b826d0258f12999ae Jan 27 14:55:39 crc kubenswrapper[4698]: I0127 14:55:39.216717 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"505ff24d-a299-4255-8d5a-9b52ff443b07","Type":"ContainerStarted","Data":"6af873d864b983b2c93f61ec532fba1a4dbaaf6a088f8b5b826d0258f12999ae"} Jan 27 14:55:40 crc kubenswrapper[4698]: I0127 14:55:40.229391 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"505ff24d-a299-4255-8d5a-9b52ff443b07","Type":"ContainerStarted","Data":"fb6994cb77133b27c8a71f10eb6f4cd470263f59f13bbe133616985cd6234eda"} Jan 27 14:55:40 crc kubenswrapper[4698]: I0127 14:55:40.229747 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"505ff24d-a299-4255-8d5a-9b52ff443b07","Type":"ContainerStarted","Data":"e6df9b63745974a70ed0bb853ac72a9239422beda7cf2d6bae433ebab8755dcd"} Jan 27 14:55:41 crc kubenswrapper[4698]: I0127 14:55:41.240237 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"505ff24d-a299-4255-8d5a-9b52ff443b07","Type":"ContainerStarted","Data":"b2b8cd840a45ba094322628f04c4f561ab625b79c1108d05cbed88532a4cf2aa"} Jan 27 14:55:43 crc kubenswrapper[4698]: I0127 14:55:43.266947 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"505ff24d-a299-4255-8d5a-9b52ff443b07","Type":"ContainerStarted","Data":"a24516508216c0b1b1455018be375baab57b2b3772c29a795e79d4a700773801"} Jan 27 14:55:43 crc kubenswrapper[4698]: I0127 14:55:43.267570 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 27 14:55:43 crc kubenswrapper[4698]: I0127 14:55:43.294252 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=1.6103543660000001 podStartE2EDuration="5.29423453s" podCreationTimestamp="2026-01-27 14:55:38 +0000 UTC" firstStartedPulling="2026-01-27 14:55:39.174091547 +0000 UTC m=+1594.850869012" lastFinishedPulling="2026-01-27 14:55:42.857971711 +0000 UTC m=+1598.534749176" observedRunningTime="2026-01-27 14:55:43.291100327 +0000 UTC m=+1598.967877802" watchObservedRunningTime="2026-01-27 14:55:43.29423453 +0000 UTC m=+1598.971011995" Jan 27 14:55:43 crc kubenswrapper[4698]: I0127 14:55:43.795030 4698 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-6cf989b698-8jzjn" Jan 27 14:55:43 crc kubenswrapper[4698]: I0127 14:55:43.875681 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/555e2415-b86d-425b-a345-2a2e8c9ef212-httpd-config\") pod \"555e2415-b86d-425b-a345-2a2e8c9ef212\" (UID: \"555e2415-b86d-425b-a345-2a2e8c9ef212\") " Jan 27 14:55:43 crc kubenswrapper[4698]: I0127 14:55:43.875790 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/555e2415-b86d-425b-a345-2a2e8c9ef212-combined-ca-bundle\") pod \"555e2415-b86d-425b-a345-2a2e8c9ef212\" (UID: \"555e2415-b86d-425b-a345-2a2e8c9ef212\") " Jan 27 14:55:43 crc kubenswrapper[4698]: I0127 14:55:43.875922 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/555e2415-b86d-425b-a345-2a2e8c9ef212-ovndb-tls-certs\") pod \"555e2415-b86d-425b-a345-2a2e8c9ef212\" (UID: \"555e2415-b86d-425b-a345-2a2e8c9ef212\") " Jan 27 14:55:43 crc kubenswrapper[4698]: I0127 14:55:43.875978 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/555e2415-b86d-425b-a345-2a2e8c9ef212-config\") pod \"555e2415-b86d-425b-a345-2a2e8c9ef212\" (UID: \"555e2415-b86d-425b-a345-2a2e8c9ef212\") " Jan 27 14:55:43 crc kubenswrapper[4698]: I0127 14:55:43.876129 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xh9qp\" (UniqueName: \"kubernetes.io/projected/555e2415-b86d-425b-a345-2a2e8c9ef212-kube-api-access-xh9qp\") pod \"555e2415-b86d-425b-a345-2a2e8c9ef212\" (UID: \"555e2415-b86d-425b-a345-2a2e8c9ef212\") " Jan 27 14:55:43 crc kubenswrapper[4698]: I0127 14:55:43.883047 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/555e2415-b86d-425b-a345-2a2e8c9ef212-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "555e2415-b86d-425b-a345-2a2e8c9ef212" (UID: "555e2415-b86d-425b-a345-2a2e8c9ef212"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:55:43 crc kubenswrapper[4698]: I0127 14:55:43.884695 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/555e2415-b86d-425b-a345-2a2e8c9ef212-kube-api-access-xh9qp" (OuterVolumeSpecName: "kube-api-access-xh9qp") pod "555e2415-b86d-425b-a345-2a2e8c9ef212" (UID: "555e2415-b86d-425b-a345-2a2e8c9ef212"). InnerVolumeSpecName "kube-api-access-xh9qp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:55:43 crc kubenswrapper[4698]: I0127 14:55:43.950230 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/555e2415-b86d-425b-a345-2a2e8c9ef212-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "555e2415-b86d-425b-a345-2a2e8c9ef212" (UID: "555e2415-b86d-425b-a345-2a2e8c9ef212"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:55:43 crc kubenswrapper[4698]: I0127 14:55:43.954589 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/555e2415-b86d-425b-a345-2a2e8c9ef212-config" (OuterVolumeSpecName: "config") pod "555e2415-b86d-425b-a345-2a2e8c9ef212" (UID: "555e2415-b86d-425b-a345-2a2e8c9ef212"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:55:43 crc kubenswrapper[4698]: I0127 14:55:43.972857 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/555e2415-b86d-425b-a345-2a2e8c9ef212-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "555e2415-b86d-425b-a345-2a2e8c9ef212" (UID: "555e2415-b86d-425b-a345-2a2e8c9ef212"). InnerVolumeSpecName "ovndb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:55:43 crc kubenswrapper[4698]: I0127 14:55:43.978408 4698 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/555e2415-b86d-425b-a345-2a2e8c9ef212-httpd-config\") on node \"crc\" DevicePath \"\"" Jan 27 14:55:43 crc kubenswrapper[4698]: I0127 14:55:43.978448 4698 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/555e2415-b86d-425b-a345-2a2e8c9ef212-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 14:55:43 crc kubenswrapper[4698]: I0127 14:55:43.978462 4698 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/555e2415-b86d-425b-a345-2a2e8c9ef212-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 27 14:55:43 crc kubenswrapper[4698]: I0127 14:55:43.978472 4698 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/555e2415-b86d-425b-a345-2a2e8c9ef212-config\") on node \"crc\" DevicePath \"\"" Jan 27 14:55:43 crc kubenswrapper[4698]: I0127 14:55:43.978482 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xh9qp\" (UniqueName: \"kubernetes.io/projected/555e2415-b86d-425b-a345-2a2e8c9ef212-kube-api-access-xh9qp\") on node \"crc\" DevicePath \"\"" Jan 27 14:55:44 crc kubenswrapper[4698]: I0127 14:55:44.280884 4698 generic.go:334] "Generic (PLEG): container finished" podID="555e2415-b86d-425b-a345-2a2e8c9ef212" containerID="15495173a7c1edcc88a6d5dd16282d769282e8450ab0c58c553a6c6c0375c499" exitCode=0 Jan 27 14:55:44 crc kubenswrapper[4698]: I0127 14:55:44.281177 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6cf989b698-8jzjn" event={"ID":"555e2415-b86d-425b-a345-2a2e8c9ef212","Type":"ContainerDied","Data":"15495173a7c1edcc88a6d5dd16282d769282e8450ab0c58c553a6c6c0375c499"} Jan 27 14:55:44 crc kubenswrapper[4698]: I0127 14:55:44.281234 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6cf989b698-8jzjn" event={"ID":"555e2415-b86d-425b-a345-2a2e8c9ef212","Type":"ContainerDied","Data":"aa11c386b4c8af38f5688d1e7c515cd38d332eacd65ae92a34487e8454dc066e"} Jan 27 14:55:44 crc kubenswrapper[4698]: I0127 14:55:44.281256 4698 scope.go:117] "RemoveContainer" containerID="05e9d9477f958662385830ec9572998a8aaa1c565f451afd2377ba72b6a49f9e" Jan 27 14:55:44 crc kubenswrapper[4698]: I0127 14:55:44.282462 4698 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-6cf989b698-8jzjn" Jan 27 14:55:44 crc kubenswrapper[4698]: I0127 14:55:44.305459 4698 scope.go:117] "RemoveContainer" containerID="15495173a7c1edcc88a6d5dd16282d769282e8450ab0c58c553a6c6c0375c499" Jan 27 14:55:44 crc kubenswrapper[4698]: I0127 14:55:44.327206 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-6cf989b698-8jzjn"] Jan 27 14:55:44 crc kubenswrapper[4698]: I0127 14:55:44.337005 4698 scope.go:117] "RemoveContainer" containerID="05e9d9477f958662385830ec9572998a8aaa1c565f451afd2377ba72b6a49f9e" Jan 27 14:55:44 crc kubenswrapper[4698]: E0127 14:55:44.337613 4698 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"05e9d9477f958662385830ec9572998a8aaa1c565f451afd2377ba72b6a49f9e\": container with ID starting with 05e9d9477f958662385830ec9572998a8aaa1c565f451afd2377ba72b6a49f9e not found: ID does not exist" containerID="05e9d9477f958662385830ec9572998a8aaa1c565f451afd2377ba72b6a49f9e" Jan 27 14:55:44 crc kubenswrapper[4698]: I0127 14:55:44.337742 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"05e9d9477f958662385830ec9572998a8aaa1c565f451afd2377ba72b6a49f9e"} err="failed to get container status \"05e9d9477f958662385830ec9572998a8aaa1c565f451afd2377ba72b6a49f9e\": rpc error: code = NotFound desc = could not find container \"05e9d9477f958662385830ec9572998a8aaa1c565f451afd2377ba72b6a49f9e\": container with ID starting with 05e9d9477f958662385830ec9572998a8aaa1c565f451afd2377ba72b6a49f9e not found: ID does not exist" Jan 27 14:55:44 crc kubenswrapper[4698]: I0127 14:55:44.337953 4698 scope.go:117] "RemoveContainer" containerID="15495173a7c1edcc88a6d5dd16282d769282e8450ab0c58c553a6c6c0375c499" Jan 27 14:55:44 crc kubenswrapper[4698]: E0127 14:55:44.338376 4698 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"15495173a7c1edcc88a6d5dd16282d769282e8450ab0c58c553a6c6c0375c499\": container with ID starting with 15495173a7c1edcc88a6d5dd16282d769282e8450ab0c58c553a6c6c0375c499 not found: ID does not exist" containerID="15495173a7c1edcc88a6d5dd16282d769282e8450ab0c58c553a6c6c0375c499" Jan 27 14:55:44 crc kubenswrapper[4698]: I0127 14:55:44.338406 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"15495173a7c1edcc88a6d5dd16282d769282e8450ab0c58c553a6c6c0375c499"} err="failed to get container status \"15495173a7c1edcc88a6d5dd16282d769282e8450ab0c58c553a6c6c0375c499\": rpc error: code = NotFound desc = could not find container \"15495173a7c1edcc88a6d5dd16282d769282e8450ab0c58c553a6c6c0375c499\": container with ID starting with 15495173a7c1edcc88a6d5dd16282d769282e8450ab0c58c553a6c6c0375c499 not found: ID does not exist" Jan 27 14:55:44 crc kubenswrapper[4698]: I0127 14:55:44.338959 4698 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-6cf989b698-8jzjn"] Jan 27 14:55:45 crc kubenswrapper[4698]: I0127 14:55:45.006695 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="555e2415-b86d-425b-a345-2a2e8c9ef212" path="/var/lib/kubelet/pods/555e2415-b86d-425b-a345-2a2e8c9ef212/volumes" Jan 27 14:55:55 crc kubenswrapper[4698]: I0127 14:55:55.613684 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-7zvwn"] Jan 27 14:55:55 crc kubenswrapper[4698]: E0127 14:55:55.614857 4698 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="555e2415-b86d-425b-a345-2a2e8c9ef212" containerName="neutron-httpd" Jan 27 14:55:55 crc kubenswrapper[4698]: I0127 14:55:55.614875 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="555e2415-b86d-425b-a345-2a2e8c9ef212" containerName="neutron-httpd" Jan 27 14:55:55 crc kubenswrapper[4698]: E0127 14:55:55.614901 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="555e2415-b86d-425b-a345-2a2e8c9ef212" containerName="neutron-api" Jan 27 14:55:55 crc kubenswrapper[4698]: I0127 14:55:55.614907 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="555e2415-b86d-425b-a345-2a2e8c9ef212" containerName="neutron-api" Jan 27 14:55:55 crc kubenswrapper[4698]: I0127 14:55:55.615123 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="555e2415-b86d-425b-a345-2a2e8c9ef212" containerName="neutron-api" Jan 27 14:55:55 crc kubenswrapper[4698]: I0127 14:55:55.615144 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="555e2415-b86d-425b-a345-2a2e8c9ef212" containerName="neutron-httpd" Jan 27 14:55:55 crc kubenswrapper[4698]: I0127 14:55:55.616602 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7zvwn" Jan 27 14:55:55 crc kubenswrapper[4698]: I0127 14:55:55.640711 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-7zvwn"] Jan 27 14:55:55 crc kubenswrapper[4698]: I0127 14:55:55.769105 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8369306e-5bbe-4677-b605-977728860668-utilities\") pod \"certified-operators-7zvwn\" (UID: \"8369306e-5bbe-4677-b605-977728860668\") " pod="openshift-marketplace/certified-operators-7zvwn" Jan 27 14:55:55 crc kubenswrapper[4698]: I0127 14:55:55.769261 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hb562\" (UniqueName: \"kubernetes.io/projected/8369306e-5bbe-4677-b605-977728860668-kube-api-access-hb562\") pod \"certified-operators-7zvwn\" (UID: \"8369306e-5bbe-4677-b605-977728860668\") " pod="openshift-marketplace/certified-operators-7zvwn" Jan 27 14:55:55 crc kubenswrapper[4698]: I0127 14:55:55.769419 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8369306e-5bbe-4677-b605-977728860668-catalog-content\") pod \"certified-operators-7zvwn\" (UID: \"8369306e-5bbe-4677-b605-977728860668\") " pod="openshift-marketplace/certified-operators-7zvwn" Jan 27 14:55:55 crc kubenswrapper[4698]: I0127 14:55:55.871055 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hb562\" (UniqueName: \"kubernetes.io/projected/8369306e-5bbe-4677-b605-977728860668-kube-api-access-hb562\") pod \"certified-operators-7zvwn\" (UID: \"8369306e-5bbe-4677-b605-977728860668\") " pod="openshift-marketplace/certified-operators-7zvwn" Jan 27 14:55:55 crc kubenswrapper[4698]: I0127 14:55:55.871196 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8369306e-5bbe-4677-b605-977728860668-catalog-content\") pod \"certified-operators-7zvwn\" (UID: \"8369306e-5bbe-4677-b605-977728860668\") " pod="openshift-marketplace/certified-operators-7zvwn" Jan 27 14:55:55 crc kubenswrapper[4698]: 
I0127 14:55:55.871224 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8369306e-5bbe-4677-b605-977728860668-utilities\") pod \"certified-operators-7zvwn\" (UID: \"8369306e-5bbe-4677-b605-977728860668\") " pod="openshift-marketplace/certified-operators-7zvwn" Jan 27 14:55:55 crc kubenswrapper[4698]: I0127 14:55:55.871729 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8369306e-5bbe-4677-b605-977728860668-utilities\") pod \"certified-operators-7zvwn\" (UID: \"8369306e-5bbe-4677-b605-977728860668\") " pod="openshift-marketplace/certified-operators-7zvwn" Jan 27 14:55:55 crc kubenswrapper[4698]: I0127 14:55:55.872019 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8369306e-5bbe-4677-b605-977728860668-catalog-content\") pod \"certified-operators-7zvwn\" (UID: \"8369306e-5bbe-4677-b605-977728860668\") " pod="openshift-marketplace/certified-operators-7zvwn" Jan 27 14:55:55 crc kubenswrapper[4698]: I0127 14:55:55.897704 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hb562\" (UniqueName: \"kubernetes.io/projected/8369306e-5bbe-4677-b605-977728860668-kube-api-access-hb562\") pod \"certified-operators-7zvwn\" (UID: \"8369306e-5bbe-4677-b605-977728860668\") " pod="openshift-marketplace/certified-operators-7zvwn" Jan 27 14:55:55 crc kubenswrapper[4698]: I0127 14:55:55.939893 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7zvwn" Jan 27 14:55:56 crc kubenswrapper[4698]: I0127 14:55:56.459157 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-7zvwn"] Jan 27 14:55:57 crc kubenswrapper[4698]: I0127 14:55:57.418600 4698 generic.go:334] "Generic (PLEG): container finished" podID="8369306e-5bbe-4677-b605-977728860668" containerID="48f36a0e585de910fd7d70e01cef71bc061b13e451d14091212f83fe12c8ec76" exitCode=0 Jan 27 14:55:57 crc kubenswrapper[4698]: I0127 14:55:57.418695 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7zvwn" event={"ID":"8369306e-5bbe-4677-b605-977728860668","Type":"ContainerDied","Data":"48f36a0e585de910fd7d70e01cef71bc061b13e451d14091212f83fe12c8ec76"} Jan 27 14:55:57 crc kubenswrapper[4698]: I0127 14:55:57.418885 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7zvwn" event={"ID":"8369306e-5bbe-4677-b605-977728860668","Type":"ContainerStarted","Data":"908397e8d5fcb50574fd82bb5c3e5e984e0195e9106e3977330a3ebab6fe3c3c"} Jan 27 14:55:57 crc kubenswrapper[4698]: I0127 14:55:57.452199 4698 patch_prober.go:28] interesting pod/machine-config-daemon-ndrd6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 14:55:57 crc kubenswrapper[4698]: I0127 14:55:57.452490 4698 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ndrd6" podUID="3e403fc5-7005-474c-8c75-b7906b481677" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 14:55:57 crc 
kubenswrapper[4698]: I0127 14:55:57.452539 4698 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-ndrd6" Jan 27 14:55:57 crc kubenswrapper[4698]: I0127 14:55:57.453409 4698 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"b5e738b87b0ce5279fe16e0f062d8436b14ec328939703048a41953bc1584a16"} pod="openshift-machine-config-operator/machine-config-daemon-ndrd6" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 27 14:55:57 crc kubenswrapper[4698]: I0127 14:55:57.453469 4698 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-ndrd6" podUID="3e403fc5-7005-474c-8c75-b7906b481677" containerName="machine-config-daemon" containerID="cri-o://b5e738b87b0ce5279fe16e0f062d8436b14ec328939703048a41953bc1584a16" gracePeriod=600 Jan 27 14:55:57 crc kubenswrapper[4698]: E0127 14:55:57.601089 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ndrd6_openshift-machine-config-operator(3e403fc5-7005-474c-8c75-b7906b481677)\"" pod="openshift-machine-config-operator/machine-config-daemon-ndrd6" podUID="3e403fc5-7005-474c-8c75-b7906b481677" Jan 27 14:55:58 crc kubenswrapper[4698]: I0127 14:55:58.431433 4698 generic.go:334] "Generic (PLEG): container finished" podID="9a268a3b-da75-4a08-a9a3-b097f2066a27" containerID="157b60d7a17461e1c42c81e8fd2f48839e2e141d834579077f1f226aff7c96da" exitCode=0 Jan 27 14:55:58 crc kubenswrapper[4698]: I0127 14:55:58.431533 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-x8m8s" event={"ID":"9a268a3b-da75-4a08-a9a3-b097f2066a27","Type":"ContainerDied","Data":"157b60d7a17461e1c42c81e8fd2f48839e2e141d834579077f1f226aff7c96da"} Jan 27 14:55:58 crc kubenswrapper[4698]: I0127 14:55:58.435751 4698 generic.go:334] "Generic (PLEG): container finished" podID="3e403fc5-7005-474c-8c75-b7906b481677" containerID="b5e738b87b0ce5279fe16e0f062d8436b14ec328939703048a41953bc1584a16" exitCode=0 Jan 27 14:55:58 crc kubenswrapper[4698]: I0127 14:55:58.435913 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ndrd6" event={"ID":"3e403fc5-7005-474c-8c75-b7906b481677","Type":"ContainerDied","Data":"b5e738b87b0ce5279fe16e0f062d8436b14ec328939703048a41953bc1584a16"} Jan 27 14:55:58 crc kubenswrapper[4698]: I0127 14:55:58.436020 4698 scope.go:117] "RemoveContainer" containerID="00b91e8534deca64edb3a0ddf67d35e5d274bc19ba7571ee5f99b20522a916c8" Jan 27 14:55:58 crc kubenswrapper[4698]: I0127 14:55:58.437384 4698 scope.go:117] "RemoveContainer" containerID="b5e738b87b0ce5279fe16e0f062d8436b14ec328939703048a41953bc1584a16" Jan 27 14:55:58 crc kubenswrapper[4698]: E0127 14:55:58.438387 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ndrd6_openshift-machine-config-operator(3e403fc5-7005-474c-8c75-b7906b481677)\"" pod="openshift-machine-config-operator/machine-config-daemon-ndrd6" podUID="3e403fc5-7005-474c-8c75-b7906b481677" Jan 27 14:55:59 crc 
kubenswrapper[4698]: I0127 14:55:59.452543 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7zvwn" event={"ID":"8369306e-5bbe-4677-b605-977728860668","Type":"ContainerStarted","Data":"830cdbcd537f41afcbbd7052727a7601594ada148c838ec7ad7f81f189979ac8"}
Jan 27 14:55:59 crc kubenswrapper[4698]: I0127 14:55:59.823730 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-x8m8s"
Jan 27 14:55:59 crc kubenswrapper[4698]: I0127 14:55:59.982968 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wpffl\" (UniqueName: \"kubernetes.io/projected/9a268a3b-da75-4a08-a9a3-b097f2066a27-kube-api-access-wpffl\") pod \"9a268a3b-da75-4a08-a9a3-b097f2066a27\" (UID: \"9a268a3b-da75-4a08-a9a3-b097f2066a27\") "
Jan 27 14:55:59 crc kubenswrapper[4698]: I0127 14:55:59.983034 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9a268a3b-da75-4a08-a9a3-b097f2066a27-config-data\") pod \"9a268a3b-da75-4a08-a9a3-b097f2066a27\" (UID: \"9a268a3b-da75-4a08-a9a3-b097f2066a27\") "
Jan 27 14:55:59 crc kubenswrapper[4698]: I0127 14:55:59.983072 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9a268a3b-da75-4a08-a9a3-b097f2066a27-combined-ca-bundle\") pod \"9a268a3b-da75-4a08-a9a3-b097f2066a27\" (UID: \"9a268a3b-da75-4a08-a9a3-b097f2066a27\") "
Jan 27 14:55:59 crc kubenswrapper[4698]: I0127 14:55:59.983297 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9a268a3b-da75-4a08-a9a3-b097f2066a27-scripts\") pod \"9a268a3b-da75-4a08-a9a3-b097f2066a27\" (UID: \"9a268a3b-da75-4a08-a9a3-b097f2066a27\") "
Jan 27 14:55:59 crc kubenswrapper[4698]: I0127 14:55:59.989405 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9a268a3b-da75-4a08-a9a3-b097f2066a27-kube-api-access-wpffl" (OuterVolumeSpecName: "kube-api-access-wpffl") pod "9a268a3b-da75-4a08-a9a3-b097f2066a27" (UID: "9a268a3b-da75-4a08-a9a3-b097f2066a27"). InnerVolumeSpecName "kube-api-access-wpffl". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 14:55:59 crc kubenswrapper[4698]: I0127 14:55:59.989905 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9a268a3b-da75-4a08-a9a3-b097f2066a27-scripts" (OuterVolumeSpecName: "scripts") pod "9a268a3b-da75-4a08-a9a3-b097f2066a27" (UID: "9a268a3b-da75-4a08-a9a3-b097f2066a27"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 14:56:00 crc kubenswrapper[4698]: I0127 14:56:00.016517 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9a268a3b-da75-4a08-a9a3-b097f2066a27-config-data" (OuterVolumeSpecName: "config-data") pod "9a268a3b-da75-4a08-a9a3-b097f2066a27" (UID: "9a268a3b-da75-4a08-a9a3-b097f2066a27"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 14:56:00 crc kubenswrapper[4698]: I0127 14:56:00.018531 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9a268a3b-da75-4a08-a9a3-b097f2066a27-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9a268a3b-da75-4a08-a9a3-b097f2066a27" (UID: "9a268a3b-da75-4a08-a9a3-b097f2066a27"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 14:56:00 crc kubenswrapper[4698]: I0127 14:56:00.085499 4698 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9a268a3b-da75-4a08-a9a3-b097f2066a27-scripts\") on node \"crc\" DevicePath \"\""
Jan 27 14:56:00 crc kubenswrapper[4698]: I0127 14:56:00.085553 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wpffl\" (UniqueName: \"kubernetes.io/projected/9a268a3b-da75-4a08-a9a3-b097f2066a27-kube-api-access-wpffl\") on node \"crc\" DevicePath \"\""
Jan 27 14:56:00 crc kubenswrapper[4698]: I0127 14:56:00.085565 4698 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9a268a3b-da75-4a08-a9a3-b097f2066a27-config-data\") on node \"crc\" DevicePath \"\""
Jan 27 14:56:00 crc kubenswrapper[4698]: I0127 14:56:00.085576 4698 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9a268a3b-da75-4a08-a9a3-b097f2066a27-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 27 14:56:00 crc kubenswrapper[4698]: I0127 14:56:00.475833 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-x8m8s" event={"ID":"9a268a3b-da75-4a08-a9a3-b097f2066a27","Type":"ContainerDied","Data":"506084962b08a4bae14c524f6ed39129e2d9a7134a5a6e1a6dba43a63eced799"}
Jan 27 14:56:00 crc kubenswrapper[4698]: I0127 14:56:00.476605 4698 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="506084962b08a4bae14c524f6ed39129e2d9a7134a5a6e1a6dba43a63eced799"
Jan 27 14:56:00 crc kubenswrapper[4698]: I0127 14:56:00.475856 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-x8m8s"
Jan 27 14:56:00 crc kubenswrapper[4698]: I0127 14:56:00.566741 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"]
Jan 27 14:56:00 crc kubenswrapper[4698]: E0127 14:56:00.567317 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9a268a3b-da75-4a08-a9a3-b097f2066a27" containerName="nova-cell0-conductor-db-sync"
Jan 27 14:56:00 crc kubenswrapper[4698]: I0127 14:56:00.567348 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="9a268a3b-da75-4a08-a9a3-b097f2066a27" containerName="nova-cell0-conductor-db-sync"
Jan 27 14:56:00 crc kubenswrapper[4698]: I0127 14:56:00.567589 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="9a268a3b-da75-4a08-a9a3-b097f2066a27" containerName="nova-cell0-conductor-db-sync"
Jan 27 14:56:00 crc kubenswrapper[4698]: I0127 14:56:00.568434 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0"
Jan 27 14:56:00 crc kubenswrapper[4698]: I0127 14:56:00.570955 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data"
Jan 27 14:56:00 crc kubenswrapper[4698]: I0127 14:56:00.571115 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-8hxxg"
Jan 27 14:56:00 crc kubenswrapper[4698]: I0127 14:56:00.580008 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"]
Jan 27 14:56:00 crc kubenswrapper[4698]: I0127 14:56:00.697482 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f89e0e10-15e5-43ca-95ff-2eb2e82dd4f7-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"f89e0e10-15e5-43ca-95ff-2eb2e82dd4f7\") " pod="openstack/nova-cell0-conductor-0"
Jan 27 14:56:00 crc kubenswrapper[4698]: I0127 14:56:00.697966 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f89e0e10-15e5-43ca-95ff-2eb2e82dd4f7-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"f89e0e10-15e5-43ca-95ff-2eb2e82dd4f7\") " pod="openstack/nova-cell0-conductor-0"
Jan 27 14:56:00 crc kubenswrapper[4698]: I0127 14:56:00.698041 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n7svt\" (UniqueName: \"kubernetes.io/projected/f89e0e10-15e5-43ca-95ff-2eb2e82dd4f7-kube-api-access-n7svt\") pod \"nova-cell0-conductor-0\" (UID: \"f89e0e10-15e5-43ca-95ff-2eb2e82dd4f7\") " pod="openstack/nova-cell0-conductor-0"
Jan 27 14:56:00 crc kubenswrapper[4698]: I0127 14:56:00.800148 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f89e0e10-15e5-43ca-95ff-2eb2e82dd4f7-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"f89e0e10-15e5-43ca-95ff-2eb2e82dd4f7\") " pod="openstack/nova-cell0-conductor-0"
Jan 27 14:56:00 crc kubenswrapper[4698]: I0127 14:56:00.800288 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f89e0e10-15e5-43ca-95ff-2eb2e82dd4f7-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"f89e0e10-15e5-43ca-95ff-2eb2e82dd4f7\") " pod="openstack/nova-cell0-conductor-0"
Jan 27 14:56:00 crc kubenswrapper[4698]: I0127 14:56:00.800353 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n7svt\" (UniqueName: \"kubernetes.io/projected/f89e0e10-15e5-43ca-95ff-2eb2e82dd4f7-kube-api-access-n7svt\") pod \"nova-cell0-conductor-0\" (UID: \"f89e0e10-15e5-43ca-95ff-2eb2e82dd4f7\") " pod="openstack/nova-cell0-conductor-0"
Jan 27 14:56:00 crc kubenswrapper[4698]: I0127 14:56:00.806915 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f89e0e10-15e5-43ca-95ff-2eb2e82dd4f7-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"f89e0e10-15e5-43ca-95ff-2eb2e82dd4f7\") " pod="openstack/nova-cell0-conductor-0"
Jan 27 14:56:00 crc kubenswrapper[4698]: I0127 14:56:00.814985 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f89e0e10-15e5-43ca-95ff-2eb2e82dd4f7-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"f89e0e10-15e5-43ca-95ff-2eb2e82dd4f7\") " pod="openstack/nova-cell0-conductor-0"
Jan 27 14:56:00 crc kubenswrapper[4698]: I0127 14:56:00.819728 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n7svt\" (UniqueName: \"kubernetes.io/projected/f89e0e10-15e5-43ca-95ff-2eb2e82dd4f7-kube-api-access-n7svt\") pod \"nova-cell0-conductor-0\" (UID: \"f89e0e10-15e5-43ca-95ff-2eb2e82dd4f7\") " pod="openstack/nova-cell0-conductor-0"
Jan 27 14:56:00 crc kubenswrapper[4698]: I0127 14:56:00.899671 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0"
Jan 27 14:56:01 crc kubenswrapper[4698]: I0127 14:56:01.392703 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"]
Jan 27 14:56:01 crc kubenswrapper[4698]: I0127 14:56:01.487597 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"f89e0e10-15e5-43ca-95ff-2eb2e82dd4f7","Type":"ContainerStarted","Data":"6b16c2018c62d1afe29bef9ff41ed848879e8e6fba28578c3bbc920b83d5e450"}
Jan 27 14:56:02 crc kubenswrapper[4698]: I0127 14:56:02.502134 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"f89e0e10-15e5-43ca-95ff-2eb2e82dd4f7","Type":"ContainerStarted","Data":"392802a773507775a01fc352c382dd6608ac9c0e4cf5217cc113b85ac67f2e1b"}
Jan 27 14:56:02 crc kubenswrapper[4698]: I0127 14:56:02.502685 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0"
Jan 27 14:56:02 crc kubenswrapper[4698]: I0127 14:56:02.524515 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=2.524495416 podStartE2EDuration="2.524495416s" podCreationTimestamp="2026-01-27 14:56:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:56:02.519706059 +0000 UTC m=+1618.196483544" watchObservedRunningTime="2026-01-27 14:56:02.524495416 +0000 UTC m=+1618.201272881"
Jan 27 14:56:05 crc kubenswrapper[4698]: I0127 14:56:05.529952 4698 generic.go:334] "Generic (PLEG): container finished" podID="8369306e-5bbe-4677-b605-977728860668" containerID="830cdbcd537f41afcbbd7052727a7601594ada148c838ec7ad7f81f189979ac8" exitCode=0
Jan 27 14:56:05 crc kubenswrapper[4698]: I0127 14:56:05.530022 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7zvwn" event={"ID":"8369306e-5bbe-4677-b605-977728860668","Type":"ContainerDied","Data":"830cdbcd537f41afcbbd7052727a7601594ada148c838ec7ad7f81f189979ac8"}
Jan 27 14:56:08 crc kubenswrapper[4698]: I0127 14:56:08.560332 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7zvwn" event={"ID":"8369306e-5bbe-4677-b605-977728860668","Type":"ContainerStarted","Data":"705600d9a11a28b138604620bdbf08cf716ef256f5519b1158b65796bdc7b095"}
Jan 27 14:56:08 crc kubenswrapper[4698]: I0127 14:56:08.590548 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-7zvwn" podStartSLOduration=3.562836865 podStartE2EDuration="13.590523623s" podCreationTimestamp="2026-01-27 14:55:55 +0000 UTC" firstStartedPulling="2026-01-27 14:55:57.42155252 +0000 UTC m=+1613.098329995" lastFinishedPulling="2026-01-27 14:56:07.449239288 +0000 UTC m=+1623.126016753" observedRunningTime="2026-01-27 14:56:08.584369071 +0000 UTC m=+1624.261146536" watchObservedRunningTime="2026-01-27 14:56:08.590523623 +0000 UTC m=+1624.267301088"
Jan 27 14:56:08 crc kubenswrapper[4698]: I0127 14:56:08.698276 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0"
Jan 27 14:56:08 crc kubenswrapper[4698]: I0127 14:56:08.995812 4698 scope.go:117] "RemoveContainer" containerID="b5e738b87b0ce5279fe16e0f062d8436b14ec328939703048a41953bc1584a16"
Jan 27 14:56:08 crc kubenswrapper[4698]: E0127 14:56:08.996117 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ndrd6_openshift-machine-config-operator(3e403fc5-7005-474c-8c75-b7906b481677)\"" pod="openshift-machine-config-operator/machine-config-daemon-ndrd6" podUID="3e403fc5-7005-474c-8c75-b7906b481677"
Jan 27 14:56:09 crc kubenswrapper[4698]: I0127 14:56:09.006131 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-xls4d"]
Jan 27 14:56:09 crc kubenswrapper[4698]: I0127 14:56:09.008522 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-xls4d"
Jan 27 14:56:09 crc kubenswrapper[4698]: I0127 14:56:09.018984 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-xls4d"]
Jan 27 14:56:09 crc kubenswrapper[4698]: I0127 14:56:09.072535 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/187d8b4f-4757-45bf-a23b-b35702a18f93-catalog-content\") pod \"redhat-marketplace-xls4d\" (UID: \"187d8b4f-4757-45bf-a23b-b35702a18f93\") " pod="openshift-marketplace/redhat-marketplace-xls4d"
Jan 27 14:56:09 crc kubenswrapper[4698]: I0127 14:56:09.072606 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8vjq5\" (UniqueName: \"kubernetes.io/projected/187d8b4f-4757-45bf-a23b-b35702a18f93-kube-api-access-8vjq5\") pod \"redhat-marketplace-xls4d\" (UID: \"187d8b4f-4757-45bf-a23b-b35702a18f93\") " pod="openshift-marketplace/redhat-marketplace-xls4d"
Jan 27 14:56:09 crc kubenswrapper[4698]: I0127 14:56:09.072800 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/187d8b4f-4757-45bf-a23b-b35702a18f93-utilities\") pod \"redhat-marketplace-xls4d\" (UID: \"187d8b4f-4757-45bf-a23b-b35702a18f93\") " pod="openshift-marketplace/redhat-marketplace-xls4d"
Jan 27 14:56:09 crc kubenswrapper[4698]: I0127 14:56:09.174396 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8vjq5\" (UniqueName: \"kubernetes.io/projected/187d8b4f-4757-45bf-a23b-b35702a18f93-kube-api-access-8vjq5\") pod \"redhat-marketplace-xls4d\" (UID: \"187d8b4f-4757-45bf-a23b-b35702a18f93\") " pod="openshift-marketplace/redhat-marketplace-xls4d"
Jan 27 14:56:09 crc kubenswrapper[4698]: I0127 14:56:09.174556 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/187d8b4f-4757-45bf-a23b-b35702a18f93-utilities\") pod \"redhat-marketplace-xls4d\" (UID: \"187d8b4f-4757-45bf-a23b-b35702a18f93\") " pod="openshift-marketplace/redhat-marketplace-xls4d"
Jan 27 14:56:09 crc kubenswrapper[4698]: I0127 14:56:09.174741 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/187d8b4f-4757-45bf-a23b-b35702a18f93-catalog-content\") pod \"redhat-marketplace-xls4d\" (UID: \"187d8b4f-4757-45bf-a23b-b35702a18f93\") " pod="openshift-marketplace/redhat-marketplace-xls4d"
Jan 27 14:56:09 crc kubenswrapper[4698]: I0127 14:56:09.175326 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/187d8b4f-4757-45bf-a23b-b35702a18f93-utilities\") pod \"redhat-marketplace-xls4d\" (UID: \"187d8b4f-4757-45bf-a23b-b35702a18f93\") " pod="openshift-marketplace/redhat-marketplace-xls4d"
Jan 27 14:56:09 crc kubenswrapper[4698]: I0127 14:56:09.175366 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/187d8b4f-4757-45bf-a23b-b35702a18f93-catalog-content\") pod \"redhat-marketplace-xls4d\" (UID: \"187d8b4f-4757-45bf-a23b-b35702a18f93\") " pod="openshift-marketplace/redhat-marketplace-xls4d"
Jan 27 14:56:09 crc kubenswrapper[4698]: I0127 14:56:09.208439 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8vjq5\" (UniqueName: \"kubernetes.io/projected/187d8b4f-4757-45bf-a23b-b35702a18f93-kube-api-access-8vjq5\") pod \"redhat-marketplace-xls4d\" (UID: \"187d8b4f-4757-45bf-a23b-b35702a18f93\") " pod="openshift-marketplace/redhat-marketplace-xls4d"
Jan 27 14:56:09 crc kubenswrapper[4698]: I0127 14:56:09.354840 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-xls4d"
Jan 27 14:56:09 crc kubenswrapper[4698]: I0127 14:56:09.912250 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-xls4d"]
Jan 27 14:56:09 crc kubenswrapper[4698]: W0127 14:56:09.921432 4698 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod187d8b4f_4757_45bf_a23b_b35702a18f93.slice/crio-2409a1ce56112c05e2e8f2143f4182ae7526da6dc448b617c496b3fae69e6bf6 WatchSource:0}: Error finding container 2409a1ce56112c05e2e8f2143f4182ae7526da6dc448b617c496b3fae69e6bf6: Status 404 returned error can't find the container with id 2409a1ce56112c05e2e8f2143f4182ae7526da6dc448b617c496b3fae69e6bf6
Jan 27 14:56:10 crc kubenswrapper[4698]: I0127 14:56:10.583228 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xls4d" event={"ID":"187d8b4f-4757-45bf-a23b-b35702a18f93","Type":"ContainerStarted","Data":"2409a1ce56112c05e2e8f2143f4182ae7526da6dc448b617c496b3fae69e6bf6"}
Jan 27 14:56:10 crc kubenswrapper[4698]: I0127 14:56:10.931864 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell0-conductor-0"
Jan 27 14:56:11 crc kubenswrapper[4698]: I0127 14:56:11.595277 4698 generic.go:334] "Generic (PLEG): container finished" podID="187d8b4f-4757-45bf-a23b-b35702a18f93" containerID="f289095516f359591c44042298a3b24872517b85c1864d6ad70aeb7ed6b11444" exitCode=0
Jan 27 14:56:11 crc kubenswrapper[4698]: I0127 14:56:11.595394 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xls4d" event={"ID":"187d8b4f-4757-45bf-a23b-b35702a18f93","Type":"ContainerDied","Data":"f289095516f359591c44042298a3b24872517b85c1864d6ad70aeb7ed6b11444"}
Jan 27 14:56:12 crc kubenswrapper[4698]: I0127 14:56:12.177323 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-cell-mapping-vgm7r"]
Jan 27 14:56:12 crc kubenswrapper[4698]: I0127 14:56:12.178942 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-vgm7r"
Jan 27 14:56:12 crc kubenswrapper[4698]: I0127 14:56:12.181496 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-config-data"
Jan 27 14:56:12 crc kubenswrapper[4698]: I0127 14:56:12.181760 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-scripts"
Jan 27 14:56:12 crc kubenswrapper[4698]: I0127 14:56:12.193386 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-vgm7r"]
Jan 27 14:56:12 crc kubenswrapper[4698]: I0127 14:56:12.237007 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xdh5h\" (UniqueName: \"kubernetes.io/projected/c865f75e-a196-4b4c-ba96-383654e3c295-kube-api-access-xdh5h\") pod \"nova-cell0-cell-mapping-vgm7r\" (UID: \"c865f75e-a196-4b4c-ba96-383654e3c295\") " pod="openstack/nova-cell0-cell-mapping-vgm7r"
Jan 27 14:56:12 crc kubenswrapper[4698]: I0127 14:56:12.237362 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c865f75e-a196-4b4c-ba96-383654e3c295-config-data\") pod \"nova-cell0-cell-mapping-vgm7r\" (UID: \"c865f75e-a196-4b4c-ba96-383654e3c295\") " pod="openstack/nova-cell0-cell-mapping-vgm7r"
Jan 27 14:56:12 crc kubenswrapper[4698]: I0127 14:56:12.237389 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c865f75e-a196-4b4c-ba96-383654e3c295-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-vgm7r\" (UID: \"c865f75e-a196-4b4c-ba96-383654e3c295\") " pod="openstack/nova-cell0-cell-mapping-vgm7r"
Jan 27 14:56:12 crc kubenswrapper[4698]: I0127 14:56:12.237454 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c865f75e-a196-4b4c-ba96-383654e3c295-scripts\") pod \"nova-cell0-cell-mapping-vgm7r\" (UID: \"c865f75e-a196-4b4c-ba96-383654e3c295\") " pod="openstack/nova-cell0-cell-mapping-vgm7r"
Jan 27 14:56:12 crc kubenswrapper[4698]: I0127 14:56:12.339273 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c865f75e-a196-4b4c-ba96-383654e3c295-scripts\") pod \"nova-cell0-cell-mapping-vgm7r\" (UID: \"c865f75e-a196-4b4c-ba96-383654e3c295\") " pod="openstack/nova-cell0-cell-mapping-vgm7r"
Jan 27 14:56:12 crc kubenswrapper[4698]: I0127 14:56:12.339394 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xdh5h\" (UniqueName: \"kubernetes.io/projected/c865f75e-a196-4b4c-ba96-383654e3c295-kube-api-access-xdh5h\") pod \"nova-cell0-cell-mapping-vgm7r\" (UID: \"c865f75e-a196-4b4c-ba96-383654e3c295\") " pod="openstack/nova-cell0-cell-mapping-vgm7r"
Jan 27 14:56:12 crc kubenswrapper[4698]: I0127 14:56:12.339517 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c865f75e-a196-4b4c-ba96-383654e3c295-config-data\") pod \"nova-cell0-cell-mapping-vgm7r\" (UID: \"c865f75e-a196-4b4c-ba96-383654e3c295\") " pod="openstack/nova-cell0-cell-mapping-vgm7r"
Jan 27 14:56:12 crc kubenswrapper[4698]: I0127 14:56:12.339551 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c865f75e-a196-4b4c-ba96-383654e3c295-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-vgm7r\" (UID: \"c865f75e-a196-4b4c-ba96-383654e3c295\") " pod="openstack/nova-cell0-cell-mapping-vgm7r"
Jan 27 14:56:12 crc kubenswrapper[4698]: I0127 14:56:12.350704 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c865f75e-a196-4b4c-ba96-383654e3c295-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-vgm7r\" (UID: \"c865f75e-a196-4b4c-ba96-383654e3c295\") " pod="openstack/nova-cell0-cell-mapping-vgm7r"
Jan 27 14:56:12 crc kubenswrapper[4698]: I0127 14:56:12.356545 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c865f75e-a196-4b4c-ba96-383654e3c295-config-data\") pod \"nova-cell0-cell-mapping-vgm7r\" (UID: \"c865f75e-a196-4b4c-ba96-383654e3c295\") " pod="openstack/nova-cell0-cell-mapping-vgm7r"
Jan 27 14:56:12 crc kubenswrapper[4698]: I0127 14:56:12.365048 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c865f75e-a196-4b4c-ba96-383654e3c295-scripts\") pod \"nova-cell0-cell-mapping-vgm7r\" (UID: \"c865f75e-a196-4b4c-ba96-383654e3c295\") " pod="openstack/nova-cell0-cell-mapping-vgm7r"
Jan 27 14:56:12 crc kubenswrapper[4698]: I0127 14:56:12.366417 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xdh5h\" (UniqueName: \"kubernetes.io/projected/c865f75e-a196-4b4c-ba96-383654e3c295-kube-api-access-xdh5h\") pod \"nova-cell0-cell-mapping-vgm7r\" (UID: \"c865f75e-a196-4b4c-ba96-383654e3c295\") " pod="openstack/nova-cell0-cell-mapping-vgm7r"
Jan 27 14:56:12 crc kubenswrapper[4698]: I0127 14:56:12.548393 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-vgm7r"
Jan 27 14:56:12 crc kubenswrapper[4698]: I0127 14:56:12.559793 4698 scope.go:117] "RemoveContainer" containerID="89cea3b56d5b01b1837afccfbd5ce8d8695a14935aa466553ed2e273482adac3"
Jan 27 14:56:12 crc kubenswrapper[4698]: I0127 14:56:12.602010 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"]
Jan 27 14:56:12 crc kubenswrapper[4698]: I0127 14:56:12.603555 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Jan 27 14:56:12 crc kubenswrapper[4698]: I0127 14:56:12.611879 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data"
Jan 27 14:56:12 crc kubenswrapper[4698]: I0127 14:56:12.646080 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s4zdv\" (UniqueName: \"kubernetes.io/projected/d1bd366a-868e-4d55-8e1c-b7bbdeecf1c2-kube-api-access-s4zdv\") pod \"nova-scheduler-0\" (UID: \"d1bd366a-868e-4d55-8e1c-b7bbdeecf1c2\") " pod="openstack/nova-scheduler-0"
Jan 27 14:56:12 crc kubenswrapper[4698]: I0127 14:56:12.646171 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d1bd366a-868e-4d55-8e1c-b7bbdeecf1c2-config-data\") pod \"nova-scheduler-0\" (UID: \"d1bd366a-868e-4d55-8e1c-b7bbdeecf1c2\") " pod="openstack/nova-scheduler-0"
Jan 27 14:56:12 crc kubenswrapper[4698]: I0127 14:56:12.646231 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d1bd366a-868e-4d55-8e1c-b7bbdeecf1c2-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"d1bd366a-868e-4d55-8e1c-b7bbdeecf1c2\") " pod="openstack/nova-scheduler-0"
Jan 27 14:56:12 crc kubenswrapper[4698]: I0127 14:56:12.659955 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"]
Jan 27 14:56:12 crc kubenswrapper[4698]: I0127 14:56:12.702083 4698 scope.go:117] "RemoveContainer" containerID="7e9758bb21455290a6b98f870e0f067c6ce3ef0bbfee7760cd9266179366446f"
Jan 27 14:56:12 crc kubenswrapper[4698]: I0127 14:56:12.748800 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s4zdv\" (UniqueName: \"kubernetes.io/projected/d1bd366a-868e-4d55-8e1c-b7bbdeecf1c2-kube-api-access-s4zdv\") pod \"nova-scheduler-0\" (UID: \"d1bd366a-868e-4d55-8e1c-b7bbdeecf1c2\") " pod="openstack/nova-scheduler-0"
Jan 27 14:56:12 crc kubenswrapper[4698]: I0127 14:56:12.748845 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d1bd366a-868e-4d55-8e1c-b7bbdeecf1c2-config-data\") pod \"nova-scheduler-0\" (UID: \"d1bd366a-868e-4d55-8e1c-b7bbdeecf1c2\") " pod="openstack/nova-scheduler-0"
Jan 27 14:56:12 crc kubenswrapper[4698]: I0127 14:56:12.748908 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d1bd366a-868e-4d55-8e1c-b7bbdeecf1c2-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"d1bd366a-868e-4d55-8e1c-b7bbdeecf1c2\") " pod="openstack/nova-scheduler-0"
Jan 27 14:56:12 crc kubenswrapper[4698]: I0127 14:56:12.766609 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d1bd366a-868e-4d55-8e1c-b7bbdeecf1c2-config-data\") pod \"nova-scheduler-0\" (UID: \"d1bd366a-868e-4d55-8e1c-b7bbdeecf1c2\") " pod="openstack/nova-scheduler-0"
Jan 27 14:56:12 crc kubenswrapper[4698]: I0127 14:56:12.773493 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d1bd366a-868e-4d55-8e1c-b7bbdeecf1c2-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"d1bd366a-868e-4d55-8e1c-b7bbdeecf1c2\") " pod="openstack/nova-scheduler-0"
Jan 27 14:56:12 crc kubenswrapper[4698]: I0127 14:56:12.834133 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s4zdv\" (UniqueName: \"kubernetes.io/projected/d1bd366a-868e-4d55-8e1c-b7bbdeecf1c2-kube-api-access-s4zdv\") pod \"nova-scheduler-0\" (UID: \"d1bd366a-868e-4d55-8e1c-b7bbdeecf1c2\") " pod="openstack/nova-scheduler-0"
Jan 27 14:56:12 crc kubenswrapper[4698]: I0127 14:56:12.852480 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"]
Jan 27 14:56:12 crc kubenswrapper[4698]: I0127 14:56:12.854444 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Jan 27 14:56:12 crc kubenswrapper[4698]: I0127 14:56:12.861100 4698 scope.go:117] "RemoveContainer" containerID="47d9c3d9bdeab2ba5b42e4b677022bb02a931cf2718ac80f3cfaa2fccf5c292b"
Jan 27 14:56:12 crc kubenswrapper[4698]: I0127 14:56:12.879614 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data"
Jan 27 14:56:12 crc kubenswrapper[4698]: I0127 14:56:12.927357 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"]
Jan 27 14:56:12 crc kubenswrapper[4698]: I0127 14:56:12.929417 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Jan 27 14:56:12 crc kubenswrapper[4698]: I0127 14:56:12.940085 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data"
Jan 27 14:56:12 crc kubenswrapper[4698]: I0127 14:56:12.956005 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-87m6c\" (UniqueName: \"kubernetes.io/projected/bfa7ceda-a38c-4336-a1c9-e17aaeca5ff0-kube-api-access-87m6c\") pod \"nova-metadata-0\" (UID: \"bfa7ceda-a38c-4336-a1c9-e17aaeca5ff0\") " pod="openstack/nova-metadata-0"
Jan 27 14:56:12 crc kubenswrapper[4698]: I0127 14:56:12.956098 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bfa7ceda-a38c-4336-a1c9-e17aaeca5ff0-config-data\") pod \"nova-metadata-0\" (UID: \"bfa7ceda-a38c-4336-a1c9-e17aaeca5ff0\") " pod="openstack/nova-metadata-0"
Jan 27 14:56:12 crc kubenswrapper[4698]: I0127 14:56:12.956124 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/65866040-440d-4bca-91f4-944cfce917cb-config-data\") pod \"nova-api-0\" (UID: \"65866040-440d-4bca-91f4-944cfce917cb\") " pod="openstack/nova-api-0"
Jan 27 14:56:12 crc kubenswrapper[4698]: I0127 14:56:12.956184 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/65866040-440d-4bca-91f4-944cfce917cb-logs\") pod \"nova-api-0\" (UID: \"65866040-440d-4bca-91f4-944cfce917cb\") " pod="openstack/nova-api-0"
Jan 27 14:56:12 crc kubenswrapper[4698]: I0127 14:56:12.956220 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bfa7ceda-a38c-4336-a1c9-e17aaeca5ff0-logs\") pod \"nova-metadata-0\" (UID: \"bfa7ceda-a38c-4336-a1c9-e17aaeca5ff0\") " pod="openstack/nova-metadata-0"
Jan 27 14:56:12 crc kubenswrapper[4698]: I0127 14:56:12.956236 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/65866040-440d-4bca-91f4-944cfce917cb-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"65866040-440d-4bca-91f4-944cfce917cb\") " pod="openstack/nova-api-0"
Jan 27 14:56:12 crc kubenswrapper[4698]: I0127 14:56:12.956264 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jk7qn\" (UniqueName: \"kubernetes.io/projected/65866040-440d-4bca-91f4-944cfce917cb-kube-api-access-jk7qn\") pod \"nova-api-0\" (UID: \"65866040-440d-4bca-91f4-944cfce917cb\") " pod="openstack/nova-api-0"
Jan 27 14:56:12 crc kubenswrapper[4698]: I0127 14:56:12.956302 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bfa7ceda-a38c-4336-a1c9-e17aaeca5ff0-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"bfa7ceda-a38c-4336-a1c9-e17aaeca5ff0\") " pod="openstack/nova-metadata-0"
Jan 27 14:56:12 crc kubenswrapper[4698]: I0127 14:56:12.966747 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"]
Jan 27 14:56:12 crc kubenswrapper[4698]: I0127 14:56:12.984733 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"]
Jan 27 14:56:13 crc kubenswrapper[4698]: I0127 14:56:13.059039 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bfa7ceda-a38c-4336-a1c9-e17aaeca5ff0-config-data\") pod \"nova-metadata-0\" (UID: \"bfa7ceda-a38c-4336-a1c9-e17aaeca5ff0\") " pod="openstack/nova-metadata-0"
Jan 27 14:56:13 crc kubenswrapper[4698]: I0127 14:56:13.059092 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/65866040-440d-4bca-91f4-944cfce917cb-config-data\") pod \"nova-api-0\" (UID: \"65866040-440d-4bca-91f4-944cfce917cb\") " pod="openstack/nova-api-0"
Jan 27 14:56:13 crc kubenswrapper[4698]: I0127 14:56:13.059189 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/65866040-440d-4bca-91f4-944cfce917cb-logs\") pod \"nova-api-0\" (UID: \"65866040-440d-4bca-91f4-944cfce917cb\") " pod="openstack/nova-api-0"
Jan 27 14:56:13 crc kubenswrapper[4698]: I0127 14:56:13.059255 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/65866040-440d-4bca-91f4-944cfce917cb-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"65866040-440d-4bca-91f4-944cfce917cb\") " pod="openstack/nova-api-0"
Jan 27 14:56:13 crc kubenswrapper[4698]: I0127 14:56:13.059289 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bfa7ceda-a38c-4336-a1c9-e17aaeca5ff0-logs\") pod \"nova-metadata-0\" (UID: \"bfa7ceda-a38c-4336-a1c9-e17aaeca5ff0\") " pod="openstack/nova-metadata-0"
Jan 27 14:56:13 crc kubenswrapper[4698]: I0127 14:56:13.059344 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jk7qn\" (UniqueName: \"kubernetes.io/projected/65866040-440d-4bca-91f4-944cfce917cb-kube-api-access-jk7qn\") pod \"nova-api-0\" (UID: \"65866040-440d-4bca-91f4-944cfce917cb\") " pod="openstack/nova-api-0"
Jan 27 14:56:13 crc kubenswrapper[4698]: I0127 14:56:13.059413 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bfa7ceda-a38c-4336-a1c9-e17aaeca5ff0-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"bfa7ceda-a38c-4336-a1c9-e17aaeca5ff0\") " pod="openstack/nova-metadata-0"
Jan 27 14:56:13 crc kubenswrapper[4698]: I0127 14:56:13.059489 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-87m6c\" (UniqueName: \"kubernetes.io/projected/bfa7ceda-a38c-4336-a1c9-e17aaeca5ff0-kube-api-access-87m6c\") pod \"nova-metadata-0\" (UID: \"bfa7ceda-a38c-4336-a1c9-e17aaeca5ff0\") " pod="openstack/nova-metadata-0"
Jan 27 14:56:13 crc kubenswrapper[4698]: I0127 14:56:13.113278 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Jan 27 14:56:13 crc kubenswrapper[4698]: I0127 14:56:13.482259 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bfa7ceda-a38c-4336-a1c9-e17aaeca5ff0-logs\") pod \"nova-metadata-0\" (UID: \"bfa7ceda-a38c-4336-a1c9-e17aaeca5ff0\") " pod="openstack/nova-metadata-0"
Jan 27 14:56:13 crc kubenswrapper[4698]: I0127 14:56:13.484511 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Jan 27 14:56:13 crc kubenswrapper[4698]: I0127 14:56:13.484447 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/65866040-440d-4bca-91f4-944cfce917cb-logs\") pod \"nova-api-0\" (UID: \"65866040-440d-4bca-91f4-944cfce917cb\") " pod="openstack/nova-api-0"
Jan 27 14:56:13 crc kubenswrapper[4698]: I0127 14:56:13.498538 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bfa7ceda-a38c-4336-a1c9-e17aaeca5ff0-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"bfa7ceda-a38c-4336-a1c9-e17aaeca5ff0\") " pod="openstack/nova-metadata-0"
Jan 27 14:56:13 crc kubenswrapper[4698]: I0127 14:56:13.498541 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Jan 27 14:56:13 crc kubenswrapper[4698]: I0127 14:56:13.498858 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0"
Jan 27 14:56:13 crc kubenswrapper[4698]: I0127 14:56:13.498955 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-86575cfcc5-vsn49"]
Jan 27 14:56:13 crc kubenswrapper[4698]: I0127 14:56:13.500680 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bfa7ceda-a38c-4336-a1c9-e17aaeca5ff0-config-data\") pod \"nova-metadata-0\" (UID: \"bfa7ceda-a38c-4336-a1c9-e17aaeca5ff0\") " pod="openstack/nova-metadata-0"
Jan 27 14:56:13 crc kubenswrapper[4698]: I0127 14:56:13.509136 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data"
Jan 27 14:56:13 crc kubenswrapper[4698]: I0127 14:56:13.511223 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-87m6c\" (UniqueName: \"kubernetes.io/projected/bfa7ceda-a38c-4336-a1c9-e17aaeca5ff0-kube-api-access-87m6c\") pod \"nova-metadata-0\" (UID: \"bfa7ceda-a38c-4336-a1c9-e17aaeca5ff0\") " pod="openstack/nova-metadata-0"
Jan 27 14:56:13 crc kubenswrapper[4698]: I0127 14:56:13.513383 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jk7qn\" (UniqueName: \"kubernetes.io/projected/65866040-440d-4bca-91f4-944cfce917cb-kube-api-access-jk7qn\") pod \"nova-api-0\" (UID: \"65866040-440d-4bca-91f4-944cfce917cb\") " pod="openstack/nova-api-0"
Jan 27 14:56:13 crc kubenswrapper[4698]: I0127 14:56:13.527006 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-86575cfcc5-vsn49"]
Jan 27 14:56:13 crc kubenswrapper[4698]: I0127 14:56:13.527135 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-86575cfcc5-vsn49"
Jan 27 14:56:13 crc kubenswrapper[4698]: I0127 14:56:13.535467 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/65866040-440d-4bca-91f4-944cfce917cb-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"65866040-440d-4bca-91f4-944cfce917cb\") " pod="openstack/nova-api-0"
Jan 27 14:56:13 crc kubenswrapper[4698]: I0127 14:56:13.547691 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/65866040-440d-4bca-91f4-944cfce917cb-config-data\") pod \"nova-api-0\" (UID: \"65866040-440d-4bca-91f4-944cfce917cb\") " pod="openstack/nova-api-0"
Jan 27 14:56:13 crc kubenswrapper[4698]: I0127 14:56:13.561363 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wn8j7\" (UniqueName: \"kubernetes.io/projected/f246979f-482d-4a82-877a-a813b522f3dd-kube-api-access-wn8j7\") pod \"nova-cell1-novncproxy-0\" (UID: \"f246979f-482d-4a82-877a-a813b522f3dd\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 27 14:56:13 crc kubenswrapper[4698]: I0127 14:56:13.561606 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f246979f-482d-4a82-877a-a813b522f3dd-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"f246979f-482d-4a82-877a-a813b522f3dd\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 27 14:56:13 crc kubenswrapper[4698]: I0127 14:56:13.572565 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f246979f-482d-4a82-877a-a813b522f3dd-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"f246979f-482d-4a82-877a-a813b522f3dd\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 27 14:56:13 crc kubenswrapper[4698]: I0127 14:56:13.650076 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-vgm7r"]
Jan 27 14:56:13 crc kubenswrapper[4698]: I0127 14:56:13.676845 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f246979f-482d-4a82-877a-a813b522f3dd-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"f246979f-482d-4a82-877a-a813b522f3dd\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 27 14:56:13 crc kubenswrapper[4698]: I0127 14:56:13.676942 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/adbb58db-7258-48ea-8409-384677c7c42e-ovsdbserver-sb\") pod \"dnsmasq-dns-86575cfcc5-vsn49\" (UID: \"adbb58db-7258-48ea-8409-384677c7c42e\") " pod="openstack/dnsmasq-dns-86575cfcc5-vsn49"
Jan 27 14:56:13 crc kubenswrapper[4698]: I0127 14:56:13.676992 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/adbb58db-7258-48ea-8409-384677c7c42e-dns-svc\") pod \"dnsmasq-dns-86575cfcc5-vsn49\" (UID: \"adbb58db-7258-48ea-8409-384677c7c42e\") " pod="openstack/dnsmasq-dns-86575cfcc5-vsn49"
Jan 27 14:56:13 crc kubenswrapper[4698]: I0127 14:56:13.677029 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/adbb58db-7258-48ea-8409-384677c7c42e-config\") pod \"dnsmasq-dns-86575cfcc5-vsn49\" (UID: \"adbb58db-7258-48ea-8409-384677c7c42e\") " pod="openstack/dnsmasq-dns-86575cfcc5-vsn49"
Jan 27 14:56:13 crc kubenswrapper[4698]: I0127 14:56:13.677140 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f246979f-482d-4a82-877a-a813b522f3dd-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"f246979f-482d-4a82-877a-a813b522f3dd\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 27 14:56:13 crc kubenswrapper[4698]: I0127 14:56:13.677259 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wn8j7\" (UniqueName: \"kubernetes.io/projected/f246979f-482d-4a82-877a-a813b522f3dd-kube-api-access-wn8j7\") pod \"nova-cell1-novncproxy-0\" (UID: \"f246979f-482d-4a82-877a-a813b522f3dd\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 27 14:56:13 crc kubenswrapper[4698]: I0127 14:56:13.677321 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/adbb58db-7258-48ea-8409-384677c7c42e-dns-swift-storage-0\") pod \"dnsmasq-dns-86575cfcc5-vsn49\" (UID: \"adbb58db-7258-48ea-8409-384677c7c42e\") " pod="openstack/dnsmasq-dns-86575cfcc5-vsn49"
Jan 27 14:56:13 crc kubenswrapper[4698]: I0127 14:56:13.677370 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/adbb58db-7258-48ea-8409-384677c7c42e-ovsdbserver-nb\") pod \"dnsmasq-dns-86575cfcc5-vsn49\" (UID: \"adbb58db-7258-48ea-8409-384677c7c42e\") " pod="openstack/dnsmasq-dns-86575cfcc5-vsn49"
Jan 27 14:56:13 crc kubenswrapper[4698]: I0127 14:56:13.677399 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xbgc5\" (UniqueName: \"kubernetes.io/projected/adbb58db-7258-48ea-8409-384677c7c42e-kube-api-access-xbgc5\") pod \"dnsmasq-dns-86575cfcc5-vsn49\" (UID: \"adbb58db-7258-48ea-8409-384677c7c42e\") " pod="openstack/dnsmasq-dns-86575cfcc5-vsn49"
Jan 27 14:56:13 crc kubenswrapper[4698]: I0127 14:56:13.683478 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f246979f-482d-4a82-877a-a813b522f3dd-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"f246979f-482d-4a82-877a-a813b522f3dd\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 27 14:56:13 crc kubenswrapper[4698]: I0127 14:56:13.686563 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f246979f-482d-4a82-877a-a813b522f3dd-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"f246979f-482d-4a82-877a-a813b522f3dd\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 27 14:56:13 crc kubenswrapper[4698]: I0127 14:56:13.701450 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wn8j7\" (UniqueName: \"kubernetes.io/projected/f246979f-482d-4a82-877a-a813b522f3dd-kube-api-access-wn8j7\") pod \"nova-cell1-novncproxy-0\" (UID: \"f246979f-482d-4a82-877a-a813b522f3dd\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 27 14:56:13 crc kubenswrapper[4698]: I0127 14:56:13.769255 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0"
Jan 27 14:56:13 crc kubenswrapper[4698]: I0127 14:56:13.779121 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/adbb58db-7258-48ea-8409-384677c7c42e-ovsdbserver-sb\") pod \"dnsmasq-dns-86575cfcc5-vsn49\" (UID: \"adbb58db-7258-48ea-8409-384677c7c42e\") " pod="openstack/dnsmasq-dns-86575cfcc5-vsn49"
Jan 27 14:56:13 crc kubenswrapper[4698]: I0127 14:56:13.779178 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/adbb58db-7258-48ea-8409-384677c7c42e-dns-svc\") pod \"dnsmasq-dns-86575cfcc5-vsn49\" (UID: \"adbb58db-7258-48ea-8409-384677c7c42e\") " pod="openstack/dnsmasq-dns-86575cfcc5-vsn49"
Jan 27 14:56:13 crc kubenswrapper[4698]: I0127 14:56:13.779206 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/adbb58db-7258-48ea-8409-384677c7c42e-config\") pod \"dnsmasq-dns-86575cfcc5-vsn49\" (UID: \"adbb58db-7258-48ea-8409-384677c7c42e\") " pod="openstack/dnsmasq-dns-86575cfcc5-vsn49"
Jan 27 14:56:13 crc kubenswrapper[4698]: I0127 14:56:13.779342 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/adbb58db-7258-48ea-8409-384677c7c42e-dns-swift-storage-0\") pod \"dnsmasq-dns-86575cfcc5-vsn49\" (UID: \"adbb58db-7258-48ea-8409-384677c7c42e\") " pod="openstack/dnsmasq-dns-86575cfcc5-vsn49"
Jan 27 14:56:13 crc kubenswrapper[4698]: I0127 14:56:13.779386 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/adbb58db-7258-48ea-8409-384677c7c42e-ovsdbserver-nb\") pod \"dnsmasq-dns-86575cfcc5-vsn49\" (UID: \"adbb58db-7258-48ea-8409-384677c7c42e\") " pod="openstack/dnsmasq-dns-86575cfcc5-vsn49"
Jan 27 14:56:13 crc kubenswrapper[4698]: I0127 14:56:13.779411 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xbgc5\" (UniqueName: \"kubernetes.io/projected/adbb58db-7258-48ea-8409-384677c7c42e-kube-api-access-xbgc5\") pod \"dnsmasq-dns-86575cfcc5-vsn49\" (UID: \"adbb58db-7258-48ea-8409-384677c7c42e\") " pod="openstack/dnsmasq-dns-86575cfcc5-vsn49"
Jan 27 14:56:13 crc kubenswrapper[4698]: I0127 14:56:13.780530 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/adbb58db-7258-48ea-8409-384677c7c42e-ovsdbserver-sb\") pod \"dnsmasq-dns-86575cfcc5-vsn49\" (UID: \"adbb58db-7258-48ea-8409-384677c7c42e\") " pod="openstack/dnsmasq-dns-86575cfcc5-vsn49"
Jan 27 14:56:13 crc kubenswrapper[4698]: I0127 14:56:13.781090 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/adbb58db-7258-48ea-8409-384677c7c42e-dns-svc\") pod \"dnsmasq-dns-86575cfcc5-vsn49\" (UID: \"adbb58db-7258-48ea-8409-384677c7c42e\") " pod="openstack/dnsmasq-dns-86575cfcc5-vsn49"
Jan 27 14:56:13 crc kubenswrapper[4698]: I0127 14:56:13.781888 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/adbb58db-7258-48ea-8409-384677c7c42e-dns-swift-storage-0\") pod \"dnsmasq-dns-86575cfcc5-vsn49\" (UID: \"adbb58db-7258-48ea-8409-384677c7c42e\") " pod="openstack/dnsmasq-dns-86575cfcc5-vsn49"
Jan 27 14:56:13 crc kubenswrapper[4698]: I0127 14:56:13.782896 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/adbb58db-7258-48ea-8409-384677c7c42e-config\") pod \"dnsmasq-dns-86575cfcc5-vsn49\" (UID: \"adbb58db-7258-48ea-8409-384677c7c42e\") " pod="openstack/dnsmasq-dns-86575cfcc5-vsn49"
Jan 27 14:56:13 crc kubenswrapper[4698]: I0127 14:56:13.783220 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/adbb58db-7258-48ea-8409-384677c7c42e-ovsdbserver-nb\") pod \"dnsmasq-dns-86575cfcc5-vsn49\" (UID: \"adbb58db-7258-48ea-8409-384677c7c42e\") " pod="openstack/dnsmasq-dns-86575cfcc5-vsn49"
Jan 27 14:56:13 crc kubenswrapper[4698]: I0127 14:56:13.794348 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Jan 27 14:56:13 crc kubenswrapper[4698]: I0127 14:56:13.798444 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Jan 27 14:56:13 crc kubenswrapper[4698]: I0127 14:56:13.802572 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xbgc5\" (UniqueName: \"kubernetes.io/projected/adbb58db-7258-48ea-8409-384677c7c42e-kube-api-access-xbgc5\") pod \"dnsmasq-dns-86575cfcc5-vsn49\" (UID: \"adbb58db-7258-48ea-8409-384677c7c42e\") " pod="openstack/dnsmasq-dns-86575cfcc5-vsn49"
Jan 27 14:56:13 crc kubenswrapper[4698]: I0127 14:56:13.863085 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-db-sync-85rx4"]
Jan 27 14:56:13 crc kubenswrapper[4698]: I0127 14:56:13.864811 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-85rx4"
Jan 27 14:56:13 crc kubenswrapper[4698]: I0127 14:56:13.870827 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data"
Jan 27 14:56:13 crc kubenswrapper[4698]: I0127 14:56:13.870898 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-scripts"
Jan 27 14:56:13 crc kubenswrapper[4698]: I0127 14:56:13.879911 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-85rx4"]
Jan 27 14:56:13 crc kubenswrapper[4698]: I0127 14:56:13.987979 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/24589df3-de69-4037-a263-2c08e46fc8ce-scripts\") pod \"nova-cell1-conductor-db-sync-85rx4\" (UID: \"24589df3-de69-4037-a263-2c08e46fc8ce\") " pod="openstack/nova-cell1-conductor-db-sync-85rx4"
Jan 27 14:56:13 crc kubenswrapper[4698]: I0127 14:56:13.988086 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4dqd4\" (UniqueName: \"kubernetes.io/projected/24589df3-de69-4037-a263-2c08e46fc8ce-kube-api-access-4dqd4\") pod \"nova-cell1-conductor-db-sync-85rx4\" (UID: \"24589df3-de69-4037-a263-2c08e46fc8ce\") " pod="openstack/nova-cell1-conductor-db-sync-85rx4"
Jan 27 14:56:13 crc kubenswrapper[4698]: I0127 14:56:13.988136 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/24589df3-de69-4037-a263-2c08e46fc8ce-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-85rx4\" (UID: \"24589df3-de69-4037-a263-2c08e46fc8ce\") " pod="openstack/nova-cell1-conductor-db-sync-85rx4"
Jan 27 14:56:13 crc kubenswrapper[4698]: I0127 14:56:13.988183 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/24589df3-de69-4037-a263-2c08e46fc8ce-config-data\") pod \"nova-cell1-conductor-db-sync-85rx4\" (UID: \"24589df3-de69-4037-a263-2c08e46fc8ce\") " pod="openstack/nova-cell1-conductor-db-sync-85rx4"
Jan 27 14:56:14 crc kubenswrapper[4698]: I0127 14:56:14.078579 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-86575cfcc5-vsn49"
Jan 27 14:56:14 crc kubenswrapper[4698]: I0127 14:56:14.090906 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/24589df3-de69-4037-a263-2c08e46fc8ce-scripts\") pod \"nova-cell1-conductor-db-sync-85rx4\" (UID: \"24589df3-de69-4037-a263-2c08e46fc8ce\") " pod="openstack/nova-cell1-conductor-db-sync-85rx4"
Jan 27 14:56:14 crc kubenswrapper[4698]: I0127 14:56:14.091032 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4dqd4\" (UniqueName: \"kubernetes.io/projected/24589df3-de69-4037-a263-2c08e46fc8ce-kube-api-access-4dqd4\") pod \"nova-cell1-conductor-db-sync-85rx4\" (UID: \"24589df3-de69-4037-a263-2c08e46fc8ce\") " pod="openstack/nova-cell1-conductor-db-sync-85rx4"
Jan 27 14:56:14 crc kubenswrapper[4698]: I0127 14:56:14.091081 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/24589df3-de69-4037-a263-2c08e46fc8ce-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-85rx4\" (UID: \"24589df3-de69-4037-a263-2c08e46fc8ce\") " pod="openstack/nova-cell1-conductor-db-sync-85rx4"
Jan 27 14:56:14 crc kubenswrapper[4698]: I0127 14:56:14.091129 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/24589df3-de69-4037-a263-2c08e46fc8ce-config-data\") pod \"nova-cell1-conductor-db-sync-85rx4\" (UID: \"24589df3-de69-4037-a263-2c08e46fc8ce\") " pod="openstack/nova-cell1-conductor-db-sync-85rx4"
Jan 27 14:56:14 crc kubenswrapper[4698]: I0127 14:56:14.099751 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/24589df3-de69-4037-a263-2c08e46fc8ce-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-85rx4\" (UID: \"24589df3-de69-4037-a263-2c08e46fc8ce\") " pod="openstack/nova-cell1-conductor-db-sync-85rx4"
Jan 27 14:56:14 crc kubenswrapper[4698]: I0127 14:56:14.101166 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/24589df3-de69-4037-a263-2c08e46fc8ce-config-data\") pod \"nova-cell1-conductor-db-sync-85rx4\" (UID: \"24589df3-de69-4037-a263-2c08e46fc8ce\") " pod="openstack/nova-cell1-conductor-db-sync-85rx4"
Jan 27 14:56:14 crc kubenswrapper[4698]: I0127 14:56:14.121486 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/24589df3-de69-4037-a263-2c08e46fc8ce-scripts\") pod \"nova-cell1-conductor-db-sync-85rx4\" (UID: \"24589df3-de69-4037-a263-2c08e46fc8ce\") " pod="openstack/nova-cell1-conductor-db-sync-85rx4"
Jan 27 14:56:14 crc kubenswrapper[4698]: I0127 14:56:14.131344 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4dqd4\" (UniqueName: \"kubernetes.io/projected/24589df3-de69-4037-a263-2c08e46fc8ce-kube-api-access-4dqd4\") pod \"nova-cell1-conductor-db-sync-85rx4\" (UID: \"24589df3-de69-4037-a263-2c08e46fc8ce\") " pod="openstack/nova-cell1-conductor-db-sync-85rx4"
Jan 27 14:56:14 crc kubenswrapper[4698]: I0127 14:56:14.198183 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-85rx4"
Jan 27 14:56:14 crc kubenswrapper[4698]: I0127 14:56:14.293948 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"]
Jan 27 14:56:14 crc kubenswrapper[4698]: I0127 14:56:14.469933 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Jan 27 14:56:14 crc kubenswrapper[4698]: I0127 14:56:14.629995 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"]
Jan 27 14:56:14 crc kubenswrapper[4698]: W0127 14:56:14.642655 4698 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod65866040_440d_4bca_91f4_944cfce917cb.slice/crio-77af34cdd5f441b4ab929c53e6a46ed76206cf99a497d1ed8b99be2aef16d234 WatchSource:0}: Error finding container 77af34cdd5f441b4ab929c53e6a46ed76206cf99a497d1ed8b99be2aef16d234: Status 404 returned error can't find the container with id 77af34cdd5f441b4ab929c53e6a46ed76206cf99a497d1ed8b99be2aef16d234
Jan 27 14:56:14 crc kubenswrapper[4698]: I0127 14:56:14.724690 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"d1bd366a-868e-4d55-8e1c-b7bbdeecf1c2","Type":"ContainerStarted","Data":"c980d4bb1f26cd01816e78bb0c4aa985519248359daa636536e25bd9ef508885"}
Jan 27 14:56:14 crc kubenswrapper[4698]: I0127 14:56:14.725846 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"65866040-440d-4bca-91f4-944cfce917cb","Type":"ContainerStarted","Data":"77af34cdd5f441b4ab929c53e6a46ed76206cf99a497d1ed8b99be2aef16d234"}
Jan 27 14:56:14 crc kubenswrapper[4698]: I0127 14:56:14.727384 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-vgm7r" event={"ID":"c865f75e-a196-4b4c-ba96-383654e3c295","Type":"ContainerStarted","Data":"746bd03d270eda154fe1289f278408ec545c697eed0de8744ca89b34b7eee904"}
Jan 27 14:56:14 crc kubenswrapper[4698]: I0127 14:56:14.727409 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-vgm7r" event={"ID":"c865f75e-a196-4b4c-ba96-383654e3c295","Type":"ContainerStarted","Data":"3fde055dd80b76215893fbe688ed0dbe7e8975a9c28c0b7a1b5f8fc965e7f7d5"}
Jan 27 14:56:14 crc kubenswrapper[4698]: I0127 14:56:14.729907 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"f246979f-482d-4a82-877a-a813b522f3dd","Type":"ContainerStarted","Data":"337b57ac69112a3564f45e90f4836e1d66d77ea07a66629c237f55ed09d53ade"}
Jan 27 14:56:14 crc kubenswrapper[4698]: I0127 14:56:14.763828 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-cell-mapping-vgm7r" podStartSLOduration=2.763804575 podStartE2EDuration="2.763804575s" podCreationTimestamp="2026-01-27 14:56:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:56:14.75295075 +0000 UTC m=+1630.429728225" watchObservedRunningTime="2026-01-27 14:56:14.763804575 +0000 UTC m=+1630.440582030"
Jan 27 14:56:14 crc kubenswrapper[4698]: I0127 14:56:14.788954 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-86575cfcc5-vsn49"]
Jan 27 14:56:14 crc kubenswrapper[4698]: I0127 14:56:14.801957 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"]
Jan 27 14:56:14 crc kubenswrapper[4698]: I0127 14:56:14.933037 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-85rx4"]
Jan 27 14:56:15 crc kubenswrapper[4698]: I0127 14:56:15.758826 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"bfa7ceda-a38c-4336-a1c9-e17aaeca5ff0","Type":"ContainerStarted","Data":"cb23e5ab69bf6eeb828a482fadd4460c0dcc54717d2d4c2733de37f82c5e164d"}
Jan 27 14:56:15 crc kubenswrapper[4698]: I0127 14:56:15.764899 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xls4d" event={"ID":"187d8b4f-4757-45bf-a23b-b35702a18f93","Type":"ContainerStarted","Data":"13c3fd2a6740f6155f3816ba82eb087ee30b7f0de6bc34a85ce91b282340c826"}
Jan 27 14:56:15 crc kubenswrapper[4698]: I0127 14:56:15.772500 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-85rx4" event={"ID":"24589df3-de69-4037-a263-2c08e46fc8ce","Type":"ContainerStarted","Data":"b279f31de2d88d810b9d3b00bccd2c9b249ab8c4f36e1205b3db42a12dec02ee"}
Jan 27 14:56:15 crc kubenswrapper[4698]: I0127 14:56:15.772572 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-85rx4" event={"ID":"24589df3-de69-4037-a263-2c08e46fc8ce","Type":"ContainerStarted","Data":"c87d428bb815d9ac1130a7b63ca4955332e450ee04ba419788ccbb4bb2c2c1ce"}
Jan 27 14:56:15 crc kubenswrapper[4698]: I0127 14:56:15.775144 4698 generic.go:334] "Generic (PLEG): container finished" podID="adbb58db-7258-48ea-8409-384677c7c42e" containerID="46a275f11f1cbff8c0f7c45763aa01c1d9ace6a39108bbebfecd49efd89e1763" exitCode=0
Jan 27 14:56:15 crc kubenswrapper[4698]: I0127 14:56:15.775532 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86575cfcc5-vsn49" event={"ID":"adbb58db-7258-48ea-8409-384677c7c42e","Type":"ContainerDied","Data":"46a275f11f1cbff8c0f7c45763aa01c1d9ace6a39108bbebfecd49efd89e1763"}
Jan 27 14:56:15 crc kubenswrapper[4698]: I0127 14:56:15.775572 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86575cfcc5-vsn49" event={"ID":"adbb58db-7258-48ea-8409-384677c7c42e","Type":"ContainerStarted","Data":"a568372404f9d6390d39b25e14b0b97d824a39e4c9dd1c9e63f295860de22397"}
Jan 27 14:56:15 crc kubenswrapper[4698]: I0127 14:56:15.834296 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-db-sync-85rx4" podStartSLOduration=2.8342622459999998 podStartE2EDuration="2.834262246s" podCreationTimestamp="2026-01-27 14:56:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:56:15.824212212 +0000 UTC m=+1631.500989687" watchObservedRunningTime="2026-01-27 14:56:15.834262246 +0000 UTC m=+1631.511039711"
Jan 27 14:56:15 crc kubenswrapper[4698]: I0127 14:56:15.940006 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-7zvwn"
Jan 27 14:56:15 crc kubenswrapper[4698]: I0127 14:56:15.940129 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-7zvwn"
Jan 27 14:56:16 crc kubenswrapper[4698]: I0127 14:56:16.001150 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-7zvwn"
Jan 27 14:56:16 crc kubenswrapper[4698]: I0127 14:56:16.576012 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"]
Jan 27 14:56:16 crc kubenswrapper[4698]: I0127 14:56:16.595821 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Jan 27 14:56:16 crc kubenswrapper[4698]: I0127 14:56:16.841342 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-7zvwn"
Jan 27 14:56:17 crc kubenswrapper[4698]: I0127 14:56:17.018135 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-7zvwn"]
Jan 27 14:56:17 crc kubenswrapper[4698]: I0127 14:56:17.807888 4698 generic.go:334] "Generic (PLEG): container finished" podID="187d8b4f-4757-45bf-a23b-b35702a18f93" containerID="13c3fd2a6740f6155f3816ba82eb087ee30b7f0de6bc34a85ce91b282340c826" exitCode=0
Jan 27 14:56:17 crc kubenswrapper[4698]: I0127 14:56:17.808401 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xls4d" event={"ID":"187d8b4f-4757-45bf-a23b-b35702a18f93","Type":"ContainerDied","Data":"13c3fd2a6740f6155f3816ba82eb087ee30b7f0de6bc34a85ce91b282340c826"}
Jan 27 14:56:18 crc kubenswrapper[4698]: I0127 14:56:18.822522 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86575cfcc5-vsn49" event={"ID":"adbb58db-7258-48ea-8409-384677c7c42e","Type":"ContainerStarted","Data":"86f8e7fd22bc0a0612a07b967de90fd27a6e9a41d24abe9560bffc163595836f"}
Jan 27 14:56:18 crc kubenswrapper[4698]: I0127 14:56:18.822723 4698 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-7zvwn" podUID="8369306e-5bbe-4677-b605-977728860668" containerName="registry-server" containerID="cri-o://705600d9a11a28b138604620bdbf08cf716ef256f5519b1158b65796bdc7b095" gracePeriod=2
Jan 27 14:56:19 crc kubenswrapper[4698]: I0127 14:56:19.833002 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-86575cfcc5-vsn49"
Jan 27 14:56:19 crc kubenswrapper[4698]: I0127 14:56:19.861547 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-86575cfcc5-vsn49" podStartSLOduration=6.861520382 podStartE2EDuration="6.861520382s" podCreationTimestamp="2026-01-27 14:56:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:56:19.852098784 +0000 UTC m=+1635.528876249" watchObservedRunningTime="2026-01-27 14:56:19.861520382 +0000 UTC m=+1635.538297847"
Jan 27 14:56:20 crc kubenswrapper[4698]: I0127 14:56:20.845594 4698 generic.go:334] "Generic (PLEG): container finished" podID="8369306e-5bbe-4677-b605-977728860668" containerID="705600d9a11a28b138604620bdbf08cf716ef256f5519b1158b65796bdc7b095" exitCode=0
Jan 27 14:56:20 crc kubenswrapper[4698]: I0127 14:56:20.845683 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7zvwn" event={"ID":"8369306e-5bbe-4677-b605-977728860668","Type":"ContainerDied","Data":"705600d9a11a28b138604620bdbf08cf716ef256f5519b1158b65796bdc7b095"}
Jan 27 14:56:20 crc kubenswrapper[4698]: I0127 14:56:20.992730 4698 scope.go:117] "RemoveContainer" containerID="b5e738b87b0ce5279fe16e0f062d8436b14ec328939703048a41953bc1584a16"
Jan 27 14:56:20 crc kubenswrapper[4698]: E0127 14:56:20.993084 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ndrd6_openshift-machine-config-operator(3e403fc5-7005-474c-8c75-b7906b481677)\"" pod="openshift-machine-config-operator/machine-config-daemon-ndrd6" podUID="3e403fc5-7005-474c-8c75-b7906b481677"
Jan 27 14:56:24 crc kubenswrapper[4698]: I0127 14:56:24.082056 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-86575cfcc5-vsn49"
Jan 27 14:56:24 crc kubenswrapper[4698]: I0127 14:56:24.156379 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-579477885f-87tbw"]
Jan 27 14:56:24 crc kubenswrapper[4698]: I0127 14:56:24.156629 4698 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-579477885f-87tbw" podUID="5d434aa2-b7eb-424d-930a-25be01006019" containerName="dnsmasq-dns" containerID="cri-o://750fcb73cb35140556f9fb8158b8fdf9210bf36bc6358e595baad4cc4d8a6683" gracePeriod=10
Jan 27 14:56:24 crc kubenswrapper[4698]: I0127 14:56:24.748617 4698 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-579477885f-87tbw" podUID="5d434aa2-b7eb-424d-930a-25be01006019" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.201:5353: connect: connection refused"
Jan 27 14:56:24 crc kubenswrapper[4698]: I0127 14:56:24.885964 4698 generic.go:334] "Generic (PLEG): container finished" podID="5d434aa2-b7eb-424d-930a-25be01006019" containerID="750fcb73cb35140556f9fb8158b8fdf9210bf36bc6358e595baad4cc4d8a6683" exitCode=0
Jan 27 14:56:24 crc kubenswrapper[4698]: I0127 14:56:24.886014 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-579477885f-87tbw" event={"ID":"5d434aa2-b7eb-424d-930a-25be01006019","Type":"ContainerDied","Data":"750fcb73cb35140556f9fb8158b8fdf9210bf36bc6358e595baad4cc4d8a6683"}
Jan 27 14:56:25 crc kubenswrapper[4698]: E0127 14:56:25.530073 4698 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.111:5001/podified-master-centos10/openstack-nova-novncproxy:watcher_latest"
Jan 27 14:56:25 crc kubenswrapper[4698]: E0127 14:56:25.530134 4698 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.111:5001/podified-master-centos10/openstack-nova-novncproxy:watcher_latest"
Jan 27 14:56:25 crc kubenswrapper[4698]: E0127 14:56:25.530287 4698 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:nova-cell1-novncproxy-novncproxy,Image:38.102.83.111:5001/podified-master-centos10/openstack-nova-novncproxy:watcher_latest,Command:[/bin/bash],Args:[-c
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n5d5h596h68dh649h57hcfhbh64ch5f4hdbhd9h68dh668h559h687h8bh58ch655h86h559hfdh5d6hf5h5cdh656h9fhd6h5dbh577h5f8hc4hdbq,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:false,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/var/lib/kolla/config_files/config.json,SubPath:nova-novncproxy-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wn8j7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/vnc_lite.html,Port:{0 6080 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/vnc_lite.html,Port:{0 6080 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42436,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/vnc_lite.html,Port:{0 6080 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod nova-cell1-novncproxy-0_openstack(f246979f-482d-4a82-877a-a813b522f3dd): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 27 14:56:25 crc kubenswrapper[4698]: E0127 14:56:25.531464 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nova-cell1-novncproxy-novncproxy\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/nova-cell1-novncproxy-0" podUID="f246979f-482d-4a82-877a-a813b522f3dd" Jan 27 14:56:25 crc kubenswrapper[4698]: E0127 14:56:25.942259 4698 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 705600d9a11a28b138604620bdbf08cf716ef256f5519b1158b65796bdc7b095 is running failed: container process 
not found" containerID="705600d9a11a28b138604620bdbf08cf716ef256f5519b1158b65796bdc7b095" cmd=["grpc_health_probe","-addr=:50051"] Jan 27 14:56:25 crc kubenswrapper[4698]: E0127 14:56:25.942974 4698 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 705600d9a11a28b138604620bdbf08cf716ef256f5519b1158b65796bdc7b095 is running failed: container process not found" containerID="705600d9a11a28b138604620bdbf08cf716ef256f5519b1158b65796bdc7b095" cmd=["grpc_health_probe","-addr=:50051"] Jan 27 14:56:25 crc kubenswrapper[4698]: E0127 14:56:25.943577 4698 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 705600d9a11a28b138604620bdbf08cf716ef256f5519b1158b65796bdc7b095 is running failed: container process not found" containerID="705600d9a11a28b138604620bdbf08cf716ef256f5519b1158b65796bdc7b095" cmd=["grpc_health_probe","-addr=:50051"] Jan 27 14:56:25 crc kubenswrapper[4698]: E0127 14:56:25.943677 4698 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 705600d9a11a28b138604620bdbf08cf716ef256f5519b1158b65796bdc7b095 is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/certified-operators-7zvwn" podUID="8369306e-5bbe-4677-b605-977728860668" containerName="registry-server" Jan 27 14:56:26 crc kubenswrapper[4698]: I0127 14:56:26.810041 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 27 14:56:26 crc kubenswrapper[4698]: I0127 14:56:26.910993 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"f246979f-482d-4a82-877a-a813b522f3dd","Type":"ContainerDied","Data":"337b57ac69112a3564f45e90f4836e1d66d77ea07a66629c237f55ed09d53ade"} Jan 27 14:56:26 crc kubenswrapper[4698]: I0127 14:56:26.911088 4698 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 27 14:56:26 crc kubenswrapper[4698]: I0127 14:56:26.912392 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wn8j7\" (UniqueName: \"kubernetes.io/projected/f246979f-482d-4a82-877a-a813b522f3dd-kube-api-access-wn8j7\") pod \"f246979f-482d-4a82-877a-a813b522f3dd\" (UID: \"f246979f-482d-4a82-877a-a813b522f3dd\") " Jan 27 14:56:26 crc kubenswrapper[4698]: I0127 14:56:26.912563 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f246979f-482d-4a82-877a-a813b522f3dd-config-data\") pod \"f246979f-482d-4a82-877a-a813b522f3dd\" (UID: \"f246979f-482d-4a82-877a-a813b522f3dd\") " Jan 27 14:56:26 crc kubenswrapper[4698]: I0127 14:56:26.912976 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f246979f-482d-4a82-877a-a813b522f3dd-combined-ca-bundle\") pod \"f246979f-482d-4a82-877a-a813b522f3dd\" (UID: \"f246979f-482d-4a82-877a-a813b522f3dd\") " Jan 27 14:56:26 crc kubenswrapper[4698]: I0127 14:56:26.921816 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f246979f-482d-4a82-877a-a813b522f3dd-kube-api-access-wn8j7" (OuterVolumeSpecName: "kube-api-access-wn8j7") pod "f246979f-482d-4a82-877a-a813b522f3dd" (UID: "f246979f-482d-4a82-877a-a813b522f3dd"). InnerVolumeSpecName "kube-api-access-wn8j7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:56:26 crc kubenswrapper[4698]: I0127 14:56:26.922928 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f246979f-482d-4a82-877a-a813b522f3dd-config-data" (OuterVolumeSpecName: "config-data") pod "f246979f-482d-4a82-877a-a813b522f3dd" (UID: "f246979f-482d-4a82-877a-a813b522f3dd"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:56:26 crc kubenswrapper[4698]: I0127 14:56:26.926907 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f246979f-482d-4a82-877a-a813b522f3dd-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f246979f-482d-4a82-877a-a813b522f3dd" (UID: "f246979f-482d-4a82-877a-a813b522f3dd"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:56:27 crc kubenswrapper[4698]: I0127 14:56:27.016037 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wn8j7\" (UniqueName: \"kubernetes.io/projected/f246979f-482d-4a82-877a-a813b522f3dd-kube-api-access-wn8j7\") on node \"crc\" DevicePath \"\"" Jan 27 14:56:27 crc kubenswrapper[4698]: I0127 14:56:27.016339 4698 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f246979f-482d-4a82-877a-a813b522f3dd-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 14:56:27 crc kubenswrapper[4698]: I0127 14:56:27.016355 4698 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f246979f-482d-4a82-877a-a813b522f3dd-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 14:56:27 crc kubenswrapper[4698]: I0127 14:56:27.274914 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 27 14:56:27 crc kubenswrapper[4698]: I0127 14:56:27.287560 4698 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 27 14:56:27 crc kubenswrapper[4698]: I0127 14:56:27.313367 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 27 14:56:27 crc kubenswrapper[4698]: I0127 14:56:27.315091 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 27 14:56:27 crc kubenswrapper[4698]: I0127 14:56:27.318558 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Jan 27 14:56:27 crc kubenswrapper[4698]: I0127 14:56:27.319749 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-vencrypt" Jan 27 14:56:27 crc kubenswrapper[4698]: I0127 14:56:27.325248 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-public-svc" Jan 27 14:56:27 crc kubenswrapper[4698]: I0127 14:56:27.342842 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 27 14:56:27 crc kubenswrapper[4698]: I0127 14:56:27.427318 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5b474dc0-6d40-42e8-9821-da0aa930095e-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"5b474dc0-6d40-42e8-9821-da0aa930095e\") " pod="openstack/nova-cell1-novncproxy-0" Jan 27 14:56:27 crc kubenswrapper[4698]: I0127 14:56:27.427459 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/5b474dc0-6d40-42e8-9821-da0aa930095e-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"5b474dc0-6d40-42e8-9821-da0aa930095e\") " pod="openstack/nova-cell1-novncproxy-0" Jan 27 14:56:27 crc kubenswrapper[4698]: I0127 14:56:27.427510 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5b474dc0-6d40-42e8-9821-da0aa930095e-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"5b474dc0-6d40-42e8-9821-da0aa930095e\") " pod="openstack/nova-cell1-novncproxy-0" Jan 27 14:56:27 crc kubenswrapper[4698]: I0127 14:56:27.427660 4698 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p6zw2\" (UniqueName: \"kubernetes.io/projected/5b474dc0-6d40-42e8-9821-da0aa930095e-kube-api-access-p6zw2\") pod \"nova-cell1-novncproxy-0\" (UID: \"5b474dc0-6d40-42e8-9821-da0aa930095e\") " pod="openstack/nova-cell1-novncproxy-0" Jan 27 14:56:27 crc kubenswrapper[4698]: I0127 14:56:27.427727 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/5b474dc0-6d40-42e8-9821-da0aa930095e-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"5b474dc0-6d40-42e8-9821-da0aa930095e\") " pod="openstack/nova-cell1-novncproxy-0" Jan 27 14:56:27 crc kubenswrapper[4698]: I0127 14:56:27.530934 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5b474dc0-6d40-42e8-9821-da0aa930095e-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"5b474dc0-6d40-42e8-9821-da0aa930095e\") " pod="openstack/nova-cell1-novncproxy-0" Jan 27 14:56:27 crc kubenswrapper[4698]: I0127 14:56:27.531171 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p6zw2\" (UniqueName: \"kubernetes.io/projected/5b474dc0-6d40-42e8-9821-da0aa930095e-kube-api-access-p6zw2\") pod \"nova-cell1-novncproxy-0\" (UID: \"5b474dc0-6d40-42e8-9821-da0aa930095e\") " pod="openstack/nova-cell1-novncproxy-0" Jan 27 14:56:27 crc kubenswrapper[4698]: I0127 14:56:27.531262 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/5b474dc0-6d40-42e8-9821-da0aa930095e-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"5b474dc0-6d40-42e8-9821-da0aa930095e\") " pod="openstack/nova-cell1-novncproxy-0" Jan 27 14:56:27 crc kubenswrapper[4698]: I0127 14:56:27.531297 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5b474dc0-6d40-42e8-9821-da0aa930095e-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"5b474dc0-6d40-42e8-9821-da0aa930095e\") " pod="openstack/nova-cell1-novncproxy-0" Jan 27 14:56:27 crc kubenswrapper[4698]: I0127 14:56:27.531448 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/5b474dc0-6d40-42e8-9821-da0aa930095e-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"5b474dc0-6d40-42e8-9821-da0aa930095e\") " pod="openstack/nova-cell1-novncproxy-0" Jan 27 14:56:27 crc kubenswrapper[4698]: I0127 14:56:27.541437 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5b474dc0-6d40-42e8-9821-da0aa930095e-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"5b474dc0-6d40-42e8-9821-da0aa930095e\") " pod="openstack/nova-cell1-novncproxy-0" Jan 27 14:56:27 crc kubenswrapper[4698]: I0127 14:56:27.559812 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/5b474dc0-6d40-42e8-9821-da0aa930095e-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"5b474dc0-6d40-42e8-9821-da0aa930095e\") " pod="openstack/nova-cell1-novncproxy-0" Jan 27 14:56:27 crc kubenswrapper[4698]: I0127 14:56:27.574999 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-novncproxy-tls-certs\" 
(UniqueName: \"kubernetes.io/secret/5b474dc0-6d40-42e8-9821-da0aa930095e-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"5b474dc0-6d40-42e8-9821-da0aa930095e\") " pod="openstack/nova-cell1-novncproxy-0" Jan 27 14:56:27 crc kubenswrapper[4698]: I0127 14:56:27.580132 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5b474dc0-6d40-42e8-9821-da0aa930095e-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"5b474dc0-6d40-42e8-9821-da0aa930095e\") " pod="openstack/nova-cell1-novncproxy-0" Jan 27 14:56:27 crc kubenswrapper[4698]: I0127 14:56:27.582349 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p6zw2\" (UniqueName: \"kubernetes.io/projected/5b474dc0-6d40-42e8-9821-da0aa930095e-kube-api-access-p6zw2\") pod \"nova-cell1-novncproxy-0\" (UID: \"5b474dc0-6d40-42e8-9821-da0aa930095e\") " pod="openstack/nova-cell1-novncproxy-0" Jan 27 14:56:27 crc kubenswrapper[4698]: I0127 14:56:27.636593 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 27 14:56:28 crc kubenswrapper[4698]: E0127 14:56:28.044887 4698 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.111:5001/podified-master-centos10/openstack-nova-api:watcher_latest" Jan 27 14:56:28 crc kubenswrapper[4698]: E0127 14:56:28.044960 4698 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.111:5001/podified-master-centos10/openstack-nova-api:watcher_latest" Jan 27 14:56:28 crc kubenswrapper[4698]: E0127 14:56:28.045127 4698 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:nova-metadata-log,Image:38.102.83.111:5001/podified-master-centos10/openstack-nova-api:watcher_latest,Command:[/usr/bin/dumb-init],Args:[--single-child -- /bin/sh -c /usr/bin/tail -n+1 -F /var/log/nova/nova-metadata.log 2>/dev/null],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n646hd6h5dch657h89h659h5f8h5d9h5d6h55fh59bh5b4h695h66bhfdh5d7h59fh564h55bh68dh558h5ch556hcch586h8fh588hdh545h659hbbh96q,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/nova,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-87m6c,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/,Port:{0 8775 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/,Port:{0 8775 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42436,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/,Port:{0 8775 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod nova-metadata-0_openstack(bfa7ceda-a38c-4336-a1c9-e17aaeca5ff0): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 27 14:56:28 crc kubenswrapper[4698]: E0127 14:56:28.048843 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"nova-metadata-log\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"nova-metadata-metadata\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.111:5001/podified-master-centos10/openstack-nova-api:watcher_latest\\\"\"]" pod="openstack/nova-metadata-0" podUID="bfa7ceda-a38c-4336-a1c9-e17aaeca5ff0" Jan 27 14:56:28 crc kubenswrapper[4698]: E0127 14:56:28.079586 4698 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.111:5001/podified-master-centos10/openstack-nova-scheduler:watcher_latest" Jan 27 14:56:28 crc kubenswrapper[4698]: E0127 14:56:28.079695 4698 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.111:5001/podified-master-centos10/openstack-nova-scheduler:watcher_latest" Jan 27 14:56:28 crc kubenswrapper[4698]: E0127 14:56:28.079889 4698 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:nova-scheduler-scheduler,Image:38.102.83.111:5001/podified-master-centos10/openstack-nova-scheduler:watcher_latest,Command:[/bin/bash],Args:[-c 
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n9fh95h59ch657h5bfh59fh669h5d6h554h549h598h5d6h5b8h56ch66h65hch66bh9bh54h5ffh59dh5dhbh5f7h8h554hfdhcbh676h668h58cq,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:false,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/var/lib/kolla/config_files/config.json,SubPath:nova-scheduler-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-s4zdv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/pgrep -r DRST nova-scheduler],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/pgrep -r DRST nova-scheduler],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42436,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/pgrep -r DRST nova-scheduler],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod nova-scheduler-0_openstack(d1bd366a-868e-4d55-8e1c-b7bbdeecf1c2): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 27 14:56:28 crc kubenswrapper[4698]: E0127 14:56:28.081145 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nova-scheduler-scheduler\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/nova-scheduler-0" podUID="d1bd366a-868e-4d55-8e1c-b7bbdeecf1c2" Jan 27 14:56:28 crc kubenswrapper[4698]: I0127 14:56:28.507605 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7zvwn" Jan 27 14:56:28 crc kubenswrapper[4698]: I0127 14:56:28.548726 4698 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-579477885f-87tbw" Jan 27 14:56:28 crc kubenswrapper[4698]: I0127 14:56:28.654124 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5d434aa2-b7eb-424d-930a-25be01006019-config\") pod \"5d434aa2-b7eb-424d-930a-25be01006019\" (UID: \"5d434aa2-b7eb-424d-930a-25be01006019\") " Jan 27 14:56:28 crc kubenswrapper[4698]: I0127 14:56:28.654298 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hb562\" (UniqueName: \"kubernetes.io/projected/8369306e-5bbe-4677-b605-977728860668-kube-api-access-hb562\") pod \"8369306e-5bbe-4677-b605-977728860668\" (UID: \"8369306e-5bbe-4677-b605-977728860668\") " Jan 27 14:56:28 crc kubenswrapper[4698]: I0127 14:56:28.654425 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ltnm5\" (UniqueName: \"kubernetes.io/projected/5d434aa2-b7eb-424d-930a-25be01006019-kube-api-access-ltnm5\") pod \"5d434aa2-b7eb-424d-930a-25be01006019\" (UID: \"5d434aa2-b7eb-424d-930a-25be01006019\") " Jan 27 14:56:28 crc kubenswrapper[4698]: I0127 14:56:28.654487 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5d434aa2-b7eb-424d-930a-25be01006019-dns-svc\") pod \"5d434aa2-b7eb-424d-930a-25be01006019\" (UID: \"5d434aa2-b7eb-424d-930a-25be01006019\") " Jan 27 14:56:28 crc kubenswrapper[4698]: I0127 14:56:28.654514 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5d434aa2-b7eb-424d-930a-25be01006019-ovsdbserver-nb\") pod \"5d434aa2-b7eb-424d-930a-25be01006019\" (UID: \"5d434aa2-b7eb-424d-930a-25be01006019\") " Jan 27 14:56:28 crc kubenswrapper[4698]: I0127 14:56:28.654605 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8369306e-5bbe-4677-b605-977728860668-utilities\") pod \"8369306e-5bbe-4677-b605-977728860668\" (UID: \"8369306e-5bbe-4677-b605-977728860668\") " Jan 27 14:56:28 crc kubenswrapper[4698]: I0127 14:56:28.654715 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/5d434aa2-b7eb-424d-930a-25be01006019-dns-swift-storage-0\") pod \"5d434aa2-b7eb-424d-930a-25be01006019\" (UID: \"5d434aa2-b7eb-424d-930a-25be01006019\") " Jan 27 14:56:28 crc kubenswrapper[4698]: I0127 14:56:28.654876 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8369306e-5bbe-4677-b605-977728860668-catalog-content\") pod \"8369306e-5bbe-4677-b605-977728860668\" (UID: \"8369306e-5bbe-4677-b605-977728860668\") " Jan 27 14:56:28 crc kubenswrapper[4698]: I0127 14:56:28.655099 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5d434aa2-b7eb-424d-930a-25be01006019-ovsdbserver-sb\") pod \"5d434aa2-b7eb-424d-930a-25be01006019\" (UID: \"5d434aa2-b7eb-424d-930a-25be01006019\") " Jan 27 14:56:28 crc kubenswrapper[4698]: I0127 14:56:28.664051 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8369306e-5bbe-4677-b605-977728860668-utilities" (OuterVolumeSpecName: "utilities") pod "8369306e-5bbe-4677-b605-977728860668" (UID: 
"8369306e-5bbe-4677-b605-977728860668"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 14:56:28 crc kubenswrapper[4698]: I0127 14:56:28.669174 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5d434aa2-b7eb-424d-930a-25be01006019-kube-api-access-ltnm5" (OuterVolumeSpecName: "kube-api-access-ltnm5") pod "5d434aa2-b7eb-424d-930a-25be01006019" (UID: "5d434aa2-b7eb-424d-930a-25be01006019"). InnerVolumeSpecName "kube-api-access-ltnm5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:56:28 crc kubenswrapper[4698]: I0127 14:56:28.669924 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8369306e-5bbe-4677-b605-977728860668-kube-api-access-hb562" (OuterVolumeSpecName: "kube-api-access-hb562") pod "8369306e-5bbe-4677-b605-977728860668" (UID: "8369306e-5bbe-4677-b605-977728860668"). InnerVolumeSpecName "kube-api-access-hb562". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:56:28 crc kubenswrapper[4698]: I0127 14:56:28.712008 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8369306e-5bbe-4677-b605-977728860668-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8369306e-5bbe-4677-b605-977728860668" (UID: "8369306e-5bbe-4677-b605-977728860668"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 14:56:28 crc kubenswrapper[4698]: I0127 14:56:28.737026 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5d434aa2-b7eb-424d-930a-25be01006019-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "5d434aa2-b7eb-424d-930a-25be01006019" (UID: "5d434aa2-b7eb-424d-930a-25be01006019"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:56:28 crc kubenswrapper[4698]: I0127 14:56:28.744395 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5d434aa2-b7eb-424d-930a-25be01006019-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "5d434aa2-b7eb-424d-930a-25be01006019" (UID: "5d434aa2-b7eb-424d-930a-25be01006019"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:56:28 crc kubenswrapper[4698]: I0127 14:56:28.751303 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5d434aa2-b7eb-424d-930a-25be01006019-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "5d434aa2-b7eb-424d-930a-25be01006019" (UID: "5d434aa2-b7eb-424d-930a-25be01006019"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:56:28 crc kubenswrapper[4698]: I0127 14:56:28.758094 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ltnm5\" (UniqueName: \"kubernetes.io/projected/5d434aa2-b7eb-424d-930a-25be01006019-kube-api-access-ltnm5\") on node \"crc\" DevicePath \"\"" Jan 27 14:56:28 crc kubenswrapper[4698]: I0127 14:56:28.758146 4698 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5d434aa2-b7eb-424d-930a-25be01006019-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 27 14:56:28 crc kubenswrapper[4698]: I0127 14:56:28.758159 4698 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8369306e-5bbe-4677-b605-977728860668-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 14:56:28 crc kubenswrapper[4698]: I0127 14:56:28.758171 4698 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/5d434aa2-b7eb-424d-930a-25be01006019-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 27 14:56:28 crc kubenswrapper[4698]: I0127 14:56:28.758182 4698 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8369306e-5bbe-4677-b605-977728860668-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 14:56:28 crc kubenswrapper[4698]: I0127 14:56:28.758194 4698 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5d434aa2-b7eb-424d-930a-25be01006019-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 27 14:56:28 crc kubenswrapper[4698]: I0127 14:56:28.758205 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hb562\" (UniqueName: \"kubernetes.io/projected/8369306e-5bbe-4677-b605-977728860668-kube-api-access-hb562\") on node \"crc\" DevicePath \"\"" Jan 27 14:56:28 crc kubenswrapper[4698]: I0127 14:56:28.762943 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5d434aa2-b7eb-424d-930a-25be01006019-config" (OuterVolumeSpecName: "config") pod "5d434aa2-b7eb-424d-930a-25be01006019" (UID: "5d434aa2-b7eb-424d-930a-25be01006019"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:56:28 crc kubenswrapper[4698]: I0127 14:56:28.773795 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5d434aa2-b7eb-424d-930a-25be01006019-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "5d434aa2-b7eb-424d-930a-25be01006019" (UID: "5d434aa2-b7eb-424d-930a-25be01006019"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:56:28 crc kubenswrapper[4698]: I0127 14:56:28.778377 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 27 14:56:28 crc kubenswrapper[4698]: I0127 14:56:28.860570 4698 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5d434aa2-b7eb-424d-930a-25be01006019-config\") on node \"crc\" DevicePath \"\"" Jan 27 14:56:28 crc kubenswrapper[4698]: I0127 14:56:28.860617 4698 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5d434aa2-b7eb-424d-930a-25be01006019-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 27 14:56:28 crc kubenswrapper[4698]: I0127 14:56:28.943528 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7zvwn" Jan 27 14:56:28 crc kubenswrapper[4698]: I0127 14:56:28.943769 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7zvwn" event={"ID":"8369306e-5bbe-4677-b605-977728860668","Type":"ContainerDied","Data":"908397e8d5fcb50574fd82bb5c3e5e984e0195e9106e3977330a3ebab6fe3c3c"} Jan 27 14:56:28 crc kubenswrapper[4698]: I0127 14:56:28.943942 4698 scope.go:117] "RemoveContainer" containerID="705600d9a11a28b138604620bdbf08cf716ef256f5519b1158b65796bdc7b095" Jan 27 14:56:28 crc kubenswrapper[4698]: I0127 14:56:28.955059 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-579477885f-87tbw" event={"ID":"5d434aa2-b7eb-424d-930a-25be01006019","Type":"ContainerDied","Data":"bc5439765d0c463b88edfef42904ceac0ab614fd5177e99e53e71541cfee79e1"} Jan 27 14:56:28 crc kubenswrapper[4698]: I0127 14:56:28.955192 4698 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-579477885f-87tbw" Jan 27 14:56:28 crc kubenswrapper[4698]: I0127 14:56:28.977356 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xls4d" event={"ID":"187d8b4f-4757-45bf-a23b-b35702a18f93","Type":"ContainerStarted","Data":"07a6855d7507772e6bfae39133c7ebef70fc374c3f8f5e20742844418bb02c36"} Jan 27 14:56:28 crc kubenswrapper[4698]: I0127 14:56:28.980003 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"65866040-440d-4bca-91f4-944cfce917cb","Type":"ContainerStarted","Data":"78b7f571254d3fec6d37975858eed38a61d6b41bae28e704358b6f920e367de5"} Jan 27 14:56:28 crc kubenswrapper[4698]: I0127 14:56:28.981608 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"5b474dc0-6d40-42e8-9821-da0aa930095e","Type":"ContainerStarted","Data":"0f6bb5daae84a54d7a70ee7a5b4e92b52815243087510d4806ee6f427dc51c68"} Jan 27 14:56:28 crc kubenswrapper[4698]: E0127 14:56:28.992858 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nova-scheduler-scheduler\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.111:5001/podified-master-centos10/openstack-nova-scheduler:watcher_latest\\\"\"" pod="openstack/nova-scheduler-0" podUID="d1bd366a-868e-4d55-8e1c-b7bbdeecf1c2" Jan 27 14:56:29 crc kubenswrapper[4698]: I0127 14:56:29.013786 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-xls4d" podStartSLOduration=4.230096591 podStartE2EDuration="21.013760034s" podCreationTimestamp="2026-01-27 14:56:08 +0000 UTC" firstStartedPulling="2026-01-27 14:56:11.598336614 +0000 UTC m=+1627.275114079" lastFinishedPulling="2026-01-27 14:56:28.382000057 +0000 UTC m=+1644.058777522" observedRunningTime="2026-01-27 14:56:29.005308632 +0000 UTC m=+1644.682086097" watchObservedRunningTime="2026-01-27 14:56:29.013760034 +0000 UTC m=+1644.690537489" Jan 27 14:56:29 crc kubenswrapper[4698]: I0127 14:56:29.016907 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f246979f-482d-4a82-877a-a813b522f3dd" path="/var/lib/kubelet/pods/f246979f-482d-4a82-877a-a813b522f3dd/volumes" Jan 27 14:56:29 crc kubenswrapper[4698]: I0127 14:56:29.110630 4698 scope.go:117] "RemoveContainer" containerID="830cdbcd537f41afcbbd7052727a7601594ada148c838ec7ad7f81f189979ac8" Jan 27 14:56:29 crc kubenswrapper[4698]: I0127 14:56:29.201321 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-7zvwn"] Jan 27 14:56:29 crc kubenswrapper[4698]: I0127 14:56:29.246646 4698 scope.go:117] "RemoveContainer" containerID="48f36a0e585de910fd7d70e01cef71bc061b13e451d14091212f83fe12c8ec76" Jan 27 14:56:29 crc kubenswrapper[4698]: I0127 14:56:29.268908 4698 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-7zvwn"] Jan 27 14:56:29 crc kubenswrapper[4698]: I0127 14:56:29.315886 4698 scope.go:117] "RemoveContainer" containerID="750fcb73cb35140556f9fb8158b8fdf9210bf36bc6358e595baad4cc4d8a6683" Jan 27 14:56:29 crc kubenswrapper[4698]: I0127 14:56:29.355415 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-xls4d" Jan 27 14:56:29 crc kubenswrapper[4698]: I0127 14:56:29.355474 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-xls4d" 
Jan 27 14:56:29 crc kubenswrapper[4698]: I0127 14:56:29.355744 4698 scope.go:117] "RemoveContainer" containerID="ca6eb8fb804eb01ac9beab50ca6c50c0ea410fa35f6b0ec6786bfb8094cbd96c" Jan 27 14:56:29 crc kubenswrapper[4698]: I0127 14:56:29.372599 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 27 14:56:29 crc kubenswrapper[4698]: I0127 14:56:29.498600 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bfa7ceda-a38c-4336-a1c9-e17aaeca5ff0-combined-ca-bundle\") pod \"bfa7ceda-a38c-4336-a1c9-e17aaeca5ff0\" (UID: \"bfa7ceda-a38c-4336-a1c9-e17aaeca5ff0\") " Jan 27 14:56:29 crc kubenswrapper[4698]: I0127 14:56:29.498737 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bfa7ceda-a38c-4336-a1c9-e17aaeca5ff0-config-data\") pod \"bfa7ceda-a38c-4336-a1c9-e17aaeca5ff0\" (UID: \"bfa7ceda-a38c-4336-a1c9-e17aaeca5ff0\") " Jan 27 14:56:29 crc kubenswrapper[4698]: I0127 14:56:29.498779 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bfa7ceda-a38c-4336-a1c9-e17aaeca5ff0-logs\") pod \"bfa7ceda-a38c-4336-a1c9-e17aaeca5ff0\" (UID: \"bfa7ceda-a38c-4336-a1c9-e17aaeca5ff0\") " Jan 27 14:56:29 crc kubenswrapper[4698]: I0127 14:56:29.498984 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-87m6c\" (UniqueName: \"kubernetes.io/projected/bfa7ceda-a38c-4336-a1c9-e17aaeca5ff0-kube-api-access-87m6c\") pod \"bfa7ceda-a38c-4336-a1c9-e17aaeca5ff0\" (UID: \"bfa7ceda-a38c-4336-a1c9-e17aaeca5ff0\") " Jan 27 14:56:29 crc kubenswrapper[4698]: I0127 14:56:29.499222 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bfa7ceda-a38c-4336-a1c9-e17aaeca5ff0-logs" (OuterVolumeSpecName: "logs") pod "bfa7ceda-a38c-4336-a1c9-e17aaeca5ff0" (UID: "bfa7ceda-a38c-4336-a1c9-e17aaeca5ff0"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 14:56:29 crc kubenswrapper[4698]: I0127 14:56:29.499524 4698 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bfa7ceda-a38c-4336-a1c9-e17aaeca5ff0-logs\") on node \"crc\" DevicePath \"\"" Jan 27 14:56:29 crc kubenswrapper[4698]: I0127 14:56:29.505658 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bfa7ceda-a38c-4336-a1c9-e17aaeca5ff0-config-data" (OuterVolumeSpecName: "config-data") pod "bfa7ceda-a38c-4336-a1c9-e17aaeca5ff0" (UID: "bfa7ceda-a38c-4336-a1c9-e17aaeca5ff0"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:56:29 crc kubenswrapper[4698]: I0127 14:56:29.505818 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bfa7ceda-a38c-4336-a1c9-e17aaeca5ff0-kube-api-access-87m6c" (OuterVolumeSpecName: "kube-api-access-87m6c") pod "bfa7ceda-a38c-4336-a1c9-e17aaeca5ff0" (UID: "bfa7ceda-a38c-4336-a1c9-e17aaeca5ff0"). InnerVolumeSpecName "kube-api-access-87m6c". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:56:29 crc kubenswrapper[4698]: I0127 14:56:29.508881 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bfa7ceda-a38c-4336-a1c9-e17aaeca5ff0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "bfa7ceda-a38c-4336-a1c9-e17aaeca5ff0" (UID: "bfa7ceda-a38c-4336-a1c9-e17aaeca5ff0"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:56:29 crc kubenswrapper[4698]: I0127 14:56:29.602040 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-87m6c\" (UniqueName: \"kubernetes.io/projected/bfa7ceda-a38c-4336-a1c9-e17aaeca5ff0-kube-api-access-87m6c\") on node \"crc\" DevicePath \"\"" Jan 27 14:56:29 crc kubenswrapper[4698]: I0127 14:56:29.602089 4698 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bfa7ceda-a38c-4336-a1c9-e17aaeca5ff0-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 14:56:29 crc kubenswrapper[4698]: I0127 14:56:29.602102 4698 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bfa7ceda-a38c-4336-a1c9-e17aaeca5ff0-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 14:56:29 crc kubenswrapper[4698]: I0127 14:56:29.997120 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"65866040-440d-4bca-91f4-944cfce917cb","Type":"ContainerStarted","Data":"94529a3732b71a3e076e21cec1c296a9493fcab8e24ec5be09857e1fb0ac0a9b"} Jan 27 14:56:29 crc kubenswrapper[4698]: I0127 14:56:29.999596 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"5b474dc0-6d40-42e8-9821-da0aa930095e","Type":"ContainerStarted","Data":"52231a3556c0e2924c18072b1ff2025a083f3615053362ac22022b00990fc4c4"} Jan 27 14:56:30 crc kubenswrapper[4698]: I0127 14:56:30.001856 4698 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 27 14:56:30 crc kubenswrapper[4698]: I0127 14:56:30.001859 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"bfa7ceda-a38c-4336-a1c9-e17aaeca5ff0","Type":"ContainerDied","Data":"cb23e5ab69bf6eeb828a482fadd4460c0dcc54717d2d4c2733de37f82c5e164d"} Jan 27 14:56:30 crc kubenswrapper[4698]: I0127 14:56:30.033986 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=4.480170035 podStartE2EDuration="18.03396509s" podCreationTimestamp="2026-01-27 14:56:12 +0000 UTC" firstStartedPulling="2026-01-27 14:56:14.647853122 +0000 UTC m=+1630.324630587" lastFinishedPulling="2026-01-27 14:56:28.201648177 +0000 UTC m=+1643.878425642" observedRunningTime="2026-01-27 14:56:30.023792623 +0000 UTC m=+1645.700570108" watchObservedRunningTime="2026-01-27 14:56:30.03396509 +0000 UTC m=+1645.710742575" Jan 27 14:56:30 crc kubenswrapper[4698]: I0127 14:56:30.051095 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.926011699 podStartE2EDuration="3.051072052s" podCreationTimestamp="2026-01-27 14:56:27 +0000 UTC" firstStartedPulling="2026-01-27 14:56:28.791595953 +0000 UTC m=+1644.468373418" lastFinishedPulling="2026-01-27 14:56:28.916656306 +0000 UTC m=+1644.593433771" observedRunningTime="2026-01-27 14:56:30.047526168 +0000 UTC m=+1645.724303643" watchObservedRunningTime="2026-01-27 14:56:30.051072052 +0000 UTC m=+1645.727849517" Jan 27 14:56:30 crc kubenswrapper[4698]: I0127 14:56:30.112997 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 27 14:56:30 crc kubenswrapper[4698]: I0127 14:56:30.160944 4698 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Jan 27 14:56:30 crc kubenswrapper[4698]: I0127 14:56:30.181855 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Jan 27 14:56:30 crc kubenswrapper[4698]: E0127 14:56:30.182713 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8369306e-5bbe-4677-b605-977728860668" containerName="extract-utilities" Jan 27 14:56:30 crc kubenswrapper[4698]: I0127 14:56:30.182741 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="8369306e-5bbe-4677-b605-977728860668" containerName="extract-utilities" Jan 27 14:56:30 crc kubenswrapper[4698]: E0127 14:56:30.182760 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8369306e-5bbe-4677-b605-977728860668" containerName="registry-server" Jan 27 14:56:30 crc kubenswrapper[4698]: I0127 14:56:30.182768 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="8369306e-5bbe-4677-b605-977728860668" containerName="registry-server" Jan 27 14:56:30 crc kubenswrapper[4698]: E0127 14:56:30.182825 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5d434aa2-b7eb-424d-930a-25be01006019" containerName="init" Jan 27 14:56:30 crc kubenswrapper[4698]: I0127 14:56:30.182835 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="5d434aa2-b7eb-424d-930a-25be01006019" containerName="init" Jan 27 14:56:30 crc kubenswrapper[4698]: E0127 14:56:30.182852 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8369306e-5bbe-4677-b605-977728860668" containerName="extract-content" Jan 27 14:56:30 crc kubenswrapper[4698]: I0127 14:56:30.182858 4698 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="8369306e-5bbe-4677-b605-977728860668" containerName="extract-content" Jan 27 14:56:30 crc kubenswrapper[4698]: E0127 14:56:30.182881 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5d434aa2-b7eb-424d-930a-25be01006019" containerName="dnsmasq-dns" Jan 27 14:56:30 crc kubenswrapper[4698]: I0127 14:56:30.182890 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="5d434aa2-b7eb-424d-930a-25be01006019" containerName="dnsmasq-dns" Jan 27 14:56:30 crc kubenswrapper[4698]: I0127 14:56:30.183191 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="8369306e-5bbe-4677-b605-977728860668" containerName="registry-server" Jan 27 14:56:30 crc kubenswrapper[4698]: I0127 14:56:30.183224 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="5d434aa2-b7eb-424d-930a-25be01006019" containerName="dnsmasq-dns" Jan 27 14:56:30 crc kubenswrapper[4698]: I0127 14:56:30.185407 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 27 14:56:30 crc kubenswrapper[4698]: I0127 14:56:30.189311 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Jan 27 14:56:30 crc kubenswrapper[4698]: I0127 14:56:30.189369 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Jan 27 14:56:30 crc kubenswrapper[4698]: I0127 14:56:30.193034 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 27 14:56:30 crc kubenswrapper[4698]: I0127 14:56:30.251785 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/bb7eb1e4-2394-4979-b1e3-3a4edd6d1f36-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"bb7eb1e4-2394-4979-b1e3-3a4edd6d1f36\") " pod="openstack/nova-metadata-0" Jan 27 14:56:30 crc kubenswrapper[4698]: I0127 14:56:30.251917 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bb7eb1e4-2394-4979-b1e3-3a4edd6d1f36-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"bb7eb1e4-2394-4979-b1e3-3a4edd6d1f36\") " pod="openstack/nova-metadata-0" Jan 27 14:56:30 crc kubenswrapper[4698]: I0127 14:56:30.251980 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bb7eb1e4-2394-4979-b1e3-3a4edd6d1f36-config-data\") pod \"nova-metadata-0\" (UID: \"bb7eb1e4-2394-4979-b1e3-3a4edd6d1f36\") " pod="openstack/nova-metadata-0" Jan 27 14:56:30 crc kubenswrapper[4698]: I0127 14:56:30.252022 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bb7eb1e4-2394-4979-b1e3-3a4edd6d1f36-logs\") pod \"nova-metadata-0\" (UID: \"bb7eb1e4-2394-4979-b1e3-3a4edd6d1f36\") " pod="openstack/nova-metadata-0" Jan 27 14:56:30 crc kubenswrapper[4698]: I0127 14:56:30.252688 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qqsv4\" (UniqueName: \"kubernetes.io/projected/bb7eb1e4-2394-4979-b1e3-3a4edd6d1f36-kube-api-access-qqsv4\") pod \"nova-metadata-0\" (UID: \"bb7eb1e4-2394-4979-b1e3-3a4edd6d1f36\") " pod="openstack/nova-metadata-0" Jan 27 14:56:30 crc kubenswrapper[4698]: I0127 14:56:30.354870 4698 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-qqsv4\" (UniqueName: \"kubernetes.io/projected/bb7eb1e4-2394-4979-b1e3-3a4edd6d1f36-kube-api-access-qqsv4\") pod \"nova-metadata-0\" (UID: \"bb7eb1e4-2394-4979-b1e3-3a4edd6d1f36\") " pod="openstack/nova-metadata-0" Jan 27 14:56:30 crc kubenswrapper[4698]: I0127 14:56:30.355470 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/bb7eb1e4-2394-4979-b1e3-3a4edd6d1f36-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"bb7eb1e4-2394-4979-b1e3-3a4edd6d1f36\") " pod="openstack/nova-metadata-0" Jan 27 14:56:30 crc kubenswrapper[4698]: I0127 14:56:30.356267 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bb7eb1e4-2394-4979-b1e3-3a4edd6d1f36-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"bb7eb1e4-2394-4979-b1e3-3a4edd6d1f36\") " pod="openstack/nova-metadata-0" Jan 27 14:56:30 crc kubenswrapper[4698]: I0127 14:56:30.356548 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bb7eb1e4-2394-4979-b1e3-3a4edd6d1f36-config-data\") pod \"nova-metadata-0\" (UID: \"bb7eb1e4-2394-4979-b1e3-3a4edd6d1f36\") " pod="openstack/nova-metadata-0" Jan 27 14:56:30 crc kubenswrapper[4698]: I0127 14:56:30.356627 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bb7eb1e4-2394-4979-b1e3-3a4edd6d1f36-logs\") pod \"nova-metadata-0\" (UID: \"bb7eb1e4-2394-4979-b1e3-3a4edd6d1f36\") " pod="openstack/nova-metadata-0" Jan 27 14:56:30 crc kubenswrapper[4698]: I0127 14:56:30.357188 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bb7eb1e4-2394-4979-b1e3-3a4edd6d1f36-logs\") pod \"nova-metadata-0\" (UID: \"bb7eb1e4-2394-4979-b1e3-3a4edd6d1f36\") " pod="openstack/nova-metadata-0" Jan 27 14:56:30 crc kubenswrapper[4698]: I0127 14:56:30.362709 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/bb7eb1e4-2394-4979-b1e3-3a4edd6d1f36-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"bb7eb1e4-2394-4979-b1e3-3a4edd6d1f36\") " pod="openstack/nova-metadata-0" Jan 27 14:56:30 crc kubenswrapper[4698]: I0127 14:56:30.369697 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bb7eb1e4-2394-4979-b1e3-3a4edd6d1f36-config-data\") pod \"nova-metadata-0\" (UID: \"bb7eb1e4-2394-4979-b1e3-3a4edd6d1f36\") " pod="openstack/nova-metadata-0" Jan 27 14:56:30 crc kubenswrapper[4698]: I0127 14:56:30.384042 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bb7eb1e4-2394-4979-b1e3-3a4edd6d1f36-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"bb7eb1e4-2394-4979-b1e3-3a4edd6d1f36\") " pod="openstack/nova-metadata-0" Jan 27 14:56:30 crc kubenswrapper[4698]: I0127 14:56:30.385364 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qqsv4\" (UniqueName: \"kubernetes.io/projected/bb7eb1e4-2394-4979-b1e3-3a4edd6d1f36-kube-api-access-qqsv4\") pod \"nova-metadata-0\" (UID: \"bb7eb1e4-2394-4979-b1e3-3a4edd6d1f36\") " pod="openstack/nova-metadata-0" Jan 27 14:56:30 crc kubenswrapper[4698]: I0127 
14:56:30.415514 4698 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-xls4d" podUID="187d8b4f-4757-45bf-a23b-b35702a18f93" containerName="registry-server" probeResult="failure" output=< Jan 27 14:56:30 crc kubenswrapper[4698]: timeout: failed to connect service ":50051" within 1s Jan 27 14:56:30 crc kubenswrapper[4698]: > Jan 27 14:56:30 crc kubenswrapper[4698]: I0127 14:56:30.539549 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 27 14:56:31 crc kubenswrapper[4698]: I0127 14:56:31.006000 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8369306e-5bbe-4677-b605-977728860668" path="/var/lib/kubelet/pods/8369306e-5bbe-4677-b605-977728860668/volumes" Jan 27 14:56:31 crc kubenswrapper[4698]: I0127 14:56:31.007375 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bfa7ceda-a38c-4336-a1c9-e17aaeca5ff0" path="/var/lib/kubelet/pods/bfa7ceda-a38c-4336-a1c9-e17aaeca5ff0/volumes" Jan 27 14:56:31 crc kubenswrapper[4698]: I0127 14:56:31.055947 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 27 14:56:31 crc kubenswrapper[4698]: I0127 14:56:31.992545 4698 scope.go:117] "RemoveContainer" containerID="b5e738b87b0ce5279fe16e0f062d8436b14ec328939703048a41953bc1584a16" Jan 27 14:56:31 crc kubenswrapper[4698]: E0127 14:56:31.993519 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ndrd6_openshift-machine-config-operator(3e403fc5-7005-474c-8c75-b7906b481677)\"" pod="openshift-machine-config-operator/machine-config-daemon-ndrd6" podUID="3e403fc5-7005-474c-8c75-b7906b481677" Jan 27 14:56:32 crc kubenswrapper[4698]: I0127 14:56:32.049339 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"bb7eb1e4-2394-4979-b1e3-3a4edd6d1f36","Type":"ContainerStarted","Data":"5cf1c3036eba595f1ffae18d81dfa7c96c748c5cc5a1b1e6fc9992fd20150087"} Jan 27 14:56:32 crc kubenswrapper[4698]: I0127 14:56:32.049750 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"bb7eb1e4-2394-4979-b1e3-3a4edd6d1f36","Type":"ContainerStarted","Data":"cd661b4f5abc76cebc2596d90b6e7bc008b3e1a7b60cdaaaa319b394e7b8a955"} Jan 27 14:56:32 crc kubenswrapper[4698]: I0127 14:56:32.049886 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"bb7eb1e4-2394-4979-b1e3-3a4edd6d1f36","Type":"ContainerStarted","Data":"ba1d4200bc8a714887e33bd488f1fe543e789d4c421622a4b34d7cc44abead1a"} Jan 27 14:56:32 crc kubenswrapper[4698]: I0127 14:56:32.636756 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Jan 27 14:56:33 crc kubenswrapper[4698]: I0127 14:56:33.091280 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=3.091235334 podStartE2EDuration="3.091235334s" podCreationTimestamp="2026-01-27 14:56:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:56:33.081794885 +0000 UTC m=+1648.758572370" watchObservedRunningTime="2026-01-27 14:56:33.091235334 +0000 UTC m=+1648.768012799" Jan 27 14:56:33 crc kubenswrapper[4698]: I0127 
14:56:33.799211 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 27 14:56:33 crc kubenswrapper[4698]: I0127 14:56:33.799279 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 27 14:56:34 crc kubenswrapper[4698]: I0127 14:56:34.881907 4698 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="65866040-440d-4bca-91f4-944cfce917cb" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.215:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 14:56:34 crc kubenswrapper[4698]: I0127 14:56:34.881907 4698 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="65866040-440d-4bca-91f4-944cfce917cb" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.215:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 14:56:35 crc kubenswrapper[4698]: I0127 14:56:35.078153 4698 generic.go:334] "Generic (PLEG): container finished" podID="c865f75e-a196-4b4c-ba96-383654e3c295" containerID="746bd03d270eda154fe1289f278408ec545c697eed0de8744ca89b34b7eee904" exitCode=0 Jan 27 14:56:35 crc kubenswrapper[4698]: I0127 14:56:35.078203 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-vgm7r" event={"ID":"c865f75e-a196-4b4c-ba96-383654e3c295","Type":"ContainerDied","Data":"746bd03d270eda154fe1289f278408ec545c697eed0de8744ca89b34b7eee904"} Jan 27 14:56:35 crc kubenswrapper[4698]: I0127 14:56:35.540546 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 27 14:56:35 crc kubenswrapper[4698]: I0127 14:56:35.541713 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 27 14:56:36 crc kubenswrapper[4698]: I0127 14:56:36.501990 4698 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-vgm7r" Jan 27 14:56:36 crc kubenswrapper[4698]: I0127 14:56:36.599280 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c865f75e-a196-4b4c-ba96-383654e3c295-scripts\") pod \"c865f75e-a196-4b4c-ba96-383654e3c295\" (UID: \"c865f75e-a196-4b4c-ba96-383654e3c295\") " Jan 27 14:56:36 crc kubenswrapper[4698]: I0127 14:56:36.599672 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c865f75e-a196-4b4c-ba96-383654e3c295-combined-ca-bundle\") pod \"c865f75e-a196-4b4c-ba96-383654e3c295\" (UID: \"c865f75e-a196-4b4c-ba96-383654e3c295\") " Jan 27 14:56:36 crc kubenswrapper[4698]: I0127 14:56:36.599710 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xdh5h\" (UniqueName: \"kubernetes.io/projected/c865f75e-a196-4b4c-ba96-383654e3c295-kube-api-access-xdh5h\") pod \"c865f75e-a196-4b4c-ba96-383654e3c295\" (UID: \"c865f75e-a196-4b4c-ba96-383654e3c295\") " Jan 27 14:56:36 crc kubenswrapper[4698]: I0127 14:56:36.599784 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c865f75e-a196-4b4c-ba96-383654e3c295-config-data\") pod \"c865f75e-a196-4b4c-ba96-383654e3c295\" (UID: \"c865f75e-a196-4b4c-ba96-383654e3c295\") " Jan 27 14:56:36 crc kubenswrapper[4698]: I0127 14:56:36.607706 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c865f75e-a196-4b4c-ba96-383654e3c295-kube-api-access-xdh5h" (OuterVolumeSpecName: "kube-api-access-xdh5h") pod "c865f75e-a196-4b4c-ba96-383654e3c295" (UID: "c865f75e-a196-4b4c-ba96-383654e3c295"). InnerVolumeSpecName "kube-api-access-xdh5h". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:56:36 crc kubenswrapper[4698]: I0127 14:56:36.607875 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c865f75e-a196-4b4c-ba96-383654e3c295-scripts" (OuterVolumeSpecName: "scripts") pod "c865f75e-a196-4b4c-ba96-383654e3c295" (UID: "c865f75e-a196-4b4c-ba96-383654e3c295"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:56:36 crc kubenswrapper[4698]: I0127 14:56:36.635043 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c865f75e-a196-4b4c-ba96-383654e3c295-config-data" (OuterVolumeSpecName: "config-data") pod "c865f75e-a196-4b4c-ba96-383654e3c295" (UID: "c865f75e-a196-4b4c-ba96-383654e3c295"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:56:36 crc kubenswrapper[4698]: I0127 14:56:36.636714 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c865f75e-a196-4b4c-ba96-383654e3c295-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c865f75e-a196-4b4c-ba96-383654e3c295" (UID: "c865f75e-a196-4b4c-ba96-383654e3c295"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:56:36 crc kubenswrapper[4698]: I0127 14:56:36.702784 4698 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c865f75e-a196-4b4c-ba96-383654e3c295-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 14:56:36 crc kubenswrapper[4698]: I0127 14:56:36.703235 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xdh5h\" (UniqueName: \"kubernetes.io/projected/c865f75e-a196-4b4c-ba96-383654e3c295-kube-api-access-xdh5h\") on node \"crc\" DevicePath \"\"" Jan 27 14:56:36 crc kubenswrapper[4698]: I0127 14:56:36.703254 4698 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c865f75e-a196-4b4c-ba96-383654e3c295-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 14:56:36 crc kubenswrapper[4698]: I0127 14:56:36.703264 4698 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c865f75e-a196-4b4c-ba96-383654e3c295-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 14:56:37 crc kubenswrapper[4698]: I0127 14:56:37.098383 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-vgm7r" event={"ID":"c865f75e-a196-4b4c-ba96-383654e3c295","Type":"ContainerDied","Data":"3fde055dd80b76215893fbe688ed0dbe7e8975a9c28c0b7a1b5f8fc965e7f7d5"} Jan 27 14:56:37 crc kubenswrapper[4698]: I0127 14:56:37.098417 4698 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3fde055dd80b76215893fbe688ed0dbe7e8975a9c28c0b7a1b5f8fc965e7f7d5" Jan 27 14:56:37 crc kubenswrapper[4698]: I0127 14:56:37.098720 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-vgm7r" Jan 27 14:56:37 crc kubenswrapper[4698]: I0127 14:56:37.211059 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 27 14:56:37 crc kubenswrapper[4698]: I0127 14:56:37.226965 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 27 14:56:37 crc kubenswrapper[4698]: I0127 14:56:37.227226 4698 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="65866040-440d-4bca-91f4-944cfce917cb" containerName="nova-api-log" containerID="cri-o://78b7f571254d3fec6d37975858eed38a61d6b41bae28e704358b6f920e367de5" gracePeriod=30 Jan 27 14:56:37 crc kubenswrapper[4698]: I0127 14:56:37.227417 4698 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="65866040-440d-4bca-91f4-944cfce917cb" containerName="nova-api-api" containerID="cri-o://94529a3732b71a3e076e21cec1c296a9493fcab8e24ec5be09857e1fb0ac0a9b" gracePeriod=30 Jan 27 14:56:37 crc kubenswrapper[4698]: I0127 14:56:37.244776 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 27 14:56:37 crc kubenswrapper[4698]: I0127 14:56:37.245269 4698 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="bb7eb1e4-2394-4979-b1e3-3a4edd6d1f36" containerName="nova-metadata-log" containerID="cri-o://cd661b4f5abc76cebc2596d90b6e7bc008b3e1a7b60cdaaaa319b394e7b8a955" gracePeriod=30 Jan 27 14:56:37 crc kubenswrapper[4698]: I0127 14:56:37.245517 4698 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="bb7eb1e4-2394-4979-b1e3-3a4edd6d1f36" 
containerName="nova-metadata-metadata" containerID="cri-o://5cf1c3036eba595f1ffae18d81dfa7c96c748c5cc5a1b1e6fc9992fd20150087" gracePeriod=30 Jan 27 14:56:37 crc kubenswrapper[4698]: I0127 14:56:37.641924 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-cell1-novncproxy-0" Jan 27 14:56:37 crc kubenswrapper[4698]: I0127 14:56:37.676934 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-cell1-novncproxy-0" Jan 27 14:56:37 crc kubenswrapper[4698]: I0127 14:56:37.681631 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 27 14:56:37 crc kubenswrapper[4698]: I0127 14:56:37.825682 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4zdv\" (UniqueName: \"kubernetes.io/projected/d1bd366a-868e-4d55-8e1c-b7bbdeecf1c2-kube-api-access-s4zdv\") pod \"d1bd366a-868e-4d55-8e1c-b7bbdeecf1c2\" (UID: \"d1bd366a-868e-4d55-8e1c-b7bbdeecf1c2\") " Jan 27 14:56:37 crc kubenswrapper[4698]: I0127 14:56:37.825814 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d1bd366a-868e-4d55-8e1c-b7bbdeecf1c2-combined-ca-bundle\") pod \"d1bd366a-868e-4d55-8e1c-b7bbdeecf1c2\" (UID: \"d1bd366a-868e-4d55-8e1c-b7bbdeecf1c2\") " Jan 27 14:56:37 crc kubenswrapper[4698]: I0127 14:56:37.825969 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d1bd366a-868e-4d55-8e1c-b7bbdeecf1c2-config-data\") pod \"d1bd366a-868e-4d55-8e1c-b7bbdeecf1c2\" (UID: \"d1bd366a-868e-4d55-8e1c-b7bbdeecf1c2\") " Jan 27 14:56:37 crc kubenswrapper[4698]: I0127 14:56:37.831711 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d1bd366a-868e-4d55-8e1c-b7bbdeecf1c2-config-data" (OuterVolumeSpecName: "config-data") pod "d1bd366a-868e-4d55-8e1c-b7bbdeecf1c2" (UID: "d1bd366a-868e-4d55-8e1c-b7bbdeecf1c2"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:56:37 crc kubenswrapper[4698]: I0127 14:56:37.831791 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d1bd366a-868e-4d55-8e1c-b7bbdeecf1c2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d1bd366a-868e-4d55-8e1c-b7bbdeecf1c2" (UID: "d1bd366a-868e-4d55-8e1c-b7bbdeecf1c2"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:56:37 crc kubenswrapper[4698]: I0127 14:56:37.832100 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d1bd366a-868e-4d55-8e1c-b7bbdeecf1c2-kube-api-access-s4zdv" (OuterVolumeSpecName: "kube-api-access-s4zdv") pod "d1bd366a-868e-4d55-8e1c-b7bbdeecf1c2" (UID: "d1bd366a-868e-4d55-8e1c-b7bbdeecf1c2"). InnerVolumeSpecName "kube-api-access-s4zdv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:56:37 crc kubenswrapper[4698]: I0127 14:56:37.928755 4698 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d1bd366a-868e-4d55-8e1c-b7bbdeecf1c2-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 14:56:37 crc kubenswrapper[4698]: I0127 14:56:37.928803 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4zdv\" (UniqueName: \"kubernetes.io/projected/d1bd366a-868e-4d55-8e1c-b7bbdeecf1c2-kube-api-access-s4zdv\") on node \"crc\" DevicePath \"\"" Jan 27 14:56:37 crc kubenswrapper[4698]: I0127 14:56:37.929018 4698 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d1bd366a-868e-4d55-8e1c-b7bbdeecf1c2-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 14:56:38 crc kubenswrapper[4698]: I0127 14:56:38.110258 4698 generic.go:334] "Generic (PLEG): container finished" podID="bb7eb1e4-2394-4979-b1e3-3a4edd6d1f36" containerID="cd661b4f5abc76cebc2596d90b6e7bc008b3e1a7b60cdaaaa319b394e7b8a955" exitCode=143 Jan 27 14:56:38 crc kubenswrapper[4698]: I0127 14:56:38.110380 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"bb7eb1e4-2394-4979-b1e3-3a4edd6d1f36","Type":"ContainerDied","Data":"cd661b4f5abc76cebc2596d90b6e7bc008b3e1a7b60cdaaaa319b394e7b8a955"} Jan 27 14:56:38 crc kubenswrapper[4698]: I0127 14:56:38.113370 4698 generic.go:334] "Generic (PLEG): container finished" podID="65866040-440d-4bca-91f4-944cfce917cb" containerID="78b7f571254d3fec6d37975858eed38a61d6b41bae28e704358b6f920e367de5" exitCode=143 Jan 27 14:56:38 crc kubenswrapper[4698]: I0127 14:56:38.113446 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"65866040-440d-4bca-91f4-944cfce917cb","Type":"ContainerDied","Data":"78b7f571254d3fec6d37975858eed38a61d6b41bae28e704358b6f920e367de5"} Jan 27 14:56:38 crc kubenswrapper[4698]: I0127 14:56:38.116476 4698 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 27 14:56:38 crc kubenswrapper[4698]: I0127 14:56:38.116554 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"d1bd366a-868e-4d55-8e1c-b7bbdeecf1c2","Type":"ContainerDied","Data":"c980d4bb1f26cd01816e78bb0c4aa985519248359daa636536e25bd9ef508885"} Jan 27 14:56:38 crc kubenswrapper[4698]: I0127 14:56:38.137268 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-novncproxy-0" Jan 27 14:56:38 crc kubenswrapper[4698]: I0127 14:56:38.174810 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 27 14:56:38 crc kubenswrapper[4698]: I0127 14:56:38.191041 4698 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Jan 27 14:56:38 crc kubenswrapper[4698]: I0127 14:56:38.199802 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Jan 27 14:56:38 crc kubenswrapper[4698]: E0127 14:56:38.200574 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c865f75e-a196-4b4c-ba96-383654e3c295" containerName="nova-manage" Jan 27 14:56:38 crc kubenswrapper[4698]: I0127 14:56:38.200600 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="c865f75e-a196-4b4c-ba96-383654e3c295" containerName="nova-manage" Jan 27 14:56:38 crc kubenswrapper[4698]: I0127 14:56:38.200881 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="c865f75e-a196-4b4c-ba96-383654e3c295" containerName="nova-manage" Jan 27 14:56:38 crc kubenswrapper[4698]: I0127 14:56:38.201862 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 27 14:56:38 crc kubenswrapper[4698]: I0127 14:56:38.209688 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Jan 27 14:56:38 crc kubenswrapper[4698]: I0127 14:56:38.222701 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 27 14:56:38 crc kubenswrapper[4698]: I0127 14:56:38.336876 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qlbhx\" (UniqueName: \"kubernetes.io/projected/20051e2d-581c-4d5b-8259-972c12bef429-kube-api-access-qlbhx\") pod \"nova-scheduler-0\" (UID: \"20051e2d-581c-4d5b-8259-972c12bef429\") " pod="openstack/nova-scheduler-0" Jan 27 14:56:38 crc kubenswrapper[4698]: I0127 14:56:38.336943 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/20051e2d-581c-4d5b-8259-972c12bef429-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"20051e2d-581c-4d5b-8259-972c12bef429\") " pod="openstack/nova-scheduler-0" Jan 27 14:56:38 crc kubenswrapper[4698]: I0127 14:56:38.337086 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/20051e2d-581c-4d5b-8259-972c12bef429-config-data\") pod \"nova-scheduler-0\" (UID: \"20051e2d-581c-4d5b-8259-972c12bef429\") " pod="openstack/nova-scheduler-0" Jan 27 14:56:38 crc kubenswrapper[4698]: I0127 14:56:38.439400 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/20051e2d-581c-4d5b-8259-972c12bef429-config-data\") pod \"nova-scheduler-0\" (UID: \"20051e2d-581c-4d5b-8259-972c12bef429\") 
" pod="openstack/nova-scheduler-0" Jan 27 14:56:38 crc kubenswrapper[4698]: I0127 14:56:38.439603 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qlbhx\" (UniqueName: \"kubernetes.io/projected/20051e2d-581c-4d5b-8259-972c12bef429-kube-api-access-qlbhx\") pod \"nova-scheduler-0\" (UID: \"20051e2d-581c-4d5b-8259-972c12bef429\") " pod="openstack/nova-scheduler-0" Jan 27 14:56:38 crc kubenswrapper[4698]: I0127 14:56:38.439758 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/20051e2d-581c-4d5b-8259-972c12bef429-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"20051e2d-581c-4d5b-8259-972c12bef429\") " pod="openstack/nova-scheduler-0" Jan 27 14:56:38 crc kubenswrapper[4698]: I0127 14:56:38.444754 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/20051e2d-581c-4d5b-8259-972c12bef429-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"20051e2d-581c-4d5b-8259-972c12bef429\") " pod="openstack/nova-scheduler-0" Jan 27 14:56:38 crc kubenswrapper[4698]: I0127 14:56:38.445287 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/20051e2d-581c-4d5b-8259-972c12bef429-config-data\") pod \"nova-scheduler-0\" (UID: \"20051e2d-581c-4d5b-8259-972c12bef429\") " pod="openstack/nova-scheduler-0" Jan 27 14:56:38 crc kubenswrapper[4698]: I0127 14:56:38.463629 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qlbhx\" (UniqueName: \"kubernetes.io/projected/20051e2d-581c-4d5b-8259-972c12bef429-kube-api-access-qlbhx\") pod \"nova-scheduler-0\" (UID: \"20051e2d-581c-4d5b-8259-972c12bef429\") " pod="openstack/nova-scheduler-0" Jan 27 14:56:38 crc kubenswrapper[4698]: I0127 14:56:38.530433 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 27 14:56:39 crc kubenswrapper[4698]: I0127 14:56:39.004444 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d1bd366a-868e-4d55-8e1c-b7bbdeecf1c2" path="/var/lib/kubelet/pods/d1bd366a-868e-4d55-8e1c-b7bbdeecf1c2/volumes" Jan 27 14:56:39 crc kubenswrapper[4698]: I0127 14:56:39.038230 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 27 14:56:39 crc kubenswrapper[4698]: I0127 14:56:39.129491 4698 generic.go:334] "Generic (PLEG): container finished" podID="bb7eb1e4-2394-4979-b1e3-3a4edd6d1f36" containerID="5cf1c3036eba595f1ffae18d81dfa7c96c748c5cc5a1b1e6fc9992fd20150087" exitCode=0 Jan 27 14:56:39 crc kubenswrapper[4698]: I0127 14:56:39.129591 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"bb7eb1e4-2394-4979-b1e3-3a4edd6d1f36","Type":"ContainerDied","Data":"5cf1c3036eba595f1ffae18d81dfa7c96c748c5cc5a1b1e6fc9992fd20150087"} Jan 27 14:56:39 crc kubenswrapper[4698]: I0127 14:56:39.131823 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"20051e2d-581c-4d5b-8259-972c12bef429","Type":"ContainerStarted","Data":"730d07e335ce9681571e40b73bdc31eb6e2b0f3c5393200e0c6b99644c23512f"} Jan 27 14:56:39 crc kubenswrapper[4698]: I0127 14:56:39.219458 4698 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 27 14:56:39 crc kubenswrapper[4698]: I0127 14:56:39.358327 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/bb7eb1e4-2394-4979-b1e3-3a4edd6d1f36-nova-metadata-tls-certs\") pod \"bb7eb1e4-2394-4979-b1e3-3a4edd6d1f36\" (UID: \"bb7eb1e4-2394-4979-b1e3-3a4edd6d1f36\") " Jan 27 14:56:39 crc kubenswrapper[4698]: I0127 14:56:39.358514 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bb7eb1e4-2394-4979-b1e3-3a4edd6d1f36-combined-ca-bundle\") pod \"bb7eb1e4-2394-4979-b1e3-3a4edd6d1f36\" (UID: \"bb7eb1e4-2394-4979-b1e3-3a4edd6d1f36\") " Jan 27 14:56:39 crc kubenswrapper[4698]: I0127 14:56:39.358577 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qqsv4\" (UniqueName: \"kubernetes.io/projected/bb7eb1e4-2394-4979-b1e3-3a4edd6d1f36-kube-api-access-qqsv4\") pod \"bb7eb1e4-2394-4979-b1e3-3a4edd6d1f36\" (UID: \"bb7eb1e4-2394-4979-b1e3-3a4edd6d1f36\") " Jan 27 14:56:39 crc kubenswrapper[4698]: I0127 14:56:39.358698 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bb7eb1e4-2394-4979-b1e3-3a4edd6d1f36-config-data\") pod \"bb7eb1e4-2394-4979-b1e3-3a4edd6d1f36\" (UID: \"bb7eb1e4-2394-4979-b1e3-3a4edd6d1f36\") " Jan 27 14:56:39 crc kubenswrapper[4698]: I0127 14:56:39.358856 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bb7eb1e4-2394-4979-b1e3-3a4edd6d1f36-logs\") pod \"bb7eb1e4-2394-4979-b1e3-3a4edd6d1f36\" (UID: \"bb7eb1e4-2394-4979-b1e3-3a4edd6d1f36\") " Jan 27 14:56:39 crc kubenswrapper[4698]: I0127 14:56:39.359421 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bb7eb1e4-2394-4979-b1e3-3a4edd6d1f36-logs" (OuterVolumeSpecName: "logs") pod "bb7eb1e4-2394-4979-b1e3-3a4edd6d1f36" (UID: "bb7eb1e4-2394-4979-b1e3-3a4edd6d1f36"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 14:56:39 crc kubenswrapper[4698]: I0127 14:56:39.363713 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bb7eb1e4-2394-4979-b1e3-3a4edd6d1f36-kube-api-access-qqsv4" (OuterVolumeSpecName: "kube-api-access-qqsv4") pod "bb7eb1e4-2394-4979-b1e3-3a4edd6d1f36" (UID: "bb7eb1e4-2394-4979-b1e3-3a4edd6d1f36"). InnerVolumeSpecName "kube-api-access-qqsv4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:56:39 crc kubenswrapper[4698]: I0127 14:56:39.387658 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bb7eb1e4-2394-4979-b1e3-3a4edd6d1f36-config-data" (OuterVolumeSpecName: "config-data") pod "bb7eb1e4-2394-4979-b1e3-3a4edd6d1f36" (UID: "bb7eb1e4-2394-4979-b1e3-3a4edd6d1f36"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:56:39 crc kubenswrapper[4698]: I0127 14:56:39.392514 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bb7eb1e4-2394-4979-b1e3-3a4edd6d1f36-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "bb7eb1e4-2394-4979-b1e3-3a4edd6d1f36" (UID: "bb7eb1e4-2394-4979-b1e3-3a4edd6d1f36"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:56:39 crc kubenswrapper[4698]: I0127 14:56:39.411041 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-xls4d" Jan 27 14:56:39 crc kubenswrapper[4698]: I0127 14:56:39.429849 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bb7eb1e4-2394-4979-b1e3-3a4edd6d1f36-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "bb7eb1e4-2394-4979-b1e3-3a4edd6d1f36" (UID: "bb7eb1e4-2394-4979-b1e3-3a4edd6d1f36"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:56:39 crc kubenswrapper[4698]: I0127 14:56:39.461671 4698 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bb7eb1e4-2394-4979-b1e3-3a4edd6d1f36-logs\") on node \"crc\" DevicePath \"\"" Jan 27 14:56:39 crc kubenswrapper[4698]: I0127 14:56:39.461707 4698 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/bb7eb1e4-2394-4979-b1e3-3a4edd6d1f36-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 27 14:56:39 crc kubenswrapper[4698]: I0127 14:56:39.461721 4698 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bb7eb1e4-2394-4979-b1e3-3a4edd6d1f36-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 14:56:39 crc kubenswrapper[4698]: I0127 14:56:39.461736 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qqsv4\" (UniqueName: \"kubernetes.io/projected/bb7eb1e4-2394-4979-b1e3-3a4edd6d1f36-kube-api-access-qqsv4\") on node \"crc\" DevicePath \"\"" Jan 27 14:56:39 crc kubenswrapper[4698]: I0127 14:56:39.461748 4698 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bb7eb1e4-2394-4979-b1e3-3a4edd6d1f36-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 14:56:39 crc kubenswrapper[4698]: I0127 14:56:39.464911 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-xls4d" Jan 27 14:56:40 crc kubenswrapper[4698]: I0127 14:56:40.144147 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"bb7eb1e4-2394-4979-b1e3-3a4edd6d1f36","Type":"ContainerDied","Data":"ba1d4200bc8a714887e33bd488f1fe543e789d4c421622a4b34d7cc44abead1a"} Jan 27 14:56:40 crc kubenswrapper[4698]: I0127 14:56:40.144213 4698 scope.go:117] "RemoveContainer" containerID="5cf1c3036eba595f1ffae18d81dfa7c96c748c5cc5a1b1e6fc9992fd20150087" Jan 27 14:56:40 crc kubenswrapper[4698]: I0127 14:56:40.144327 4698 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 27 14:56:40 crc kubenswrapper[4698]: I0127 14:56:40.181117 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 27 14:56:40 crc kubenswrapper[4698]: I0127 14:56:40.197781 4698 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Jan 27 14:56:40 crc kubenswrapper[4698]: I0127 14:56:40.211383 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Jan 27 14:56:40 crc kubenswrapper[4698]: E0127 14:56:40.211974 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bb7eb1e4-2394-4979-b1e3-3a4edd6d1f36" containerName="nova-metadata-log" Jan 27 14:56:40 crc kubenswrapper[4698]: I0127 14:56:40.212011 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="bb7eb1e4-2394-4979-b1e3-3a4edd6d1f36" containerName="nova-metadata-log" Jan 27 14:56:40 crc kubenswrapper[4698]: E0127 14:56:40.212037 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bb7eb1e4-2394-4979-b1e3-3a4edd6d1f36" containerName="nova-metadata-metadata" Jan 27 14:56:40 crc kubenswrapper[4698]: I0127 14:56:40.212043 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="bb7eb1e4-2394-4979-b1e3-3a4edd6d1f36" containerName="nova-metadata-metadata" Jan 27 14:56:40 crc kubenswrapper[4698]: I0127 14:56:40.212448 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="bb7eb1e4-2394-4979-b1e3-3a4edd6d1f36" containerName="nova-metadata-metadata" Jan 27 14:56:40 crc kubenswrapper[4698]: I0127 14:56:40.212485 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="bb7eb1e4-2394-4979-b1e3-3a4edd6d1f36" containerName="nova-metadata-log" Jan 27 14:56:40 crc kubenswrapper[4698]: I0127 14:56:40.213827 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 27 14:56:40 crc kubenswrapper[4698]: I0127 14:56:40.217825 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Jan 27 14:56:40 crc kubenswrapper[4698]: I0127 14:56:40.217836 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Jan 27 14:56:40 crc kubenswrapper[4698]: I0127 14:56:40.271285 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-xls4d"] Jan 27 14:56:40 crc kubenswrapper[4698]: I0127 14:56:40.283755 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 27 14:56:40 crc kubenswrapper[4698]: I0127 14:56:40.380652 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/d9c40e5a-9573-450e-978a-1068bfe7f3a9-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"d9c40e5a-9573-450e-978a-1068bfe7f3a9\") " pod="openstack/nova-metadata-0" Jan 27 14:56:40 crc kubenswrapper[4698]: I0127 14:56:40.380749 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d9c40e5a-9573-450e-978a-1068bfe7f3a9-logs\") pod \"nova-metadata-0\" (UID: \"d9c40e5a-9573-450e-978a-1068bfe7f3a9\") " pod="openstack/nova-metadata-0" Jan 27 14:56:40 crc kubenswrapper[4698]: I0127 14:56:40.380901 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d9c40e5a-9573-450e-978a-1068bfe7f3a9-config-data\") pod \"nova-metadata-0\" (UID: \"d9c40e5a-9573-450e-978a-1068bfe7f3a9\") " pod="openstack/nova-metadata-0" Jan 27 14:56:40 crc kubenswrapper[4698]: I0127 14:56:40.380991 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d9c40e5a-9573-450e-978a-1068bfe7f3a9-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"d9c40e5a-9573-450e-978a-1068bfe7f3a9\") " pod="openstack/nova-metadata-0" Jan 27 14:56:40 crc kubenswrapper[4698]: I0127 14:56:40.381198 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b8j84\" (UniqueName: \"kubernetes.io/projected/d9c40e5a-9573-450e-978a-1068bfe7f3a9-kube-api-access-b8j84\") pod \"nova-metadata-0\" (UID: \"d9c40e5a-9573-450e-978a-1068bfe7f3a9\") " pod="openstack/nova-metadata-0" Jan 27 14:56:40 crc kubenswrapper[4698]: I0127 14:56:40.483759 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d9c40e5a-9573-450e-978a-1068bfe7f3a9-config-data\") pod \"nova-metadata-0\" (UID: \"d9c40e5a-9573-450e-978a-1068bfe7f3a9\") " pod="openstack/nova-metadata-0" Jan 27 14:56:40 crc kubenswrapper[4698]: I0127 14:56:40.483871 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d9c40e5a-9573-450e-978a-1068bfe7f3a9-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"d9c40e5a-9573-450e-978a-1068bfe7f3a9\") " pod="openstack/nova-metadata-0" Jan 27 14:56:40 crc kubenswrapper[4698]: I0127 14:56:40.483935 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b8j84\" (UniqueName: 
\"kubernetes.io/projected/d9c40e5a-9573-450e-978a-1068bfe7f3a9-kube-api-access-b8j84\") pod \"nova-metadata-0\" (UID: \"d9c40e5a-9573-450e-978a-1068bfe7f3a9\") " pod="openstack/nova-metadata-0" Jan 27 14:56:40 crc kubenswrapper[4698]: I0127 14:56:40.484078 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/d9c40e5a-9573-450e-978a-1068bfe7f3a9-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"d9c40e5a-9573-450e-978a-1068bfe7f3a9\") " pod="openstack/nova-metadata-0" Jan 27 14:56:40 crc kubenswrapper[4698]: I0127 14:56:40.484477 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d9c40e5a-9573-450e-978a-1068bfe7f3a9-logs\") pod \"nova-metadata-0\" (UID: \"d9c40e5a-9573-450e-978a-1068bfe7f3a9\") " pod="openstack/nova-metadata-0" Jan 27 14:56:40 crc kubenswrapper[4698]: I0127 14:56:40.484936 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d9c40e5a-9573-450e-978a-1068bfe7f3a9-logs\") pod \"nova-metadata-0\" (UID: \"d9c40e5a-9573-450e-978a-1068bfe7f3a9\") " pod="openstack/nova-metadata-0" Jan 27 14:56:40 crc kubenswrapper[4698]: I0127 14:56:40.488541 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d9c40e5a-9573-450e-978a-1068bfe7f3a9-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"d9c40e5a-9573-450e-978a-1068bfe7f3a9\") " pod="openstack/nova-metadata-0" Jan 27 14:56:40 crc kubenswrapper[4698]: I0127 14:56:40.490107 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/d9c40e5a-9573-450e-978a-1068bfe7f3a9-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"d9c40e5a-9573-450e-978a-1068bfe7f3a9\") " pod="openstack/nova-metadata-0" Jan 27 14:56:40 crc kubenswrapper[4698]: I0127 14:56:40.491358 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d9c40e5a-9573-450e-978a-1068bfe7f3a9-config-data\") pod \"nova-metadata-0\" (UID: \"d9c40e5a-9573-450e-978a-1068bfe7f3a9\") " pod="openstack/nova-metadata-0" Jan 27 14:56:40 crc kubenswrapper[4698]: I0127 14:56:40.500992 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b8j84\" (UniqueName: \"kubernetes.io/projected/d9c40e5a-9573-450e-978a-1068bfe7f3a9-kube-api-access-b8j84\") pod \"nova-metadata-0\" (UID: \"d9c40e5a-9573-450e-978a-1068bfe7f3a9\") " pod="openstack/nova-metadata-0" Jan 27 14:56:40 crc kubenswrapper[4698]: I0127 14:56:40.550465 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 27 14:56:40 crc kubenswrapper[4698]: I0127 14:56:40.706025 4698 scope.go:117] "RemoveContainer" containerID="cd661b4f5abc76cebc2596d90b6e7bc008b3e1a7b60cdaaaa319b394e7b8a955" Jan 27 14:56:41 crc kubenswrapper[4698]: I0127 14:56:41.005542 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bb7eb1e4-2394-4979-b1e3-3a4edd6d1f36" path="/var/lib/kubelet/pods/bb7eb1e4-2394-4979-b1e3-3a4edd6d1f36/volumes" Jan 27 14:56:41 crc kubenswrapper[4698]: I0127 14:56:41.156788 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"20051e2d-581c-4d5b-8259-972c12bef429","Type":"ContainerStarted","Data":"5fb599927590c1307a7e0641da29d53853002dc84bf9c0c8c8f8ef06898f0fb7"} Jan 27 14:56:41 crc kubenswrapper[4698]: I0127 14:56:41.156995 4698 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-xls4d" podUID="187d8b4f-4757-45bf-a23b-b35702a18f93" containerName="registry-server" containerID="cri-o://07a6855d7507772e6bfae39133c7ebef70fc374c3f8f5e20742844418bb02c36" gracePeriod=2 Jan 27 14:56:41 crc kubenswrapper[4698]: I0127 14:56:41.176000 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=1.494712027 podStartE2EDuration="3.175981193s" podCreationTimestamp="2026-01-27 14:56:38 +0000 UTC" firstStartedPulling="2026-01-27 14:56:39.039572481 +0000 UTC m=+1654.716349946" lastFinishedPulling="2026-01-27 14:56:40.720841647 +0000 UTC m=+1656.397619112" observedRunningTime="2026-01-27 14:56:41.174920786 +0000 UTC m=+1656.851698251" watchObservedRunningTime="2026-01-27 14:56:41.175981193 +0000 UTC m=+1656.852758658" Jan 27 14:56:41 crc kubenswrapper[4698]: I0127 14:56:41.231006 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 27 14:56:42 crc kubenswrapper[4698]: I0127 14:56:42.187325 4698 generic.go:334] "Generic (PLEG): container finished" podID="187d8b4f-4757-45bf-a23b-b35702a18f93" containerID="07a6855d7507772e6bfae39133c7ebef70fc374c3f8f5e20742844418bb02c36" exitCode=0 Jan 27 14:56:42 crc kubenswrapper[4698]: I0127 14:56:42.187411 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xls4d" event={"ID":"187d8b4f-4757-45bf-a23b-b35702a18f93","Type":"ContainerDied","Data":"07a6855d7507772e6bfae39133c7ebef70fc374c3f8f5e20742844418bb02c36"} Jan 27 14:56:42 crc kubenswrapper[4698]: I0127 14:56:42.193764 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"d9c40e5a-9573-450e-978a-1068bfe7f3a9","Type":"ContainerStarted","Data":"bed8c95ffce8b4379b9e0a947455d0862c7dfb7d954e33730e4555f1a6da9ea6"} Jan 27 14:56:42 crc kubenswrapper[4698]: I0127 14:56:42.193800 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"d9c40e5a-9573-450e-978a-1068bfe7f3a9","Type":"ContainerStarted","Data":"268004e7209930768b219c8bb420cd1a99618d53389a961dd7beaaedc096a068"} Jan 27 14:56:42 crc kubenswrapper[4698]: I0127 14:56:42.193810 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"d9c40e5a-9573-450e-978a-1068bfe7f3a9","Type":"ContainerStarted","Data":"3155700640b991b5ee3b5dfb97145f869ff02b4f1d95970ce1ed18569637fd8f"} Jan 27 14:56:42 crc kubenswrapper[4698]: I0127 14:56:42.217245 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/nova-metadata-0" podStartSLOduration=2.217227995 podStartE2EDuration="2.217227995s" podCreationTimestamp="2026-01-27 14:56:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:56:42.215295484 +0000 UTC m=+1657.892072979" watchObservedRunningTime="2026-01-27 14:56:42.217227995 +0000 UTC m=+1657.894005460" Jan 27 14:56:42 crc kubenswrapper[4698]: I0127 14:56:42.264797 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-xls4d" Jan 27 14:56:42 crc kubenswrapper[4698]: I0127 14:56:42.430669 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/187d8b4f-4757-45bf-a23b-b35702a18f93-catalog-content\") pod \"187d8b4f-4757-45bf-a23b-b35702a18f93\" (UID: \"187d8b4f-4757-45bf-a23b-b35702a18f93\") " Jan 27 14:56:42 crc kubenswrapper[4698]: I0127 14:56:42.431184 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8vjq5\" (UniqueName: \"kubernetes.io/projected/187d8b4f-4757-45bf-a23b-b35702a18f93-kube-api-access-8vjq5\") pod \"187d8b4f-4757-45bf-a23b-b35702a18f93\" (UID: \"187d8b4f-4757-45bf-a23b-b35702a18f93\") " Jan 27 14:56:42 crc kubenswrapper[4698]: I0127 14:56:42.431305 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/187d8b4f-4757-45bf-a23b-b35702a18f93-utilities\") pod \"187d8b4f-4757-45bf-a23b-b35702a18f93\" (UID: \"187d8b4f-4757-45bf-a23b-b35702a18f93\") " Jan 27 14:56:42 crc kubenswrapper[4698]: I0127 14:56:42.432463 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/187d8b4f-4757-45bf-a23b-b35702a18f93-utilities" (OuterVolumeSpecName: "utilities") pod "187d8b4f-4757-45bf-a23b-b35702a18f93" (UID: "187d8b4f-4757-45bf-a23b-b35702a18f93"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 14:56:42 crc kubenswrapper[4698]: I0127 14:56:42.437831 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/187d8b4f-4757-45bf-a23b-b35702a18f93-kube-api-access-8vjq5" (OuterVolumeSpecName: "kube-api-access-8vjq5") pod "187d8b4f-4757-45bf-a23b-b35702a18f93" (UID: "187d8b4f-4757-45bf-a23b-b35702a18f93"). InnerVolumeSpecName "kube-api-access-8vjq5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:56:42 crc kubenswrapper[4698]: I0127 14:56:42.456055 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/187d8b4f-4757-45bf-a23b-b35702a18f93-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "187d8b4f-4757-45bf-a23b-b35702a18f93" (UID: "187d8b4f-4757-45bf-a23b-b35702a18f93"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 14:56:42 crc kubenswrapper[4698]: I0127 14:56:42.534734 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8vjq5\" (UniqueName: \"kubernetes.io/projected/187d8b4f-4757-45bf-a23b-b35702a18f93-kube-api-access-8vjq5\") on node \"crc\" DevicePath \"\"" Jan 27 14:56:42 crc kubenswrapper[4698]: I0127 14:56:42.534799 4698 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/187d8b4f-4757-45bf-a23b-b35702a18f93-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 14:56:42 crc kubenswrapper[4698]: I0127 14:56:42.534814 4698 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/187d8b4f-4757-45bf-a23b-b35702a18f93-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 14:56:43 crc kubenswrapper[4698]: I0127 14:56:43.205395 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xls4d" event={"ID":"187d8b4f-4757-45bf-a23b-b35702a18f93","Type":"ContainerDied","Data":"2409a1ce56112c05e2e8f2143f4182ae7526da6dc448b617c496b3fae69e6bf6"} Jan 27 14:56:43 crc kubenswrapper[4698]: I0127 14:56:43.205449 4698 scope.go:117] "RemoveContainer" containerID="07a6855d7507772e6bfae39133c7ebef70fc374c3f8f5e20742844418bb02c36" Jan 27 14:56:43 crc kubenswrapper[4698]: I0127 14:56:43.205458 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-xls4d" Jan 27 14:56:43 crc kubenswrapper[4698]: I0127 14:56:43.210181 4698 generic.go:334] "Generic (PLEG): container finished" podID="65866040-440d-4bca-91f4-944cfce917cb" containerID="94529a3732b71a3e076e21cec1c296a9493fcab8e24ec5be09857e1fb0ac0a9b" exitCode=0 Jan 27 14:56:43 crc kubenswrapper[4698]: I0127 14:56:43.210311 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"65866040-440d-4bca-91f4-944cfce917cb","Type":"ContainerDied","Data":"94529a3732b71a3e076e21cec1c296a9493fcab8e24ec5be09857e1fb0ac0a9b"} Jan 27 14:56:43 crc kubenswrapper[4698]: I0127 14:56:43.231839 4698 scope.go:117] "RemoveContainer" containerID="13c3fd2a6740f6155f3816ba82eb087ee30b7f0de6bc34a85ce91b282340c826" Jan 27 14:56:43 crc kubenswrapper[4698]: I0127 14:56:43.248162 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-xls4d"] Jan 27 14:56:43 crc kubenswrapper[4698]: I0127 14:56:43.260743 4698 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-xls4d"] Jan 27 14:56:43 crc kubenswrapper[4698]: I0127 14:56:43.372021 4698 scope.go:117] "RemoveContainer" containerID="f289095516f359591c44042298a3b24872517b85c1864d6ad70aeb7ed6b11444" Jan 27 14:56:43 crc kubenswrapper[4698]: I0127 14:56:43.530713 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Jan 27 14:56:43 crc kubenswrapper[4698]: I0127 14:56:43.687292 4698 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 27 14:56:43 crc kubenswrapper[4698]: I0127 14:56:43.863574 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jk7qn\" (UniqueName: \"kubernetes.io/projected/65866040-440d-4bca-91f4-944cfce917cb-kube-api-access-jk7qn\") pod \"65866040-440d-4bca-91f4-944cfce917cb\" (UID: \"65866040-440d-4bca-91f4-944cfce917cb\") " Jan 27 14:56:43 crc kubenswrapper[4698]: I0127 14:56:43.863734 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/65866040-440d-4bca-91f4-944cfce917cb-config-data\") pod \"65866040-440d-4bca-91f4-944cfce917cb\" (UID: \"65866040-440d-4bca-91f4-944cfce917cb\") " Jan 27 14:56:43 crc kubenswrapper[4698]: I0127 14:56:43.863961 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/65866040-440d-4bca-91f4-944cfce917cb-logs\") pod \"65866040-440d-4bca-91f4-944cfce917cb\" (UID: \"65866040-440d-4bca-91f4-944cfce917cb\") " Jan 27 14:56:43 crc kubenswrapper[4698]: I0127 14:56:43.864090 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/65866040-440d-4bca-91f4-944cfce917cb-combined-ca-bundle\") pod \"65866040-440d-4bca-91f4-944cfce917cb\" (UID: \"65866040-440d-4bca-91f4-944cfce917cb\") " Jan 27 14:56:43 crc kubenswrapper[4698]: I0127 14:56:43.864956 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/65866040-440d-4bca-91f4-944cfce917cb-logs" (OuterVolumeSpecName: "logs") pod "65866040-440d-4bca-91f4-944cfce917cb" (UID: "65866040-440d-4bca-91f4-944cfce917cb"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 14:56:43 crc kubenswrapper[4698]: I0127 14:56:43.865516 4698 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/65866040-440d-4bca-91f4-944cfce917cb-logs\") on node \"crc\" DevicePath \"\"" Jan 27 14:56:43 crc kubenswrapper[4698]: I0127 14:56:43.869983 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/65866040-440d-4bca-91f4-944cfce917cb-kube-api-access-jk7qn" (OuterVolumeSpecName: "kube-api-access-jk7qn") pod "65866040-440d-4bca-91f4-944cfce917cb" (UID: "65866040-440d-4bca-91f4-944cfce917cb"). InnerVolumeSpecName "kube-api-access-jk7qn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:56:43 crc kubenswrapper[4698]: I0127 14:56:43.897013 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/65866040-440d-4bca-91f4-944cfce917cb-config-data" (OuterVolumeSpecName: "config-data") pod "65866040-440d-4bca-91f4-944cfce917cb" (UID: "65866040-440d-4bca-91f4-944cfce917cb"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:56:43 crc kubenswrapper[4698]: I0127 14:56:43.914317 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/65866040-440d-4bca-91f4-944cfce917cb-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "65866040-440d-4bca-91f4-944cfce917cb" (UID: "65866040-440d-4bca-91f4-944cfce917cb"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:56:43 crc kubenswrapper[4698]: I0127 14:56:43.967103 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jk7qn\" (UniqueName: \"kubernetes.io/projected/65866040-440d-4bca-91f4-944cfce917cb-kube-api-access-jk7qn\") on node \"crc\" DevicePath \"\"" Jan 27 14:56:43 crc kubenswrapper[4698]: I0127 14:56:43.967136 4698 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/65866040-440d-4bca-91f4-944cfce917cb-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 14:56:43 crc kubenswrapper[4698]: I0127 14:56:43.967148 4698 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/65866040-440d-4bca-91f4-944cfce917cb-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 14:56:44 crc kubenswrapper[4698]: I0127 14:56:44.222388 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 27 14:56:44 crc kubenswrapper[4698]: I0127 14:56:44.222412 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"65866040-440d-4bca-91f4-944cfce917cb","Type":"ContainerDied","Data":"77af34cdd5f441b4ab929c53e6a46ed76206cf99a497d1ed8b99be2aef16d234"} Jan 27 14:56:44 crc kubenswrapper[4698]: I0127 14:56:44.222501 4698 scope.go:117] "RemoveContainer" containerID="94529a3732b71a3e076e21cec1c296a9493fcab8e24ec5be09857e1fb0ac0a9b" Jan 27 14:56:44 crc kubenswrapper[4698]: I0127 14:56:44.270382 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 27 14:56:44 crc kubenswrapper[4698]: I0127 14:56:44.283383 4698 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Jan 27 14:56:44 crc kubenswrapper[4698]: I0127 14:56:44.299255 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 27 14:56:44 crc kubenswrapper[4698]: E0127 14:56:44.299735 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="65866040-440d-4bca-91f4-944cfce917cb" containerName="nova-api-log" Jan 27 14:56:44 crc kubenswrapper[4698]: I0127 14:56:44.299759 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="65866040-440d-4bca-91f4-944cfce917cb" containerName="nova-api-log" Jan 27 14:56:44 crc kubenswrapper[4698]: E0127 14:56:44.299781 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="187d8b4f-4757-45bf-a23b-b35702a18f93" containerName="extract-content" Jan 27 14:56:44 crc kubenswrapper[4698]: I0127 14:56:44.299790 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="187d8b4f-4757-45bf-a23b-b35702a18f93" containerName="extract-content" Jan 27 14:56:44 crc kubenswrapper[4698]: E0127 14:56:44.299850 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="187d8b4f-4757-45bf-a23b-b35702a18f93" containerName="registry-server" Jan 27 14:56:44 crc kubenswrapper[4698]: I0127 14:56:44.299859 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="187d8b4f-4757-45bf-a23b-b35702a18f93" containerName="registry-server" Jan 27 14:56:44 crc kubenswrapper[4698]: E0127 14:56:44.299870 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="187d8b4f-4757-45bf-a23b-b35702a18f93" containerName="extract-utilities" Jan 27 14:56:44 crc kubenswrapper[4698]: I0127 14:56:44.299878 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="187d8b4f-4757-45bf-a23b-b35702a18f93" containerName="extract-utilities" Jan 27 14:56:44 crc 
Jan 27 14:56:44 crc kubenswrapper[4698]: E0127 14:56:44.299891 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="65866040-440d-4bca-91f4-944cfce917cb" containerName="nova-api-api"
Jan 27 14:56:44 crc kubenswrapper[4698]: I0127 14:56:44.299899 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="65866040-440d-4bca-91f4-944cfce917cb" containerName="nova-api-api"
Jan 27 14:56:44 crc kubenswrapper[4698]: I0127 14:56:44.300158 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="65866040-440d-4bca-91f4-944cfce917cb" containerName="nova-api-log"
Jan 27 14:56:44 crc kubenswrapper[4698]: I0127 14:56:44.300182 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="187d8b4f-4757-45bf-a23b-b35702a18f93" containerName="registry-server"
Jan 27 14:56:44 crc kubenswrapper[4698]: I0127 14:56:44.300214 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="65866040-440d-4bca-91f4-944cfce917cb" containerName="nova-api-api"
Jan 27 14:56:44 crc kubenswrapper[4698]: I0127 14:56:44.301499 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Jan 27 14:56:44 crc kubenswrapper[4698]: I0127 14:56:44.311756 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"]
Jan 27 14:56:44 crc kubenswrapper[4698]: I0127 14:56:44.314118 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data"
Jan 27 14:56:44 crc kubenswrapper[4698]: I0127 14:56:44.328594 4698 scope.go:117] "RemoveContainer" containerID="78b7f571254d3fec6d37975858eed38a61d6b41bae28e704358b6f920e367de5"
Jan 27 14:56:44 crc kubenswrapper[4698]: I0127 14:56:44.477423 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8918ed88-4255-4340-ba5d-361e6f424fcc-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"8918ed88-4255-4340-ba5d-361e6f424fcc\") " pod="openstack/nova-api-0"
Jan 27 14:56:44 crc kubenswrapper[4698]: I0127 14:56:44.477799 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pwt8m\" (UniqueName: \"kubernetes.io/projected/8918ed88-4255-4340-ba5d-361e6f424fcc-kube-api-access-pwt8m\") pod \"nova-api-0\" (UID: \"8918ed88-4255-4340-ba5d-361e6f424fcc\") " pod="openstack/nova-api-0"
Jan 27 14:56:44 crc kubenswrapper[4698]: I0127 14:56:44.477854 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8918ed88-4255-4340-ba5d-361e6f424fcc-logs\") pod \"nova-api-0\" (UID: \"8918ed88-4255-4340-ba5d-361e6f424fcc\") " pod="openstack/nova-api-0"
Jan 27 14:56:44 crc kubenswrapper[4698]: I0127 14:56:44.478061 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8918ed88-4255-4340-ba5d-361e6f424fcc-config-data\") pod \"nova-api-0\" (UID: \"8918ed88-4255-4340-ba5d-361e6f424fcc\") " pod="openstack/nova-api-0"
Jan 27 14:56:44 crc kubenswrapper[4698]: I0127 14:56:44.580705 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8918ed88-4255-4340-ba5d-361e6f424fcc-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"8918ed88-4255-4340-ba5d-361e6f424fcc\") " pod="openstack/nova-api-0"
Jan 27 14:56:44 crc kubenswrapper[4698]: I0127 14:56:44.580784 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pwt8m\" (UniqueName: \"kubernetes.io/projected/8918ed88-4255-4340-ba5d-361e6f424fcc-kube-api-access-pwt8m\") pod \"nova-api-0\" (UID: \"8918ed88-4255-4340-ba5d-361e6f424fcc\") " pod="openstack/nova-api-0"
Jan 27 14:56:44 crc kubenswrapper[4698]: I0127 14:56:44.580849 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8918ed88-4255-4340-ba5d-361e6f424fcc-logs\") pod \"nova-api-0\" (UID: \"8918ed88-4255-4340-ba5d-361e6f424fcc\") " pod="openstack/nova-api-0"
Jan 27 14:56:44 crc kubenswrapper[4698]: I0127 14:56:44.580967 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8918ed88-4255-4340-ba5d-361e6f424fcc-config-data\") pod \"nova-api-0\" (UID: \"8918ed88-4255-4340-ba5d-361e6f424fcc\") " pod="openstack/nova-api-0"
Jan 27 14:56:44 crc kubenswrapper[4698]: I0127 14:56:44.581658 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8918ed88-4255-4340-ba5d-361e6f424fcc-logs\") pod \"nova-api-0\" (UID: \"8918ed88-4255-4340-ba5d-361e6f424fcc\") " pod="openstack/nova-api-0"
Jan 27 14:56:44 crc kubenswrapper[4698]: I0127 14:56:44.586458 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8918ed88-4255-4340-ba5d-361e6f424fcc-config-data\") pod \"nova-api-0\" (UID: \"8918ed88-4255-4340-ba5d-361e6f424fcc\") " pod="openstack/nova-api-0"
Jan 27 14:56:44 crc kubenswrapper[4698]: I0127 14:56:44.593468 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8918ed88-4255-4340-ba5d-361e6f424fcc-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"8918ed88-4255-4340-ba5d-361e6f424fcc\") " pod="openstack/nova-api-0"
Jan 27 14:56:44 crc kubenswrapper[4698]: I0127 14:56:44.603613 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pwt8m\" (UniqueName: \"kubernetes.io/projected/8918ed88-4255-4340-ba5d-361e6f424fcc-kube-api-access-pwt8m\") pod \"nova-api-0\" (UID: \"8918ed88-4255-4340-ba5d-361e6f424fcc\") " pod="openstack/nova-api-0"
Jan 27 14:56:44 crc kubenswrapper[4698]: I0127 14:56:44.636344 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Jan 27 14:56:45 crc kubenswrapper[4698]: I0127 14:56:45.004451 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="187d8b4f-4757-45bf-a23b-b35702a18f93" path="/var/lib/kubelet/pods/187d8b4f-4757-45bf-a23b-b35702a18f93/volumes"
Jan 27 14:56:45 crc kubenswrapper[4698]: I0127 14:56:45.005774 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="65866040-440d-4bca-91f4-944cfce917cb" path="/var/lib/kubelet/pods/65866040-440d-4bca-91f4-944cfce917cb/volumes"
Jan 27 14:56:45 crc kubenswrapper[4698]: I0127 14:56:45.122953 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"]
Jan 27 14:56:45 crc kubenswrapper[4698]: I0127 14:56:45.243336 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"8918ed88-4255-4340-ba5d-361e6f424fcc","Type":"ContainerStarted","Data":"c296eda9de6fde5290ddfbb8eda46bf637548102bc64e3940b3c952f37532741"}
Jan 27 14:56:45 crc kubenswrapper[4698]: I0127 14:56:45.550578 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0"
Jan 27 14:56:45 crc kubenswrapper[4698]: I0127 14:56:45.550891 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0"
Jan 27 14:56:45 crc kubenswrapper[4698]: I0127 14:56:45.994069 4698 scope.go:117] "RemoveContainer" containerID="b5e738b87b0ce5279fe16e0f062d8436b14ec328939703048a41953bc1584a16"
Jan 27 14:56:45 crc kubenswrapper[4698]: E0127 14:56:45.994597 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ndrd6_openshift-machine-config-operator(3e403fc5-7005-474c-8c75-b7906b481677)\"" pod="openshift-machine-config-operator/machine-config-daemon-ndrd6" podUID="3e403fc5-7005-474c-8c75-b7906b481677"
Jan 27 14:56:46 crc kubenswrapper[4698]: I0127 14:56:46.256786 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"8918ed88-4255-4340-ba5d-361e6f424fcc","Type":"ContainerStarted","Data":"612d2191cf839b24a24c85ccffb326bd59759ea7417b62a9211279dd0b2631c5"}
Jan 27 14:56:46 crc kubenswrapper[4698]: I0127 14:56:46.256843 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"8918ed88-4255-4340-ba5d-361e6f424fcc","Type":"ContainerStarted","Data":"2f1165e66864b600b88d07c445a1a80e3bdab4fac969a095abe76252dae2e94b"}
Jan 27 14:56:46 crc kubenswrapper[4698]: I0127 14:56:46.295181 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.295161356 podStartE2EDuration="2.295161356s" podCreationTimestamp="2026-01-27 14:56:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:56:46.293546884 +0000 UTC m=+1661.970324369" watchObservedRunningTime="2026-01-27 14:56:46.295161356 +0000 UTC m=+1661.971938821"
Jan 27 14:56:48 crc kubenswrapper[4698]: I0127 14:56:48.531721 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0"
Jan 27 14:56:48 crc kubenswrapper[4698]: I0127 14:56:48.561147 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0"
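In the "Observed pod startup duration" entry above, podStartE2EDuration="2.295161356s" is exactly watchObservedRunningTime minus podCreationTimestamp. A small sketch (illustrative only, not from the log) redoes that arithmetic with the two timestamps from the entry; the layout string matches the "2026-01-27 14:56:46.295161356 +0000 UTC" format the tracker prints:

```go
// startup_latency.go — sketch: recompute the e2e startup duration logged by
// pod_startup_latency_tracker.go from the timestamps in the entry above.
package main

import (
	"fmt"
	"time"
)

func main() {
	// Go's default time.Time format; the fractional part is optional on parse.
	const layout = "2006-01-02 15:04:05.999999999 -0700 MST"

	created, err := time.Parse(layout, "2026-01-27 14:56:44 +0000 UTC")
	if err != nil {
		panic(err)
	}
	observed, err := time.Parse(layout, "2026-01-27 14:56:46.295161356 +0000 UTC")
	if err != nil {
		panic(err)
	}

	// Prints 2.295161356s, matching podStartE2EDuration in the log line.
	fmt.Println("e2e startup duration:", observed.Sub(created))
}
```

Note that firstStartedPulling and lastFinishedPulling are the zero time ("0001-01-01 00:00:00 +0000 UTC") here, meaning no image pull contributed to the startup latency.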
Jan 27 14:56:49 crc kubenswrapper[4698]: I0127 14:56:49.311494 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0"
Jan 27 14:56:50 crc kubenswrapper[4698]: I0127 14:56:50.551336 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0"
Jan 27 14:56:50 crc kubenswrapper[4698]: I0127 14:56:50.551377 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0"
Jan 27 14:56:51 crc kubenswrapper[4698]: I0127 14:56:51.563882 4698 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="d9c40e5a-9573-450e-978a-1068bfe7f3a9" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.222:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Jan 27 14:56:51 crc kubenswrapper[4698]: I0127 14:56:51.563999 4698 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="d9c40e5a-9573-450e-978a-1068bfe7f3a9" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.222:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Jan 27 14:56:54 crc kubenswrapper[4698]: I0127 14:56:54.636854 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0"
Jan 27 14:56:54 crc kubenswrapper[4698]: I0127 14:56:54.637175 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0"
Jan 27 14:56:55 crc kubenswrapper[4698]: I0127 14:56:55.719932 4698 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="8918ed88-4255-4340-ba5d-361e6f424fcc" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.223:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 27 14:56:55 crc kubenswrapper[4698]: I0127 14:56:55.719970 4698 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="8918ed88-4255-4340-ba5d-361e6f424fcc" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.223:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 27 14:56:56 crc kubenswrapper[4698]: I0127 14:56:56.351324 4698 generic.go:334] "Generic (PLEG): container finished" podID="24589df3-de69-4037-a263-2c08e46fc8ce" containerID="b279f31de2d88d810b9d3b00bccd2c9b249ab8c4f36e1205b3db42a12dec02ee" exitCode=0
Jan 27 14:56:56 crc kubenswrapper[4698]: I0127 14:56:56.351379 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-85rx4" event={"ID":"24589df3-de69-4037-a263-2c08e46fc8ce","Type":"ContainerDied","Data":"b279f31de2d88d810b9d3b00bccd2c9b249ab8c4f36e1205b3db42a12dec02ee"}
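The "(Client.Timeout exceeded while awaiting headers)" suffix in the probe outputs above is what Go's net/http client reports when the target accepts the connection but does not return response headers within the probe timeout; the services here are up but still too slow during startup. A self-contained sketch (assumed 1s timeout, an httptest server standing in for the nova endpoints) reproduces that exact failure mode:

```go
// probe_timeout.go — sketch only: the server is reachable but slower than the
// client's timeout, so the GET fails while waiting for headers, like the
// startup probes above. The timings are assumptions for illustration.
package main

import (
	"fmt"
	"net/http"
	"net/http/httptest"
	"time"
)

func main() {
	// Stand-in for an endpoint that is listening but not yet answering.
	slow := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		time.Sleep(5 * time.Second) // longer than the client's timeout below
	}))
	defer slow.Close()

	client := &http.Client{Timeout: 1 * time.Second} // probe-style deadline
	_, err := client.Get(slow.URL)
	fmt.Println(err)
	// The error ends with: (Client.Timeout exceeded while awaiting headers)
}
```

That the probes recover on their own a few entries later (status="started", then "ready") is consistent with slow startup rather than a broken endpoint.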
Jan 27 14:56:57 crc kubenswrapper[4698]: I0127 14:56:57.795420 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-85rx4"
Jan 27 14:56:57 crc kubenswrapper[4698]: I0127 14:56:57.937984 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/24589df3-de69-4037-a263-2c08e46fc8ce-combined-ca-bundle\") pod \"24589df3-de69-4037-a263-2c08e46fc8ce\" (UID: \"24589df3-de69-4037-a263-2c08e46fc8ce\") "
Jan 27 14:56:57 crc kubenswrapper[4698]: I0127 14:56:57.938085 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4dqd4\" (UniqueName: \"kubernetes.io/projected/24589df3-de69-4037-a263-2c08e46fc8ce-kube-api-access-4dqd4\") pod \"24589df3-de69-4037-a263-2c08e46fc8ce\" (UID: \"24589df3-de69-4037-a263-2c08e46fc8ce\") "
Jan 27 14:56:57 crc kubenswrapper[4698]: I0127 14:56:57.938311 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/24589df3-de69-4037-a263-2c08e46fc8ce-scripts\") pod \"24589df3-de69-4037-a263-2c08e46fc8ce\" (UID: \"24589df3-de69-4037-a263-2c08e46fc8ce\") "
Jan 27 14:56:57 crc kubenswrapper[4698]: I0127 14:56:57.938353 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/24589df3-de69-4037-a263-2c08e46fc8ce-config-data\") pod \"24589df3-de69-4037-a263-2c08e46fc8ce\" (UID: \"24589df3-de69-4037-a263-2c08e46fc8ce\") "
Jan 27 14:56:57 crc kubenswrapper[4698]: I0127 14:56:57.944004 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/24589df3-de69-4037-a263-2c08e46fc8ce-scripts" (OuterVolumeSpecName: "scripts") pod "24589df3-de69-4037-a263-2c08e46fc8ce" (UID: "24589df3-de69-4037-a263-2c08e46fc8ce"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 14:56:57 crc kubenswrapper[4698]: I0127 14:56:57.945556 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/24589df3-de69-4037-a263-2c08e46fc8ce-kube-api-access-4dqd4" (OuterVolumeSpecName: "kube-api-access-4dqd4") pod "24589df3-de69-4037-a263-2c08e46fc8ce" (UID: "24589df3-de69-4037-a263-2c08e46fc8ce"). InnerVolumeSpecName "kube-api-access-4dqd4". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 14:56:57 crc kubenswrapper[4698]: I0127 14:56:57.970326 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/24589df3-de69-4037-a263-2c08e46fc8ce-config-data" (OuterVolumeSpecName: "config-data") pod "24589df3-de69-4037-a263-2c08e46fc8ce" (UID: "24589df3-de69-4037-a263-2c08e46fc8ce"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 14:56:57 crc kubenswrapper[4698]: I0127 14:56:57.975745 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/24589df3-de69-4037-a263-2c08e46fc8ce-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "24589df3-de69-4037-a263-2c08e46fc8ce" (UID: "24589df3-de69-4037-a263-2c08e46fc8ce"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 14:56:58 crc kubenswrapper[4698]: I0127 14:56:58.041846 4698 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/24589df3-de69-4037-a263-2c08e46fc8ce-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 27 14:56:58 crc kubenswrapper[4698]: I0127 14:56:58.042542 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4dqd4\" (UniqueName: \"kubernetes.io/projected/24589df3-de69-4037-a263-2c08e46fc8ce-kube-api-access-4dqd4\") on node \"crc\" DevicePath \"\""
Jan 27 14:56:58 crc kubenswrapper[4698]: I0127 14:56:58.042568 4698 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/24589df3-de69-4037-a263-2c08e46fc8ce-scripts\") on node \"crc\" DevicePath \"\""
Jan 27 14:56:58 crc kubenswrapper[4698]: I0127 14:56:58.042582 4698 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/24589df3-de69-4037-a263-2c08e46fc8ce-config-data\") on node \"crc\" DevicePath \"\""
Jan 27 14:56:58 crc kubenswrapper[4698]: I0127 14:56:58.382243 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-85rx4" event={"ID":"24589df3-de69-4037-a263-2c08e46fc8ce","Type":"ContainerDied","Data":"c87d428bb815d9ac1130a7b63ca4955332e450ee04ba419788ccbb4bb2c2c1ce"}
Jan 27 14:56:58 crc kubenswrapper[4698]: I0127 14:56:58.382287 4698 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c87d428bb815d9ac1130a7b63ca4955332e450ee04ba419788ccbb4bb2c2c1ce"
Jan 27 14:56:58 crc kubenswrapper[4698]: I0127 14:56:58.382310 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-85rx4"
Jan 27 14:56:58 crc kubenswrapper[4698]: I0127 14:56:58.461757 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-0"]
Jan 27 14:56:58 crc kubenswrapper[4698]: E0127 14:56:58.462299 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="24589df3-de69-4037-a263-2c08e46fc8ce" containerName="nova-cell1-conductor-db-sync"
Jan 27 14:56:58 crc kubenswrapper[4698]: I0127 14:56:58.462342 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="24589df3-de69-4037-a263-2c08e46fc8ce" containerName="nova-cell1-conductor-db-sync"
Jan 27 14:56:58 crc kubenswrapper[4698]: I0127 14:56:58.462619 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="24589df3-de69-4037-a263-2c08e46fc8ce" containerName="nova-cell1-conductor-db-sync"
Jan 27 14:56:58 crc kubenswrapper[4698]: I0127 14:56:58.463586 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0"
Jan 27 14:56:58 crc kubenswrapper[4698]: I0127 14:56:58.466067 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data"
Jan 27 14:56:58 crc kubenswrapper[4698]: I0127 14:56:58.479904 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"]
Jan 27 14:56:58 crc kubenswrapper[4698]: I0127 14:56:58.552179 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9dd428b1-641b-4e2a-a0cc-72629e7e091b-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"9dd428b1-641b-4e2a-a0cc-72629e7e091b\") " pod="openstack/nova-cell1-conductor-0"
Jan 27 14:56:58 crc kubenswrapper[4698]: I0127 14:56:58.552326 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9dd428b1-641b-4e2a-a0cc-72629e7e091b-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"9dd428b1-641b-4e2a-a0cc-72629e7e091b\") " pod="openstack/nova-cell1-conductor-0"
Jan 27 14:56:58 crc kubenswrapper[4698]: I0127 14:56:58.552555 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lghz6\" (UniqueName: \"kubernetes.io/projected/9dd428b1-641b-4e2a-a0cc-72629e7e091b-kube-api-access-lghz6\") pod \"nova-cell1-conductor-0\" (UID: \"9dd428b1-641b-4e2a-a0cc-72629e7e091b\") " pod="openstack/nova-cell1-conductor-0"
Jan 27 14:56:58 crc kubenswrapper[4698]: I0127 14:56:58.654830 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lghz6\" (UniqueName: \"kubernetes.io/projected/9dd428b1-641b-4e2a-a0cc-72629e7e091b-kube-api-access-lghz6\") pod \"nova-cell1-conductor-0\" (UID: \"9dd428b1-641b-4e2a-a0cc-72629e7e091b\") " pod="openstack/nova-cell1-conductor-0"
Jan 27 14:56:58 crc kubenswrapper[4698]: I0127 14:56:58.654904 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9dd428b1-641b-4e2a-a0cc-72629e7e091b-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"9dd428b1-641b-4e2a-a0cc-72629e7e091b\") " pod="openstack/nova-cell1-conductor-0"
Jan 27 14:56:58 crc kubenswrapper[4698]: I0127 14:56:58.654966 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9dd428b1-641b-4e2a-a0cc-72629e7e091b-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"9dd428b1-641b-4e2a-a0cc-72629e7e091b\") " pod="openstack/nova-cell1-conductor-0"
Jan 27 14:56:58 crc kubenswrapper[4698]: I0127 14:56:58.659436 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9dd428b1-641b-4e2a-a0cc-72629e7e091b-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"9dd428b1-641b-4e2a-a0cc-72629e7e091b\") " pod="openstack/nova-cell1-conductor-0"
Jan 27 14:56:58 crc kubenswrapper[4698]: I0127 14:56:58.659537 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9dd428b1-641b-4e2a-a0cc-72629e7e091b-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"9dd428b1-641b-4e2a-a0cc-72629e7e091b\") " pod="openstack/nova-cell1-conductor-0"
Jan 27 14:56:58 crc kubenswrapper[4698]: I0127 14:56:58.672552 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lghz6\" (UniqueName: \"kubernetes.io/projected/9dd428b1-641b-4e2a-a0cc-72629e7e091b-kube-api-access-lghz6\") pod \"nova-cell1-conductor-0\" (UID: \"9dd428b1-641b-4e2a-a0cc-72629e7e091b\") " pod="openstack/nova-cell1-conductor-0"
Jan 27 14:56:58 crc kubenswrapper[4698]: I0127 14:56:58.784289 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0"
Jan 27 14:56:58 crc kubenswrapper[4698]: I0127 14:56:58.993138 4698 scope.go:117] "RemoveContainer" containerID="b5e738b87b0ce5279fe16e0f062d8436b14ec328939703048a41953bc1584a16"
Jan 27 14:56:58 crc kubenswrapper[4698]: E0127 14:56:58.993428 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ndrd6_openshift-machine-config-operator(3e403fc5-7005-474c-8c75-b7906b481677)\"" pod="openshift-machine-config-operator/machine-config-daemon-ndrd6" podUID="3e403fc5-7005-474c-8c75-b7906b481677"
Jan 27 14:56:59 crc kubenswrapper[4698]: I0127 14:56:59.094189 4698 pod_container_manager_linux.go:210] "Failed to delete cgroup paths" cgroupName=["kubepods","besteffort","pod5d434aa2-b7eb-424d-930a-25be01006019"] err="unable to destroy cgroup paths for cgroup [kubepods besteffort pod5d434aa2-b7eb-424d-930a-25be01006019] : Timed out while waiting for systemd to remove kubepods-besteffort-pod5d434aa2_b7eb_424d_930a_25be01006019.slice"
Jan 27 14:56:59 crc kubenswrapper[4698]: E0127 14:56:59.094242 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to delete cgroup paths for [kubepods besteffort pod5d434aa2-b7eb-424d-930a-25be01006019] : unable to destroy cgroup paths for cgroup [kubepods besteffort pod5d434aa2-b7eb-424d-930a-25be01006019] : Timed out while waiting for systemd to remove kubepods-besteffort-pod5d434aa2_b7eb_424d_930a_25be01006019.slice" pod="openstack/dnsmasq-dns-579477885f-87tbw" podUID="5d434aa2-b7eb-424d-930a-25be01006019"
Jan 27 14:56:59 crc kubenswrapper[4698]: I0127 14:56:59.246320 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"]
Jan 27 14:56:59 crc kubenswrapper[4698]: W0127 14:56:59.250815 4698 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9dd428b1_641b_4e2a_a0cc_72629e7e091b.slice/crio-a0b5990e7ed70799bd63368cdb75c342fdc277ff4292abdae05c1ea8c42ee563 WatchSource:0}: Error finding container a0b5990e7ed70799bd63368cdb75c342fdc277ff4292abdae05c1ea8c42ee563: Status 404 returned error can't find the container with id a0b5990e7ed70799bd63368cdb75c342fdc277ff4292abdae05c1ea8c42ee563
Jan 27 14:56:59 crc kubenswrapper[4698]: I0127 14:56:59.395324 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"9dd428b1-641b-4e2a-a0cc-72629e7e091b","Type":"ContainerStarted","Data":"a0b5990e7ed70799bd63368cdb75c342fdc277ff4292abdae05c1ea8c42ee563"}
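The "Failed to delete cgroup paths" pair above means systemd did not remove the pod's transient slice (kubepods-besteffort-pod5d434aa2_...slice) before the kubelet's wait timed out, so the sync was skipped and retried; the later "Cleaned up orphaned pod volumes dir" entry for the same podUID shows the retry eventually succeeded. A small sketch (hypothetical check; the /sys/fs/cgroup layout under cgroup v2 is an assumption based on the slice names in the log) that tests whether such a slice directory is still present:

```go
// slicecheck.go — sketch only: stat the pod slice named in the errors above
// under the cgroup v2 hierarchy to see whether systemd has removed it yet.
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	slice := filepath.Join(
		"/sys/fs/cgroup",
		"kubepods.slice",
		"kubepods-besteffort.slice",
		"kubepods-besteffort-pod5d434aa2_b7eb_424d_930a_25be01006019.slice",
	)
	if _, err := os.Stat(slice); err == nil {
		fmt.Println("slice still present:", slice)
	} else if os.IsNotExist(err) {
		fmt.Println("slice gone; systemd finished the removal after the kubelet timed out")
	} else {
		fmt.Println("stat failed:", err)
	}
}
```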
Jan 27 14:56:59 crc kubenswrapper[4698]: I0127 14:56:59.395365 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-579477885f-87tbw"
Jan 27 14:56:59 crc kubenswrapper[4698]: I0127 14:56:59.425706 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-579477885f-87tbw"]
Jan 27 14:56:59 crc kubenswrapper[4698]: I0127 14:56:59.439499 4698 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-579477885f-87tbw"]
Jan 27 14:57:00 crc kubenswrapper[4698]: I0127 14:57:00.407455 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"9dd428b1-641b-4e2a-a0cc-72629e7e091b","Type":"ContainerStarted","Data":"492b59f43f0df31ad9b50cb3cf967a3968c7e1f6d490c972c19e80dfeca13303"}
Jan 27 14:57:00 crc kubenswrapper[4698]: I0127 14:57:00.409507 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-conductor-0"
Jan 27 14:57:00 crc kubenswrapper[4698]: I0127 14:57:00.433830 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-0" podStartSLOduration=2.4337997639999998 podStartE2EDuration="2.433799764s" podCreationTimestamp="2026-01-27 14:56:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:57:00.424780047 +0000 UTC m=+1676.101557532" watchObservedRunningTime="2026-01-27 14:57:00.433799764 +0000 UTC m=+1676.110577249"
Jan 27 14:57:00 crc kubenswrapper[4698]: I0127 14:57:00.556182 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0"
Jan 27 14:57:00 crc kubenswrapper[4698]: I0127 14:57:00.557431 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0"
Jan 27 14:57:00 crc kubenswrapper[4698]: I0127 14:57:00.563086 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0"
Jan 27 14:57:01 crc kubenswrapper[4698]: I0127 14:57:01.006529 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5d434aa2-b7eb-424d-930a-25be01006019" path="/var/lib/kubelet/pods/5d434aa2-b7eb-424d-930a-25be01006019/volumes"
Jan 27 14:57:01 crc kubenswrapper[4698]: I0127 14:57:01.423387 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0"
Jan 27 14:57:04 crc kubenswrapper[4698]: I0127 14:57:04.644381 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0"
Jan 27 14:57:04 crc kubenswrapper[4698]: I0127 14:57:04.645402 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0"
Jan 27 14:57:04 crc kubenswrapper[4698]: I0127 14:57:04.653462 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0"
Jan 27 14:57:04 crc kubenswrapper[4698]: I0127 14:57:04.661250 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0"
Jan 27 14:57:05 crc kubenswrapper[4698]: I0127 14:57:05.466182 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0"
Jan 27 14:57:05 crc kubenswrapper[4698]: I0127 14:57:05.476596 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0"
Jan 27 14:57:05 crc kubenswrapper[4698]: I0127 14:57:05.782731 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-84f9c77fd5-xrjct"]
Jan 27 14:57:05 crc kubenswrapper[4698]: I0127 14:57:05.784922 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-84f9c77fd5-xrjct"
Jan 27 14:57:05 crc kubenswrapper[4698]: I0127 14:57:05.827002 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-84f9c77fd5-xrjct"]
Jan 27 14:57:05 crc kubenswrapper[4698]: I0127 14:57:05.930238 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ad4426b9-0d4e-4a48-8f7a-fdb0febd44da-ovsdbserver-nb\") pod \"dnsmasq-dns-84f9c77fd5-xrjct\" (UID: \"ad4426b9-0d4e-4a48-8f7a-fdb0febd44da\") " pod="openstack/dnsmasq-dns-84f9c77fd5-xrjct"
Jan 27 14:57:05 crc kubenswrapper[4698]: I0127 14:57:05.930323 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ad4426b9-0d4e-4a48-8f7a-fdb0febd44da-dns-swift-storage-0\") pod \"dnsmasq-dns-84f9c77fd5-xrjct\" (UID: \"ad4426b9-0d4e-4a48-8f7a-fdb0febd44da\") " pod="openstack/dnsmasq-dns-84f9c77fd5-xrjct"
Jan 27 14:57:05 crc kubenswrapper[4698]: I0127 14:57:05.930406 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ad4426b9-0d4e-4a48-8f7a-fdb0febd44da-ovsdbserver-sb\") pod \"dnsmasq-dns-84f9c77fd5-xrjct\" (UID: \"ad4426b9-0d4e-4a48-8f7a-fdb0febd44da\") " pod="openstack/dnsmasq-dns-84f9c77fd5-xrjct"
Jan 27 14:57:05 crc kubenswrapper[4698]: I0127 14:57:05.930441 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ad4426b9-0d4e-4a48-8f7a-fdb0febd44da-dns-svc\") pod \"dnsmasq-dns-84f9c77fd5-xrjct\" (UID: \"ad4426b9-0d4e-4a48-8f7a-fdb0febd44da\") " pod="openstack/dnsmasq-dns-84f9c77fd5-xrjct"
Jan 27 14:57:05 crc kubenswrapper[4698]: I0127 14:57:05.930480 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-st8gh\" (UniqueName: \"kubernetes.io/projected/ad4426b9-0d4e-4a48-8f7a-fdb0febd44da-kube-api-access-st8gh\") pod \"dnsmasq-dns-84f9c77fd5-xrjct\" (UID: \"ad4426b9-0d4e-4a48-8f7a-fdb0febd44da\") " pod="openstack/dnsmasq-dns-84f9c77fd5-xrjct"
Jan 27 14:57:05 crc kubenswrapper[4698]: I0127 14:57:05.930779 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ad4426b9-0d4e-4a48-8f7a-fdb0febd44da-config\") pod \"dnsmasq-dns-84f9c77fd5-xrjct\" (UID: \"ad4426b9-0d4e-4a48-8f7a-fdb0febd44da\") " pod="openstack/dnsmasq-dns-84f9c77fd5-xrjct"
Jan 27 14:57:06 crc kubenswrapper[4698]: I0127 14:57:06.032419 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ad4426b9-0d4e-4a48-8f7a-fdb0febd44da-ovsdbserver-sb\") pod \"dnsmasq-dns-84f9c77fd5-xrjct\" (UID: \"ad4426b9-0d4e-4a48-8f7a-fdb0febd44da\") " pod="openstack/dnsmasq-dns-84f9c77fd5-xrjct"
Jan 27 14:57:06 crc kubenswrapper[4698]: I0127 14:57:06.032501 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ad4426b9-0d4e-4a48-8f7a-fdb0febd44da-dns-svc\") pod \"dnsmasq-dns-84f9c77fd5-xrjct\" (UID: \"ad4426b9-0d4e-4a48-8f7a-fdb0febd44da\") " pod="openstack/dnsmasq-dns-84f9c77fd5-xrjct"
Jan 27 14:57:06 crc kubenswrapper[4698]: I0127 14:57:06.032557 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-st8gh\" (UniqueName: \"kubernetes.io/projected/ad4426b9-0d4e-4a48-8f7a-fdb0febd44da-kube-api-access-st8gh\") pod \"dnsmasq-dns-84f9c77fd5-xrjct\" (UID: \"ad4426b9-0d4e-4a48-8f7a-fdb0febd44da\") " pod="openstack/dnsmasq-dns-84f9c77fd5-xrjct"
Jan 27 14:57:06 crc kubenswrapper[4698]: I0127 14:57:06.032648 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ad4426b9-0d4e-4a48-8f7a-fdb0febd44da-config\") pod \"dnsmasq-dns-84f9c77fd5-xrjct\" (UID: \"ad4426b9-0d4e-4a48-8f7a-fdb0febd44da\") " pod="openstack/dnsmasq-dns-84f9c77fd5-xrjct"
Jan 27 14:57:06 crc kubenswrapper[4698]: I0127 14:57:06.032714 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ad4426b9-0d4e-4a48-8f7a-fdb0febd44da-ovsdbserver-nb\") pod \"dnsmasq-dns-84f9c77fd5-xrjct\" (UID: \"ad4426b9-0d4e-4a48-8f7a-fdb0febd44da\") " pod="openstack/dnsmasq-dns-84f9c77fd5-xrjct"
Jan 27 14:57:06 crc kubenswrapper[4698]: I0127 14:57:06.032759 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ad4426b9-0d4e-4a48-8f7a-fdb0febd44da-dns-swift-storage-0\") pod \"dnsmasq-dns-84f9c77fd5-xrjct\" (UID: \"ad4426b9-0d4e-4a48-8f7a-fdb0febd44da\") " pod="openstack/dnsmasq-dns-84f9c77fd5-xrjct"
Jan 27 14:57:06 crc kubenswrapper[4698]: I0127 14:57:06.034848 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ad4426b9-0d4e-4a48-8f7a-fdb0febd44da-ovsdbserver-sb\") pod \"dnsmasq-dns-84f9c77fd5-xrjct\" (UID: \"ad4426b9-0d4e-4a48-8f7a-fdb0febd44da\") " pod="openstack/dnsmasq-dns-84f9c77fd5-xrjct"
Jan 27 14:57:06 crc kubenswrapper[4698]: I0127 14:57:06.034908 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ad4426b9-0d4e-4a48-8f7a-fdb0febd44da-dns-swift-storage-0\") pod \"dnsmasq-dns-84f9c77fd5-xrjct\" (UID: \"ad4426b9-0d4e-4a48-8f7a-fdb0febd44da\") " pod="openstack/dnsmasq-dns-84f9c77fd5-xrjct"
Jan 27 14:57:06 crc kubenswrapper[4698]: I0127 14:57:06.035711 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ad4426b9-0d4e-4a48-8f7a-fdb0febd44da-config\") pod \"dnsmasq-dns-84f9c77fd5-xrjct\" (UID: \"ad4426b9-0d4e-4a48-8f7a-fdb0febd44da\") " pod="openstack/dnsmasq-dns-84f9c77fd5-xrjct"
Jan 27 14:57:06 crc kubenswrapper[4698]: I0127 14:57:06.036343 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ad4426b9-0d4e-4a48-8f7a-fdb0febd44da-ovsdbserver-nb\") pod \"dnsmasq-dns-84f9c77fd5-xrjct\" (UID: \"ad4426b9-0d4e-4a48-8f7a-fdb0febd44da\") " pod="openstack/dnsmasq-dns-84f9c77fd5-xrjct"
Jan 27 14:57:06 crc kubenswrapper[4698]: I0127 14:57:06.037350 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ad4426b9-0d4e-4a48-8f7a-fdb0febd44da-dns-svc\") pod \"dnsmasq-dns-84f9c77fd5-xrjct\" (UID: \"ad4426b9-0d4e-4a48-8f7a-fdb0febd44da\") " pod="openstack/dnsmasq-dns-84f9c77fd5-xrjct"
Jan 27 14:57:06 crc kubenswrapper[4698]: I0127 14:57:06.078819 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-st8gh\" (UniqueName: \"kubernetes.io/projected/ad4426b9-0d4e-4a48-8f7a-fdb0febd44da-kube-api-access-st8gh\") pod \"dnsmasq-dns-84f9c77fd5-xrjct\" (UID: \"ad4426b9-0d4e-4a48-8f7a-fdb0febd44da\") " pod="openstack/dnsmasq-dns-84f9c77fd5-xrjct"
Jan 27 14:57:06 crc kubenswrapper[4698]: I0127 14:57:06.145874 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-84f9c77fd5-xrjct"
Jan 27 14:57:06 crc kubenswrapper[4698]: I0127 14:57:06.976790 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-84f9c77fd5-xrjct"]
Jan 27 14:57:07 crc kubenswrapper[4698]: W0127 14:57:07.096796 4698 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podad4426b9_0d4e_4a48_8f7a_fdb0febd44da.slice/crio-ad540d55bdd674e12a2bd4cb069f0d876e01d34dbc874cd86696fe9fef2d7079 WatchSource:0}: Error finding container ad540d55bdd674e12a2bd4cb069f0d876e01d34dbc874cd86696fe9fef2d7079: Status 404 returned error can't find the container with id ad540d55bdd674e12a2bd4cb069f0d876e01d34dbc874cd86696fe9fef2d7079
Jan 27 14:57:07 crc kubenswrapper[4698]: I0127 14:57:07.498426 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-84f9c77fd5-xrjct" event={"ID":"ad4426b9-0d4e-4a48-8f7a-fdb0febd44da","Type":"ContainerStarted","Data":"ad540d55bdd674e12a2bd4cb069f0d876e01d34dbc874cd86696fe9fef2d7079"}
Jan 27 14:57:08 crc kubenswrapper[4698]: I0127 14:57:08.489152 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Jan 27 14:57:08 crc kubenswrapper[4698]: I0127 14:57:08.493977 4698 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="505ff24d-a299-4255-8d5a-9b52ff443b07" containerName="sg-core" containerID="cri-o://b2b8cd840a45ba094322628f04c4f561ab625b79c1108d05cbed88532a4cf2aa" gracePeriod=30
Jan 27 14:57:08 crc kubenswrapper[4698]: I0127 14:57:08.493984 4698 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="505ff24d-a299-4255-8d5a-9b52ff443b07" containerName="ceilometer-notification-agent" containerID="cri-o://fb6994cb77133b27c8a71f10eb6f4cd470263f59f13bbe133616985cd6234eda" gracePeriod=30
Jan 27 14:57:08 crc kubenswrapper[4698]: I0127 14:57:08.494176 4698 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="505ff24d-a299-4255-8d5a-9b52ff443b07" containerName="ceilometer-central-agent" containerID="cri-o://e6df9b63745974a70ed0bb853ac72a9239422beda7cf2d6bae433ebab8755dcd" gracePeriod=30
Jan 27 14:57:08 crc kubenswrapper[4698]: I0127 14:57:08.494365 4698 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="505ff24d-a299-4255-8d5a-9b52ff443b07" containerName="proxy-httpd" containerID="cri-o://a24516508216c0b1b1455018be375baab57b2b3772c29a795e79d4a700773801" gracePeriod=30
Jan 27 14:57:08 crc kubenswrapper[4698]: I0127 14:57:08.512317 4698 generic.go:334] "Generic (PLEG): container finished" podID="ad4426b9-0d4e-4a48-8f7a-fdb0febd44da" containerID="ca04de9e3e2b8f3b07f039439886d36451a43dbfa66bf00786f94d40139a48f7" exitCode=0
event={"ID":"ad4426b9-0d4e-4a48-8f7a-fdb0febd44da","Type":"ContainerDied","Data":"ca04de9e3e2b8f3b07f039439886d36451a43dbfa66bf00786f94d40139a48f7"} Jan 27 14:57:08 crc kubenswrapper[4698]: I0127 14:57:08.640622 4698 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="505ff24d-a299-4255-8d5a-9b52ff443b07" containerName="proxy-httpd" probeResult="failure" output="Get \"https://10.217.0.208:3000/\": dial tcp 10.217.0.208:3000: connect: connection refused" Jan 27 14:57:08 crc kubenswrapper[4698]: I0127 14:57:08.683849 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 27 14:57:08 crc kubenswrapper[4698]: I0127 14:57:08.684451 4698 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="8918ed88-4255-4340-ba5d-361e6f424fcc" containerName="nova-api-log" containerID="cri-o://2f1165e66864b600b88d07c445a1a80e3bdab4fac969a095abe76252dae2e94b" gracePeriod=30 Jan 27 14:57:08 crc kubenswrapper[4698]: I0127 14:57:08.684625 4698 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="8918ed88-4255-4340-ba5d-361e6f424fcc" containerName="nova-api-api" containerID="cri-o://612d2191cf839b24a24c85ccffb326bd59759ea7417b62a9211279dd0b2631c5" gracePeriod=30 Jan 27 14:57:08 crc kubenswrapper[4698]: I0127 14:57:08.827301 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-conductor-0" Jan 27 14:57:09 crc kubenswrapper[4698]: I0127 14:57:09.524351 4698 generic.go:334] "Generic (PLEG): container finished" podID="505ff24d-a299-4255-8d5a-9b52ff443b07" containerID="a24516508216c0b1b1455018be375baab57b2b3772c29a795e79d4a700773801" exitCode=0 Jan 27 14:57:09 crc kubenswrapper[4698]: I0127 14:57:09.524382 4698 generic.go:334] "Generic (PLEG): container finished" podID="505ff24d-a299-4255-8d5a-9b52ff443b07" containerID="b2b8cd840a45ba094322628f04c4f561ab625b79c1108d05cbed88532a4cf2aa" exitCode=2 Jan 27 14:57:09 crc kubenswrapper[4698]: I0127 14:57:09.524389 4698 generic.go:334] "Generic (PLEG): container finished" podID="505ff24d-a299-4255-8d5a-9b52ff443b07" containerID="e6df9b63745974a70ed0bb853ac72a9239422beda7cf2d6bae433ebab8755dcd" exitCode=0 Jan 27 14:57:09 crc kubenswrapper[4698]: I0127 14:57:09.524458 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"505ff24d-a299-4255-8d5a-9b52ff443b07","Type":"ContainerDied","Data":"a24516508216c0b1b1455018be375baab57b2b3772c29a795e79d4a700773801"} Jan 27 14:57:09 crc kubenswrapper[4698]: I0127 14:57:09.524491 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"505ff24d-a299-4255-8d5a-9b52ff443b07","Type":"ContainerDied","Data":"b2b8cd840a45ba094322628f04c4f561ab625b79c1108d05cbed88532a4cf2aa"} Jan 27 14:57:09 crc kubenswrapper[4698]: I0127 14:57:09.524503 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"505ff24d-a299-4255-8d5a-9b52ff443b07","Type":"ContainerDied","Data":"e6df9b63745974a70ed0bb853ac72a9239422beda7cf2d6bae433ebab8755dcd"} Jan 27 14:57:09 crc kubenswrapper[4698]: I0127 14:57:09.527401 4698 generic.go:334] "Generic (PLEG): container finished" podID="8918ed88-4255-4340-ba5d-361e6f424fcc" containerID="2f1165e66864b600b88d07c445a1a80e3bdab4fac969a095abe76252dae2e94b" exitCode=143 Jan 27 14:57:09 crc kubenswrapper[4698]: I0127 14:57:09.527482 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/nova-api-0" event={"ID":"8918ed88-4255-4340-ba5d-361e6f424fcc","Type":"ContainerDied","Data":"2f1165e66864b600b88d07c445a1a80e3bdab4fac969a095abe76252dae2e94b"} Jan 27 14:57:09 crc kubenswrapper[4698]: I0127 14:57:09.530029 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-84f9c77fd5-xrjct" event={"ID":"ad4426b9-0d4e-4a48-8f7a-fdb0febd44da","Type":"ContainerStarted","Data":"84b70fc70835837da4423fc423487d2a6704a0bc6b8c813289790ca407a04069"} Jan 27 14:57:09 crc kubenswrapper[4698]: I0127 14:57:09.531304 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-84f9c77fd5-xrjct" Jan 27 14:57:09 crc kubenswrapper[4698]: I0127 14:57:09.557351 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-84f9c77fd5-xrjct" podStartSLOduration=4.557329451 podStartE2EDuration="4.557329451s" podCreationTimestamp="2026-01-27 14:57:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:57:09.555829922 +0000 UTC m=+1685.232607387" watchObservedRunningTime="2026-01-27 14:57:09.557329451 +0000 UTC m=+1685.234106926" Jan 27 14:57:10 crc kubenswrapper[4698]: I0127 14:57:10.474873 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-cell-mapping-hf2ls"] Jan 27 14:57:10 crc kubenswrapper[4698]: I0127 14:57:10.476717 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-hf2ls" Jan 27 14:57:10 crc kubenswrapper[4698]: I0127 14:57:10.479048 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-config-data" Jan 27 14:57:10 crc kubenswrapper[4698]: I0127 14:57:10.479449 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-scripts" Jan 27 14:57:10 crc kubenswrapper[4698]: I0127 14:57:10.494080 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-hf2ls"] Jan 27 14:57:10 crc kubenswrapper[4698]: I0127 14:57:10.545678 4698 generic.go:334] "Generic (PLEG): container finished" podID="8918ed88-4255-4340-ba5d-361e6f424fcc" containerID="612d2191cf839b24a24c85ccffb326bd59759ea7417b62a9211279dd0b2631c5" exitCode=0 Jan 27 14:57:10 crc kubenswrapper[4698]: I0127 14:57:10.545824 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"8918ed88-4255-4340-ba5d-361e6f424fcc","Type":"ContainerDied","Data":"612d2191cf839b24a24c85ccffb326bd59759ea7417b62a9211279dd0b2631c5"} Jan 27 14:57:10 crc kubenswrapper[4698]: I0127 14:57:10.632220 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d02bd3e6-3943-4d72-a596-ad3b1ca55805-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-hf2ls\" (UID: \"d02bd3e6-3943-4d72-a596-ad3b1ca55805\") " pod="openstack/nova-cell1-cell-mapping-hf2ls" Jan 27 14:57:10 crc kubenswrapper[4698]: I0127 14:57:10.632282 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pm7vw\" (UniqueName: \"kubernetes.io/projected/d02bd3e6-3943-4d72-a596-ad3b1ca55805-kube-api-access-pm7vw\") pod \"nova-cell1-cell-mapping-hf2ls\" (UID: \"d02bd3e6-3943-4d72-a596-ad3b1ca55805\") " pod="openstack/nova-cell1-cell-mapping-hf2ls" Jan 27 14:57:10 crc kubenswrapper[4698]: I0127 14:57:10.632481 4698 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d02bd3e6-3943-4d72-a596-ad3b1ca55805-config-data\") pod \"nova-cell1-cell-mapping-hf2ls\" (UID: \"d02bd3e6-3943-4d72-a596-ad3b1ca55805\") " pod="openstack/nova-cell1-cell-mapping-hf2ls" Jan 27 14:57:10 crc kubenswrapper[4698]: I0127 14:57:10.632538 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d02bd3e6-3943-4d72-a596-ad3b1ca55805-scripts\") pod \"nova-cell1-cell-mapping-hf2ls\" (UID: \"d02bd3e6-3943-4d72-a596-ad3b1ca55805\") " pod="openstack/nova-cell1-cell-mapping-hf2ls" Jan 27 14:57:10 crc kubenswrapper[4698]: I0127 14:57:10.734774 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d02bd3e6-3943-4d72-a596-ad3b1ca55805-config-data\") pod \"nova-cell1-cell-mapping-hf2ls\" (UID: \"d02bd3e6-3943-4d72-a596-ad3b1ca55805\") " pod="openstack/nova-cell1-cell-mapping-hf2ls" Jan 27 14:57:10 crc kubenswrapper[4698]: I0127 14:57:10.735142 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d02bd3e6-3943-4d72-a596-ad3b1ca55805-scripts\") pod \"nova-cell1-cell-mapping-hf2ls\" (UID: \"d02bd3e6-3943-4d72-a596-ad3b1ca55805\") " pod="openstack/nova-cell1-cell-mapping-hf2ls" Jan 27 14:57:10 crc kubenswrapper[4698]: I0127 14:57:10.735284 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d02bd3e6-3943-4d72-a596-ad3b1ca55805-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-hf2ls\" (UID: \"d02bd3e6-3943-4d72-a596-ad3b1ca55805\") " pod="openstack/nova-cell1-cell-mapping-hf2ls" Jan 27 14:57:10 crc kubenswrapper[4698]: I0127 14:57:10.735316 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pm7vw\" (UniqueName: \"kubernetes.io/projected/d02bd3e6-3943-4d72-a596-ad3b1ca55805-kube-api-access-pm7vw\") pod \"nova-cell1-cell-mapping-hf2ls\" (UID: \"d02bd3e6-3943-4d72-a596-ad3b1ca55805\") " pod="openstack/nova-cell1-cell-mapping-hf2ls" Jan 27 14:57:10 crc kubenswrapper[4698]: I0127 14:57:10.753316 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d02bd3e6-3943-4d72-a596-ad3b1ca55805-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-hf2ls\" (UID: \"d02bd3e6-3943-4d72-a596-ad3b1ca55805\") " pod="openstack/nova-cell1-cell-mapping-hf2ls" Jan 27 14:57:10 crc kubenswrapper[4698]: I0127 14:57:10.757588 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pm7vw\" (UniqueName: \"kubernetes.io/projected/d02bd3e6-3943-4d72-a596-ad3b1ca55805-kube-api-access-pm7vw\") pod \"nova-cell1-cell-mapping-hf2ls\" (UID: \"d02bd3e6-3943-4d72-a596-ad3b1ca55805\") " pod="openstack/nova-cell1-cell-mapping-hf2ls" Jan 27 14:57:10 crc kubenswrapper[4698]: I0127 14:57:10.757679 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d02bd3e6-3943-4d72-a596-ad3b1ca55805-config-data\") pod \"nova-cell1-cell-mapping-hf2ls\" (UID: \"d02bd3e6-3943-4d72-a596-ad3b1ca55805\") " pod="openstack/nova-cell1-cell-mapping-hf2ls" Jan 27 14:57:10 crc kubenswrapper[4698]: I0127 14:57:10.767244 4698 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d02bd3e6-3943-4d72-a596-ad3b1ca55805-scripts\") pod \"nova-cell1-cell-mapping-hf2ls\" (UID: \"d02bd3e6-3943-4d72-a596-ad3b1ca55805\") " pod="openstack/nova-cell1-cell-mapping-hf2ls" Jan 27 14:57:10 crc kubenswrapper[4698]: I0127 14:57:10.802192 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-hf2ls" Jan 27 14:57:11 crc kubenswrapper[4698]: I0127 14:57:11.024704 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 27 14:57:11 crc kubenswrapper[4698]: I0127 14:57:11.142149 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8918ed88-4255-4340-ba5d-361e6f424fcc-combined-ca-bundle\") pod \"8918ed88-4255-4340-ba5d-361e6f424fcc\" (UID: \"8918ed88-4255-4340-ba5d-361e6f424fcc\") " Jan 27 14:57:11 crc kubenswrapper[4698]: I0127 14:57:11.142371 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8918ed88-4255-4340-ba5d-361e6f424fcc-logs\") pod \"8918ed88-4255-4340-ba5d-361e6f424fcc\" (UID: \"8918ed88-4255-4340-ba5d-361e6f424fcc\") " Jan 27 14:57:11 crc kubenswrapper[4698]: I0127 14:57:11.142519 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8918ed88-4255-4340-ba5d-361e6f424fcc-config-data\") pod \"8918ed88-4255-4340-ba5d-361e6f424fcc\" (UID: \"8918ed88-4255-4340-ba5d-361e6f424fcc\") " Jan 27 14:57:11 crc kubenswrapper[4698]: I0127 14:57:11.142616 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pwt8m\" (UniqueName: \"kubernetes.io/projected/8918ed88-4255-4340-ba5d-361e6f424fcc-kube-api-access-pwt8m\") pod \"8918ed88-4255-4340-ba5d-361e6f424fcc\" (UID: \"8918ed88-4255-4340-ba5d-361e6f424fcc\") " Jan 27 14:57:11 crc kubenswrapper[4698]: I0127 14:57:11.147839 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8918ed88-4255-4340-ba5d-361e6f424fcc-logs" (OuterVolumeSpecName: "logs") pod "8918ed88-4255-4340-ba5d-361e6f424fcc" (UID: "8918ed88-4255-4340-ba5d-361e6f424fcc"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 14:57:11 crc kubenswrapper[4698]: I0127 14:57:11.156259 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8918ed88-4255-4340-ba5d-361e6f424fcc-kube-api-access-pwt8m" (OuterVolumeSpecName: "kube-api-access-pwt8m") pod "8918ed88-4255-4340-ba5d-361e6f424fcc" (UID: "8918ed88-4255-4340-ba5d-361e6f424fcc"). InnerVolumeSpecName "kube-api-access-pwt8m". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:57:11 crc kubenswrapper[4698]: I0127 14:57:11.246207 4698 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8918ed88-4255-4340-ba5d-361e6f424fcc-logs\") on node \"crc\" DevicePath \"\"" Jan 27 14:57:11 crc kubenswrapper[4698]: I0127 14:57:11.246263 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pwt8m\" (UniqueName: \"kubernetes.io/projected/8918ed88-4255-4340-ba5d-361e6f424fcc-kube-api-access-pwt8m\") on node \"crc\" DevicePath \"\"" Jan 27 14:57:11 crc kubenswrapper[4698]: I0127 14:57:11.270232 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8918ed88-4255-4340-ba5d-361e6f424fcc-config-data" (OuterVolumeSpecName: "config-data") pod "8918ed88-4255-4340-ba5d-361e6f424fcc" (UID: "8918ed88-4255-4340-ba5d-361e6f424fcc"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:57:11 crc kubenswrapper[4698]: I0127 14:57:11.318815 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8918ed88-4255-4340-ba5d-361e6f424fcc-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8918ed88-4255-4340-ba5d-361e6f424fcc" (UID: "8918ed88-4255-4340-ba5d-361e6f424fcc"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:57:11 crc kubenswrapper[4698]: I0127 14:57:11.348197 4698 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8918ed88-4255-4340-ba5d-361e6f424fcc-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 14:57:11 crc kubenswrapper[4698]: I0127 14:57:11.348248 4698 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8918ed88-4255-4340-ba5d-361e6f424fcc-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 14:57:11 crc kubenswrapper[4698]: I0127 14:57:11.401861 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-hf2ls"] Jan 27 14:57:11 crc kubenswrapper[4698]: I0127 14:57:11.562085 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-hf2ls" event={"ID":"d02bd3e6-3943-4d72-a596-ad3b1ca55805","Type":"ContainerStarted","Data":"3d42fc96f0ffcce975bcf45845bc0f289cce0537975d3f334e72d3518c4286a3"} Jan 27 14:57:11 crc kubenswrapper[4698]: I0127 14:57:11.571817 4698 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 27 14:57:11 crc kubenswrapper[4698]: I0127 14:57:11.571817 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"8918ed88-4255-4340-ba5d-361e6f424fcc","Type":"ContainerDied","Data":"c296eda9de6fde5290ddfbb8eda46bf637548102bc64e3940b3c952f37532741"} Jan 27 14:57:11 crc kubenswrapper[4698]: I0127 14:57:11.571956 4698 scope.go:117] "RemoveContainer" containerID="612d2191cf839b24a24c85ccffb326bd59759ea7417b62a9211279dd0b2631c5" Jan 27 14:57:11 crc kubenswrapper[4698]: I0127 14:57:11.613051 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 27 14:57:11 crc kubenswrapper[4698]: I0127 14:57:11.623918 4698 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Jan 27 14:57:11 crc kubenswrapper[4698]: I0127 14:57:11.652830 4698 scope.go:117] "RemoveContainer" containerID="2f1165e66864b600b88d07c445a1a80e3bdab4fac969a095abe76252dae2e94b" Jan 27 14:57:11 crc kubenswrapper[4698]: I0127 14:57:11.659717 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 27 14:57:11 crc kubenswrapper[4698]: E0127 14:57:11.660195 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8918ed88-4255-4340-ba5d-361e6f424fcc" containerName="nova-api-api" Jan 27 14:57:11 crc kubenswrapper[4698]: I0127 14:57:11.660217 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="8918ed88-4255-4340-ba5d-361e6f424fcc" containerName="nova-api-api" Jan 27 14:57:11 crc kubenswrapper[4698]: E0127 14:57:11.660260 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8918ed88-4255-4340-ba5d-361e6f424fcc" containerName="nova-api-log" Jan 27 14:57:11 crc kubenswrapper[4698]: I0127 14:57:11.660270 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="8918ed88-4255-4340-ba5d-361e6f424fcc" containerName="nova-api-log" Jan 27 14:57:11 crc kubenswrapper[4698]: I0127 14:57:11.660495 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="8918ed88-4255-4340-ba5d-361e6f424fcc" containerName="nova-api-log" Jan 27 14:57:11 crc kubenswrapper[4698]: I0127 14:57:11.660521 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="8918ed88-4255-4340-ba5d-361e6f424fcc" containerName="nova-api-api" Jan 27 14:57:11 crc kubenswrapper[4698]: I0127 14:57:11.661939 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 27 14:57:11 crc kubenswrapper[4698]: I0127 14:57:11.679570 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Jan 27 14:57:11 crc kubenswrapper[4698]: I0127 14:57:11.679818 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 27 14:57:11 crc kubenswrapper[4698]: I0127 14:57:11.679990 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Jan 27 14:57:11 crc kubenswrapper[4698]: I0127 14:57:11.681783 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 27 14:57:11 crc kubenswrapper[4698]: I0127 14:57:11.858266 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/75dd98fc-f83f-479b-a4c9-7b6ac186b1c9-logs\") pod \"nova-api-0\" (UID: \"75dd98fc-f83f-479b-a4c9-7b6ac186b1c9\") " pod="openstack/nova-api-0" Jan 27 14:57:11 crc kubenswrapper[4698]: I0127 14:57:11.858323 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/75dd98fc-f83f-479b-a4c9-7b6ac186b1c9-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"75dd98fc-f83f-479b-a4c9-7b6ac186b1c9\") " pod="openstack/nova-api-0" Jan 27 14:57:11 crc kubenswrapper[4698]: I0127 14:57:11.858613 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/75dd98fc-f83f-479b-a4c9-7b6ac186b1c9-config-data\") pod \"nova-api-0\" (UID: \"75dd98fc-f83f-479b-a4c9-7b6ac186b1c9\") " pod="openstack/nova-api-0" Jan 27 14:57:11 crc kubenswrapper[4698]: I0127 14:57:11.858771 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z56cs\" (UniqueName: \"kubernetes.io/projected/75dd98fc-f83f-479b-a4c9-7b6ac186b1c9-kube-api-access-z56cs\") pod \"nova-api-0\" (UID: \"75dd98fc-f83f-479b-a4c9-7b6ac186b1c9\") " pod="openstack/nova-api-0" Jan 27 14:57:11 crc kubenswrapper[4698]: I0127 14:57:11.858812 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/75dd98fc-f83f-479b-a4c9-7b6ac186b1c9-internal-tls-certs\") pod \"nova-api-0\" (UID: \"75dd98fc-f83f-479b-a4c9-7b6ac186b1c9\") " pod="openstack/nova-api-0" Jan 27 14:57:11 crc kubenswrapper[4698]: I0127 14:57:11.858977 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/75dd98fc-f83f-479b-a4c9-7b6ac186b1c9-public-tls-certs\") pod \"nova-api-0\" (UID: \"75dd98fc-f83f-479b-a4c9-7b6ac186b1c9\") " pod="openstack/nova-api-0" Jan 27 14:57:11 crc kubenswrapper[4698]: I0127 14:57:11.960821 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/75dd98fc-f83f-479b-a4c9-7b6ac186b1c9-config-data\") pod \"nova-api-0\" (UID: \"75dd98fc-f83f-479b-a4c9-7b6ac186b1c9\") " pod="openstack/nova-api-0" Jan 27 14:57:11 crc kubenswrapper[4698]: I0127 14:57:11.960930 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z56cs\" (UniqueName: \"kubernetes.io/projected/75dd98fc-f83f-479b-a4c9-7b6ac186b1c9-kube-api-access-z56cs\") pod 
\"nova-api-0\" (UID: \"75dd98fc-f83f-479b-a4c9-7b6ac186b1c9\") " pod="openstack/nova-api-0" Jan 27 14:57:11 crc kubenswrapper[4698]: I0127 14:57:11.960975 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/75dd98fc-f83f-479b-a4c9-7b6ac186b1c9-internal-tls-certs\") pod \"nova-api-0\" (UID: \"75dd98fc-f83f-479b-a4c9-7b6ac186b1c9\") " pod="openstack/nova-api-0" Jan 27 14:57:11 crc kubenswrapper[4698]: I0127 14:57:11.961272 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/75dd98fc-f83f-479b-a4c9-7b6ac186b1c9-public-tls-certs\") pod \"nova-api-0\" (UID: \"75dd98fc-f83f-479b-a4c9-7b6ac186b1c9\") " pod="openstack/nova-api-0" Jan 27 14:57:11 crc kubenswrapper[4698]: I0127 14:57:11.961337 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/75dd98fc-f83f-479b-a4c9-7b6ac186b1c9-logs\") pod \"nova-api-0\" (UID: \"75dd98fc-f83f-479b-a4c9-7b6ac186b1c9\") " pod="openstack/nova-api-0" Jan 27 14:57:11 crc kubenswrapper[4698]: I0127 14:57:11.961361 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/75dd98fc-f83f-479b-a4c9-7b6ac186b1c9-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"75dd98fc-f83f-479b-a4c9-7b6ac186b1c9\") " pod="openstack/nova-api-0" Jan 27 14:57:11 crc kubenswrapper[4698]: I0127 14:57:11.962084 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/75dd98fc-f83f-479b-a4c9-7b6ac186b1c9-logs\") pod \"nova-api-0\" (UID: \"75dd98fc-f83f-479b-a4c9-7b6ac186b1c9\") " pod="openstack/nova-api-0" Jan 27 14:57:11 crc kubenswrapper[4698]: I0127 14:57:11.967694 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/75dd98fc-f83f-479b-a4c9-7b6ac186b1c9-public-tls-certs\") pod \"nova-api-0\" (UID: \"75dd98fc-f83f-479b-a4c9-7b6ac186b1c9\") " pod="openstack/nova-api-0" Jan 27 14:57:11 crc kubenswrapper[4698]: I0127 14:57:11.968211 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/75dd98fc-f83f-479b-a4c9-7b6ac186b1c9-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"75dd98fc-f83f-479b-a4c9-7b6ac186b1c9\") " pod="openstack/nova-api-0" Jan 27 14:57:11 crc kubenswrapper[4698]: I0127 14:57:11.979545 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/75dd98fc-f83f-479b-a4c9-7b6ac186b1c9-config-data\") pod \"nova-api-0\" (UID: \"75dd98fc-f83f-479b-a4c9-7b6ac186b1c9\") " pod="openstack/nova-api-0" Jan 27 14:57:11 crc kubenswrapper[4698]: I0127 14:57:11.983556 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/75dd98fc-f83f-479b-a4c9-7b6ac186b1c9-internal-tls-certs\") pod \"nova-api-0\" (UID: \"75dd98fc-f83f-479b-a4c9-7b6ac186b1c9\") " pod="openstack/nova-api-0" Jan 27 14:57:11 crc kubenswrapper[4698]: I0127 14:57:11.985054 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z56cs\" (UniqueName: \"kubernetes.io/projected/75dd98fc-f83f-479b-a4c9-7b6ac186b1c9-kube-api-access-z56cs\") pod \"nova-api-0\" (UID: \"75dd98fc-f83f-479b-a4c9-7b6ac186b1c9\") " 
pod="openstack/nova-api-0" Jan 27 14:57:11 crc kubenswrapper[4698]: I0127 14:57:11.993661 4698 scope.go:117] "RemoveContainer" containerID="b5e738b87b0ce5279fe16e0f062d8436b14ec328939703048a41953bc1584a16" Jan 27 14:57:11 crc kubenswrapper[4698]: E0127 14:57:11.993880 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ndrd6_openshift-machine-config-operator(3e403fc5-7005-474c-8c75-b7906b481677)\"" pod="openshift-machine-config-operator/machine-config-daemon-ndrd6" podUID="3e403fc5-7005-474c-8c75-b7906b481677" Jan 27 14:57:12 crc kubenswrapper[4698]: I0127 14:57:12.006097 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 27 14:57:12 crc kubenswrapper[4698]: I0127 14:57:12.536141 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 27 14:57:12 crc kubenswrapper[4698]: I0127 14:57:12.586860 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-hf2ls" event={"ID":"d02bd3e6-3943-4d72-a596-ad3b1ca55805","Type":"ContainerStarted","Data":"12c7a9f705d12e6fb43fab18d38df89dc45d0ea718b4bd098e536c0b5407f07e"} Jan 27 14:57:12 crc kubenswrapper[4698]: I0127 14:57:12.590434 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"75dd98fc-f83f-479b-a4c9-7b6ac186b1c9","Type":"ContainerStarted","Data":"eea2bd491317d80c88973a251efff6b23aef9eafd8db771d453a260187e78330"} Jan 27 14:57:12 crc kubenswrapper[4698]: I0127 14:57:12.611750 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-cell-mapping-hf2ls" podStartSLOduration=2.611729938 podStartE2EDuration="2.611729938s" podCreationTimestamp="2026-01-27 14:57:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:57:12.60420498 +0000 UTC m=+1688.280982445" watchObservedRunningTime="2026-01-27 14:57:12.611729938 +0000 UTC m=+1688.288507403" Jan 27 14:57:13 crc kubenswrapper[4698]: I0127 14:57:13.006065 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8918ed88-4255-4340-ba5d-361e6f424fcc" path="/var/lib/kubelet/pods/8918ed88-4255-4340-ba5d-361e6f424fcc/volumes" Jan 27 14:57:13 crc kubenswrapper[4698]: I0127 14:57:13.604851 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"75dd98fc-f83f-479b-a4c9-7b6ac186b1c9","Type":"ContainerStarted","Data":"50e61800ea1cb3a8b31dd49806aeb67752366e45087fd096d699a32ac73f6c58"} Jan 27 14:57:13 crc kubenswrapper[4698]: I0127 14:57:13.605227 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"75dd98fc-f83f-479b-a4c9-7b6ac186b1c9","Type":"ContainerStarted","Data":"8989356bda5addfcae48af8eca9b8ed0fa9232a76622893b98953aa8734c5d49"} Jan 27 14:57:13 crc kubenswrapper[4698]: I0127 14:57:13.639248 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.639227137 podStartE2EDuration="2.639227137s" podCreationTimestamp="2026-01-27 14:57:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:57:13.624445387 +0000 UTC m=+1689.301222862" watchObservedRunningTime="2026-01-27 
14:57:13.639227137 +0000 UTC m=+1689.316004602" Jan 27 14:57:14 crc kubenswrapper[4698]: I0127 14:57:14.127973 4698 scope.go:117] "RemoveContainer" containerID="87187a5a2296882e19cec5a45ad68dc3000dfce0034be1365dd20f36574d0e1f" Jan 27 14:57:14 crc kubenswrapper[4698]: I0127 14:57:14.144118 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 27 14:57:14 crc kubenswrapper[4698]: I0127 14:57:14.330215 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wh775\" (UniqueName: \"kubernetes.io/projected/505ff24d-a299-4255-8d5a-9b52ff443b07-kube-api-access-wh775\") pod \"505ff24d-a299-4255-8d5a-9b52ff443b07\" (UID: \"505ff24d-a299-4255-8d5a-9b52ff443b07\") " Jan 27 14:57:14 crc kubenswrapper[4698]: I0127 14:57:14.330324 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/505ff24d-a299-4255-8d5a-9b52ff443b07-sg-core-conf-yaml\") pod \"505ff24d-a299-4255-8d5a-9b52ff443b07\" (UID: \"505ff24d-a299-4255-8d5a-9b52ff443b07\") " Jan 27 14:57:14 crc kubenswrapper[4698]: I0127 14:57:14.330493 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/505ff24d-a299-4255-8d5a-9b52ff443b07-scripts\") pod \"505ff24d-a299-4255-8d5a-9b52ff443b07\" (UID: \"505ff24d-a299-4255-8d5a-9b52ff443b07\") " Jan 27 14:57:14 crc kubenswrapper[4698]: I0127 14:57:14.330597 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/505ff24d-a299-4255-8d5a-9b52ff443b07-run-httpd\") pod \"505ff24d-a299-4255-8d5a-9b52ff443b07\" (UID: \"505ff24d-a299-4255-8d5a-9b52ff443b07\") " Jan 27 14:57:14 crc kubenswrapper[4698]: I0127 14:57:14.330680 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/505ff24d-a299-4255-8d5a-9b52ff443b07-log-httpd\") pod \"505ff24d-a299-4255-8d5a-9b52ff443b07\" (UID: \"505ff24d-a299-4255-8d5a-9b52ff443b07\") " Jan 27 14:57:14 crc kubenswrapper[4698]: I0127 14:57:14.330742 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/505ff24d-a299-4255-8d5a-9b52ff443b07-config-data\") pod \"505ff24d-a299-4255-8d5a-9b52ff443b07\" (UID: \"505ff24d-a299-4255-8d5a-9b52ff443b07\") " Jan 27 14:57:14 crc kubenswrapper[4698]: I0127 14:57:14.330769 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/505ff24d-a299-4255-8d5a-9b52ff443b07-ceilometer-tls-certs\") pod \"505ff24d-a299-4255-8d5a-9b52ff443b07\" (UID: \"505ff24d-a299-4255-8d5a-9b52ff443b07\") " Jan 27 14:57:14 crc kubenswrapper[4698]: I0127 14:57:14.330809 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/505ff24d-a299-4255-8d5a-9b52ff443b07-combined-ca-bundle\") pod \"505ff24d-a299-4255-8d5a-9b52ff443b07\" (UID: \"505ff24d-a299-4255-8d5a-9b52ff443b07\") " Jan 27 14:57:14 crc kubenswrapper[4698]: I0127 14:57:14.331770 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/505ff24d-a299-4255-8d5a-9b52ff443b07-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "505ff24d-a299-4255-8d5a-9b52ff443b07" (UID: 
"505ff24d-a299-4255-8d5a-9b52ff443b07"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 14:57:14 crc kubenswrapper[4698]: I0127 14:57:14.331829 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/505ff24d-a299-4255-8d5a-9b52ff443b07-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "505ff24d-a299-4255-8d5a-9b52ff443b07" (UID: "505ff24d-a299-4255-8d5a-9b52ff443b07"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 14:57:14 crc kubenswrapper[4698]: I0127 14:57:14.336995 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/505ff24d-a299-4255-8d5a-9b52ff443b07-kube-api-access-wh775" (OuterVolumeSpecName: "kube-api-access-wh775") pod "505ff24d-a299-4255-8d5a-9b52ff443b07" (UID: "505ff24d-a299-4255-8d5a-9b52ff443b07"). InnerVolumeSpecName "kube-api-access-wh775". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:57:14 crc kubenswrapper[4698]: I0127 14:57:14.337826 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/505ff24d-a299-4255-8d5a-9b52ff443b07-scripts" (OuterVolumeSpecName: "scripts") pod "505ff24d-a299-4255-8d5a-9b52ff443b07" (UID: "505ff24d-a299-4255-8d5a-9b52ff443b07"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:57:14 crc kubenswrapper[4698]: I0127 14:57:14.370831 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/505ff24d-a299-4255-8d5a-9b52ff443b07-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "505ff24d-a299-4255-8d5a-9b52ff443b07" (UID: "505ff24d-a299-4255-8d5a-9b52ff443b07"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:57:14 crc kubenswrapper[4698]: I0127 14:57:14.403977 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/505ff24d-a299-4255-8d5a-9b52ff443b07-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "505ff24d-a299-4255-8d5a-9b52ff443b07" (UID: "505ff24d-a299-4255-8d5a-9b52ff443b07"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:57:14 crc kubenswrapper[4698]: I0127 14:57:14.431163 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/505ff24d-a299-4255-8d5a-9b52ff443b07-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "505ff24d-a299-4255-8d5a-9b52ff443b07" (UID: "505ff24d-a299-4255-8d5a-9b52ff443b07"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:57:14 crc kubenswrapper[4698]: I0127 14:57:14.433668 4698 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/505ff24d-a299-4255-8d5a-9b52ff443b07-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 14:57:14 crc kubenswrapper[4698]: I0127 14:57:14.433703 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wh775\" (UniqueName: \"kubernetes.io/projected/505ff24d-a299-4255-8d5a-9b52ff443b07-kube-api-access-wh775\") on node \"crc\" DevicePath \"\"" Jan 27 14:57:14 crc kubenswrapper[4698]: I0127 14:57:14.433719 4698 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/505ff24d-a299-4255-8d5a-9b52ff443b07-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 27 14:57:14 crc kubenswrapper[4698]: I0127 14:57:14.433732 4698 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/505ff24d-a299-4255-8d5a-9b52ff443b07-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 14:57:14 crc kubenswrapper[4698]: I0127 14:57:14.433743 4698 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/505ff24d-a299-4255-8d5a-9b52ff443b07-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 27 14:57:14 crc kubenswrapper[4698]: I0127 14:57:14.433754 4698 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/505ff24d-a299-4255-8d5a-9b52ff443b07-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 27 14:57:14 crc kubenswrapper[4698]: I0127 14:57:14.433765 4698 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/505ff24d-a299-4255-8d5a-9b52ff443b07-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 27 14:57:14 crc kubenswrapper[4698]: I0127 14:57:14.460377 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/505ff24d-a299-4255-8d5a-9b52ff443b07-config-data" (OuterVolumeSpecName: "config-data") pod "505ff24d-a299-4255-8d5a-9b52ff443b07" (UID: "505ff24d-a299-4255-8d5a-9b52ff443b07"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:57:14 crc kubenswrapper[4698]: I0127 14:57:14.536048 4698 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/505ff24d-a299-4255-8d5a-9b52ff443b07-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 14:57:14 crc kubenswrapper[4698]: I0127 14:57:14.617582 4698 generic.go:334] "Generic (PLEG): container finished" podID="505ff24d-a299-4255-8d5a-9b52ff443b07" containerID="fb6994cb77133b27c8a71f10eb6f4cd470263f59f13bbe133616985cd6234eda" exitCode=0 Jan 27 14:57:14 crc kubenswrapper[4698]: I0127 14:57:14.617626 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"505ff24d-a299-4255-8d5a-9b52ff443b07","Type":"ContainerDied","Data":"fb6994cb77133b27c8a71f10eb6f4cd470263f59f13bbe133616985cd6234eda"} Jan 27 14:57:14 crc kubenswrapper[4698]: I0127 14:57:14.617688 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"505ff24d-a299-4255-8d5a-9b52ff443b07","Type":"ContainerDied","Data":"6af873d864b983b2c93f61ec532fba1a4dbaaf6a088f8b5b826d0258f12999ae"} Jan 27 14:57:14 crc kubenswrapper[4698]: I0127 14:57:14.617687 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 27 14:57:14 crc kubenswrapper[4698]: I0127 14:57:14.617711 4698 scope.go:117] "RemoveContainer" containerID="a24516508216c0b1b1455018be375baab57b2b3772c29a795e79d4a700773801" Jan 27 14:57:14 crc kubenswrapper[4698]: I0127 14:57:14.657311 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 27 14:57:14 crc kubenswrapper[4698]: I0127 14:57:14.659569 4698 scope.go:117] "RemoveContainer" containerID="b2b8cd840a45ba094322628f04c4f561ab625b79c1108d05cbed88532a4cf2aa" Jan 27 14:57:14 crc kubenswrapper[4698]: I0127 14:57:14.667679 4698 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 27 14:57:14 crc kubenswrapper[4698]: I0127 14:57:14.692030 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 27 14:57:14 crc kubenswrapper[4698]: E0127 14:57:14.692579 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="505ff24d-a299-4255-8d5a-9b52ff443b07" containerName="ceilometer-central-agent" Jan 27 14:57:14 crc kubenswrapper[4698]: I0127 14:57:14.692605 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="505ff24d-a299-4255-8d5a-9b52ff443b07" containerName="ceilometer-central-agent" Jan 27 14:57:14 crc kubenswrapper[4698]: E0127 14:57:14.692622 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="505ff24d-a299-4255-8d5a-9b52ff443b07" containerName="sg-core" Jan 27 14:57:14 crc kubenswrapper[4698]: I0127 14:57:14.692630 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="505ff24d-a299-4255-8d5a-9b52ff443b07" containerName="sg-core" Jan 27 14:57:14 crc kubenswrapper[4698]: E0127 14:57:14.692682 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="505ff24d-a299-4255-8d5a-9b52ff443b07" containerName="proxy-httpd" Jan 27 14:57:14 crc kubenswrapper[4698]: I0127 14:57:14.692690 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="505ff24d-a299-4255-8d5a-9b52ff443b07" containerName="proxy-httpd" Jan 27 14:57:14 crc kubenswrapper[4698]: E0127 14:57:14.692703 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="505ff24d-a299-4255-8d5a-9b52ff443b07" containerName="ceilometer-notification-agent" Jan 27 14:57:14 
crc kubenswrapper[4698]: I0127 14:57:14.692713 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="505ff24d-a299-4255-8d5a-9b52ff443b07" containerName="ceilometer-notification-agent" Jan 27 14:57:14 crc kubenswrapper[4698]: I0127 14:57:14.692939 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="505ff24d-a299-4255-8d5a-9b52ff443b07" containerName="sg-core" Jan 27 14:57:14 crc kubenswrapper[4698]: I0127 14:57:14.692965 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="505ff24d-a299-4255-8d5a-9b52ff443b07" containerName="ceilometer-notification-agent" Jan 27 14:57:14 crc kubenswrapper[4698]: I0127 14:57:14.692979 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="505ff24d-a299-4255-8d5a-9b52ff443b07" containerName="proxy-httpd" Jan 27 14:57:14 crc kubenswrapper[4698]: I0127 14:57:14.692997 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="505ff24d-a299-4255-8d5a-9b52ff443b07" containerName="ceilometer-central-agent" Jan 27 14:57:14 crc kubenswrapper[4698]: I0127 14:57:14.695362 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 27 14:57:14 crc kubenswrapper[4698]: I0127 14:57:14.697873 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 27 14:57:14 crc kubenswrapper[4698]: I0127 14:57:14.698090 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Jan 27 14:57:14 crc kubenswrapper[4698]: I0127 14:57:14.698316 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 27 14:57:14 crc kubenswrapper[4698]: I0127 14:57:14.706036 4698 scope.go:117] "RemoveContainer" containerID="fb6994cb77133b27c8a71f10eb6f4cd470263f59f13bbe133616985cd6234eda" Jan 27 14:57:14 crc kubenswrapper[4698]: I0127 14:57:14.709420 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 27 14:57:14 crc kubenswrapper[4698]: I0127 14:57:14.741934 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c5569e41-49e9-4044-b173-babb897afb4f-scripts\") pod \"ceilometer-0\" (UID: \"c5569e41-49e9-4044-b173-babb897afb4f\") " pod="openstack/ceilometer-0" Jan 27 14:57:14 crc kubenswrapper[4698]: I0127 14:57:14.742000 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c5569e41-49e9-4044-b173-babb897afb4f-run-httpd\") pod \"ceilometer-0\" (UID: \"c5569e41-49e9-4044-b173-babb897afb4f\") " pod="openstack/ceilometer-0" Jan 27 14:57:14 crc kubenswrapper[4698]: I0127 14:57:14.742025 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c5569e41-49e9-4044-b173-babb897afb4f-log-httpd\") pod \"ceilometer-0\" (UID: \"c5569e41-49e9-4044-b173-babb897afb4f\") " pod="openstack/ceilometer-0" Jan 27 14:57:14 crc kubenswrapper[4698]: I0127 14:57:14.742062 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c5569e41-49e9-4044-b173-babb897afb4f-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"c5569e41-49e9-4044-b173-babb897afb4f\") " pod="openstack/ceilometer-0" Jan 27 14:57:14 crc kubenswrapper[4698]: I0127 14:57:14.742122 4698 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c5569e41-49e9-4044-b173-babb897afb4f-config-data\") pod \"ceilometer-0\" (UID: \"c5569e41-49e9-4044-b173-babb897afb4f\") " pod="openstack/ceilometer-0" Jan 27 14:57:14 crc kubenswrapper[4698]: I0127 14:57:14.742141 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/c5569e41-49e9-4044-b173-babb897afb4f-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"c5569e41-49e9-4044-b173-babb897afb4f\") " pod="openstack/ceilometer-0" Jan 27 14:57:14 crc kubenswrapper[4698]: I0127 14:57:14.742271 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c5569e41-49e9-4044-b173-babb897afb4f-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"c5569e41-49e9-4044-b173-babb897afb4f\") " pod="openstack/ceilometer-0" Jan 27 14:57:14 crc kubenswrapper[4698]: I0127 14:57:14.742308 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-48ms8\" (UniqueName: \"kubernetes.io/projected/c5569e41-49e9-4044-b173-babb897afb4f-kube-api-access-48ms8\") pod \"ceilometer-0\" (UID: \"c5569e41-49e9-4044-b173-babb897afb4f\") " pod="openstack/ceilometer-0" Jan 27 14:57:14 crc kubenswrapper[4698]: I0127 14:57:14.760625 4698 scope.go:117] "RemoveContainer" containerID="e6df9b63745974a70ed0bb853ac72a9239422beda7cf2d6bae433ebab8755dcd" Jan 27 14:57:14 crc kubenswrapper[4698]: I0127 14:57:14.783273 4698 scope.go:117] "RemoveContainer" containerID="a24516508216c0b1b1455018be375baab57b2b3772c29a795e79d4a700773801" Jan 27 14:57:14 crc kubenswrapper[4698]: E0127 14:57:14.783873 4698 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a24516508216c0b1b1455018be375baab57b2b3772c29a795e79d4a700773801\": container with ID starting with a24516508216c0b1b1455018be375baab57b2b3772c29a795e79d4a700773801 not found: ID does not exist" containerID="a24516508216c0b1b1455018be375baab57b2b3772c29a795e79d4a700773801" Jan 27 14:57:14 crc kubenswrapper[4698]: I0127 14:57:14.783924 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a24516508216c0b1b1455018be375baab57b2b3772c29a795e79d4a700773801"} err="failed to get container status \"a24516508216c0b1b1455018be375baab57b2b3772c29a795e79d4a700773801\": rpc error: code = NotFound desc = could not find container \"a24516508216c0b1b1455018be375baab57b2b3772c29a795e79d4a700773801\": container with ID starting with a24516508216c0b1b1455018be375baab57b2b3772c29a795e79d4a700773801 not found: ID does not exist" Jan 27 14:57:14 crc kubenswrapper[4698]: I0127 14:57:14.783989 4698 scope.go:117] "RemoveContainer" containerID="b2b8cd840a45ba094322628f04c4f561ab625b79c1108d05cbed88532a4cf2aa" Jan 27 14:57:14 crc kubenswrapper[4698]: E0127 14:57:14.784504 4698 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b2b8cd840a45ba094322628f04c4f561ab625b79c1108d05cbed88532a4cf2aa\": container with ID starting with b2b8cd840a45ba094322628f04c4f561ab625b79c1108d05cbed88532a4cf2aa not found: ID does not exist" containerID="b2b8cd840a45ba094322628f04c4f561ab625b79c1108d05cbed88532a4cf2aa" Jan 27 14:57:14 crc 
kubenswrapper[4698]: I0127 14:57:14.784536 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b2b8cd840a45ba094322628f04c4f561ab625b79c1108d05cbed88532a4cf2aa"} err="failed to get container status \"b2b8cd840a45ba094322628f04c4f561ab625b79c1108d05cbed88532a4cf2aa\": rpc error: code = NotFound desc = could not find container \"b2b8cd840a45ba094322628f04c4f561ab625b79c1108d05cbed88532a4cf2aa\": container with ID starting with b2b8cd840a45ba094322628f04c4f561ab625b79c1108d05cbed88532a4cf2aa not found: ID does not exist" Jan 27 14:57:14 crc kubenswrapper[4698]: I0127 14:57:14.784551 4698 scope.go:117] "RemoveContainer" containerID="fb6994cb77133b27c8a71f10eb6f4cd470263f59f13bbe133616985cd6234eda" Jan 27 14:57:14 crc kubenswrapper[4698]: E0127 14:57:14.785023 4698 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fb6994cb77133b27c8a71f10eb6f4cd470263f59f13bbe133616985cd6234eda\": container with ID starting with fb6994cb77133b27c8a71f10eb6f4cd470263f59f13bbe133616985cd6234eda not found: ID does not exist" containerID="fb6994cb77133b27c8a71f10eb6f4cd470263f59f13bbe133616985cd6234eda" Jan 27 14:57:14 crc kubenswrapper[4698]: I0127 14:57:14.785055 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fb6994cb77133b27c8a71f10eb6f4cd470263f59f13bbe133616985cd6234eda"} err="failed to get container status \"fb6994cb77133b27c8a71f10eb6f4cd470263f59f13bbe133616985cd6234eda\": rpc error: code = NotFound desc = could not find container \"fb6994cb77133b27c8a71f10eb6f4cd470263f59f13bbe133616985cd6234eda\": container with ID starting with fb6994cb77133b27c8a71f10eb6f4cd470263f59f13bbe133616985cd6234eda not found: ID does not exist" Jan 27 14:57:14 crc kubenswrapper[4698]: I0127 14:57:14.785070 4698 scope.go:117] "RemoveContainer" containerID="e6df9b63745974a70ed0bb853ac72a9239422beda7cf2d6bae433ebab8755dcd" Jan 27 14:57:14 crc kubenswrapper[4698]: E0127 14:57:14.785318 4698 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e6df9b63745974a70ed0bb853ac72a9239422beda7cf2d6bae433ebab8755dcd\": container with ID starting with e6df9b63745974a70ed0bb853ac72a9239422beda7cf2d6bae433ebab8755dcd not found: ID does not exist" containerID="e6df9b63745974a70ed0bb853ac72a9239422beda7cf2d6bae433ebab8755dcd" Jan 27 14:57:14 crc kubenswrapper[4698]: I0127 14:57:14.785343 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e6df9b63745974a70ed0bb853ac72a9239422beda7cf2d6bae433ebab8755dcd"} err="failed to get container status \"e6df9b63745974a70ed0bb853ac72a9239422beda7cf2d6bae433ebab8755dcd\": rpc error: code = NotFound desc = could not find container \"e6df9b63745974a70ed0bb853ac72a9239422beda7cf2d6bae433ebab8755dcd\": container with ID starting with e6df9b63745974a70ed0bb853ac72a9239422beda7cf2d6bae433ebab8755dcd not found: ID does not exist" Jan 27 14:57:14 crc kubenswrapper[4698]: I0127 14:57:14.844286 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c5569e41-49e9-4044-b173-babb897afb4f-scripts\") pod \"ceilometer-0\" (UID: \"c5569e41-49e9-4044-b173-babb897afb4f\") " pod="openstack/ceilometer-0" Jan 27 14:57:14 crc kubenswrapper[4698]: I0127 14:57:14.844364 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" 
(UniqueName: \"kubernetes.io/empty-dir/c5569e41-49e9-4044-b173-babb897afb4f-run-httpd\") pod \"ceilometer-0\" (UID: \"c5569e41-49e9-4044-b173-babb897afb4f\") " pod="openstack/ceilometer-0" Jan 27 14:57:14 crc kubenswrapper[4698]: I0127 14:57:14.844389 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c5569e41-49e9-4044-b173-babb897afb4f-log-httpd\") pod \"ceilometer-0\" (UID: \"c5569e41-49e9-4044-b173-babb897afb4f\") " pod="openstack/ceilometer-0" Jan 27 14:57:14 crc kubenswrapper[4698]: I0127 14:57:14.844432 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c5569e41-49e9-4044-b173-babb897afb4f-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"c5569e41-49e9-4044-b173-babb897afb4f\") " pod="openstack/ceilometer-0" Jan 27 14:57:14 crc kubenswrapper[4698]: I0127 14:57:14.844523 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c5569e41-49e9-4044-b173-babb897afb4f-config-data\") pod \"ceilometer-0\" (UID: \"c5569e41-49e9-4044-b173-babb897afb4f\") " pod="openstack/ceilometer-0" Jan 27 14:57:14 crc kubenswrapper[4698]: I0127 14:57:14.844560 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/c5569e41-49e9-4044-b173-babb897afb4f-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"c5569e41-49e9-4044-b173-babb897afb4f\") " pod="openstack/ceilometer-0" Jan 27 14:57:14 crc kubenswrapper[4698]: I0127 14:57:14.844630 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c5569e41-49e9-4044-b173-babb897afb4f-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"c5569e41-49e9-4044-b173-babb897afb4f\") " pod="openstack/ceilometer-0" Jan 27 14:57:14 crc kubenswrapper[4698]: I0127 14:57:14.844677 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-48ms8\" (UniqueName: \"kubernetes.io/projected/c5569e41-49e9-4044-b173-babb897afb4f-kube-api-access-48ms8\") pod \"ceilometer-0\" (UID: \"c5569e41-49e9-4044-b173-babb897afb4f\") " pod="openstack/ceilometer-0" Jan 27 14:57:14 crc kubenswrapper[4698]: I0127 14:57:14.844991 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c5569e41-49e9-4044-b173-babb897afb4f-run-httpd\") pod \"ceilometer-0\" (UID: \"c5569e41-49e9-4044-b173-babb897afb4f\") " pod="openstack/ceilometer-0" Jan 27 14:57:14 crc kubenswrapper[4698]: I0127 14:57:14.845531 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c5569e41-49e9-4044-b173-babb897afb4f-log-httpd\") pod \"ceilometer-0\" (UID: \"c5569e41-49e9-4044-b173-babb897afb4f\") " pod="openstack/ceilometer-0" Jan 27 14:57:14 crc kubenswrapper[4698]: I0127 14:57:14.849293 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/c5569e41-49e9-4044-b173-babb897afb4f-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"c5569e41-49e9-4044-b173-babb897afb4f\") " pod="openstack/ceilometer-0" Jan 27 14:57:14 crc kubenswrapper[4698]: I0127 14:57:14.849929 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/c5569e41-49e9-4044-b173-babb897afb4f-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"c5569e41-49e9-4044-b173-babb897afb4f\") " pod="openstack/ceilometer-0" Jan 27 14:57:14 crc kubenswrapper[4698]: I0127 14:57:14.850154 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c5569e41-49e9-4044-b173-babb897afb4f-config-data\") pod \"ceilometer-0\" (UID: \"c5569e41-49e9-4044-b173-babb897afb4f\") " pod="openstack/ceilometer-0" Jan 27 14:57:14 crc kubenswrapper[4698]: I0127 14:57:14.850393 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c5569e41-49e9-4044-b173-babb897afb4f-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"c5569e41-49e9-4044-b173-babb897afb4f\") " pod="openstack/ceilometer-0" Jan 27 14:57:14 crc kubenswrapper[4698]: I0127 14:57:14.851085 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c5569e41-49e9-4044-b173-babb897afb4f-scripts\") pod \"ceilometer-0\" (UID: \"c5569e41-49e9-4044-b173-babb897afb4f\") " pod="openstack/ceilometer-0" Jan 27 14:57:14 crc kubenswrapper[4698]: I0127 14:57:14.867760 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-48ms8\" (UniqueName: \"kubernetes.io/projected/c5569e41-49e9-4044-b173-babb897afb4f-kube-api-access-48ms8\") pod \"ceilometer-0\" (UID: \"c5569e41-49e9-4044-b173-babb897afb4f\") " pod="openstack/ceilometer-0" Jan 27 14:57:15 crc kubenswrapper[4698]: I0127 14:57:15.007909 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="505ff24d-a299-4255-8d5a-9b52ff443b07" path="/var/lib/kubelet/pods/505ff24d-a299-4255-8d5a-9b52ff443b07/volumes" Jan 27 14:57:15 crc kubenswrapper[4698]: I0127 14:57:15.020965 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 27 14:57:15 crc kubenswrapper[4698]: I0127 14:57:15.341185 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 27 14:57:15 crc kubenswrapper[4698]: I0127 14:57:15.630056 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c5569e41-49e9-4044-b173-babb897afb4f","Type":"ContainerStarted","Data":"72da7efb9cc9517f5eb1e2228ee84d88aeb7813d133f028f79a6ffd449f05219"} Jan 27 14:57:16 crc kubenswrapper[4698]: I0127 14:57:16.147790 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-84f9c77fd5-xrjct" Jan 27 14:57:16 crc kubenswrapper[4698]: I0127 14:57:16.202583 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-86575cfcc5-vsn49"] Jan 27 14:57:16 crc kubenswrapper[4698]: I0127 14:57:16.204335 4698 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-86575cfcc5-vsn49" podUID="adbb58db-7258-48ea-8409-384677c7c42e" containerName="dnsmasq-dns" containerID="cri-o://86f8e7fd22bc0a0612a07b967de90fd27a6e9a41d24abe9560bffc163595836f" gracePeriod=10 Jan 27 14:57:16 crc kubenswrapper[4698]: I0127 14:57:16.642655 4698 generic.go:334] "Generic (PLEG): container finished" podID="adbb58db-7258-48ea-8409-384677c7c42e" containerID="86f8e7fd22bc0a0612a07b967de90fd27a6e9a41d24abe9560bffc163595836f" exitCode=0 Jan 27 14:57:16 crc kubenswrapper[4698]: I0127 14:57:16.642983 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86575cfcc5-vsn49" event={"ID":"adbb58db-7258-48ea-8409-384677c7c42e","Type":"ContainerDied","Data":"86f8e7fd22bc0a0612a07b967de90fd27a6e9a41d24abe9560bffc163595836f"} Jan 27 14:57:16 crc kubenswrapper[4698]: I0127 14:57:16.644878 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c5569e41-49e9-4044-b173-babb897afb4f","Type":"ContainerStarted","Data":"106b3b3fa302cc45d799ba531d05098a5a3be8d644502be534a8a22e4e5fcc51"} Jan 27 14:57:17 crc kubenswrapper[4698]: I0127 14:57:17.800910 4698 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-86575cfcc5-vsn49" Jan 27 14:57:17 crc kubenswrapper[4698]: I0127 14:57:17.911536 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/adbb58db-7258-48ea-8409-384677c7c42e-ovsdbserver-sb\") pod \"adbb58db-7258-48ea-8409-384677c7c42e\" (UID: \"adbb58db-7258-48ea-8409-384677c7c42e\") " Jan 27 14:57:17 crc kubenswrapper[4698]: I0127 14:57:17.911700 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/adbb58db-7258-48ea-8409-384677c7c42e-dns-svc\") pod \"adbb58db-7258-48ea-8409-384677c7c42e\" (UID: \"adbb58db-7258-48ea-8409-384677c7c42e\") " Jan 27 14:57:17 crc kubenswrapper[4698]: I0127 14:57:17.911781 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/adbb58db-7258-48ea-8409-384677c7c42e-ovsdbserver-nb\") pod \"adbb58db-7258-48ea-8409-384677c7c42e\" (UID: \"adbb58db-7258-48ea-8409-384677c7c42e\") " Jan 27 14:57:17 crc kubenswrapper[4698]: I0127 14:57:17.911806 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/adbb58db-7258-48ea-8409-384677c7c42e-dns-swift-storage-0\") pod \"adbb58db-7258-48ea-8409-384677c7c42e\" (UID: \"adbb58db-7258-48ea-8409-384677c7c42e\") " Jan 27 14:57:17 crc kubenswrapper[4698]: I0127 14:57:17.911888 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xbgc5\" (UniqueName: \"kubernetes.io/projected/adbb58db-7258-48ea-8409-384677c7c42e-kube-api-access-xbgc5\") pod \"adbb58db-7258-48ea-8409-384677c7c42e\" (UID: \"adbb58db-7258-48ea-8409-384677c7c42e\") " Jan 27 14:57:17 crc kubenswrapper[4698]: I0127 14:57:17.911981 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/adbb58db-7258-48ea-8409-384677c7c42e-config\") pod \"adbb58db-7258-48ea-8409-384677c7c42e\" (UID: \"adbb58db-7258-48ea-8409-384677c7c42e\") " Jan 27 14:57:17 crc kubenswrapper[4698]: I0127 14:57:17.930779 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/adbb58db-7258-48ea-8409-384677c7c42e-kube-api-access-xbgc5" (OuterVolumeSpecName: "kube-api-access-xbgc5") pod "adbb58db-7258-48ea-8409-384677c7c42e" (UID: "adbb58db-7258-48ea-8409-384677c7c42e"). InnerVolumeSpecName "kube-api-access-xbgc5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:57:17 crc kubenswrapper[4698]: I0127 14:57:17.973953 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/adbb58db-7258-48ea-8409-384677c7c42e-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "adbb58db-7258-48ea-8409-384677c7c42e" (UID: "adbb58db-7258-48ea-8409-384677c7c42e"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:57:17 crc kubenswrapper[4698]: I0127 14:57:17.975423 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/adbb58db-7258-48ea-8409-384677c7c42e-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "adbb58db-7258-48ea-8409-384677c7c42e" (UID: "adbb58db-7258-48ea-8409-384677c7c42e"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:57:17 crc kubenswrapper[4698]: I0127 14:57:17.977815 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/adbb58db-7258-48ea-8409-384677c7c42e-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "adbb58db-7258-48ea-8409-384677c7c42e" (UID: "adbb58db-7258-48ea-8409-384677c7c42e"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:57:17 crc kubenswrapper[4698]: I0127 14:57:17.983996 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/adbb58db-7258-48ea-8409-384677c7c42e-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "adbb58db-7258-48ea-8409-384677c7c42e" (UID: "adbb58db-7258-48ea-8409-384677c7c42e"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:57:17 crc kubenswrapper[4698]: I0127 14:57:17.989426 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/adbb58db-7258-48ea-8409-384677c7c42e-config" (OuterVolumeSpecName: "config") pod "adbb58db-7258-48ea-8409-384677c7c42e" (UID: "adbb58db-7258-48ea-8409-384677c7c42e"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:57:18 crc kubenswrapper[4698]: I0127 14:57:18.014581 4698 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/adbb58db-7258-48ea-8409-384677c7c42e-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 27 14:57:18 crc kubenswrapper[4698]: I0127 14:57:18.014623 4698 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/adbb58db-7258-48ea-8409-384677c7c42e-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 27 14:57:18 crc kubenswrapper[4698]: I0127 14:57:18.014659 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xbgc5\" (UniqueName: \"kubernetes.io/projected/adbb58db-7258-48ea-8409-384677c7c42e-kube-api-access-xbgc5\") on node \"crc\" DevicePath \"\"" Jan 27 14:57:18 crc kubenswrapper[4698]: I0127 14:57:18.014675 4698 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/adbb58db-7258-48ea-8409-384677c7c42e-config\") on node \"crc\" DevicePath \"\"" Jan 27 14:57:18 crc kubenswrapper[4698]: I0127 14:57:18.014687 4698 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/adbb58db-7258-48ea-8409-384677c7c42e-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 27 14:57:18 crc kubenswrapper[4698]: I0127 14:57:18.014697 4698 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/adbb58db-7258-48ea-8409-384677c7c42e-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 27 14:57:18 crc kubenswrapper[4698]: I0127 14:57:18.672150 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86575cfcc5-vsn49" event={"ID":"adbb58db-7258-48ea-8409-384677c7c42e","Type":"ContainerDied","Data":"a568372404f9d6390d39b25e14b0b97d824a39e4c9dd1c9e63f295860de22397"} Jan 27 14:57:18 crc kubenswrapper[4698]: I0127 14:57:18.672210 4698 scope.go:117] "RemoveContainer" containerID="86f8e7fd22bc0a0612a07b967de90fd27a6e9a41d24abe9560bffc163595836f" Jan 27 14:57:18 crc kubenswrapper[4698]: I0127 14:57:18.672362 4698 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-86575cfcc5-vsn49" Jan 27 14:57:18 crc kubenswrapper[4698]: I0127 14:57:18.688339 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c5569e41-49e9-4044-b173-babb897afb4f","Type":"ContainerStarted","Data":"2dfd4df230f90ff5b3365d3eff6190c2071b3f3da46e8069748f2da0e80a4d22"} Jan 27 14:57:18 crc kubenswrapper[4698]: I0127 14:57:18.697774 4698 scope.go:117] "RemoveContainer" containerID="46a275f11f1cbff8c0f7c45763aa01c1d9ace6a39108bbebfecd49efd89e1763" Jan 27 14:57:18 crc kubenswrapper[4698]: I0127 14:57:18.727676 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-86575cfcc5-vsn49"] Jan 27 14:57:18 crc kubenswrapper[4698]: I0127 14:57:18.742318 4698 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-86575cfcc5-vsn49"] Jan 27 14:57:19 crc kubenswrapper[4698]: I0127 14:57:19.006707 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="adbb58db-7258-48ea-8409-384677c7c42e" path="/var/lib/kubelet/pods/adbb58db-7258-48ea-8409-384677c7c42e/volumes" Jan 27 14:57:19 crc kubenswrapper[4698]: I0127 14:57:19.700866 4698 generic.go:334] "Generic (PLEG): container finished" podID="d02bd3e6-3943-4d72-a596-ad3b1ca55805" containerID="12c7a9f705d12e6fb43fab18d38df89dc45d0ea718b4bd098e536c0b5407f07e" exitCode=0 Jan 27 14:57:19 crc kubenswrapper[4698]: I0127 14:57:19.700915 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-hf2ls" event={"ID":"d02bd3e6-3943-4d72-a596-ad3b1ca55805","Type":"ContainerDied","Data":"12c7a9f705d12e6fb43fab18d38df89dc45d0ea718b4bd098e536c0b5407f07e"} Jan 27 14:57:19 crc kubenswrapper[4698]: I0127 14:57:19.704594 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c5569e41-49e9-4044-b173-babb897afb4f","Type":"ContainerStarted","Data":"8995e44f8dc3b76b87e1f9c508918295ca1ac59497875429288698a495782773"} Jan 27 14:57:21 crc kubenswrapper[4698]: I0127 14:57:21.092141 4698 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-hf2ls"
Jan 27 14:57:21 crc kubenswrapper[4698]: I0127 14:57:21.099074 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d02bd3e6-3943-4d72-a596-ad3b1ca55805-combined-ca-bundle\") pod \"d02bd3e6-3943-4d72-a596-ad3b1ca55805\" (UID: \"d02bd3e6-3943-4d72-a596-ad3b1ca55805\") "
Jan 27 14:57:21 crc kubenswrapper[4698]: I0127 14:57:21.099162 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d02bd3e6-3943-4d72-a596-ad3b1ca55805-scripts\") pod \"d02bd3e6-3943-4d72-a596-ad3b1ca55805\" (UID: \"d02bd3e6-3943-4d72-a596-ad3b1ca55805\") "
Jan 27 14:57:21 crc kubenswrapper[4698]: I0127 14:57:21.099309 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pm7vw\" (UniqueName: \"kubernetes.io/projected/d02bd3e6-3943-4d72-a596-ad3b1ca55805-kube-api-access-pm7vw\") pod \"d02bd3e6-3943-4d72-a596-ad3b1ca55805\" (UID: \"d02bd3e6-3943-4d72-a596-ad3b1ca55805\") "
Jan 27 14:57:21 crc kubenswrapper[4698]: I0127 14:57:21.099389 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d02bd3e6-3943-4d72-a596-ad3b1ca55805-config-data\") pod \"d02bd3e6-3943-4d72-a596-ad3b1ca55805\" (UID: \"d02bd3e6-3943-4d72-a596-ad3b1ca55805\") "
Jan 27 14:57:21 crc kubenswrapper[4698]: I0127 14:57:21.105072 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d02bd3e6-3943-4d72-a596-ad3b1ca55805-kube-api-access-pm7vw" (OuterVolumeSpecName: "kube-api-access-pm7vw") pod "d02bd3e6-3943-4d72-a596-ad3b1ca55805" (UID: "d02bd3e6-3943-4d72-a596-ad3b1ca55805"). InnerVolumeSpecName "kube-api-access-pm7vw". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 14:57:21 crc kubenswrapper[4698]: I0127 14:57:21.105432 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d02bd3e6-3943-4d72-a596-ad3b1ca55805-scripts" (OuterVolumeSpecName: "scripts") pod "d02bd3e6-3943-4d72-a596-ad3b1ca55805" (UID: "d02bd3e6-3943-4d72-a596-ad3b1ca55805"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 14:57:21 crc kubenswrapper[4698]: I0127 14:57:21.135495 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d02bd3e6-3943-4d72-a596-ad3b1ca55805-config-data" (OuterVolumeSpecName: "config-data") pod "d02bd3e6-3943-4d72-a596-ad3b1ca55805" (UID: "d02bd3e6-3943-4d72-a596-ad3b1ca55805"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 14:57:21 crc kubenswrapper[4698]: I0127 14:57:21.138904 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d02bd3e6-3943-4d72-a596-ad3b1ca55805-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d02bd3e6-3943-4d72-a596-ad3b1ca55805" (UID: "d02bd3e6-3943-4d72-a596-ad3b1ca55805"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 14:57:21 crc kubenswrapper[4698]: I0127 14:57:21.201255 4698 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d02bd3e6-3943-4d72-a596-ad3b1ca55805-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 27 14:57:21 crc kubenswrapper[4698]: I0127 14:57:21.201290 4698 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d02bd3e6-3943-4d72-a596-ad3b1ca55805-scripts\") on node \"crc\" DevicePath \"\""
Jan 27 14:57:21 crc kubenswrapper[4698]: I0127 14:57:21.201299 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pm7vw\" (UniqueName: \"kubernetes.io/projected/d02bd3e6-3943-4d72-a596-ad3b1ca55805-kube-api-access-pm7vw\") on node \"crc\" DevicePath \"\""
Jan 27 14:57:21 crc kubenswrapper[4698]: I0127 14:57:21.201308 4698 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d02bd3e6-3943-4d72-a596-ad3b1ca55805-config-data\") on node \"crc\" DevicePath \"\""
Jan 27 14:57:21 crc kubenswrapper[4698]: I0127 14:57:21.735030 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-hf2ls" event={"ID":"d02bd3e6-3943-4d72-a596-ad3b1ca55805","Type":"ContainerDied","Data":"3d42fc96f0ffcce975bcf45845bc0f289cce0537975d3f334e72d3518c4286a3"}
Jan 27 14:57:21 crc kubenswrapper[4698]: I0127 14:57:21.735391 4698 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3d42fc96f0ffcce975bcf45845bc0f289cce0537975d3f334e72d3518c4286a3"
Jan 27 14:57:21 crc kubenswrapper[4698]: I0127 14:57:21.735083 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-hf2ls"
Jan 27 14:57:21 crc kubenswrapper[4698]: I0127 14:57:21.739018 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c5569e41-49e9-4044-b173-babb897afb4f","Type":"ContainerStarted","Data":"1ee2dafcbce260bb089b0e0f5d1bc996f6d3ed67920867e695697850c26f1def"}
Jan 27 14:57:21 crc kubenswrapper[4698]: I0127 14:57:21.739682 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0"
Jan 27 14:57:21 crc kubenswrapper[4698]: I0127 14:57:21.788377 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=1.635548199 podStartE2EDuration="7.788348892s" podCreationTimestamp="2026-01-27 14:57:14 +0000 UTC" firstStartedPulling="2026-01-27 14:57:15.343344214 +0000 UTC m=+1691.020121679" lastFinishedPulling="2026-01-27 14:57:21.496144907 +0000 UTC m=+1697.172922372" observedRunningTime="2026-01-27 14:57:21.77614428 +0000 UTC m=+1697.452921745" watchObservedRunningTime="2026-01-27 14:57:21.788348892 +0000 UTC m=+1697.465126357"
Jan 27 14:57:21 crc kubenswrapper[4698]: I0127 14:57:21.923156 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"]
Jan 27 14:57:21 crc kubenswrapper[4698]: I0127 14:57:21.923456 4698 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="75dd98fc-f83f-479b-a4c9-7b6ac186b1c9" containerName="nova-api-log" containerID="cri-o://8989356bda5addfcae48af8eca9b8ed0fa9232a76622893b98953aa8734c5d49" gracePeriod=30
Jan 27 14:57:21 crc kubenswrapper[4698]: I0127 14:57:21.924063 4698 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="75dd98fc-f83f-479b-a4c9-7b6ac186b1c9" containerName="nova-api-api" containerID="cri-o://50e61800ea1cb3a8b31dd49806aeb67752366e45087fd096d699a32ac73f6c58" gracePeriod=30
Jan 27 14:57:21 crc kubenswrapper[4698]: I0127 14:57:21.969919 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"]
Jan 27 14:57:21 crc kubenswrapper[4698]: I0127 14:57:21.970222 4698 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="20051e2d-581c-4d5b-8259-972c12bef429" containerName="nova-scheduler-scheduler" containerID="cri-o://5fb599927590c1307a7e0641da29d53853002dc84bf9c0c8c8f8ef06898f0fb7" gracePeriod=30
Jan 27 14:57:21 crc kubenswrapper[4698]: I0127 14:57:21.993749 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"]
Jan 27 14:57:21 crc kubenswrapper[4698]: I0127 14:57:21.994025 4698 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="d9c40e5a-9573-450e-978a-1068bfe7f3a9" containerName="nova-metadata-log" containerID="cri-o://268004e7209930768b219c8bb420cd1a99618d53389a961dd7beaaedc096a068" gracePeriod=30
Jan 27 14:57:21 crc kubenswrapper[4698]: I0127 14:57:21.994580 4698 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="d9c40e5a-9573-450e-978a-1068bfe7f3a9" containerName="nova-metadata-metadata" containerID="cri-o://bed8c95ffce8b4379b9e0a947455d0862c7dfb7d954e33730e4555f1a6da9ea6" gracePeriod=30
Jan 27 14:57:22 crc kubenswrapper[4698]: I0127 14:57:22.508802 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Jan 27 14:57:22 crc kubenswrapper[4698]: I0127 14:57:22.527885 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/75dd98fc-f83f-479b-a4c9-7b6ac186b1c9-config-data\") pod \"75dd98fc-f83f-479b-a4c9-7b6ac186b1c9\" (UID: \"75dd98fc-f83f-479b-a4c9-7b6ac186b1c9\") "
Jan 27 14:57:22 crc kubenswrapper[4698]: I0127 14:57:22.527986 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/75dd98fc-f83f-479b-a4c9-7b6ac186b1c9-combined-ca-bundle\") pod \"75dd98fc-f83f-479b-a4c9-7b6ac186b1c9\" (UID: \"75dd98fc-f83f-479b-a4c9-7b6ac186b1c9\") "
Jan 27 14:57:22 crc kubenswrapper[4698]: I0127 14:57:22.528045 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/75dd98fc-f83f-479b-a4c9-7b6ac186b1c9-public-tls-certs\") pod \"75dd98fc-f83f-479b-a4c9-7b6ac186b1c9\" (UID: \"75dd98fc-f83f-479b-a4c9-7b6ac186b1c9\") "
Jan 27 14:57:22 crc kubenswrapper[4698]: I0127 14:57:22.528064 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/75dd98fc-f83f-479b-a4c9-7b6ac186b1c9-internal-tls-certs\") pod \"75dd98fc-f83f-479b-a4c9-7b6ac186b1c9\" (UID: \"75dd98fc-f83f-479b-a4c9-7b6ac186b1c9\") "
Jan 27 14:57:22 crc kubenswrapper[4698]: I0127 14:57:22.528210 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z56cs\" (UniqueName: \"kubernetes.io/projected/75dd98fc-f83f-479b-a4c9-7b6ac186b1c9-kube-api-access-z56cs\") pod \"75dd98fc-f83f-479b-a4c9-7b6ac186b1c9\" (UID: \"75dd98fc-f83f-479b-a4c9-7b6ac186b1c9\") "
Jan 27 14:57:22 crc kubenswrapper[4698]: I0127 14:57:22.528322 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/75dd98fc-f83f-479b-a4c9-7b6ac186b1c9-logs\") pod \"75dd98fc-f83f-479b-a4c9-7b6ac186b1c9\" (UID: \"75dd98fc-f83f-479b-a4c9-7b6ac186b1c9\") "
Jan 27 14:57:22 crc kubenswrapper[4698]: I0127 14:57:22.529195 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/75dd98fc-f83f-479b-a4c9-7b6ac186b1c9-logs" (OuterVolumeSpecName: "logs") pod "75dd98fc-f83f-479b-a4c9-7b6ac186b1c9" (UID: "75dd98fc-f83f-479b-a4c9-7b6ac186b1c9"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 27 14:57:22 crc kubenswrapper[4698]: I0127 14:57:22.534942 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/75dd98fc-f83f-479b-a4c9-7b6ac186b1c9-kube-api-access-z56cs" (OuterVolumeSpecName: "kube-api-access-z56cs") pod "75dd98fc-f83f-479b-a4c9-7b6ac186b1c9" (UID: "75dd98fc-f83f-479b-a4c9-7b6ac186b1c9"). InnerVolumeSpecName "kube-api-access-z56cs". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 14:57:22 crc kubenswrapper[4698]: I0127 14:57:22.563743 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/75dd98fc-f83f-479b-a4c9-7b6ac186b1c9-config-data" (OuterVolumeSpecName: "config-data") pod "75dd98fc-f83f-479b-a4c9-7b6ac186b1c9" (UID: "75dd98fc-f83f-479b-a4c9-7b6ac186b1c9"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 14:57:22 crc kubenswrapper[4698]: I0127 14:57:22.581365 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/75dd98fc-f83f-479b-a4c9-7b6ac186b1c9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "75dd98fc-f83f-479b-a4c9-7b6ac186b1c9" (UID: "75dd98fc-f83f-479b-a4c9-7b6ac186b1c9"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 14:57:22 crc kubenswrapper[4698]: I0127 14:57:22.606584 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/75dd98fc-f83f-479b-a4c9-7b6ac186b1c9-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "75dd98fc-f83f-479b-a4c9-7b6ac186b1c9" (UID: "75dd98fc-f83f-479b-a4c9-7b6ac186b1c9"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 14:57:22 crc kubenswrapper[4698]: I0127 14:57:22.631436 4698 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/75dd98fc-f83f-479b-a4c9-7b6ac186b1c9-logs\") on node \"crc\" DevicePath \"\""
Jan 27 14:57:22 crc kubenswrapper[4698]: I0127 14:57:22.631479 4698 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/75dd98fc-f83f-479b-a4c9-7b6ac186b1c9-config-data\") on node \"crc\" DevicePath \"\""
Jan 27 14:57:22 crc kubenswrapper[4698]: I0127 14:57:22.631491 4698 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/75dd98fc-f83f-479b-a4c9-7b6ac186b1c9-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 27 14:57:22 crc kubenswrapper[4698]: I0127 14:57:22.631500 4698 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/75dd98fc-f83f-479b-a4c9-7b6ac186b1c9-public-tls-certs\") on node \"crc\" DevicePath \"\""
Jan 27 14:57:22 crc kubenswrapper[4698]: I0127 14:57:22.631508 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z56cs\" (UniqueName: \"kubernetes.io/projected/75dd98fc-f83f-479b-a4c9-7b6ac186b1c9-kube-api-access-z56cs\") on node \"crc\" DevicePath \"\""
Jan 27 14:57:22 crc kubenswrapper[4698]: I0127 14:57:22.631626 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/75dd98fc-f83f-479b-a4c9-7b6ac186b1c9-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "75dd98fc-f83f-479b-a4c9-7b6ac186b1c9" (UID: "75dd98fc-f83f-479b-a4c9-7b6ac186b1c9"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 14:57:22 crc kubenswrapper[4698]: I0127 14:57:22.732359 4698 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/75dd98fc-f83f-479b-a4c9-7b6ac186b1c9-internal-tls-certs\") on node \"crc\" DevicePath \"\""
Jan 27 14:57:22 crc kubenswrapper[4698]: I0127 14:57:22.752543 4698 generic.go:334] "Generic (PLEG): container finished" podID="75dd98fc-f83f-479b-a4c9-7b6ac186b1c9" containerID="50e61800ea1cb3a8b31dd49806aeb67752366e45087fd096d699a32ac73f6c58" exitCode=0
Jan 27 14:57:22 crc kubenswrapper[4698]: I0127 14:57:22.752589 4698 generic.go:334] "Generic (PLEG): container finished" podID="75dd98fc-f83f-479b-a4c9-7b6ac186b1c9" containerID="8989356bda5addfcae48af8eca9b8ed0fa9232a76622893b98953aa8734c5d49" exitCode=143
Jan 27 14:57:22 crc kubenswrapper[4698]: I0127 14:57:22.752659 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"75dd98fc-f83f-479b-a4c9-7b6ac186b1c9","Type":"ContainerDied","Data":"50e61800ea1cb3a8b31dd49806aeb67752366e45087fd096d699a32ac73f6c58"}
Jan 27 14:57:22 crc kubenswrapper[4698]: I0127 14:57:22.752696 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"75dd98fc-f83f-479b-a4c9-7b6ac186b1c9","Type":"ContainerDied","Data":"8989356bda5addfcae48af8eca9b8ed0fa9232a76622893b98953aa8734c5d49"}
Jan 27 14:57:22 crc kubenswrapper[4698]: I0127 14:57:22.752711 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"75dd98fc-f83f-479b-a4c9-7b6ac186b1c9","Type":"ContainerDied","Data":"eea2bd491317d80c88973a251efff6b23aef9eafd8db771d453a260187e78330"}
Jan 27 14:57:22 crc kubenswrapper[4698]: I0127 14:57:22.752729 4698 scope.go:117] "RemoveContainer" containerID="50e61800ea1cb3a8b31dd49806aeb67752366e45087fd096d699a32ac73f6c58"
Jan 27 14:57:22 crc kubenswrapper[4698]: I0127 14:57:22.752893 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Jan 27 14:57:22 crc kubenswrapper[4698]: I0127 14:57:22.762090 4698 generic.go:334] "Generic (PLEG): container finished" podID="d9c40e5a-9573-450e-978a-1068bfe7f3a9" containerID="268004e7209930768b219c8bb420cd1a99618d53389a961dd7beaaedc096a068" exitCode=143
Jan 27 14:57:22 crc kubenswrapper[4698]: I0127 14:57:22.763709 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"d9c40e5a-9573-450e-978a-1068bfe7f3a9","Type":"ContainerDied","Data":"268004e7209930768b219c8bb420cd1a99618d53389a961dd7beaaedc096a068"}
Jan 27 14:57:22 crc kubenswrapper[4698]: I0127 14:57:22.801752 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"]
Jan 27 14:57:22 crc kubenswrapper[4698]: I0127 14:57:22.805920 4698 scope.go:117] "RemoveContainer" containerID="8989356bda5addfcae48af8eca9b8ed0fa9232a76622893b98953aa8734c5d49"
Jan 27 14:57:22 crc kubenswrapper[4698]: I0127 14:57:22.823187 4698 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"]
Jan 27 14:57:22 crc kubenswrapper[4698]: I0127 14:57:22.840604 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"]
Jan 27 14:57:22 crc kubenswrapper[4698]: E0127 14:57:22.841110 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="75dd98fc-f83f-479b-a4c9-7b6ac186b1c9" containerName="nova-api-log"
Jan 27 14:57:22 crc kubenswrapper[4698]: I0127 14:57:22.841134 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="75dd98fc-f83f-479b-a4c9-7b6ac186b1c9" containerName="nova-api-log"
Jan 27 14:57:22 crc kubenswrapper[4698]: E0127 14:57:22.841159 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d02bd3e6-3943-4d72-a596-ad3b1ca55805" containerName="nova-manage"
Jan 27 14:57:22 crc kubenswrapper[4698]: I0127 14:57:22.841169 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="d02bd3e6-3943-4d72-a596-ad3b1ca55805" containerName="nova-manage"
Jan 27 14:57:22 crc kubenswrapper[4698]: E0127 14:57:22.841184 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="75dd98fc-f83f-479b-a4c9-7b6ac186b1c9" containerName="nova-api-api"
Jan 27 14:57:22 crc kubenswrapper[4698]: I0127 14:57:22.841193 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="75dd98fc-f83f-479b-a4c9-7b6ac186b1c9" containerName="nova-api-api"
Jan 27 14:57:22 crc kubenswrapper[4698]: E0127 14:57:22.841213 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="adbb58db-7258-48ea-8409-384677c7c42e" containerName="dnsmasq-dns"
Jan 27 14:57:22 crc kubenswrapper[4698]: I0127 14:57:22.841221 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="adbb58db-7258-48ea-8409-384677c7c42e" containerName="dnsmasq-dns"
Jan 27 14:57:22 crc kubenswrapper[4698]: E0127 14:57:22.841245 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="adbb58db-7258-48ea-8409-384677c7c42e" containerName="init"
Jan 27 14:57:22 crc kubenswrapper[4698]: I0127 14:57:22.841253 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="adbb58db-7258-48ea-8409-384677c7c42e" containerName="init"
Jan 27 14:57:22 crc kubenswrapper[4698]: I0127 14:57:22.841475 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="adbb58db-7258-48ea-8409-384677c7c42e" containerName="dnsmasq-dns"
Jan 27 14:57:22 crc kubenswrapper[4698]: I0127 14:57:22.841502 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="75dd98fc-f83f-479b-a4c9-7b6ac186b1c9" containerName="nova-api-log"
Jan 27 14:57:22 crc kubenswrapper[4698]: I0127 14:57:22.841513 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="75dd98fc-f83f-479b-a4c9-7b6ac186b1c9" containerName="nova-api-api"
Jan 27 14:57:22 crc kubenswrapper[4698]: I0127 14:57:22.841526 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="d02bd3e6-3943-4d72-a596-ad3b1ca55805" containerName="nova-manage"
Jan 27 14:57:22 crc kubenswrapper[4698]: I0127 14:57:22.842869 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Jan 27 14:57:22 crc kubenswrapper[4698]: I0127 14:57:22.844888 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data"
Jan 27 14:57:22 crc kubenswrapper[4698]: I0127 14:57:22.845866 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc"
Jan 27 14:57:22 crc kubenswrapper[4698]: I0127 14:57:22.846106 4698 scope.go:117] "RemoveContainer" containerID="50e61800ea1cb3a8b31dd49806aeb67752366e45087fd096d699a32ac73f6c58"
Jan 27 14:57:22 crc kubenswrapper[4698]: E0127 14:57:22.846582 4698 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"50e61800ea1cb3a8b31dd49806aeb67752366e45087fd096d699a32ac73f6c58\": container with ID starting with 50e61800ea1cb3a8b31dd49806aeb67752366e45087fd096d699a32ac73f6c58 not found: ID does not exist" containerID="50e61800ea1cb3a8b31dd49806aeb67752366e45087fd096d699a32ac73f6c58"
Jan 27 14:57:22 crc kubenswrapper[4698]: I0127 14:57:22.846629 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"50e61800ea1cb3a8b31dd49806aeb67752366e45087fd096d699a32ac73f6c58"} err="failed to get container status \"50e61800ea1cb3a8b31dd49806aeb67752366e45087fd096d699a32ac73f6c58\": rpc error: code = NotFound desc = could not find container \"50e61800ea1cb3a8b31dd49806aeb67752366e45087fd096d699a32ac73f6c58\": container with ID starting with 50e61800ea1cb3a8b31dd49806aeb67752366e45087fd096d699a32ac73f6c58 not found: ID does not exist"
Jan 27 14:57:22 crc kubenswrapper[4698]: I0127 14:57:22.846667 4698 scope.go:117] "RemoveContainer" containerID="8989356bda5addfcae48af8eca9b8ed0fa9232a76622893b98953aa8734c5d49"
Jan 27 14:57:22 crc kubenswrapper[4698]: E0127 14:57:22.850988 4698 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8989356bda5addfcae48af8eca9b8ed0fa9232a76622893b98953aa8734c5d49\": container with ID starting with 8989356bda5addfcae48af8eca9b8ed0fa9232a76622893b98953aa8734c5d49 not found: ID does not exist" containerID="8989356bda5addfcae48af8eca9b8ed0fa9232a76622893b98953aa8734c5d49"
Jan 27 14:57:22 crc kubenswrapper[4698]: I0127 14:57:22.851043 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8989356bda5addfcae48af8eca9b8ed0fa9232a76622893b98953aa8734c5d49"} err="failed to get container status \"8989356bda5addfcae48af8eca9b8ed0fa9232a76622893b98953aa8734c5d49\": rpc error: code = NotFound desc = could not find container \"8989356bda5addfcae48af8eca9b8ed0fa9232a76622893b98953aa8734c5d49\": container with ID starting with 8989356bda5addfcae48af8eca9b8ed0fa9232a76622893b98953aa8734c5d49 not found: ID does not exist"
Jan 27 14:57:22 crc kubenswrapper[4698]: I0127 14:57:22.851075 4698 scope.go:117] "RemoveContainer" containerID="50e61800ea1cb3a8b31dd49806aeb67752366e45087fd096d699a32ac73f6c58"
Jan 27 14:57:22 crc kubenswrapper[4698]: I0127 14:57:22.851568 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc"
Jan 27 14:57:22 crc kubenswrapper[4698]: I0127 14:57:22.861820 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"50e61800ea1cb3a8b31dd49806aeb67752366e45087fd096d699a32ac73f6c58"} err="failed to get container status \"50e61800ea1cb3a8b31dd49806aeb67752366e45087fd096d699a32ac73f6c58\": rpc error: code = NotFound desc = could not find container \"50e61800ea1cb3a8b31dd49806aeb67752366e45087fd096d699a32ac73f6c58\": container with ID starting with 50e61800ea1cb3a8b31dd49806aeb67752366e45087fd096d699a32ac73f6c58 not found: ID does not exist"
Jan 27 14:57:22 crc kubenswrapper[4698]: I0127 14:57:22.861874 4698 scope.go:117] "RemoveContainer" containerID="8989356bda5addfcae48af8eca9b8ed0fa9232a76622893b98953aa8734c5d49"
Jan 27 14:57:22 crc kubenswrapper[4698]: I0127 14:57:22.862531 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8989356bda5addfcae48af8eca9b8ed0fa9232a76622893b98953aa8734c5d49"} err="failed to get container status \"8989356bda5addfcae48af8eca9b8ed0fa9232a76622893b98953aa8734c5d49\": rpc error: code = NotFound desc = could not find container \"8989356bda5addfcae48af8eca9b8ed0fa9232a76622893b98953aa8734c5d49\": container with ID starting with 8989356bda5addfcae48af8eca9b8ed0fa9232a76622893b98953aa8734c5d49 not found: ID does not exist"
Jan 27 14:57:22 crc kubenswrapper[4698]: I0127 14:57:22.863146 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"]
Jan 27 14:57:23 crc kubenswrapper[4698]: I0127 14:57:23.006526 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="75dd98fc-f83f-479b-a4c9-7b6ac186b1c9" path="/var/lib/kubelet/pods/75dd98fc-f83f-479b-a4c9-7b6ac186b1c9/volumes"
Jan 27 14:57:23 crc kubenswrapper[4698]: I0127 14:57:23.039613 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b758775a-939b-4630-9737-476f4ff9073d-logs\") pod \"nova-api-0\" (UID: \"b758775a-939b-4630-9737-476f4ff9073d\") " pod="openstack/nova-api-0"
Jan 27 14:57:23 crc kubenswrapper[4698]: I0127 14:57:23.039693 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wkl2m\" (UniqueName: \"kubernetes.io/projected/b758775a-939b-4630-9737-476f4ff9073d-kube-api-access-wkl2m\") pod \"nova-api-0\" (UID: \"b758775a-939b-4630-9737-476f4ff9073d\") " pod="openstack/nova-api-0"
Jan 27 14:57:23 crc kubenswrapper[4698]: I0127 14:57:23.042752 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b758775a-939b-4630-9737-476f4ff9073d-public-tls-certs\") pod \"nova-api-0\" (UID: \"b758775a-939b-4630-9737-476f4ff9073d\") " pod="openstack/nova-api-0"
Jan 27 14:57:23 crc kubenswrapper[4698]: I0127 14:57:23.042856 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b758775a-939b-4630-9737-476f4ff9073d-internal-tls-certs\") pod \"nova-api-0\" (UID: \"b758775a-939b-4630-9737-476f4ff9073d\") " pod="openstack/nova-api-0"
Jan 27 14:57:23 crc kubenswrapper[4698]: I0127 14:57:23.042877 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b758775a-939b-4630-9737-476f4ff9073d-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"b758775a-939b-4630-9737-476f4ff9073d\") " pod="openstack/nova-api-0"
Jan 27 14:57:23 crc kubenswrapper[4698]: I0127 14:57:23.042942 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b758775a-939b-4630-9737-476f4ff9073d-config-data\") pod \"nova-api-0\" (UID: \"b758775a-939b-4630-9737-476f4ff9073d\") " pod="openstack/nova-api-0"
Jan 27 14:57:23 crc kubenswrapper[4698]: I0127 14:57:23.145751 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b758775a-939b-4630-9737-476f4ff9073d-public-tls-certs\") pod \"nova-api-0\" (UID: \"b758775a-939b-4630-9737-476f4ff9073d\") " pod="openstack/nova-api-0"
Jan 27 14:57:23 crc kubenswrapper[4698]: I0127 14:57:23.145862 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b758775a-939b-4630-9737-476f4ff9073d-internal-tls-certs\") pod \"nova-api-0\" (UID: \"b758775a-939b-4630-9737-476f4ff9073d\") " pod="openstack/nova-api-0"
Jan 27 14:57:23 crc kubenswrapper[4698]: I0127 14:57:23.145887 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b758775a-939b-4630-9737-476f4ff9073d-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"b758775a-939b-4630-9737-476f4ff9073d\") " pod="openstack/nova-api-0"
Jan 27 14:57:23 crc kubenswrapper[4698]: I0127 14:57:23.145929 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b758775a-939b-4630-9737-476f4ff9073d-config-data\") pod \"nova-api-0\" (UID: \"b758775a-939b-4630-9737-476f4ff9073d\") " pod="openstack/nova-api-0"
Jan 27 14:57:23 crc kubenswrapper[4698]: I0127 14:57:23.145970 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b758775a-939b-4630-9737-476f4ff9073d-logs\") pod \"nova-api-0\" (UID: \"b758775a-939b-4630-9737-476f4ff9073d\") " pod="openstack/nova-api-0"
Jan 27 14:57:23 crc kubenswrapper[4698]: I0127 14:57:23.146025 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wkl2m\" (UniqueName: \"kubernetes.io/projected/b758775a-939b-4630-9737-476f4ff9073d-kube-api-access-wkl2m\") pod \"nova-api-0\" (UID: \"b758775a-939b-4630-9737-476f4ff9073d\") " pod="openstack/nova-api-0"
Jan 27 14:57:23 crc kubenswrapper[4698]: I0127 14:57:23.148530 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b758775a-939b-4630-9737-476f4ff9073d-logs\") pod \"nova-api-0\" (UID: \"b758775a-939b-4630-9737-476f4ff9073d\") " pod="openstack/nova-api-0"
Jan 27 14:57:23 crc kubenswrapper[4698]: I0127 14:57:23.150852 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b758775a-939b-4630-9737-476f4ff9073d-internal-tls-certs\") pod \"nova-api-0\" (UID: \"b758775a-939b-4630-9737-476f4ff9073d\") " pod="openstack/nova-api-0"
Jan 27 14:57:23 crc kubenswrapper[4698]: I0127 14:57:23.151403 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b758775a-939b-4630-9737-476f4ff9073d-config-data\") pod \"nova-api-0\" (UID: \"b758775a-939b-4630-9737-476f4ff9073d\") " pod="openstack/nova-api-0"
Jan 27 14:57:23 crc kubenswrapper[4698]: I0127 14:57:23.153204 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b758775a-939b-4630-9737-476f4ff9073d-public-tls-certs\") pod \"nova-api-0\" (UID: \"b758775a-939b-4630-9737-476f4ff9073d\") " pod="openstack/nova-api-0"
Jan 27 14:57:23 crc kubenswrapper[4698]: I0127 14:57:23.153235 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b758775a-939b-4630-9737-476f4ff9073d-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"b758775a-939b-4630-9737-476f4ff9073d\") " pod="openstack/nova-api-0"
Jan 27 14:57:23 crc kubenswrapper[4698]: I0127 14:57:23.169155 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wkl2m\" (UniqueName: \"kubernetes.io/projected/b758775a-939b-4630-9737-476f4ff9073d-kube-api-access-wkl2m\") pod \"nova-api-0\" (UID: \"b758775a-939b-4630-9737-476f4ff9073d\") " pod="openstack/nova-api-0"
Jan 27 14:57:23 crc kubenswrapper[4698]: I0127 14:57:23.191413 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Jan 27 14:57:23 crc kubenswrapper[4698]: I0127 14:57:23.330484 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Jan 27 14:57:23 crc kubenswrapper[4698]: I0127 14:57:23.453057 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d9c40e5a-9573-450e-978a-1068bfe7f3a9-config-data\") pod \"d9c40e5a-9573-450e-978a-1068bfe7f3a9\" (UID: \"d9c40e5a-9573-450e-978a-1068bfe7f3a9\") "
Jan 27 14:57:23 crc kubenswrapper[4698]: I0127 14:57:23.453477 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b8j84\" (UniqueName: \"kubernetes.io/projected/d9c40e5a-9573-450e-978a-1068bfe7f3a9-kube-api-access-b8j84\") pod \"d9c40e5a-9573-450e-978a-1068bfe7f3a9\" (UID: \"d9c40e5a-9573-450e-978a-1068bfe7f3a9\") "
Jan 27 14:57:23 crc kubenswrapper[4698]: I0127 14:57:23.453629 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d9c40e5a-9573-450e-978a-1068bfe7f3a9-combined-ca-bundle\") pod \"d9c40e5a-9573-450e-978a-1068bfe7f3a9\" (UID: \"d9c40e5a-9573-450e-978a-1068bfe7f3a9\") "
Jan 27 14:57:23 crc kubenswrapper[4698]: I0127 14:57:23.453693 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d9c40e5a-9573-450e-978a-1068bfe7f3a9-logs\") pod \"d9c40e5a-9573-450e-978a-1068bfe7f3a9\" (UID: \"d9c40e5a-9573-450e-978a-1068bfe7f3a9\") "
Jan 27 14:57:23 crc kubenswrapper[4698]: I0127 14:57:23.453739 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/d9c40e5a-9573-450e-978a-1068bfe7f3a9-nova-metadata-tls-certs\") pod \"d9c40e5a-9573-450e-978a-1068bfe7f3a9\" (UID: \"d9c40e5a-9573-450e-978a-1068bfe7f3a9\") "
Jan 27 14:57:23 crc kubenswrapper[4698]: I0127 14:57:23.455325 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d9c40e5a-9573-450e-978a-1068bfe7f3a9-logs" (OuterVolumeSpecName: "logs") pod "d9c40e5a-9573-450e-978a-1068bfe7f3a9" (UID: "d9c40e5a-9573-450e-978a-1068bfe7f3a9"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 27 14:57:23 crc kubenswrapper[4698]: I0127 14:57:23.464037 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d9c40e5a-9573-450e-978a-1068bfe7f3a9-kube-api-access-b8j84" (OuterVolumeSpecName: "kube-api-access-b8j84") pod "d9c40e5a-9573-450e-978a-1068bfe7f3a9" (UID: "d9c40e5a-9573-450e-978a-1068bfe7f3a9"). InnerVolumeSpecName "kube-api-access-b8j84". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 14:57:23 crc kubenswrapper[4698]: I0127 14:57:23.521813 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d9c40e5a-9573-450e-978a-1068bfe7f3a9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d9c40e5a-9573-450e-978a-1068bfe7f3a9" (UID: "d9c40e5a-9573-450e-978a-1068bfe7f3a9"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 14:57:23 crc kubenswrapper[4698]: E0127 14:57:23.549751 4698 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="5fb599927590c1307a7e0641da29d53853002dc84bf9c0c8c8f8ef06898f0fb7" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"]
Jan 27 14:57:23 crc kubenswrapper[4698]: E0127 14:57:23.551168 4698 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="5fb599927590c1307a7e0641da29d53853002dc84bf9c0c8c8f8ef06898f0fb7" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"]
Jan 27 14:57:23 crc kubenswrapper[4698]: I0127 14:57:23.556475 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b8j84\" (UniqueName: \"kubernetes.io/projected/d9c40e5a-9573-450e-978a-1068bfe7f3a9-kube-api-access-b8j84\") on node \"crc\" DevicePath \"\""
Jan 27 14:57:23 crc kubenswrapper[4698]: I0127 14:57:23.556500 4698 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d9c40e5a-9573-450e-978a-1068bfe7f3a9-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 27 14:57:23 crc kubenswrapper[4698]: I0127 14:57:23.556509 4698 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d9c40e5a-9573-450e-978a-1068bfe7f3a9-logs\") on node \"crc\" DevicePath \"\""
Jan 27 14:57:23 crc kubenswrapper[4698]: E0127 14:57:23.575834 4698 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="5fb599927590c1307a7e0641da29d53853002dc84bf9c0c8c8f8ef06898f0fb7" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"]
Jan 27 14:57:23 crc kubenswrapper[4698]: E0127 14:57:23.575919 4698 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="20051e2d-581c-4d5b-8259-972c12bef429" containerName="nova-scheduler-scheduler"
Jan 27 14:57:23 crc kubenswrapper[4698]: I0127 14:57:23.586427 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d9c40e5a-9573-450e-978a-1068bfe7f3a9-config-data" (OuterVolumeSpecName: "config-data") pod "d9c40e5a-9573-450e-978a-1068bfe7f3a9" (UID: "d9c40e5a-9573-450e-978a-1068bfe7f3a9"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 14:57:23 crc kubenswrapper[4698]: I0127 14:57:23.588799 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d9c40e5a-9573-450e-978a-1068bfe7f3a9-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "d9c40e5a-9573-450e-978a-1068bfe7f3a9" (UID: "d9c40e5a-9573-450e-978a-1068bfe7f3a9"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 14:57:23 crc kubenswrapper[4698]: I0127 14:57:23.659016 4698 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/d9c40e5a-9573-450e-978a-1068bfe7f3a9-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\""
Jan 27 14:57:23 crc kubenswrapper[4698]: I0127 14:57:23.659114 4698 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d9c40e5a-9573-450e-978a-1068bfe7f3a9-config-data\") on node \"crc\" DevicePath \"\""
Jan 27 14:57:23 crc kubenswrapper[4698]: I0127 14:57:23.805597 4698 generic.go:334] "Generic (PLEG): container finished" podID="d9c40e5a-9573-450e-978a-1068bfe7f3a9" containerID="bed8c95ffce8b4379b9e0a947455d0862c7dfb7d954e33730e4555f1a6da9ea6" exitCode=0
Jan 27 14:57:23 crc kubenswrapper[4698]: I0127 14:57:23.805684 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"d9c40e5a-9573-450e-978a-1068bfe7f3a9","Type":"ContainerDied","Data":"bed8c95ffce8b4379b9e0a947455d0862c7dfb7d954e33730e4555f1a6da9ea6"}
Jan 27 14:57:23 crc kubenswrapper[4698]: I0127 14:57:23.805719 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"d9c40e5a-9573-450e-978a-1068bfe7f3a9","Type":"ContainerDied","Data":"3155700640b991b5ee3b5dfb97145f869ff02b4f1d95970ce1ed18569637fd8f"}
Jan 27 14:57:23 crc kubenswrapper[4698]: I0127 14:57:23.805763 4698 scope.go:117] "RemoveContainer" containerID="bed8c95ffce8b4379b9e0a947455d0862c7dfb7d954e33730e4555f1a6da9ea6"
Jan 27 14:57:23 crc kubenswrapper[4698]: I0127 14:57:23.805764 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Jan 27 14:57:23 crc kubenswrapper[4698]: I0127 14:57:23.812972 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"]
Jan 27 14:57:23 crc kubenswrapper[4698]: W0127 14:57:23.819478 4698 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb758775a_939b_4630_9737_476f4ff9073d.slice/crio-ef3a2c204fbd003024d1966defc52339f79ed9fe9e8354b5e68a740097290925 WatchSource:0}: Error finding container ef3a2c204fbd003024d1966defc52339f79ed9fe9e8354b5e68a740097290925: Status 404 returned error can't find the container with id ef3a2c204fbd003024d1966defc52339f79ed9fe9e8354b5e68a740097290925
Jan 27 14:57:23 crc kubenswrapper[4698]: I0127 14:57:23.844511 4698 scope.go:117] "RemoveContainer" containerID="268004e7209930768b219c8bb420cd1a99618d53389a961dd7beaaedc096a068"
Jan 27 14:57:23 crc kubenswrapper[4698]: I0127 14:57:23.851766 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"]
Jan 27 14:57:23 crc kubenswrapper[4698]: I0127 14:57:23.873388 4698 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"]
Jan 27 14:57:23 crc kubenswrapper[4698]: I0127 14:57:23.886893 4698 scope.go:117] "RemoveContainer" containerID="bed8c95ffce8b4379b9e0a947455d0862c7dfb7d954e33730e4555f1a6da9ea6"
Jan 27 14:57:23 crc kubenswrapper[4698]: E0127 14:57:23.887463 4698 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bed8c95ffce8b4379b9e0a947455d0862c7dfb7d954e33730e4555f1a6da9ea6\": container with ID starting with bed8c95ffce8b4379b9e0a947455d0862c7dfb7d954e33730e4555f1a6da9ea6 not found: ID does not exist" containerID="bed8c95ffce8b4379b9e0a947455d0862c7dfb7d954e33730e4555f1a6da9ea6"
Jan 27 14:57:23 crc kubenswrapper[4698]: I0127 14:57:23.887528 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bed8c95ffce8b4379b9e0a947455d0862c7dfb7d954e33730e4555f1a6da9ea6"} err="failed to get container status \"bed8c95ffce8b4379b9e0a947455d0862c7dfb7d954e33730e4555f1a6da9ea6\": rpc error: code = NotFound desc = could not find container \"bed8c95ffce8b4379b9e0a947455d0862c7dfb7d954e33730e4555f1a6da9ea6\": container with ID starting with bed8c95ffce8b4379b9e0a947455d0862c7dfb7d954e33730e4555f1a6da9ea6 not found: ID does not exist"
Jan 27 14:57:23 crc kubenswrapper[4698]: I0127 14:57:23.887557 4698 scope.go:117] "RemoveContainer" containerID="268004e7209930768b219c8bb420cd1a99618d53389a961dd7beaaedc096a068"
Jan 27 14:57:23 crc kubenswrapper[4698]: E0127 14:57:23.887850 4698 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"268004e7209930768b219c8bb420cd1a99618d53389a961dd7beaaedc096a068\": container with ID starting with 268004e7209930768b219c8bb420cd1a99618d53389a961dd7beaaedc096a068 not found: ID does not exist" containerID="268004e7209930768b219c8bb420cd1a99618d53389a961dd7beaaedc096a068"
Jan 27 14:57:23 crc kubenswrapper[4698]: I0127 14:57:23.887886 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"268004e7209930768b219c8bb420cd1a99618d53389a961dd7beaaedc096a068"} err="failed to get container status \"268004e7209930768b219c8bb420cd1a99618d53389a961dd7beaaedc096a068\": rpc error: code = NotFound desc = could not find container \"268004e7209930768b219c8bb420cd1a99618d53389a961dd7beaaedc096a068\": container with ID starting with 268004e7209930768b219c8bb420cd1a99618d53389a961dd7beaaedc096a068 not found: ID does not exist"
Jan 27 14:57:23 crc kubenswrapper[4698]: I0127 14:57:23.893699 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"]
Jan 27 14:57:23 crc kubenswrapper[4698]: E0127 14:57:23.894236 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d9c40e5a-9573-450e-978a-1068bfe7f3a9" containerName="nova-metadata-log"
Jan 27 14:57:23 crc kubenswrapper[4698]: I0127 14:57:23.894249 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="d9c40e5a-9573-450e-978a-1068bfe7f3a9" containerName="nova-metadata-log"
Jan 27 14:57:23 crc kubenswrapper[4698]: E0127 14:57:23.894280 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d9c40e5a-9573-450e-978a-1068bfe7f3a9" containerName="nova-metadata-metadata"
Jan 27 14:57:23 crc kubenswrapper[4698]: I0127 14:57:23.894286 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="d9c40e5a-9573-450e-978a-1068bfe7f3a9" containerName="nova-metadata-metadata"
Jan 27 14:57:23 crc kubenswrapper[4698]: I0127 14:57:23.894492 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="d9c40e5a-9573-450e-978a-1068bfe7f3a9" containerName="nova-metadata-log"
Jan 27 14:57:23 crc kubenswrapper[4698]: I0127 14:57:23.894507 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="d9c40e5a-9573-450e-978a-1068bfe7f3a9" containerName="nova-metadata-metadata"
Jan 27 14:57:23 crc kubenswrapper[4698]: I0127 14:57:23.895898 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Jan 27 14:57:23 crc kubenswrapper[4698]: I0127 14:57:23.901207 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data"
Jan 27 14:57:23 crc kubenswrapper[4698]: I0127 14:57:23.901548 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc"
Jan 27 14:57:23 crc kubenswrapper[4698]: I0127 14:57:23.910285 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"]
Jan 27 14:57:24 crc kubenswrapper[4698]: I0127 14:57:24.066682 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d4c24ac0-f402-431e-ba0f-677ca5b9f97a-logs\") pod \"nova-metadata-0\" (UID: \"d4c24ac0-f402-431e-ba0f-677ca5b9f97a\") " pod="openstack/nova-metadata-0"
Jan 27 14:57:24 crc kubenswrapper[4698]: I0127 14:57:24.067123 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cr5qk\" (UniqueName: \"kubernetes.io/projected/d4c24ac0-f402-431e-ba0f-677ca5b9f97a-kube-api-access-cr5qk\") pod \"nova-metadata-0\" (UID: \"d4c24ac0-f402-431e-ba0f-677ca5b9f97a\") " pod="openstack/nova-metadata-0"
Jan 27 14:57:24 crc kubenswrapper[4698]: I0127 14:57:24.067174 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d4c24ac0-f402-431e-ba0f-677ca5b9f97a-config-data\") pod \"nova-metadata-0\" (UID: \"d4c24ac0-f402-431e-ba0f-677ca5b9f97a\") " pod="openstack/nova-metadata-0"
Jan 27 14:57:24 crc kubenswrapper[4698]: I0127 14:57:24.067330 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d4c24ac0-f402-431e-ba0f-677ca5b9f97a-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"d4c24ac0-f402-431e-ba0f-677ca5b9f97a\") " pod="openstack/nova-metadata-0"
Jan 27 14:57:24 crc kubenswrapper[4698]: I0127 14:57:24.067384 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/d4c24ac0-f402-431e-ba0f-677ca5b9f97a-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"d4c24ac0-f402-431e-ba0f-677ca5b9f97a\") " pod="openstack/nova-metadata-0"
Jan 27 14:57:24 crc kubenswrapper[4698]: I0127 14:57:24.169712 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d4c24ac0-f402-431e-ba0f-677ca5b9f97a-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"d4c24ac0-f402-431e-ba0f-677ca5b9f97a\") " pod="openstack/nova-metadata-0"
Jan 27 14:57:24 crc kubenswrapper[4698]: I0127 14:57:24.169767 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/d4c24ac0-f402-431e-ba0f-677ca5b9f97a-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"d4c24ac0-f402-431e-ba0f-677ca5b9f97a\") " pod="openstack/nova-metadata-0"
Jan 27 14:57:24 crc kubenswrapper[4698]: I0127 14:57:24.169955 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d4c24ac0-f402-431e-ba0f-677ca5b9f97a-logs\") pod \"nova-metadata-0\" (UID: \"d4c24ac0-f402-431e-ba0f-677ca5b9f97a\") " pod="openstack/nova-metadata-0"
Jan 27 14:57:24 crc kubenswrapper[4698]: I0127 14:57:24.169980 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cr5qk\" (UniqueName: \"kubernetes.io/projected/d4c24ac0-f402-431e-ba0f-677ca5b9f97a-kube-api-access-cr5qk\") pod \"nova-metadata-0\" (UID: \"d4c24ac0-f402-431e-ba0f-677ca5b9f97a\") " pod="openstack/nova-metadata-0"
Jan 27 14:57:24 crc kubenswrapper[4698]: I0127 14:57:24.170019 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d4c24ac0-f402-431e-ba0f-677ca5b9f97a-config-data\") pod \"nova-metadata-0\" (UID: \"d4c24ac0-f402-431e-ba0f-677ca5b9f97a\") " pod="openstack/nova-metadata-0"
Jan 27 14:57:24 crc kubenswrapper[4698]: I0127 14:57:24.170414 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d4c24ac0-f402-431e-ba0f-677ca5b9f97a-logs\") pod \"nova-metadata-0\" (UID: \"d4c24ac0-f402-431e-ba0f-677ca5b9f97a\") " pod="openstack/nova-metadata-0"
Jan 27 14:57:24 crc kubenswrapper[4698]: I0127 14:57:24.174748 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/d4c24ac0-f402-431e-ba0f-677ca5b9f97a-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"d4c24ac0-f402-431e-ba0f-677ca5b9f97a\") " pod="openstack/nova-metadata-0"
Jan 27 14:57:24 crc kubenswrapper[4698]: I0127 14:57:24.175200 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d4c24ac0-f402-431e-ba0f-677ca5b9f97a-config-data\") pod \"nova-metadata-0\" (UID: \"d4c24ac0-f402-431e-ba0f-677ca5b9f97a\") " pod="openstack/nova-metadata-0"
Jan 27 14:57:24 crc kubenswrapper[4698]: I0127 14:57:24.176532 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d4c24ac0-f402-431e-ba0f-677ca5b9f97a-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"d4c24ac0-f402-431e-ba0f-677ca5b9f97a\") " pod="openstack/nova-metadata-0"
Jan 27 14:57:24 crc kubenswrapper[4698]: I0127 14:57:24.187296 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cr5qk\" (UniqueName: \"kubernetes.io/projected/d4c24ac0-f402-431e-ba0f-677ca5b9f97a-kube-api-access-cr5qk\") pod \"nova-metadata-0\" (UID: \"d4c24ac0-f402-431e-ba0f-677ca5b9f97a\") " pod="openstack/nova-metadata-0"
Jan 27 14:57:24 crc kubenswrapper[4698]: I0127 14:57:24.230889 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Jan 27 14:57:24 crc kubenswrapper[4698]: W0127 14:57:24.731052 4698 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd4c24ac0_f402_431e_ba0f_677ca5b9f97a.slice/crio-21887ad81b41588bdc9aba806ba5191a31364d75216852242ec261f98f602969 WatchSource:0}: Error finding container 21887ad81b41588bdc9aba806ba5191a31364d75216852242ec261f98f602969: Status 404 returned error can't find the container with id 21887ad81b41588bdc9aba806ba5191a31364d75216852242ec261f98f602969
Jan 27 14:57:24 crc kubenswrapper[4698]: I0127 14:57:24.731916 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"]
Jan 27 14:57:24 crc kubenswrapper[4698]: I0127 14:57:24.818990 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"d4c24ac0-f402-431e-ba0f-677ca5b9f97a","Type":"ContainerStarted","Data":"21887ad81b41588bdc9aba806ba5191a31364d75216852242ec261f98f602969"}
Jan 27 14:57:24 crc kubenswrapper[4698]: I0127 14:57:24.824878 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"b758775a-939b-4630-9737-476f4ff9073d","Type":"ContainerStarted","Data":"ed7f1e2c1176568537dee6ce0f0f2a4649720c620160512d4428c548605143c7"}
Jan 27 14:57:24 crc kubenswrapper[4698]: I0127 14:57:24.824955 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"b758775a-939b-4630-9737-476f4ff9073d","Type":"ContainerStarted","Data":"471e85862ba20e5ed24785cc6b8184c0155e7bdf3e5881456d015b764da0f15e"}
Jan 27 14:57:24 crc kubenswrapper[4698]: I0127 14:57:24.824970 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"b758775a-939b-4630-9737-476f4ff9073d","Type":"ContainerStarted","Data":"ef3a2c204fbd003024d1966defc52339f79ed9fe9e8354b5e68a740097290925"}
Jan 27 14:57:24 crc kubenswrapper[4698]: I0127 14:57:24.858825 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.858802142 podStartE2EDuration="2.858802142s" podCreationTimestamp="2026-01-27 14:57:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:57:24.844341781 +0000 UTC m=+1700.521119246" watchObservedRunningTime="2026-01-27 14:57:24.858802142 +0000 UTC m=+1700.535579607"
Jan 27 14:57:25 crc kubenswrapper[4698]: I0127 14:57:25.003233 4698 scope.go:117] "RemoveContainer" containerID="b5e738b87b0ce5279fe16e0f062d8436b14ec328939703048a41953bc1584a16"
Jan 27 14:57:25 crc kubenswrapper[4698]: E0127 14:57:25.004605 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ndrd6_openshift-machine-config-operator(3e403fc5-7005-474c-8c75-b7906b481677)\"" pod="openshift-machine-config-operator/machine-config-daemon-ndrd6" podUID="3e403fc5-7005-474c-8c75-b7906b481677"
Jan 27 14:57:25 crc kubenswrapper[4698]: I0127 14:57:25.017711 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d9c40e5a-9573-450e-978a-1068bfe7f3a9" path="/var/lib/kubelet/pods/d9c40e5a-9573-450e-978a-1068bfe7f3a9/volumes"
Jan 27 14:57:25 crc kubenswrapper[4698]: I0127 14:57:25.880271 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"d4c24ac0-f402-431e-ba0f-677ca5b9f97a","Type":"ContainerStarted","Data":"b83d43ae3ba5dab12f3504cef88301182479777e6237a9212e66aa94a76be52f"}
Jan 27 14:57:25 crc kubenswrapper[4698]: I0127 14:57:25.881474 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"d4c24ac0-f402-431e-ba0f-677ca5b9f97a","Type":"ContainerStarted","Data":"3a6b92fd4d9eab8cb2ce58d61f5046efa4efb8ea6b76b64f49bf793f9f006ffc"}
Jan 27 14:57:25 crc kubenswrapper[4698]: I0127 14:57:25.908715 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.908689411 podStartE2EDuration="2.908689411s" podCreationTimestamp="2026-01-27 14:57:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:57:25.902449286 +0000 UTC m=+1701.579226761" watchObservedRunningTime="2026-01-27 14:57:25.908689411 +0000 UTC m=+1701.585466886"
Jan 27 14:57:26 crc kubenswrapper[4698]: I0127 14:57:26.891162 4698 generic.go:334] "Generic (PLEG): container finished" podID="20051e2d-581c-4d5b-8259-972c12bef429" containerID="5fb599927590c1307a7e0641da29d53853002dc84bf9c0c8c8f8ef06898f0fb7" exitCode=0
Jan 27 14:57:26 crc kubenswrapper[4698]: I0127 14:57:26.891254 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"20051e2d-581c-4d5b-8259-972c12bef429","Type":"ContainerDied","Data":"5fb599927590c1307a7e0641da29d53853002dc84bf9c0c8c8f8ef06898f0fb7"}
Jan 27 14:57:27 crc kubenswrapper[4698]: I0127 14:57:27.516315 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Jan 27 14:57:27 crc kubenswrapper[4698]: I0127 14:57:27.653270 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/20051e2d-581c-4d5b-8259-972c12bef429-combined-ca-bundle\") pod \"20051e2d-581c-4d5b-8259-972c12bef429\" (UID: \"20051e2d-581c-4d5b-8259-972c12bef429\") "
Jan 27 14:57:27 crc kubenswrapper[4698]: I0127 14:57:27.653311 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/20051e2d-581c-4d5b-8259-972c12bef429-config-data\") pod \"20051e2d-581c-4d5b-8259-972c12bef429\" (UID: \"20051e2d-581c-4d5b-8259-972c12bef429\") "
Jan 27 14:57:27 crc kubenswrapper[4698]: I0127 14:57:27.653495 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qlbhx\" (UniqueName: \"kubernetes.io/projected/20051e2d-581c-4d5b-8259-972c12bef429-kube-api-access-qlbhx\") pod \"20051e2d-581c-4d5b-8259-972c12bef429\" (UID: \"20051e2d-581c-4d5b-8259-972c12bef429\") "
Jan 27 14:57:27 crc kubenswrapper[4698]: I0127 14:57:27.661006 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20051e2d-581c-4d5b-8259-972c12bef429-kube-api-access-qlbhx" (OuterVolumeSpecName: "kube-api-access-qlbhx") pod "20051e2d-581c-4d5b-8259-972c12bef429" (UID: "20051e2d-581c-4d5b-8259-972c12bef429"). InnerVolumeSpecName "kube-api-access-qlbhx". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 14:57:27 crc kubenswrapper[4698]: I0127 14:57:27.686884 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20051e2d-581c-4d5b-8259-972c12bef429-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "20051e2d-581c-4d5b-8259-972c12bef429" (UID: "20051e2d-581c-4d5b-8259-972c12bef429"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 14:57:27 crc kubenswrapper[4698]: I0127 14:57:27.694432 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20051e2d-581c-4d5b-8259-972c12bef429-config-data" (OuterVolumeSpecName: "config-data") pod "20051e2d-581c-4d5b-8259-972c12bef429" (UID: "20051e2d-581c-4d5b-8259-972c12bef429"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 14:57:27 crc kubenswrapper[4698]: I0127 14:57:27.756154 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qlbhx\" (UniqueName: \"kubernetes.io/projected/20051e2d-581c-4d5b-8259-972c12bef429-kube-api-access-qlbhx\") on node \"crc\" DevicePath \"\""
Jan 27 14:57:27 crc kubenswrapper[4698]: I0127 14:57:27.756197 4698 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/20051e2d-581c-4d5b-8259-972c12bef429-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 27 14:57:27 crc kubenswrapper[4698]: I0127 14:57:27.756211 4698 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/20051e2d-581c-4d5b-8259-972c12bef429-config-data\") on node \"crc\" DevicePath \"\""
Jan 27 14:57:27 crc kubenswrapper[4698]: I0127 14:57:27.909612 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"20051e2d-581c-4d5b-8259-972c12bef429","Type":"ContainerDied","Data":"730d07e335ce9681571e40b73bdc31eb6e2b0f3c5393200e0c6b99644c23512f"}
Jan 27 14:57:27 crc kubenswrapper[4698]: I0127 14:57:27.909734 4698 scope.go:117] "RemoveContainer" containerID="5fb599927590c1307a7e0641da29d53853002dc84bf9c0c8c8f8ef06898f0fb7"
Jan 27 14:57:27 crc kubenswrapper[4698]: I0127 14:57:27.910900 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Jan 27 14:57:27 crc kubenswrapper[4698]: I0127 14:57:27.952771 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"]
Jan 27 14:57:27 crc kubenswrapper[4698]: I0127 14:57:27.970384 4698 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"]
Jan 27 14:57:27 crc kubenswrapper[4698]: I0127 14:57:27.984894 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"]
Jan 27 14:57:27 crc kubenswrapper[4698]: E0127 14:57:27.985418 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="20051e2d-581c-4d5b-8259-972c12bef429" containerName="nova-scheduler-scheduler"
Jan 27 14:57:27 crc kubenswrapper[4698]: I0127 14:57:27.985440 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="20051e2d-581c-4d5b-8259-972c12bef429" containerName="nova-scheduler-scheduler"
Jan 27 14:57:27 crc kubenswrapper[4698]: I0127 14:57:27.985693 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="20051e2d-581c-4d5b-8259-972c12bef429" containerName="nova-scheduler-scheduler"
Jan 27 14:57:27 crc kubenswrapper[4698]: I0127 14:57:27.986419 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Jan 27 14:57:27 crc kubenswrapper[4698]: I0127 14:57:27.989124 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data"
Jan 27 14:57:28 crc kubenswrapper[4698]: I0127 14:57:28.006151 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"]
Jan 27 14:57:28 crc kubenswrapper[4698]: I0127 14:57:28.066257 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e1be438a-a626-403f-ac66-55b2a78f44fe-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"e1be438a-a626-403f-ac66-55b2a78f44fe\") " pod="openstack/nova-scheduler-0"
Jan 27 14:57:28 crc kubenswrapper[4698]: I0127 14:57:28.066376 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e1be438a-a626-403f-ac66-55b2a78f44fe-config-data\") pod \"nova-scheduler-0\" (UID: \"e1be438a-a626-403f-ac66-55b2a78f44fe\") " pod="openstack/nova-scheduler-0"
Jan 27 14:57:28 crc kubenswrapper[4698]: I0127 14:57:28.066441 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qml6x\" (UniqueName: \"kubernetes.io/projected/e1be438a-a626-403f-ac66-55b2a78f44fe-kube-api-access-qml6x\") pod \"nova-scheduler-0\" (UID: \"e1be438a-a626-403f-ac66-55b2a78f44fe\") " pod="openstack/nova-scheduler-0"
Jan 27 14:57:28 crc kubenswrapper[4698]: I0127 14:57:28.169134 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e1be438a-a626-403f-ac66-55b2a78f44fe-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"e1be438a-a626-403f-ac66-55b2a78f44fe\") " pod="openstack/nova-scheduler-0"
Jan 27 14:57:28 crc kubenswrapper[4698]: I0127 14:57:28.169294 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e1be438a-a626-403f-ac66-55b2a78f44fe-config-data\") pod \"nova-scheduler-0\" (UID: \"e1be438a-a626-403f-ac66-55b2a78f44fe\") " pod="openstack/nova-scheduler-0"
Jan 27 14:57:28 crc kubenswrapper[4698]: I0127 14:57:28.169377 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qml6x\" (UniqueName: \"kubernetes.io/projected/e1be438a-a626-403f-ac66-55b2a78f44fe-kube-api-access-qml6x\") pod \"nova-scheduler-0\" (UID: \"e1be438a-a626-403f-ac66-55b2a78f44fe\") " pod="openstack/nova-scheduler-0"
Jan 27 14:57:28 crc kubenswrapper[4698]: I0127 14:57:28.182436 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e1be438a-a626-403f-ac66-55b2a78f44fe-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"e1be438a-a626-403f-ac66-55b2a78f44fe\") " pod="openstack/nova-scheduler-0"
Jan 27 14:57:28 crc kubenswrapper[4698]: I0127 14:57:28.183334 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e1be438a-a626-403f-ac66-55b2a78f44fe-config-data\") pod \"nova-scheduler-0\" (UID: \"e1be438a-a626-403f-ac66-55b2a78f44fe\") " pod="openstack/nova-scheduler-0"
Jan 27 14:57:28 crc kubenswrapper[4698]: I0127 14:57:28.199959 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qml6x\" (UniqueName: \"kubernetes.io/projected/e1be438a-a626-403f-ac66-55b2a78f44fe-kube-api-access-qml6x\") pod \"nova-scheduler-0\" (UID: \"e1be438a-a626-403f-ac66-55b2a78f44fe\") " pod="openstack/nova-scheduler-0"
Jan 27 14:57:28 crc kubenswrapper[4698]: I0127 14:57:28.310239 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Jan 27 14:57:28 crc kubenswrapper[4698]: I0127 14:57:28.780325 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"]
Jan 27 14:57:28 crc kubenswrapper[4698]: W0127 14:57:28.784432 4698 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode1be438a_a626_403f_ac66_55b2a78f44fe.slice/crio-4aaeb8f805d9ed18a5ff216ca9e7f70264fd16a0a6a4f406671a06c52b481245 WatchSource:0}: Error finding container 4aaeb8f805d9ed18a5ff216ca9e7f70264fd16a0a6a4f406671a06c52b481245: Status 404 returned error can't find the container with id 4aaeb8f805d9ed18a5ff216ca9e7f70264fd16a0a6a4f406671a06c52b481245
Jan 27 14:57:28 crc kubenswrapper[4698]: I0127 14:57:28.925382 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"e1be438a-a626-403f-ac66-55b2a78f44fe","Type":"ContainerStarted","Data":"4aaeb8f805d9ed18a5ff216ca9e7f70264fd16a0a6a4f406671a06c52b481245"}
Jan 27 14:57:29 crc kubenswrapper[4698]: I0127 14:57:29.006278 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20051e2d-581c-4d5b-8259-972c12bef429" path="/var/lib/kubelet/pods/20051e2d-581c-4d5b-8259-972c12bef429/volumes"
Jan 27 14:57:29 crc kubenswrapper[4698]: I0127 14:57:29.231025 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0"
Jan 27 14:57:29 crc kubenswrapper[4698]: I0127 14:57:29.231891 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0"
Jan 27 14:57:29 crc kubenswrapper[4698]: I0127 14:57:29.937625 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"e1be438a-a626-403f-ac66-55b2a78f44fe","Type":"ContainerStarted","Data":"48d1998136b584344118852551ab73e2a6ff72bde37912757a726d0ffd8081bf"}
Jan 27 14:57:29 crc kubenswrapper[4698]: I0127 14:57:29.956093 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.956073988 podStartE2EDuration="2.956073988s" podCreationTimestamp="2026-01-27 14:57:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:57:29.953326586 +0000 UTC m=+1705.630104061" watchObservedRunningTime="2026-01-27 14:57:29.956073988 +0000 UTC m=+1705.632851453"
Jan 27 14:57:33 crc kubenswrapper[4698]: I0127 14:57:33.193058 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0"
Jan 27 14:57:33 crc kubenswrapper[4698]: I0127 14:57:33.194174 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0"
Jan 27 14:57:33 crc kubenswrapper[4698]: I0127 14:57:33.311614 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0"
Jan 27 14:57:34 crc kubenswrapper[4698]: I0127 14:57:34.206917 4698 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="b758775a-939b-4630-9737-476f4ff9073d" containerName="nova-api-log" probeResult="failure" output="Get
\"https://10.217.0.229:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 27 14:57:34 crc kubenswrapper[4698]: I0127 14:57:34.206928 4698 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="b758775a-939b-4630-9737-476f4ff9073d" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.229:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 27 14:57:34 crc kubenswrapper[4698]: I0127 14:57:34.231473 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 27 14:57:34 crc kubenswrapper[4698]: I0127 14:57:34.231516 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 27 14:57:35 crc kubenswrapper[4698]: I0127 14:57:35.257851 4698 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="d4c24ac0-f402-431e-ba0f-677ca5b9f97a" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.230:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 27 14:57:35 crc kubenswrapper[4698]: I0127 14:57:35.257851 4698 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="d4c24ac0-f402-431e-ba0f-677ca5b9f97a" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.230:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 27 14:57:38 crc kubenswrapper[4698]: I0127 14:57:38.310913 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Jan 27 14:57:38 crc kubenswrapper[4698]: I0127 14:57:38.344804 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Jan 27 14:57:38 crc kubenswrapper[4698]: I0127 14:57:38.993190 4698 scope.go:117] "RemoveContainer" containerID="b5e738b87b0ce5279fe16e0f062d8436b14ec328939703048a41953bc1584a16" Jan 27 14:57:38 crc kubenswrapper[4698]: E0127 14:57:38.993512 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ndrd6_openshift-machine-config-operator(3e403fc5-7005-474c-8c75-b7906b481677)\"" pod="openshift-machine-config-operator/machine-config-daemon-ndrd6" podUID="3e403fc5-7005-474c-8c75-b7906b481677" Jan 27 14:57:39 crc kubenswrapper[4698]: I0127 14:57:39.070001 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Jan 27 14:57:43 crc kubenswrapper[4698]: I0127 14:57:43.203206 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 27 14:57:43 crc kubenswrapper[4698]: I0127 14:57:43.204440 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 27 14:57:43 crc kubenswrapper[4698]: I0127 14:57:43.207523 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 27 14:57:43 crc kubenswrapper[4698]: I0127 14:57:43.215026 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 27 14:57:44 crc kubenswrapper[4698]: I0127 14:57:44.105571 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack/nova-api-0" Jan 27 14:57:44 crc kubenswrapper[4698]: I0127 14:57:44.115313 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 27 14:57:44 crc kubenswrapper[4698]: I0127 14:57:44.242238 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 27 14:57:44 crc kubenswrapper[4698]: I0127 14:57:44.243084 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 27 14:57:44 crc kubenswrapper[4698]: I0127 14:57:44.250230 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 27 14:57:45 crc kubenswrapper[4698]: I0127 14:57:45.030098 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Jan 27 14:57:45 crc kubenswrapper[4698]: I0127 14:57:45.131804 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 27 14:57:51 crc kubenswrapper[4698]: I0127 14:57:51.992699 4698 scope.go:117] "RemoveContainer" containerID="b5e738b87b0ce5279fe16e0f062d8436b14ec328939703048a41953bc1584a16" Jan 27 14:57:51 crc kubenswrapper[4698]: E0127 14:57:51.993683 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ndrd6_openshift-machine-config-operator(3e403fc5-7005-474c-8c75-b7906b481677)\"" pod="openshift-machine-config-operator/machine-config-daemon-ndrd6" podUID="3e403fc5-7005-474c-8c75-b7906b481677" Jan 27 14:58:05 crc kubenswrapper[4698]: I0127 14:58:05.001050 4698 scope.go:117] "RemoveContainer" containerID="b5e738b87b0ce5279fe16e0f062d8436b14ec328939703048a41953bc1584a16" Jan 27 14:58:05 crc kubenswrapper[4698]: E0127 14:58:05.001996 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ndrd6_openshift-machine-config-operator(3e403fc5-7005-474c-8c75-b7906b481677)\"" pod="openshift-machine-config-operator/machine-config-daemon-ndrd6" podUID="3e403fc5-7005-474c-8c75-b7906b481677" Jan 27 14:58:17 crc kubenswrapper[4698]: I0127 14:58:17.991960 4698 scope.go:117] "RemoveContainer" containerID="b5e738b87b0ce5279fe16e0f062d8436b14ec328939703048a41953bc1584a16" Jan 27 14:58:17 crc kubenswrapper[4698]: E0127 14:58:17.992732 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ndrd6_openshift-machine-config-operator(3e403fc5-7005-474c-8c75-b7906b481677)\"" pod="openshift-machine-config-operator/machine-config-daemon-ndrd6" podUID="3e403fc5-7005-474c-8c75-b7906b481677" Jan 27 14:58:32 crc kubenswrapper[4698]: I0127 14:58:32.993421 4698 scope.go:117] "RemoveContainer" containerID="b5e738b87b0ce5279fe16e0f062d8436b14ec328939703048a41953bc1584a16" Jan 27 14:58:32 crc kubenswrapper[4698]: E0127 14:58:32.994837 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-ndrd6_openshift-machine-config-operator(3e403fc5-7005-474c-8c75-b7906b481677)\"" pod="openshift-machine-config-operator/machine-config-daemon-ndrd6" podUID="3e403fc5-7005-474c-8c75-b7906b481677" Jan 27 14:58:45 crc kubenswrapper[4698]: I0127 14:58:45.010954 4698 scope.go:117] "RemoveContainer" containerID="b5e738b87b0ce5279fe16e0f062d8436b14ec328939703048a41953bc1584a16" Jan 27 14:58:45 crc kubenswrapper[4698]: E0127 14:58:45.012322 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ndrd6_openshift-machine-config-operator(3e403fc5-7005-474c-8c75-b7906b481677)\"" pod="openshift-machine-config-operator/machine-config-daemon-ndrd6" podUID="3e403fc5-7005-474c-8c75-b7906b481677" Jan 27 14:58:59 crc kubenswrapper[4698]: I0127 14:58:59.993300 4698 scope.go:117] "RemoveContainer" containerID="b5e738b87b0ce5279fe16e0f062d8436b14ec328939703048a41953bc1584a16" Jan 27 14:58:59 crc kubenswrapper[4698]: E0127 14:58:59.994118 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ndrd6_openshift-machine-config-operator(3e403fc5-7005-474c-8c75-b7906b481677)\"" pod="openshift-machine-config-operator/machine-config-daemon-ndrd6" podUID="3e403fc5-7005-474c-8c75-b7906b481677" Jan 27 14:59:13 crc kubenswrapper[4698]: I0127 14:59:13.992137 4698 scope.go:117] "RemoveContainer" containerID="b5e738b87b0ce5279fe16e0f062d8436b14ec328939703048a41953bc1584a16" Jan 27 14:59:13 crc kubenswrapper[4698]: E0127 14:59:13.993144 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ndrd6_openshift-machine-config-operator(3e403fc5-7005-474c-8c75-b7906b481677)\"" pod="openshift-machine-config-operator/machine-config-daemon-ndrd6" podUID="3e403fc5-7005-474c-8c75-b7906b481677" Jan 27 14:59:14 crc kubenswrapper[4698]: I0127 14:59:14.546880 4698 scope.go:117] "RemoveContainer" containerID="89350b3dedb5252f10ece7b4309f2e1e45131485a2dac9829d72c744e659f269" Jan 27 14:59:14 crc kubenswrapper[4698]: I0127 14:59:14.574415 4698 scope.go:117] "RemoveContainer" containerID="4cbd31462283703c3ca2ab8011b320af50638594665ca991f17b3cc1e3f582b5" Jan 27 14:59:25 crc kubenswrapper[4698]: I0127 14:59:25.992734 4698 scope.go:117] "RemoveContainer" containerID="b5e738b87b0ce5279fe16e0f062d8436b14ec328939703048a41953bc1584a16" Jan 27 14:59:25 crc kubenswrapper[4698]: E0127 14:59:25.993654 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ndrd6_openshift-machine-config-operator(3e403fc5-7005-474c-8c75-b7906b481677)\"" pod="openshift-machine-config-operator/machine-config-daemon-ndrd6" podUID="3e403fc5-7005-474c-8c75-b7906b481677" Jan 27 14:59:34 crc kubenswrapper[4698]: I0127 14:59:34.316430 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-xqbzp"] Jan 27 14:59:34 crc kubenswrapper[4698]: I0127 14:59:34.319542 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-xqbzp" Jan 27 14:59:34 crc kubenswrapper[4698]: I0127 14:59:34.328130 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-xqbzp"] Jan 27 14:59:34 crc kubenswrapper[4698]: I0127 14:59:34.428699 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/368cc7a8-ca98-4aaa-a965-d41e5f28d961-catalog-content\") pod \"redhat-operators-xqbzp\" (UID: \"368cc7a8-ca98-4aaa-a965-d41e5f28d961\") " pod="openshift-marketplace/redhat-operators-xqbzp" Jan 27 14:59:34 crc kubenswrapper[4698]: I0127 14:59:34.428809 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b57qw\" (UniqueName: \"kubernetes.io/projected/368cc7a8-ca98-4aaa-a965-d41e5f28d961-kube-api-access-b57qw\") pod \"redhat-operators-xqbzp\" (UID: \"368cc7a8-ca98-4aaa-a965-d41e5f28d961\") " pod="openshift-marketplace/redhat-operators-xqbzp" Jan 27 14:59:34 crc kubenswrapper[4698]: I0127 14:59:34.428945 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/368cc7a8-ca98-4aaa-a965-d41e5f28d961-utilities\") pod \"redhat-operators-xqbzp\" (UID: \"368cc7a8-ca98-4aaa-a965-d41e5f28d961\") " pod="openshift-marketplace/redhat-operators-xqbzp" Jan 27 14:59:34 crc kubenswrapper[4698]: I0127 14:59:34.530807 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/368cc7a8-ca98-4aaa-a965-d41e5f28d961-utilities\") pod \"redhat-operators-xqbzp\" (UID: \"368cc7a8-ca98-4aaa-a965-d41e5f28d961\") " pod="openshift-marketplace/redhat-operators-xqbzp" Jan 27 14:59:34 crc kubenswrapper[4698]: I0127 14:59:34.530993 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/368cc7a8-ca98-4aaa-a965-d41e5f28d961-catalog-content\") pod \"redhat-operators-xqbzp\" (UID: \"368cc7a8-ca98-4aaa-a965-d41e5f28d961\") " pod="openshift-marketplace/redhat-operators-xqbzp" Jan 27 14:59:34 crc kubenswrapper[4698]: I0127 14:59:34.531068 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b57qw\" (UniqueName: \"kubernetes.io/projected/368cc7a8-ca98-4aaa-a965-d41e5f28d961-kube-api-access-b57qw\") pod \"redhat-operators-xqbzp\" (UID: \"368cc7a8-ca98-4aaa-a965-d41e5f28d961\") " pod="openshift-marketplace/redhat-operators-xqbzp" Jan 27 14:59:34 crc kubenswrapper[4698]: I0127 14:59:34.531423 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/368cc7a8-ca98-4aaa-a965-d41e5f28d961-utilities\") pod \"redhat-operators-xqbzp\" (UID: \"368cc7a8-ca98-4aaa-a965-d41e5f28d961\") " pod="openshift-marketplace/redhat-operators-xqbzp" Jan 27 14:59:34 crc kubenswrapper[4698]: I0127 14:59:34.531500 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/368cc7a8-ca98-4aaa-a965-d41e5f28d961-catalog-content\") pod \"redhat-operators-xqbzp\" (UID: \"368cc7a8-ca98-4aaa-a965-d41e5f28d961\") " pod="openshift-marketplace/redhat-operators-xqbzp" Jan 27 14:59:34 crc kubenswrapper[4698]: I0127 14:59:34.552565 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-b57qw\" (UniqueName: \"kubernetes.io/projected/368cc7a8-ca98-4aaa-a965-d41e5f28d961-kube-api-access-b57qw\") pod \"redhat-operators-xqbzp\" (UID: \"368cc7a8-ca98-4aaa-a965-d41e5f28d961\") " pod="openshift-marketplace/redhat-operators-xqbzp" Jan 27 14:59:34 crc kubenswrapper[4698]: I0127 14:59:34.699448 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-xqbzp" Jan 27 14:59:35 crc kubenswrapper[4698]: I0127 14:59:35.151105 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-xqbzp"] Jan 27 14:59:35 crc kubenswrapper[4698]: I0127 14:59:35.187662 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xqbzp" event={"ID":"368cc7a8-ca98-4aaa-a965-d41e5f28d961","Type":"ContainerStarted","Data":"e8fb072b85c3a89dc2e81adbea42184e0ebed363360cadb9d8860b351e2296b2"} Jan 27 14:59:36 crc kubenswrapper[4698]: I0127 14:59:36.069518 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/watcher-db-create-qk9bt"] Jan 27 14:59:36 crc kubenswrapper[4698]: I0127 14:59:36.126738 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/watcher-c51b-account-create-update-ddjfr"] Jan 27 14:59:36 crc kubenswrapper[4698]: I0127 14:59:36.138583 4698 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/watcher-c51b-account-create-update-ddjfr"] Jan 27 14:59:36 crc kubenswrapper[4698]: I0127 14:59:36.147727 4698 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/watcher-db-create-qk9bt"] Jan 27 14:59:36 crc kubenswrapper[4698]: I0127 14:59:36.199630 4698 generic.go:334] "Generic (PLEG): container finished" podID="368cc7a8-ca98-4aaa-a965-d41e5f28d961" containerID="fa0ed6a48395d384f796cad93102d50b3509619bb0328dc7f9991b1a5be1f4ad" exitCode=0 Jan 27 14:59:36 crc kubenswrapper[4698]: I0127 14:59:36.199748 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xqbzp" event={"ID":"368cc7a8-ca98-4aaa-a965-d41e5f28d961","Type":"ContainerDied","Data":"fa0ed6a48395d384f796cad93102d50b3509619bb0328dc7f9991b1a5be1f4ad"} Jan 27 14:59:36 crc kubenswrapper[4698]: I0127 14:59:36.205002 4698 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 27 14:59:37 crc kubenswrapper[4698]: I0127 14:59:37.003885 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2673531b-aee1-4a69-b3bb-255c3e331724" path="/var/lib/kubelet/pods/2673531b-aee1-4a69-b3bb-255c3e331724/volumes" Jan 27 14:59:37 crc kubenswrapper[4698]: I0127 14:59:37.004622 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4433013-b1ca-47c7-9b70-155cb05605a3" path="/var/lib/kubelet/pods/f4433013-b1ca-47c7-9b70-155cb05605a3/volumes" Jan 27 14:59:37 crc kubenswrapper[4698]: I0127 14:59:37.993132 4698 scope.go:117] "RemoveContainer" containerID="b5e738b87b0ce5279fe16e0f062d8436b14ec328939703048a41953bc1584a16" Jan 27 14:59:37 crc kubenswrapper[4698]: E0127 14:59:37.993898 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ndrd6_openshift-machine-config-operator(3e403fc5-7005-474c-8c75-b7906b481677)\"" pod="openshift-machine-config-operator/machine-config-daemon-ndrd6" podUID="3e403fc5-7005-474c-8c75-b7906b481677" Jan 27 14:59:38 crc 
kubenswrapper[4698]: I0127 14:59:38.220087 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xqbzp" event={"ID":"368cc7a8-ca98-4aaa-a965-d41e5f28d961","Type":"ContainerStarted","Data":"21b28e148f53a1b9da31d296a94136f821439f5b9aa8ac56acc45318e3321da9"} Jan 27 14:59:38 crc kubenswrapper[4698]: E0127 14:59:38.940047 4698 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod368cc7a8_ca98_4aaa_a965_d41e5f28d961.slice/crio-conmon-21b28e148f53a1b9da31d296a94136f821439f5b9aa8ac56acc45318e3321da9.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod368cc7a8_ca98_4aaa_a965_d41e5f28d961.slice/crio-21b28e148f53a1b9da31d296a94136f821439f5b9aa8ac56acc45318e3321da9.scope\": RecentStats: unable to find data in memory cache]" Jan 27 14:59:39 crc kubenswrapper[4698]: I0127 14:59:39.231052 4698 generic.go:334] "Generic (PLEG): container finished" podID="368cc7a8-ca98-4aaa-a965-d41e5f28d961" containerID="21b28e148f53a1b9da31d296a94136f821439f5b9aa8ac56acc45318e3321da9" exitCode=0 Jan 27 14:59:39 crc kubenswrapper[4698]: I0127 14:59:39.231102 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xqbzp" event={"ID":"368cc7a8-ca98-4aaa-a965-d41e5f28d961","Type":"ContainerDied","Data":"21b28e148f53a1b9da31d296a94136f821439f5b9aa8ac56acc45318e3321da9"} Jan 27 14:59:41 crc kubenswrapper[4698]: I0127 14:59:41.250971 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xqbzp" event={"ID":"368cc7a8-ca98-4aaa-a965-d41e5f28d961","Type":"ContainerStarted","Data":"4d669b2e29066d5ced1f6df14d5d551dfe36bebf5d5d7e026cc239e63e741817"} Jan 27 14:59:41 crc kubenswrapper[4698]: I0127 14:59:41.280013 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-xqbzp" podStartSLOduration=3.310133147 podStartE2EDuration="7.279991643s" podCreationTimestamp="2026-01-27 14:59:34 +0000 UTC" firstStartedPulling="2026-01-27 14:59:36.201987437 +0000 UTC m=+1831.878764902" lastFinishedPulling="2026-01-27 14:59:40.171845933 +0000 UTC m=+1835.848623398" observedRunningTime="2026-01-27 14:59:41.274010076 +0000 UTC m=+1836.950787561" watchObservedRunningTime="2026-01-27 14:59:41.279991643 +0000 UTC m=+1836.956769108" Jan 27 14:59:44 crc kubenswrapper[4698]: I0127 14:59:44.035517 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-9b23-account-create-update-jqrhb"] Jan 27 14:59:44 crc kubenswrapper[4698]: I0127 14:59:44.050982 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-create-l7xhq"] Jan 27 14:59:44 crc kubenswrapper[4698]: I0127 14:59:44.064466 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-create-nmrs6"] Jan 27 14:59:44 crc kubenswrapper[4698]: I0127 14:59:44.073431 4698 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-9b23-account-create-update-jqrhb"] Jan 27 14:59:44 crc kubenswrapper[4698]: I0127 14:59:44.082567 4698 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-create-l7xhq"] Jan 27 14:59:44 crc kubenswrapper[4698]: I0127 14:59:44.091712 4698 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-create-nmrs6"] Jan 27 14:59:44 crc kubenswrapper[4698]: I0127 14:59:44.700588 4698 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-xqbzp" Jan 27 14:59:44 crc kubenswrapper[4698]: I0127 14:59:44.700693 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-xqbzp" Jan 27 14:59:45 crc kubenswrapper[4698]: I0127 14:59:45.014901 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7350c55d-9b7b-4bdb-a901-998578b4eea9" path="/var/lib/kubelet/pods/7350c55d-9b7b-4bdb-a901-998578b4eea9/volumes" Jan 27 14:59:45 crc kubenswrapper[4698]: I0127 14:59:45.070206 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="92d7277e-5ed9-480d-a115-1c3568be25a1" path="/var/lib/kubelet/pods/92d7277e-5ed9-480d-a115-1c3568be25a1/volumes" Jan 27 14:59:45 crc kubenswrapper[4698]: I0127 14:59:45.073298 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e58bbce4-4fb7-438d-bc96-1daafb04c867" path="/var/lib/kubelet/pods/e58bbce4-4fb7-438d-bc96-1daafb04c867/volumes" Jan 27 14:59:45 crc kubenswrapper[4698]: I0127 14:59:45.074069 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-25a3-account-create-update-hrqhk"] Jan 27 14:59:45 crc kubenswrapper[4698]: I0127 14:59:45.074105 4698 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-25a3-account-create-update-hrqhk"] Jan 27 14:59:45 crc kubenswrapper[4698]: I0127 14:59:45.794590 4698 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-xqbzp" podUID="368cc7a8-ca98-4aaa-a965-d41e5f28d961" containerName="registry-server" probeResult="failure" output=< Jan 27 14:59:45 crc kubenswrapper[4698]: timeout: failed to connect service ":50051" within 1s Jan 27 14:59:45 crc kubenswrapper[4698]: > Jan 27 14:59:47 crc kubenswrapper[4698]: I0127 14:59:47.004294 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9b7832af-cd21-4035-ae87-b11268ea2564" path="/var/lib/kubelet/pods/9b7832af-cd21-4035-ae87-b11268ea2564/volumes" Jan 27 14:59:49 crc kubenswrapper[4698]: I0127 14:59:49.992920 4698 scope.go:117] "RemoveContainer" containerID="b5e738b87b0ce5279fe16e0f062d8436b14ec328939703048a41953bc1584a16" Jan 27 14:59:49 crc kubenswrapper[4698]: E0127 14:59:49.994838 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ndrd6_openshift-machine-config-operator(3e403fc5-7005-474c-8c75-b7906b481677)\"" pod="openshift-machine-config-operator/machine-config-daemon-ndrd6" podUID="3e403fc5-7005-474c-8c75-b7906b481677" Jan 27 14:59:53 crc kubenswrapper[4698]: I0127 14:59:53.041688 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-r8hzm"] Jan 27 14:59:53 crc kubenswrapper[4698]: I0127 14:59:53.059288 4698 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-r8hzm"] Jan 27 14:59:54 crc kubenswrapper[4698]: I0127 14:59:54.752244 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-xqbzp" Jan 27 14:59:54 crc kubenswrapper[4698]: I0127 14:59:54.803429 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-xqbzp" Jan 27 14:59:55 crc kubenswrapper[4698]: I0127 14:59:55.003499 4698 kubelet_volumes.go:163] 
"Cleaned up orphaned pod volumes dir" podUID="835737cf-874a-4c2d-9f03-62cd9cd42d23" path="/var/lib/kubelet/pods/835737cf-874a-4c2d-9f03-62cd9cd42d23/volumes" Jan 27 14:59:55 crc kubenswrapper[4698]: I0127 14:59:55.004128 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-xqbzp"] Jan 27 14:59:56 crc kubenswrapper[4698]: I0127 14:59:56.424074 4698 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-xqbzp" podUID="368cc7a8-ca98-4aaa-a965-d41e5f28d961" containerName="registry-server" containerID="cri-o://4d669b2e29066d5ced1f6df14d5d551dfe36bebf5d5d7e026cc239e63e741817" gracePeriod=2 Jan 27 14:59:57 crc kubenswrapper[4698]: I0127 14:59:57.438012 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-xqbzp" Jan 27 14:59:57 crc kubenswrapper[4698]: I0127 14:59:57.442317 4698 generic.go:334] "Generic (PLEG): container finished" podID="368cc7a8-ca98-4aaa-a965-d41e5f28d961" containerID="4d669b2e29066d5ced1f6df14d5d551dfe36bebf5d5d7e026cc239e63e741817" exitCode=0 Jan 27 14:59:57 crc kubenswrapper[4698]: I0127 14:59:57.442448 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xqbzp" event={"ID":"368cc7a8-ca98-4aaa-a965-d41e5f28d961","Type":"ContainerDied","Data":"4d669b2e29066d5ced1f6df14d5d551dfe36bebf5d5d7e026cc239e63e741817"} Jan 27 14:59:57 crc kubenswrapper[4698]: I0127 14:59:57.442729 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xqbzp" event={"ID":"368cc7a8-ca98-4aaa-a965-d41e5f28d961","Type":"ContainerDied","Data":"e8fb072b85c3a89dc2e81adbea42184e0ebed363360cadb9d8860b351e2296b2"} Jan 27 14:59:57 crc kubenswrapper[4698]: I0127 14:59:57.442767 4698 scope.go:117] "RemoveContainer" containerID="4d669b2e29066d5ced1f6df14d5d551dfe36bebf5d5d7e026cc239e63e741817" Jan 27 14:59:57 crc kubenswrapper[4698]: I0127 14:59:57.476436 4698 scope.go:117] "RemoveContainer" containerID="21b28e148f53a1b9da31d296a94136f821439f5b9aa8ac56acc45318e3321da9" Jan 27 14:59:57 crc kubenswrapper[4698]: I0127 14:59:57.504505 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b57qw\" (UniqueName: \"kubernetes.io/projected/368cc7a8-ca98-4aaa-a965-d41e5f28d961-kube-api-access-b57qw\") pod \"368cc7a8-ca98-4aaa-a965-d41e5f28d961\" (UID: \"368cc7a8-ca98-4aaa-a965-d41e5f28d961\") " Jan 27 14:59:57 crc kubenswrapper[4698]: I0127 14:59:57.504681 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/368cc7a8-ca98-4aaa-a965-d41e5f28d961-catalog-content\") pod \"368cc7a8-ca98-4aaa-a965-d41e5f28d961\" (UID: \"368cc7a8-ca98-4aaa-a965-d41e5f28d961\") " Jan 27 14:59:57 crc kubenswrapper[4698]: I0127 14:59:57.504708 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/368cc7a8-ca98-4aaa-a965-d41e5f28d961-utilities\") pod \"368cc7a8-ca98-4aaa-a965-d41e5f28d961\" (UID: \"368cc7a8-ca98-4aaa-a965-d41e5f28d961\") " Jan 27 14:59:57 crc kubenswrapper[4698]: I0127 14:59:57.506002 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/368cc7a8-ca98-4aaa-a965-d41e5f28d961-utilities" (OuterVolumeSpecName: "utilities") pod "368cc7a8-ca98-4aaa-a965-d41e5f28d961" (UID: "368cc7a8-ca98-4aaa-a965-d41e5f28d961"). 
InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 14:59:57 crc kubenswrapper[4698]: I0127 14:59:57.547737 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/368cc7a8-ca98-4aaa-a965-d41e5f28d961-kube-api-access-b57qw" (OuterVolumeSpecName: "kube-api-access-b57qw") pod "368cc7a8-ca98-4aaa-a965-d41e5f28d961" (UID: "368cc7a8-ca98-4aaa-a965-d41e5f28d961"). InnerVolumeSpecName "kube-api-access-b57qw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:59:57 crc kubenswrapper[4698]: I0127 14:59:57.561987 4698 scope.go:117] "RemoveContainer" containerID="fa0ed6a48395d384f796cad93102d50b3509619bb0328dc7f9991b1a5be1f4ad" Jan 27 14:59:57 crc kubenswrapper[4698]: I0127 14:59:57.608046 4698 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/368cc7a8-ca98-4aaa-a965-d41e5f28d961-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 14:59:57 crc kubenswrapper[4698]: I0127 14:59:57.608087 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b57qw\" (UniqueName: \"kubernetes.io/projected/368cc7a8-ca98-4aaa-a965-d41e5f28d961-kube-api-access-b57qw\") on node \"crc\" DevicePath \"\"" Jan 27 14:59:57 crc kubenswrapper[4698]: I0127 14:59:57.616305 4698 scope.go:117] "RemoveContainer" containerID="4d669b2e29066d5ced1f6df14d5d551dfe36bebf5d5d7e026cc239e63e741817" Jan 27 14:59:57 crc kubenswrapper[4698]: E0127 14:59:57.617798 4698 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4d669b2e29066d5ced1f6df14d5d551dfe36bebf5d5d7e026cc239e63e741817\": container with ID starting with 4d669b2e29066d5ced1f6df14d5d551dfe36bebf5d5d7e026cc239e63e741817 not found: ID does not exist" containerID="4d669b2e29066d5ced1f6df14d5d551dfe36bebf5d5d7e026cc239e63e741817" Jan 27 14:59:57 crc kubenswrapper[4698]: I0127 14:59:57.617848 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4d669b2e29066d5ced1f6df14d5d551dfe36bebf5d5d7e026cc239e63e741817"} err="failed to get container status \"4d669b2e29066d5ced1f6df14d5d551dfe36bebf5d5d7e026cc239e63e741817\": rpc error: code = NotFound desc = could not find container \"4d669b2e29066d5ced1f6df14d5d551dfe36bebf5d5d7e026cc239e63e741817\": container with ID starting with 4d669b2e29066d5ced1f6df14d5d551dfe36bebf5d5d7e026cc239e63e741817 not found: ID does not exist" Jan 27 14:59:57 crc kubenswrapper[4698]: I0127 14:59:57.617877 4698 scope.go:117] "RemoveContainer" containerID="21b28e148f53a1b9da31d296a94136f821439f5b9aa8ac56acc45318e3321da9" Jan 27 14:59:57 crc kubenswrapper[4698]: E0127 14:59:57.618205 4698 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"21b28e148f53a1b9da31d296a94136f821439f5b9aa8ac56acc45318e3321da9\": container with ID starting with 21b28e148f53a1b9da31d296a94136f821439f5b9aa8ac56acc45318e3321da9 not found: ID does not exist" containerID="21b28e148f53a1b9da31d296a94136f821439f5b9aa8ac56acc45318e3321da9" Jan 27 14:59:57 crc kubenswrapper[4698]: I0127 14:59:57.618240 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"21b28e148f53a1b9da31d296a94136f821439f5b9aa8ac56acc45318e3321da9"} err="failed to get container status \"21b28e148f53a1b9da31d296a94136f821439f5b9aa8ac56acc45318e3321da9\": rpc error: code = NotFound desc = could not find container 
\"21b28e148f53a1b9da31d296a94136f821439f5b9aa8ac56acc45318e3321da9\": container with ID starting with 21b28e148f53a1b9da31d296a94136f821439f5b9aa8ac56acc45318e3321da9 not found: ID does not exist" Jan 27 14:59:57 crc kubenswrapper[4698]: I0127 14:59:57.618262 4698 scope.go:117] "RemoveContainer" containerID="fa0ed6a48395d384f796cad93102d50b3509619bb0328dc7f9991b1a5be1f4ad" Jan 27 14:59:57 crc kubenswrapper[4698]: E0127 14:59:57.618468 4698 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fa0ed6a48395d384f796cad93102d50b3509619bb0328dc7f9991b1a5be1f4ad\": container with ID starting with fa0ed6a48395d384f796cad93102d50b3509619bb0328dc7f9991b1a5be1f4ad not found: ID does not exist" containerID="fa0ed6a48395d384f796cad93102d50b3509619bb0328dc7f9991b1a5be1f4ad" Jan 27 14:59:57 crc kubenswrapper[4698]: I0127 14:59:57.618497 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fa0ed6a48395d384f796cad93102d50b3509619bb0328dc7f9991b1a5be1f4ad"} err="failed to get container status \"fa0ed6a48395d384f796cad93102d50b3509619bb0328dc7f9991b1a5be1f4ad\": rpc error: code = NotFound desc = could not find container \"fa0ed6a48395d384f796cad93102d50b3509619bb0328dc7f9991b1a5be1f4ad\": container with ID starting with fa0ed6a48395d384f796cad93102d50b3509619bb0328dc7f9991b1a5be1f4ad not found: ID does not exist" Jan 27 14:59:57 crc kubenswrapper[4698]: I0127 14:59:57.639884 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/368cc7a8-ca98-4aaa-a965-d41e5f28d961-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "368cc7a8-ca98-4aaa-a965-d41e5f28d961" (UID: "368cc7a8-ca98-4aaa-a965-d41e5f28d961"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 14:59:57 crc kubenswrapper[4698]: I0127 14:59:57.710278 4698 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/368cc7a8-ca98-4aaa-a965-d41e5f28d961-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 14:59:58 crc kubenswrapper[4698]: I0127 14:59:58.454178 4698 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-xqbzp" Jan 27 14:59:58 crc kubenswrapper[4698]: I0127 14:59:58.495088 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-xqbzp"] Jan 27 14:59:58 crc kubenswrapper[4698]: I0127 14:59:58.504711 4698 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-xqbzp"] Jan 27 14:59:59 crc kubenswrapper[4698]: I0127 14:59:59.005027 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="368cc7a8-ca98-4aaa-a965-d41e5f28d961" path="/var/lib/kubelet/pods/368cc7a8-ca98-4aaa-a965-d41e5f28d961/volumes" Jan 27 15:00:00 crc kubenswrapper[4698]: I0127 15:00:00.155262 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29492100-pk6ts"] Jan 27 15:00:00 crc kubenswrapper[4698]: E0127 15:00:00.158840 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="368cc7a8-ca98-4aaa-a965-d41e5f28d961" containerName="extract-content" Jan 27 15:00:00 crc kubenswrapper[4698]: I0127 15:00:00.158866 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="368cc7a8-ca98-4aaa-a965-d41e5f28d961" containerName="extract-content" Jan 27 15:00:00 crc kubenswrapper[4698]: E0127 15:00:00.158884 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="368cc7a8-ca98-4aaa-a965-d41e5f28d961" containerName="registry-server" Jan 27 15:00:00 crc kubenswrapper[4698]: I0127 15:00:00.158892 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="368cc7a8-ca98-4aaa-a965-d41e5f28d961" containerName="registry-server" Jan 27 15:00:00 crc kubenswrapper[4698]: E0127 15:00:00.158915 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="368cc7a8-ca98-4aaa-a965-d41e5f28d961" containerName="extract-utilities" Jan 27 15:00:00 crc kubenswrapper[4698]: I0127 15:00:00.158924 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="368cc7a8-ca98-4aaa-a965-d41e5f28d961" containerName="extract-utilities" Jan 27 15:00:00 crc kubenswrapper[4698]: I0127 15:00:00.159212 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="368cc7a8-ca98-4aaa-a965-d41e5f28d961" containerName="registry-server" Jan 27 15:00:00 crc kubenswrapper[4698]: I0127 15:00:00.160190 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29492100-pk6ts" Jan 27 15:00:00 crc kubenswrapper[4698]: I0127 15:00:00.164550 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 27 15:00:00 crc kubenswrapper[4698]: I0127 15:00:00.164731 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 27 15:00:00 crc kubenswrapper[4698]: I0127 15:00:00.181629 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29492100-pk6ts"] Jan 27 15:00:00 crc kubenswrapper[4698]: I0127 15:00:00.260414 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ln49g\" (UniqueName: \"kubernetes.io/projected/0bb5bc37-29d1-4af4-afb2-cd803fb9e924-kube-api-access-ln49g\") pod \"collect-profiles-29492100-pk6ts\" (UID: \"0bb5bc37-29d1-4af4-afb2-cd803fb9e924\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492100-pk6ts" Jan 27 15:00:00 crc kubenswrapper[4698]: I0127 15:00:00.260490 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0bb5bc37-29d1-4af4-afb2-cd803fb9e924-secret-volume\") pod \"collect-profiles-29492100-pk6ts\" (UID: \"0bb5bc37-29d1-4af4-afb2-cd803fb9e924\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492100-pk6ts" Jan 27 15:00:00 crc kubenswrapper[4698]: I0127 15:00:00.260728 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0bb5bc37-29d1-4af4-afb2-cd803fb9e924-config-volume\") pod \"collect-profiles-29492100-pk6ts\" (UID: \"0bb5bc37-29d1-4af4-afb2-cd803fb9e924\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492100-pk6ts" Jan 27 15:00:00 crc kubenswrapper[4698]: I0127 15:00:00.362837 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0bb5bc37-29d1-4af4-afb2-cd803fb9e924-config-volume\") pod \"collect-profiles-29492100-pk6ts\" (UID: \"0bb5bc37-29d1-4af4-afb2-cd803fb9e924\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492100-pk6ts" Jan 27 15:00:00 crc kubenswrapper[4698]: I0127 15:00:00.362935 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ln49g\" (UniqueName: \"kubernetes.io/projected/0bb5bc37-29d1-4af4-afb2-cd803fb9e924-kube-api-access-ln49g\") pod \"collect-profiles-29492100-pk6ts\" (UID: \"0bb5bc37-29d1-4af4-afb2-cd803fb9e924\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492100-pk6ts" Jan 27 15:00:00 crc kubenswrapper[4698]: I0127 15:00:00.362979 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0bb5bc37-29d1-4af4-afb2-cd803fb9e924-secret-volume\") pod \"collect-profiles-29492100-pk6ts\" (UID: \"0bb5bc37-29d1-4af4-afb2-cd803fb9e924\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492100-pk6ts" Jan 27 15:00:00 crc kubenswrapper[4698]: I0127 15:00:00.364194 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0bb5bc37-29d1-4af4-afb2-cd803fb9e924-config-volume\") pod 
\"collect-profiles-29492100-pk6ts\" (UID: \"0bb5bc37-29d1-4af4-afb2-cd803fb9e924\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492100-pk6ts" Jan 27 15:00:00 crc kubenswrapper[4698]: I0127 15:00:00.370001 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0bb5bc37-29d1-4af4-afb2-cd803fb9e924-secret-volume\") pod \"collect-profiles-29492100-pk6ts\" (UID: \"0bb5bc37-29d1-4af4-afb2-cd803fb9e924\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492100-pk6ts" Jan 27 15:00:00 crc kubenswrapper[4698]: I0127 15:00:00.380833 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ln49g\" (UniqueName: \"kubernetes.io/projected/0bb5bc37-29d1-4af4-afb2-cd803fb9e924-kube-api-access-ln49g\") pod \"collect-profiles-29492100-pk6ts\" (UID: \"0bb5bc37-29d1-4af4-afb2-cd803fb9e924\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492100-pk6ts" Jan 27 15:00:00 crc kubenswrapper[4698]: I0127 15:00:00.492965 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29492100-pk6ts" Jan 27 15:00:02 crc kubenswrapper[4698]: I0127 15:00:00.971202 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29492100-pk6ts"] Jan 27 15:00:02 crc kubenswrapper[4698]: I0127 15:00:01.517333 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29492100-pk6ts" event={"ID":"0bb5bc37-29d1-4af4-afb2-cd803fb9e924","Type":"ContainerStarted","Data":"7a49e9cf65ad15137641b5d7d57c569129fd4cb2131ecfe813df437d88c44a70"} Jan 27 15:00:02 crc kubenswrapper[4698]: I0127 15:00:01.517706 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29492100-pk6ts" event={"ID":"0bb5bc37-29d1-4af4-afb2-cd803fb9e924","Type":"ContainerStarted","Data":"d83273379fd41600431627ddfa679dd02519b41b84d7f96e85f787123622bfc9"} Jan 27 15:00:02 crc kubenswrapper[4698]: I0127 15:00:01.540867 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29492100-pk6ts" podStartSLOduration=1.540830928 podStartE2EDuration="1.540830928s" podCreationTimestamp="2026-01-27 15:00:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 15:00:01.534962254 +0000 UTC m=+1857.211739729" watchObservedRunningTime="2026-01-27 15:00:01.540830928 +0000 UTC m=+1857.217608393" Jan 27 15:00:02 crc kubenswrapper[4698]: I0127 15:00:02.532766 4698 generic.go:334] "Generic (PLEG): container finished" podID="0bb5bc37-29d1-4af4-afb2-cd803fb9e924" containerID="7a49e9cf65ad15137641b5d7d57c569129fd4cb2131ecfe813df437d88c44a70" exitCode=0 Jan 27 15:00:02 crc kubenswrapper[4698]: I0127 15:00:02.532895 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29492100-pk6ts" event={"ID":"0bb5bc37-29d1-4af4-afb2-cd803fb9e924","Type":"ContainerDied","Data":"7a49e9cf65ad15137641b5d7d57c569129fd4cb2131ecfe813df437d88c44a70"} Jan 27 15:00:03 crc kubenswrapper[4698]: I0127 15:00:03.986585 4698 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29492100-pk6ts" Jan 27 15:00:03 crc kubenswrapper[4698]: I0127 15:00:03.992809 4698 scope.go:117] "RemoveContainer" containerID="b5e738b87b0ce5279fe16e0f062d8436b14ec328939703048a41953bc1584a16" Jan 27 15:00:03 crc kubenswrapper[4698]: E0127 15:00:03.993305 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ndrd6_openshift-machine-config-operator(3e403fc5-7005-474c-8c75-b7906b481677)\"" pod="openshift-machine-config-operator/machine-config-daemon-ndrd6" podUID="3e403fc5-7005-474c-8c75-b7906b481677" Jan 27 15:00:04 crc kubenswrapper[4698]: I0127 15:00:04.138238 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0bb5bc37-29d1-4af4-afb2-cd803fb9e924-config-volume\") pod \"0bb5bc37-29d1-4af4-afb2-cd803fb9e924\" (UID: \"0bb5bc37-29d1-4af4-afb2-cd803fb9e924\") " Jan 27 15:00:04 crc kubenswrapper[4698]: I0127 15:00:04.138477 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0bb5bc37-29d1-4af4-afb2-cd803fb9e924-secret-volume\") pod \"0bb5bc37-29d1-4af4-afb2-cd803fb9e924\" (UID: \"0bb5bc37-29d1-4af4-afb2-cd803fb9e924\") " Jan 27 15:00:04 crc kubenswrapper[4698]: I0127 15:00:04.138577 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ln49g\" (UniqueName: \"kubernetes.io/projected/0bb5bc37-29d1-4af4-afb2-cd803fb9e924-kube-api-access-ln49g\") pod \"0bb5bc37-29d1-4af4-afb2-cd803fb9e924\" (UID: \"0bb5bc37-29d1-4af4-afb2-cd803fb9e924\") " Jan 27 15:00:04 crc kubenswrapper[4698]: I0127 15:00:04.139418 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0bb5bc37-29d1-4af4-afb2-cd803fb9e924-config-volume" (OuterVolumeSpecName: "config-volume") pod "0bb5bc37-29d1-4af4-afb2-cd803fb9e924" (UID: "0bb5bc37-29d1-4af4-afb2-cd803fb9e924"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 15:00:04 crc kubenswrapper[4698]: I0127 15:00:04.145161 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0bb5bc37-29d1-4af4-afb2-cd803fb9e924-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "0bb5bc37-29d1-4af4-afb2-cd803fb9e924" (UID: "0bb5bc37-29d1-4af4-afb2-cd803fb9e924"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 15:00:04 crc kubenswrapper[4698]: I0127 15:00:04.158174 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0bb5bc37-29d1-4af4-afb2-cd803fb9e924-kube-api-access-ln49g" (OuterVolumeSpecName: "kube-api-access-ln49g") pod "0bb5bc37-29d1-4af4-afb2-cd803fb9e924" (UID: "0bb5bc37-29d1-4af4-afb2-cd803fb9e924"). InnerVolumeSpecName "kube-api-access-ln49g". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 15:00:04 crc kubenswrapper[4698]: I0127 15:00:04.241557 4698 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0bb5bc37-29d1-4af4-afb2-cd803fb9e924-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 27 15:00:04 crc kubenswrapper[4698]: I0127 15:00:04.241628 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ln49g\" (UniqueName: \"kubernetes.io/projected/0bb5bc37-29d1-4af4-afb2-cd803fb9e924-kube-api-access-ln49g\") on node \"crc\" DevicePath \"\"" Jan 27 15:00:04 crc kubenswrapper[4698]: I0127 15:00:04.241656 4698 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0bb5bc37-29d1-4af4-afb2-cd803fb9e924-config-volume\") on node \"crc\" DevicePath \"\"" Jan 27 15:00:04 crc kubenswrapper[4698]: I0127 15:00:04.555230 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29492100-pk6ts" event={"ID":"0bb5bc37-29d1-4af4-afb2-cd803fb9e924","Type":"ContainerDied","Data":"d83273379fd41600431627ddfa679dd02519b41b84d7f96e85f787123622bfc9"} Jan 27 15:00:04 crc kubenswrapper[4698]: I0127 15:00:04.555270 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29492100-pk6ts" Jan 27 15:00:04 crc kubenswrapper[4698]: I0127 15:00:04.555283 4698 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d83273379fd41600431627ddfa679dd02519b41b84d7f96e85f787123622bfc9" Jan 27 15:00:14 crc kubenswrapper[4698]: I0127 15:00:14.657544 4698 scope.go:117] "RemoveContainer" containerID="1f0e8ef153b5b161a368fa003258538bce40cc2b7013bb9505aeeffcb9ed9b41" Jan 27 15:00:14 crc kubenswrapper[4698]: I0127 15:00:14.770967 4698 scope.go:117] "RemoveContainer" containerID="d93332ca3041888e3eb6628b14ee06875369af7adb0c6305757a955d3deeaf6d" Jan 27 15:00:15 crc kubenswrapper[4698]: I0127 15:00:15.029404 4698 scope.go:117] "RemoveContainer" containerID="d35e5d128cdca7e773308680b82dae70256a03a4cc3b640007ac1162c0a6ea89" Jan 27 15:00:15 crc kubenswrapper[4698]: I0127 15:00:15.105083 4698 scope.go:117] "RemoveContainer" containerID="088e635e53119e47c2552b5b864918508f7d9725eed1c99cc7e8ec28ed1ac78a" Jan 27 15:00:15 crc kubenswrapper[4698]: I0127 15:00:15.126971 4698 scope.go:117] "RemoveContainer" containerID="8567306808b79cf9391cb3a91d67cdc3d03165cc15c094566351ad57bea18fe5" Jan 27 15:00:15 crc kubenswrapper[4698]: I0127 15:00:15.187459 4698 scope.go:117] "RemoveContainer" containerID="57de376067bd1c4d86bb352eade6fd5cdcb68597ace43c9e4c16e8207992a1c7" Jan 27 15:00:15 crc kubenswrapper[4698]: I0127 15:00:15.236693 4698 scope.go:117] "RemoveContainer" containerID="3a9f2419b329d1001425ea0a9edca3b704abdd02718cf6adaacbbf7e45a43e28" Jan 27 15:00:18 crc kubenswrapper[4698]: I0127 15:00:18.993184 4698 scope.go:117] "RemoveContainer" containerID="b5e738b87b0ce5279fe16e0f062d8436b14ec328939703048a41953bc1584a16" Jan 27 15:00:18 crc kubenswrapper[4698]: E0127 15:00:18.993492 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ndrd6_openshift-machine-config-operator(3e403fc5-7005-474c-8c75-b7906b481677)\"" pod="openshift-machine-config-operator/machine-config-daemon-ndrd6" 
podUID="3e403fc5-7005-474c-8c75-b7906b481677" Jan 27 15:00:22 crc kubenswrapper[4698]: I0127 15:00:22.033952 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-create-f2vqn"] Jan 27 15:00:22 crc kubenswrapper[4698]: I0127 15:00:22.052955 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-create-zw54k"] Jan 27 15:00:22 crc kubenswrapper[4698]: I0127 15:00:22.062845 4698 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-create-f2vqn"] Jan 27 15:00:22 crc kubenswrapper[4698]: I0127 15:00:22.071711 4698 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-create-zw54k"] Jan 27 15:00:23 crc kubenswrapper[4698]: I0127 15:00:23.004144 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="00b5af26-92b2-461a-ad12-15c050aae00e" path="/var/lib/kubelet/pods/00b5af26-92b2-461a-ad12-15c050aae00e/volumes" Jan 27 15:00:23 crc kubenswrapper[4698]: I0127 15:00:23.004899 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1ac257c8-4aeb-4926-99c2-52ea6d3093f6" path="/var/lib/kubelet/pods/1ac257c8-4aeb-4926-99c2-52ea6d3093f6/volumes" Jan 27 15:00:27 crc kubenswrapper[4698]: I0127 15:00:27.069796 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-create-nnmcr"] Jan 27 15:00:27 crc kubenswrapper[4698]: I0127 15:00:27.089436 4698 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-create-nnmcr"] Jan 27 15:00:27 crc kubenswrapper[4698]: I0127 15:00:27.112699 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-create-zzlx8"] Jan 27 15:00:27 crc kubenswrapper[4698]: I0127 15:00:27.144690 4698 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-create-zzlx8"] Jan 27 15:00:27 crc kubenswrapper[4698]: I0127 15:00:27.163751 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-ba53-account-create-update-k5pcr"] Jan 27 15:00:27 crc kubenswrapper[4698]: I0127 15:00:27.177706 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-5711-account-create-update-th2wp"] Jan 27 15:00:27 crc kubenswrapper[4698]: I0127 15:00:27.197694 4698 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-5711-account-create-update-th2wp"] Jan 27 15:00:27 crc kubenswrapper[4698]: I0127 15:00:27.210437 4698 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-ba53-account-create-update-k5pcr"] Jan 27 15:00:28 crc kubenswrapper[4698]: I0127 15:00:28.033365 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-98a8-account-create-update-5nl5q"] Jan 27 15:00:28 crc kubenswrapper[4698]: I0127 15:00:28.042890 4698 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-98a8-account-create-update-5nl5q"] Jan 27 15:00:29 crc kubenswrapper[4698]: I0127 15:00:29.006022 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7396dcad-4ef6-441e-bd4d-f04201b73baf" path="/var/lib/kubelet/pods/7396dcad-4ef6-441e-bd4d-f04201b73baf/volumes" Jan 27 15:00:29 crc kubenswrapper[4698]: I0127 15:00:29.006974 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="745bc9c9-c169-47c4-90aa-935671bc12f2" path="/var/lib/kubelet/pods/745bc9c9-c169-47c4-90aa-935671bc12f2/volumes" Jan 27 15:00:29 crc kubenswrapper[4698]: I0127 15:00:29.007754 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7685ed13-4e06-4052-a2e4-310e64a49a53" 
path="/var/lib/kubelet/pods/7685ed13-4e06-4052-a2e4-310e64a49a53/volumes" Jan 27 15:00:29 crc kubenswrapper[4698]: I0127 15:00:29.008516 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b8086d7c-021f-4bb7-892c-50f8b75d56a1" path="/var/lib/kubelet/pods/b8086d7c-021f-4bb7-892c-50f8b75d56a1/volumes" Jan 27 15:00:29 crc kubenswrapper[4698]: I0127 15:00:29.010257 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d082b51b-36d1-4c62-ad12-024337e68479" path="/var/lib/kubelet/pods/d082b51b-36d1-4c62-ad12-024337e68479/volumes" Jan 27 15:00:30 crc kubenswrapper[4698]: I0127 15:00:30.993047 4698 scope.go:117] "RemoveContainer" containerID="b5e738b87b0ce5279fe16e0f062d8436b14ec328939703048a41953bc1584a16" Jan 27 15:00:30 crc kubenswrapper[4698]: E0127 15:00:30.993980 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ndrd6_openshift-machine-config-operator(3e403fc5-7005-474c-8c75-b7906b481677)\"" pod="openshift-machine-config-operator/machine-config-daemon-ndrd6" podUID="3e403fc5-7005-474c-8c75-b7906b481677" Jan 27 15:00:37 crc kubenswrapper[4698]: I0127 15:00:37.034892 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-f802-account-create-update-89d8r"] Jan 27 15:00:37 crc kubenswrapper[4698]: I0127 15:00:37.045677 4698 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-f802-account-create-update-89d8r"] Jan 27 15:00:39 crc kubenswrapper[4698]: I0127 15:00:39.004746 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cf5234a1-c705-4a80-8992-05e2ce515ff6" path="/var/lib/kubelet/pods/cf5234a1-c705-4a80-8992-05e2ce515ff6/volumes" Jan 27 15:00:45 crc kubenswrapper[4698]: I0127 15:00:45.992550 4698 scope.go:117] "RemoveContainer" containerID="b5e738b87b0ce5279fe16e0f062d8436b14ec328939703048a41953bc1584a16" Jan 27 15:00:45 crc kubenswrapper[4698]: E0127 15:00:45.993998 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ndrd6_openshift-machine-config-operator(3e403fc5-7005-474c-8c75-b7906b481677)\"" pod="openshift-machine-config-operator/machine-config-daemon-ndrd6" podUID="3e403fc5-7005-474c-8c75-b7906b481677" Jan 27 15:01:00 crc kubenswrapper[4698]: I0127 15:01:00.166380 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-cron-29492101-f6x5f"] Jan 27 15:01:00 crc kubenswrapper[4698]: E0127 15:01:00.167493 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0bb5bc37-29d1-4af4-afb2-cd803fb9e924" containerName="collect-profiles" Jan 27 15:01:00 crc kubenswrapper[4698]: I0127 15:01:00.167512 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="0bb5bc37-29d1-4af4-afb2-cd803fb9e924" containerName="collect-profiles" Jan 27 15:01:00 crc kubenswrapper[4698]: I0127 15:01:00.167826 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="0bb5bc37-29d1-4af4-afb2-cd803fb9e924" containerName="collect-profiles" Jan 27 15:01:00 crc kubenswrapper[4698]: I0127 15:01:00.169102 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29492101-f6x5f" Jan 27 15:01:00 crc kubenswrapper[4698]: I0127 15:01:00.181182 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29492101-f6x5f"] Jan 27 15:01:00 crc kubenswrapper[4698]: I0127 15:01:00.295346 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0034b4e9-4bc5-48cb-8fcc-f98858f0fe15-config-data\") pod \"keystone-cron-29492101-f6x5f\" (UID: \"0034b4e9-4bc5-48cb-8fcc-f98858f0fe15\") " pod="openstack/keystone-cron-29492101-f6x5f" Jan 27 15:01:00 crc kubenswrapper[4698]: I0127 15:01:00.295422 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/0034b4e9-4bc5-48cb-8fcc-f98858f0fe15-fernet-keys\") pod \"keystone-cron-29492101-f6x5f\" (UID: \"0034b4e9-4bc5-48cb-8fcc-f98858f0fe15\") " pod="openstack/keystone-cron-29492101-f6x5f" Jan 27 15:01:00 crc kubenswrapper[4698]: I0127 15:01:00.295472 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pvlhc\" (UniqueName: \"kubernetes.io/projected/0034b4e9-4bc5-48cb-8fcc-f98858f0fe15-kube-api-access-pvlhc\") pod \"keystone-cron-29492101-f6x5f\" (UID: \"0034b4e9-4bc5-48cb-8fcc-f98858f0fe15\") " pod="openstack/keystone-cron-29492101-f6x5f" Jan 27 15:01:00 crc kubenswrapper[4698]: I0127 15:01:00.295677 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0034b4e9-4bc5-48cb-8fcc-f98858f0fe15-combined-ca-bundle\") pod \"keystone-cron-29492101-f6x5f\" (UID: \"0034b4e9-4bc5-48cb-8fcc-f98858f0fe15\") " pod="openstack/keystone-cron-29492101-f6x5f" Jan 27 15:01:00 crc kubenswrapper[4698]: I0127 15:01:00.397831 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0034b4e9-4bc5-48cb-8fcc-f98858f0fe15-combined-ca-bundle\") pod \"keystone-cron-29492101-f6x5f\" (UID: \"0034b4e9-4bc5-48cb-8fcc-f98858f0fe15\") " pod="openstack/keystone-cron-29492101-f6x5f" Jan 27 15:01:00 crc kubenswrapper[4698]: I0127 15:01:00.398154 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0034b4e9-4bc5-48cb-8fcc-f98858f0fe15-config-data\") pod \"keystone-cron-29492101-f6x5f\" (UID: \"0034b4e9-4bc5-48cb-8fcc-f98858f0fe15\") " pod="openstack/keystone-cron-29492101-f6x5f" Jan 27 15:01:00 crc kubenswrapper[4698]: I0127 15:01:00.398323 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/0034b4e9-4bc5-48cb-8fcc-f98858f0fe15-fernet-keys\") pod \"keystone-cron-29492101-f6x5f\" (UID: \"0034b4e9-4bc5-48cb-8fcc-f98858f0fe15\") " pod="openstack/keystone-cron-29492101-f6x5f" Jan 27 15:01:00 crc kubenswrapper[4698]: I0127 15:01:00.398422 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pvlhc\" (UniqueName: \"kubernetes.io/projected/0034b4e9-4bc5-48cb-8fcc-f98858f0fe15-kube-api-access-pvlhc\") pod \"keystone-cron-29492101-f6x5f\" (UID: \"0034b4e9-4bc5-48cb-8fcc-f98858f0fe15\") " pod="openstack/keystone-cron-29492101-f6x5f" Jan 27 15:01:00 crc kubenswrapper[4698]: I0127 15:01:00.407022 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0034b4e9-4bc5-48cb-8fcc-f98858f0fe15-combined-ca-bundle\") pod \"keystone-cron-29492101-f6x5f\" (UID: \"0034b4e9-4bc5-48cb-8fcc-f98858f0fe15\") " pod="openstack/keystone-cron-29492101-f6x5f" Jan 27 15:01:00 crc kubenswrapper[4698]: I0127 15:01:00.407384 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0034b4e9-4bc5-48cb-8fcc-f98858f0fe15-config-data\") pod \"keystone-cron-29492101-f6x5f\" (UID: \"0034b4e9-4bc5-48cb-8fcc-f98858f0fe15\") " pod="openstack/keystone-cron-29492101-f6x5f" Jan 27 15:01:00 crc kubenswrapper[4698]: I0127 15:01:00.411276 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/0034b4e9-4bc5-48cb-8fcc-f98858f0fe15-fernet-keys\") pod \"keystone-cron-29492101-f6x5f\" (UID: \"0034b4e9-4bc5-48cb-8fcc-f98858f0fe15\") " pod="openstack/keystone-cron-29492101-f6x5f" Jan 27 15:01:00 crc kubenswrapper[4698]: I0127 15:01:00.420272 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pvlhc\" (UniqueName: \"kubernetes.io/projected/0034b4e9-4bc5-48cb-8fcc-f98858f0fe15-kube-api-access-pvlhc\") pod \"keystone-cron-29492101-f6x5f\" (UID: \"0034b4e9-4bc5-48cb-8fcc-f98858f0fe15\") " pod="openstack/keystone-cron-29492101-f6x5f" Jan 27 15:01:00 crc kubenswrapper[4698]: I0127 15:01:00.500933 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29492101-f6x5f" Jan 27 15:01:00 crc kubenswrapper[4698]: I0127 15:01:00.993295 4698 scope.go:117] "RemoveContainer" containerID="b5e738b87b0ce5279fe16e0f062d8436b14ec328939703048a41953bc1584a16" Jan 27 15:01:01 crc kubenswrapper[4698]: I0127 15:01:01.030181 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29492101-f6x5f"] Jan 27 15:01:01 crc kubenswrapper[4698]: I0127 15:01:01.133555 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29492101-f6x5f" event={"ID":"0034b4e9-4bc5-48cb-8fcc-f98858f0fe15","Type":"ContainerStarted","Data":"15b1126214ab2227318933bde03357ceff86e9d5435000f298eaa6646e3edf3d"} Jan 27 15:01:02 crc kubenswrapper[4698]: I0127 15:01:02.149439 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29492101-f6x5f" event={"ID":"0034b4e9-4bc5-48cb-8fcc-f98858f0fe15","Type":"ContainerStarted","Data":"9d1feb2d5fcedc74062471ce66784e2a5081566f457182a47ec4461947975bfb"} Jan 27 15:01:02 crc kubenswrapper[4698]: I0127 15:01:02.157495 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ndrd6" event={"ID":"3e403fc5-7005-474c-8c75-b7906b481677","Type":"ContainerStarted","Data":"cfb3ddd3f31bb32b30aa65cbbe04fb40e1a0fd8b8faea20785560325760fdc02"} Jan 27 15:01:02 crc kubenswrapper[4698]: I0127 15:01:02.197848 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-cron-29492101-f6x5f" podStartSLOduration=2.197820988 podStartE2EDuration="2.197820988s" podCreationTimestamp="2026-01-27 15:01:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 15:01:02.183097521 +0000 UTC m=+1917.859875006" watchObservedRunningTime="2026-01-27 15:01:02.197820988 +0000 UTC m=+1917.874598463" Jan 27 15:01:05 crc kubenswrapper[4698]: I0127 15:01:05.187614 4698 generic.go:334] "Generic 
(PLEG): container finished" podID="0034b4e9-4bc5-48cb-8fcc-f98858f0fe15" containerID="9d1feb2d5fcedc74062471ce66784e2a5081566f457182a47ec4461947975bfb" exitCode=0 Jan 27 15:01:05 crc kubenswrapper[4698]: I0127 15:01:05.187751 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29492101-f6x5f" event={"ID":"0034b4e9-4bc5-48cb-8fcc-f98858f0fe15","Type":"ContainerDied","Data":"9d1feb2d5fcedc74062471ce66784e2a5081566f457182a47ec4461947975bfb"} Jan 27 15:01:06 crc kubenswrapper[4698]: I0127 15:01:06.581959 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29492101-f6x5f" Jan 27 15:01:06 crc kubenswrapper[4698]: I0127 15:01:06.672458 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0034b4e9-4bc5-48cb-8fcc-f98858f0fe15-config-data\") pod \"0034b4e9-4bc5-48cb-8fcc-f98858f0fe15\" (UID: \"0034b4e9-4bc5-48cb-8fcc-f98858f0fe15\") " Jan 27 15:01:06 crc kubenswrapper[4698]: I0127 15:01:06.672526 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pvlhc\" (UniqueName: \"kubernetes.io/projected/0034b4e9-4bc5-48cb-8fcc-f98858f0fe15-kube-api-access-pvlhc\") pod \"0034b4e9-4bc5-48cb-8fcc-f98858f0fe15\" (UID: \"0034b4e9-4bc5-48cb-8fcc-f98858f0fe15\") " Jan 27 15:01:06 crc kubenswrapper[4698]: I0127 15:01:06.672755 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/0034b4e9-4bc5-48cb-8fcc-f98858f0fe15-fernet-keys\") pod \"0034b4e9-4bc5-48cb-8fcc-f98858f0fe15\" (UID: \"0034b4e9-4bc5-48cb-8fcc-f98858f0fe15\") " Jan 27 15:01:06 crc kubenswrapper[4698]: I0127 15:01:06.672882 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0034b4e9-4bc5-48cb-8fcc-f98858f0fe15-combined-ca-bundle\") pod \"0034b4e9-4bc5-48cb-8fcc-f98858f0fe15\" (UID: \"0034b4e9-4bc5-48cb-8fcc-f98858f0fe15\") " Jan 27 15:01:06 crc kubenswrapper[4698]: I0127 15:01:06.679128 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0034b4e9-4bc5-48cb-8fcc-f98858f0fe15-kube-api-access-pvlhc" (OuterVolumeSpecName: "kube-api-access-pvlhc") pod "0034b4e9-4bc5-48cb-8fcc-f98858f0fe15" (UID: "0034b4e9-4bc5-48cb-8fcc-f98858f0fe15"). InnerVolumeSpecName "kube-api-access-pvlhc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 15:01:06 crc kubenswrapper[4698]: I0127 15:01:06.679283 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0034b4e9-4bc5-48cb-8fcc-f98858f0fe15-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "0034b4e9-4bc5-48cb-8fcc-f98858f0fe15" (UID: "0034b4e9-4bc5-48cb-8fcc-f98858f0fe15"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 15:01:06 crc kubenswrapper[4698]: I0127 15:01:06.703294 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0034b4e9-4bc5-48cb-8fcc-f98858f0fe15-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0034b4e9-4bc5-48cb-8fcc-f98858f0fe15" (UID: "0034b4e9-4bc5-48cb-8fcc-f98858f0fe15"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 15:01:06 crc kubenswrapper[4698]: I0127 15:01:06.735909 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0034b4e9-4bc5-48cb-8fcc-f98858f0fe15-config-data" (OuterVolumeSpecName: "config-data") pod "0034b4e9-4bc5-48cb-8fcc-f98858f0fe15" (UID: "0034b4e9-4bc5-48cb-8fcc-f98858f0fe15"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 15:01:06 crc kubenswrapper[4698]: I0127 15:01:06.775462 4698 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/0034b4e9-4bc5-48cb-8fcc-f98858f0fe15-fernet-keys\") on node \"crc\" DevicePath \"\"" Jan 27 15:01:06 crc kubenswrapper[4698]: I0127 15:01:06.775512 4698 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0034b4e9-4bc5-48cb-8fcc-f98858f0fe15-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 15:01:06 crc kubenswrapper[4698]: I0127 15:01:06.775525 4698 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0034b4e9-4bc5-48cb-8fcc-f98858f0fe15-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 15:01:06 crc kubenswrapper[4698]: I0127 15:01:06.775535 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pvlhc\" (UniqueName: \"kubernetes.io/projected/0034b4e9-4bc5-48cb-8fcc-f98858f0fe15-kube-api-access-pvlhc\") on node \"crc\" DevicePath \"\"" Jan 27 15:01:07 crc kubenswrapper[4698]: I0127 15:01:07.211846 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29492101-f6x5f" event={"ID":"0034b4e9-4bc5-48cb-8fcc-f98858f0fe15","Type":"ContainerDied","Data":"15b1126214ab2227318933bde03357ceff86e9d5435000f298eaa6646e3edf3d"} Jan 27 15:01:07 crc kubenswrapper[4698]: I0127 15:01:07.212417 4698 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="15b1126214ab2227318933bde03357ceff86e9d5435000f298eaa6646e3edf3d" Jan 27 15:01:07 crc kubenswrapper[4698]: I0127 15:01:07.211937 4698 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29492101-f6x5f" Jan 27 15:01:09 crc kubenswrapper[4698]: I0127 15:01:09.070016 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-sync-9xlkv"] Jan 27 15:01:09 crc kubenswrapper[4698]: I0127 15:01:09.086055 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/watcher-db-sync-gdttx"] Jan 27 15:01:09 crc kubenswrapper[4698]: I0127 15:01:09.097957 4698 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-sync-9xlkv"] Jan 27 15:01:09 crc kubenswrapper[4698]: I0127 15:01:09.108458 4698 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/watcher-db-sync-gdttx"] Jan 27 15:01:11 crc kubenswrapper[4698]: I0127 15:01:11.006837 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4fa2454d-726a-4585-950a-336d57316b69" path="/var/lib/kubelet/pods/4fa2454d-726a-4585-950a-336d57316b69/volumes" Jan 27 15:01:11 crc kubenswrapper[4698]: I0127 15:01:11.008301 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d5b86bc8-7f21-4d28-a94d-56ec54d13cb5" path="/var/lib/kubelet/pods/d5b86bc8-7f21-4d28-a94d-56ec54d13cb5/volumes" Jan 27 15:01:15 crc kubenswrapper[4698]: I0127 15:01:15.424876 4698 scope.go:117] "RemoveContainer" containerID="f0993185e4733528b46053fd3aec6cf75218885544c1503dcb59755d086be71c" Jan 27 15:01:15 crc kubenswrapper[4698]: I0127 15:01:15.456117 4698 scope.go:117] "RemoveContainer" containerID="d8f9cbeda0f6b5608059748e28d347736e53d9b5f18b6db6db9cfaeff43441e9" Jan 27 15:01:15 crc kubenswrapper[4698]: I0127 15:01:15.497720 4698 scope.go:117] "RemoveContainer" containerID="b54878f3ebf22d757737dff657bec9044e8f4ca80e203a84ba7cf9857a970d5e" Jan 27 15:01:15 crc kubenswrapper[4698]: I0127 15:01:15.547307 4698 scope.go:117] "RemoveContainer" containerID="c40e7ffe7d02b89f1cb9b4b3cb981c379719f73d33f5733cfd8bf8440fd3e88c" Jan 27 15:01:15 crc kubenswrapper[4698]: I0127 15:01:15.594539 4698 scope.go:117] "RemoveContainer" containerID="62cfa17a57cb4f2cc837346fc63009220328873da5795e9c32d0c6c18f79942c" Jan 27 15:01:15 crc kubenswrapper[4698]: I0127 15:01:15.620652 4698 scope.go:117] "RemoveContainer" containerID="0f8af07917b9b7cbb1731913ec10910b586814e8eb6b8c6ca216a2a0a625b4d0" Jan 27 15:01:15 crc kubenswrapper[4698]: I0127 15:01:15.673933 4698 scope.go:117] "RemoveContainer" containerID="8bb4e0815d0ac4a096bc17fef2ba3050a1185b2b2599b4ae4defbea15992e34b" Jan 27 15:01:15 crc kubenswrapper[4698]: I0127 15:01:15.696823 4698 scope.go:117] "RemoveContainer" containerID="9373c997f1b4ae436a2d2b8841f64218301be610c6cbf6e3b09e6dacba0a758f" Jan 27 15:01:15 crc kubenswrapper[4698]: I0127 15:01:15.746279 4698 scope.go:117] "RemoveContainer" containerID="7422e81a51ede7677b83e0c9fcb0181870c097b8526d9aba0e1d876ef6dc7e05" Jan 27 15:01:15 crc kubenswrapper[4698]: I0127 15:01:15.770390 4698 scope.go:117] "RemoveContainer" containerID="e55d3639a211f75e698d7b97ceebb2a57f072d956855cacea9fcc44682cc161f" Jan 27 15:01:15 crc kubenswrapper[4698]: I0127 15:01:15.801321 4698 scope.go:117] "RemoveContainer" containerID="265727278fbf945de722537b1404e08bbc0b9305440983e2cd7d930476a4c6f7" Jan 27 15:01:15 crc kubenswrapper[4698]: I0127 15:01:15.824297 4698 scope.go:117] "RemoveContainer" containerID="bb921dbd95bef7095b6e74171b60ea601d8550846f0e8b3b8d0a354e01678adc" Jan 27 15:01:15 crc kubenswrapper[4698]: I0127 15:01:15.847008 4698 scope.go:117] "RemoveContainer" containerID="d135360ba59b9db02a9b725bd9d1375367a9fdde4a506a22f41c92715a90eea9" Jan 27 15:01:15 crc 
kubenswrapper[4698]: I0127 15:01:15.871513 4698 scope.go:117] "RemoveContainer" containerID="95ad9e08131819ba2bed237d8545041ac5a605b3e245659c2f0fa015b2d18f0e" Jan 27 15:01:57 crc kubenswrapper[4698]: I0127 15:01:57.051536 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-sync-6z2gn"] Jan 27 15:01:57 crc kubenswrapper[4698]: I0127 15:01:57.070534 4698 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-sync-6z2gn"] Jan 27 15:01:59 crc kubenswrapper[4698]: I0127 15:01:59.003569 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b202d484-189a-4722-93b1-f72348e74aa4" path="/var/lib/kubelet/pods/b202d484-189a-4722-93b1-f72348e74aa4/volumes" Jan 27 15:02:16 crc kubenswrapper[4698]: I0127 15:02:16.096719 4698 scope.go:117] "RemoveContainer" containerID="ff17803b8805dd7d8fe5951bb07bcb464516773fae7756b0f504bda3b2b5f3b0" Jan 27 15:02:24 crc kubenswrapper[4698]: I0127 15:02:24.051110 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-pr5rl"] Jan 27 15:02:24 crc kubenswrapper[4698]: I0127 15:02:24.060877 4698 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-pr5rl"] Jan 27 15:02:25 crc kubenswrapper[4698]: I0127 15:02:25.003800 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2fcd6ee1-8d2c-490d-8fd4-b582c497f336" path="/var/lib/kubelet/pods/2fcd6ee1-8d2c-490d-8fd4-b582c497f336/volumes" Jan 27 15:02:48 crc kubenswrapper[4698]: I0127 15:02:48.030401 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-sync-s4fks"] Jan 27 15:02:48 crc kubenswrapper[4698]: I0127 15:02:48.040167 4698 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-sync-s4fks"] Jan 27 15:02:49 crc kubenswrapper[4698]: I0127 15:02:49.004156 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4e01edb9-1cd8-4c9a-a602-d35ff30d64fe" path="/var/lib/kubelet/pods/4e01edb9-1cd8-4c9a-a602-d35ff30d64fe/volumes" Jan 27 15:02:56 crc kubenswrapper[4698]: I0127 15:02:56.053705 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-sync-9jhwb"] Jan 27 15:02:56 crc kubenswrapper[4698]: I0127 15:02:56.063468 4698 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-sync-9jhwb"] Jan 27 15:02:57 crc kubenswrapper[4698]: I0127 15:02:57.005041 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="51ba2ef6-17ab-4974-a2c6-7f995343e24b" path="/var/lib/kubelet/pods/51ba2ef6-17ab-4974-a2c6-7f995343e24b/volumes" Jan 27 15:03:06 crc kubenswrapper[4698]: I0127 15:03:06.034983 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-sync-mcnmn"] Jan 27 15:03:06 crc kubenswrapper[4698]: I0127 15:03:06.044562 4698 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-sync-mcnmn"] Jan 27 15:03:07 crc kubenswrapper[4698]: I0127 15:03:07.232820 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="74946770-13e5-4777-a645-bb6bee73c277" path="/var/lib/kubelet/pods/74946770-13e5-4777-a645-bb6bee73c277/volumes" Jan 27 15:03:16 crc kubenswrapper[4698]: I0127 15:03:16.161518 4698 scope.go:117] "RemoveContainer" containerID="4df5c0943ab012c25673290e4c14b44b6814a4422d5308e388a20962116a9f96" Jan 27 15:03:16 crc kubenswrapper[4698]: I0127 15:03:16.196456 4698 scope.go:117] "RemoveContainer" containerID="04b54089d35cbda06ca5e8923f174f55591e3add4e7eb6362a6681256322cf0b" Jan 27 15:03:16 crc 
kubenswrapper[4698]: I0127 15:03:16.233833 4698 scope.go:117] "RemoveContainer" containerID="52df319c8fd5806e1b6d043e0c56391797aa95b270b1a3ecdf734c7dec22e5f1" Jan 27 15:03:16 crc kubenswrapper[4698]: I0127 15:03:16.314473 4698 scope.go:117] "RemoveContainer" containerID="d058386674e08a8c4f0250d995ae1b6ee9fdffdc7441edd08de85c6565e35ff7" Jan 27 15:03:27 crc kubenswrapper[4698]: I0127 15:03:27.451733 4698 patch_prober.go:28] interesting pod/machine-config-daemon-ndrd6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 15:03:27 crc kubenswrapper[4698]: I0127 15:03:27.452247 4698 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ndrd6" podUID="3e403fc5-7005-474c-8c75-b7906b481677" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 15:03:43 crc kubenswrapper[4698]: I0127 15:03:43.041846 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-db-create-dm2q4"] Jan 27 15:03:43 crc kubenswrapper[4698]: I0127 15:03:43.052973 4698 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-db-create-dm2q4"] Jan 27 15:03:44 crc kubenswrapper[4698]: I0127 15:03:44.030658 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-db-create-w54fd"] Jan 27 15:03:44 crc kubenswrapper[4698]: I0127 15:03:44.039845 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-db-create-hsmhb"] Jan 27 15:03:44 crc kubenswrapper[4698]: I0127 15:03:44.049093 4698 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-db-create-w54fd"] Jan 27 15:03:44 crc kubenswrapper[4698]: I0127 15:03:44.056925 4698 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-db-create-hsmhb"] Jan 27 15:03:45 crc kubenswrapper[4698]: I0127 15:03:45.006714 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="62ee2c69-0404-4a33-9a9e-9198c5f6bfa2" path="/var/lib/kubelet/pods/62ee2c69-0404-4a33-9a9e-9198c5f6bfa2/volumes" Jan 27 15:03:45 crc kubenswrapper[4698]: I0127 15:03:45.007755 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="764e47ae-dc1d-47fd-a528-c2c4d6b672b6" path="/var/lib/kubelet/pods/764e47ae-dc1d-47fd-a528-c2c4d6b672b6/volumes" Jan 27 15:03:45 crc kubenswrapper[4698]: I0127 15:03:45.008313 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f0f32c91-43ba-4123-bdd2-ee188ea6b9b1" path="/var/lib/kubelet/pods/f0f32c91-43ba-4123-bdd2-ee188ea6b9b1/volumes" Jan 27 15:03:45 crc kubenswrapper[4698]: I0127 15:03:45.033857 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-8b6c-account-create-update-g9bb7"] Jan 27 15:03:45 crc kubenswrapper[4698]: I0127 15:03:45.051138 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-bd6a-account-create-update-k79xt"] Jan 27 15:03:45 crc kubenswrapper[4698]: I0127 15:03:45.073710 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-6246-account-create-update-vh2pj"] Jan 27 15:03:45 crc kubenswrapper[4698]: I0127 15:03:45.095344 4698 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-8b6c-account-create-update-g9bb7"] Jan 27 15:03:45 crc 
kubenswrapper[4698]: I0127 15:03:45.106778 4698 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-6246-account-create-update-vh2pj"] Jan 27 15:03:45 crc kubenswrapper[4698]: I0127 15:03:45.116799 4698 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-bd6a-account-create-update-k79xt"] Jan 27 15:03:47 crc kubenswrapper[4698]: I0127 15:03:47.002404 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="009c9cd0-9c21-4d68-b1c0-8041ec2fc475" path="/var/lib/kubelet/pods/009c9cd0-9c21-4d68-b1c0-8041ec2fc475/volumes" Jan 27 15:03:47 crc kubenswrapper[4698]: I0127 15:03:47.003393 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="009fd100-fc78-40e8-8e85-2c2b14b22e9e" path="/var/lib/kubelet/pods/009fd100-fc78-40e8-8e85-2c2b14b22e9e/volumes" Jan 27 15:03:47 crc kubenswrapper[4698]: I0127 15:03:47.003996 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43111939-6107-4401-b6d6-94265dc21574" path="/var/lib/kubelet/pods/43111939-6107-4401-b6d6-94265dc21574/volumes" Jan 27 15:03:57 crc kubenswrapper[4698]: I0127 15:03:57.452609 4698 patch_prober.go:28] interesting pod/machine-config-daemon-ndrd6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 15:03:57 crc kubenswrapper[4698]: I0127 15:03:57.453868 4698 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ndrd6" podUID="3e403fc5-7005-474c-8c75-b7906b481677" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 15:04:01 crc kubenswrapper[4698]: I0127 15:04:01.097535 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-wncht"] Jan 27 15:04:01 crc kubenswrapper[4698]: E0127 15:04:01.098492 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0034b4e9-4bc5-48cb-8fcc-f98858f0fe15" containerName="keystone-cron" Jan 27 15:04:01 crc kubenswrapper[4698]: I0127 15:04:01.098510 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="0034b4e9-4bc5-48cb-8fcc-f98858f0fe15" containerName="keystone-cron" Jan 27 15:04:01 crc kubenswrapper[4698]: I0127 15:04:01.098804 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="0034b4e9-4bc5-48cb-8fcc-f98858f0fe15" containerName="keystone-cron" Jan 27 15:04:01 crc kubenswrapper[4698]: I0127 15:04:01.100449 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-wncht" Jan 27 15:04:01 crc kubenswrapper[4698]: I0127 15:04:01.119401 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-wncht"] Jan 27 15:04:01 crc kubenswrapper[4698]: I0127 15:04:01.257165 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vdwj4\" (UniqueName: \"kubernetes.io/projected/44bedc2a-851d-4a49-830b-0339cda60684-kube-api-access-vdwj4\") pod \"community-operators-wncht\" (UID: \"44bedc2a-851d-4a49-830b-0339cda60684\") " pod="openshift-marketplace/community-operators-wncht" Jan 27 15:04:01 crc kubenswrapper[4698]: I0127 15:04:01.257242 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/44bedc2a-851d-4a49-830b-0339cda60684-catalog-content\") pod \"community-operators-wncht\" (UID: \"44bedc2a-851d-4a49-830b-0339cda60684\") " pod="openshift-marketplace/community-operators-wncht" Jan 27 15:04:01 crc kubenswrapper[4698]: I0127 15:04:01.257427 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/44bedc2a-851d-4a49-830b-0339cda60684-utilities\") pod \"community-operators-wncht\" (UID: \"44bedc2a-851d-4a49-830b-0339cda60684\") " pod="openshift-marketplace/community-operators-wncht" Jan 27 15:04:01 crc kubenswrapper[4698]: I0127 15:04:01.359774 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vdwj4\" (UniqueName: \"kubernetes.io/projected/44bedc2a-851d-4a49-830b-0339cda60684-kube-api-access-vdwj4\") pod \"community-operators-wncht\" (UID: \"44bedc2a-851d-4a49-830b-0339cda60684\") " pod="openshift-marketplace/community-operators-wncht" Jan 27 15:04:01 crc kubenswrapper[4698]: I0127 15:04:01.359877 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/44bedc2a-851d-4a49-830b-0339cda60684-catalog-content\") pod \"community-operators-wncht\" (UID: \"44bedc2a-851d-4a49-830b-0339cda60684\") " pod="openshift-marketplace/community-operators-wncht" Jan 27 15:04:01 crc kubenswrapper[4698]: I0127 15:04:01.359942 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/44bedc2a-851d-4a49-830b-0339cda60684-utilities\") pod \"community-operators-wncht\" (UID: \"44bedc2a-851d-4a49-830b-0339cda60684\") " pod="openshift-marketplace/community-operators-wncht" Jan 27 15:04:01 crc kubenswrapper[4698]: I0127 15:04:01.360573 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/44bedc2a-851d-4a49-830b-0339cda60684-utilities\") pod \"community-operators-wncht\" (UID: \"44bedc2a-851d-4a49-830b-0339cda60684\") " pod="openshift-marketplace/community-operators-wncht" Jan 27 15:04:01 crc kubenswrapper[4698]: I0127 15:04:01.361358 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/44bedc2a-851d-4a49-830b-0339cda60684-catalog-content\") pod \"community-operators-wncht\" (UID: \"44bedc2a-851d-4a49-830b-0339cda60684\") " pod="openshift-marketplace/community-operators-wncht" Jan 27 15:04:01 crc kubenswrapper[4698]: I0127 15:04:01.384514 4698 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-vdwj4\" (UniqueName: \"kubernetes.io/projected/44bedc2a-851d-4a49-830b-0339cda60684-kube-api-access-vdwj4\") pod \"community-operators-wncht\" (UID: \"44bedc2a-851d-4a49-830b-0339cda60684\") " pod="openshift-marketplace/community-operators-wncht" Jan 27 15:04:01 crc kubenswrapper[4698]: I0127 15:04:01.428472 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-wncht" Jan 27 15:04:02 crc kubenswrapper[4698]: I0127 15:04:02.039239 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-wncht"] Jan 27 15:04:02 crc kubenswrapper[4698]: I0127 15:04:02.858897 4698 generic.go:334] "Generic (PLEG): container finished" podID="44bedc2a-851d-4a49-830b-0339cda60684" containerID="64dc24b0c0b3d347d0f7ab940559b29a67e1754ef5e6c421e61acf1cfcd5f339" exitCode=0 Jan 27 15:04:02 crc kubenswrapper[4698]: I0127 15:04:02.858965 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wncht" event={"ID":"44bedc2a-851d-4a49-830b-0339cda60684","Type":"ContainerDied","Data":"64dc24b0c0b3d347d0f7ab940559b29a67e1754ef5e6c421e61acf1cfcd5f339"} Jan 27 15:04:02 crc kubenswrapper[4698]: I0127 15:04:02.859024 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wncht" event={"ID":"44bedc2a-851d-4a49-830b-0339cda60684","Type":"ContainerStarted","Data":"50fc93ac107fab6412c44833df38070c2d5f48179318a1c18bcd9c86d42648f2"} Jan 27 15:04:04 crc kubenswrapper[4698]: I0127 15:04:04.879285 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wncht" event={"ID":"44bedc2a-851d-4a49-830b-0339cda60684","Type":"ContainerStarted","Data":"baf460045e9a1d8505e44a658eb9be9a1d404e4edd7052a7d33098ab2f1d90aa"} Jan 27 15:04:06 crc kubenswrapper[4698]: I0127 15:04:06.900318 4698 generic.go:334] "Generic (PLEG): container finished" podID="44bedc2a-851d-4a49-830b-0339cda60684" containerID="baf460045e9a1d8505e44a658eb9be9a1d404e4edd7052a7d33098ab2f1d90aa" exitCode=0 Jan 27 15:04:06 crc kubenswrapper[4698]: I0127 15:04:06.900384 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wncht" event={"ID":"44bedc2a-851d-4a49-830b-0339cda60684","Type":"ContainerDied","Data":"baf460045e9a1d8505e44a658eb9be9a1d404e4edd7052a7d33098ab2f1d90aa"} Jan 27 15:04:07 crc kubenswrapper[4698]: I0127 15:04:07.913516 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wncht" event={"ID":"44bedc2a-851d-4a49-830b-0339cda60684","Type":"ContainerStarted","Data":"f21e80e14b92e83529742a9e52e45a69a608ce5325adca208bc8ec4702d9f4a5"} Jan 27 15:04:07 crc kubenswrapper[4698]: I0127 15:04:07.938027 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-wncht" podStartSLOduration=2.24274906 podStartE2EDuration="6.93800592s" podCreationTimestamp="2026-01-27 15:04:01 +0000 UTC" firstStartedPulling="2026-01-27 15:04:02.860833906 +0000 UTC m=+2098.537611371" lastFinishedPulling="2026-01-27 15:04:07.556090766 +0000 UTC m=+2103.232868231" observedRunningTime="2026-01-27 15:04:07.930235675 +0000 UTC m=+2103.607013160" watchObservedRunningTime="2026-01-27 15:04:07.93800592 +0000 UTC m=+2103.614783385" Jan 27 15:04:11 crc kubenswrapper[4698]: I0127 15:04:11.429563 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="" pod="openshift-marketplace/community-operators-wncht" Jan 27 15:04:11 crc kubenswrapper[4698]: I0127 15:04:11.431115 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-wncht" Jan 27 15:04:11 crc kubenswrapper[4698]: I0127 15:04:11.482590 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-wncht" Jan 27 15:04:16 crc kubenswrapper[4698]: I0127 15:04:16.434164 4698 scope.go:117] "RemoveContainer" containerID="b6b8e0715c07a41963e565990299e55a8c9d2831a103fb08278ca746627a2b52" Jan 27 15:04:16 crc kubenswrapper[4698]: I0127 15:04:16.775428 4698 scope.go:117] "RemoveContainer" containerID="e4b8822cb5a44281ad0cbb064c96553a13c8d0c5b595f8a4f5fdefd6a848ec65" Jan 27 15:04:16 crc kubenswrapper[4698]: I0127 15:04:16.818007 4698 scope.go:117] "RemoveContainer" containerID="85d889f5b0f3054591c2b7e9106ff936667404824318bebf2f46bee57210ea49" Jan 27 15:04:16 crc kubenswrapper[4698]: I0127 15:04:16.850306 4698 scope.go:117] "RemoveContainer" containerID="cb2a6af22964f39be2be6e60353451c3e17d6edbcf4dbcafd8fac08e4a9011f2" Jan 27 15:04:16 crc kubenswrapper[4698]: I0127 15:04:16.919058 4698 scope.go:117] "RemoveContainer" containerID="a9ae0d3c76bcfaf8208e378d07391f60604654a9ac8d22ca1c1f582c25730434" Jan 27 15:04:16 crc kubenswrapper[4698]: I0127 15:04:16.976878 4698 scope.go:117] "RemoveContainer" containerID="6046701ae2a416bf70ff30db0bc5f23fd015eaab03590656ae2cbeb34ded580a" Jan 27 15:04:21 crc kubenswrapper[4698]: I0127 15:04:21.475725 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-wncht" Jan 27 15:04:21 crc kubenswrapper[4698]: I0127 15:04:21.534315 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-wncht"] Jan 27 15:04:22 crc kubenswrapper[4698]: I0127 15:04:22.071023 4698 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-wncht" podUID="44bedc2a-851d-4a49-830b-0339cda60684" containerName="registry-server" containerID="cri-o://f21e80e14b92e83529742a9e52e45a69a608ce5325adca208bc8ec4702d9f4a5" gracePeriod=2 Jan 27 15:04:22 crc kubenswrapper[4698]: I0127 15:04:22.809378 4698 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-wncht" Jan 27 15:04:22 crc kubenswrapper[4698]: I0127 15:04:22.904452 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/44bedc2a-851d-4a49-830b-0339cda60684-catalog-content\") pod \"44bedc2a-851d-4a49-830b-0339cda60684\" (UID: \"44bedc2a-851d-4a49-830b-0339cda60684\") " Jan 27 15:04:22 crc kubenswrapper[4698]: I0127 15:04:22.904861 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vdwj4\" (UniqueName: \"kubernetes.io/projected/44bedc2a-851d-4a49-830b-0339cda60684-kube-api-access-vdwj4\") pod \"44bedc2a-851d-4a49-830b-0339cda60684\" (UID: \"44bedc2a-851d-4a49-830b-0339cda60684\") " Jan 27 15:04:22 crc kubenswrapper[4698]: I0127 15:04:22.905041 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/44bedc2a-851d-4a49-830b-0339cda60684-utilities\") pod \"44bedc2a-851d-4a49-830b-0339cda60684\" (UID: \"44bedc2a-851d-4a49-830b-0339cda60684\") " Jan 27 15:04:22 crc kubenswrapper[4698]: I0127 15:04:22.905971 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/44bedc2a-851d-4a49-830b-0339cda60684-utilities" (OuterVolumeSpecName: "utilities") pod "44bedc2a-851d-4a49-830b-0339cda60684" (UID: "44bedc2a-851d-4a49-830b-0339cda60684"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 15:04:22 crc kubenswrapper[4698]: I0127 15:04:22.907283 4698 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/44bedc2a-851d-4a49-830b-0339cda60684-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 15:04:22 crc kubenswrapper[4698]: I0127 15:04:22.911260 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44bedc2a-851d-4a49-830b-0339cda60684-kube-api-access-vdwj4" (OuterVolumeSpecName: "kube-api-access-vdwj4") pod "44bedc2a-851d-4a49-830b-0339cda60684" (UID: "44bedc2a-851d-4a49-830b-0339cda60684"). InnerVolumeSpecName "kube-api-access-vdwj4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 15:04:22 crc kubenswrapper[4698]: I0127 15:04:22.967710 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/44bedc2a-851d-4a49-830b-0339cda60684-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "44bedc2a-851d-4a49-830b-0339cda60684" (UID: "44bedc2a-851d-4a49-830b-0339cda60684"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 15:04:23 crc kubenswrapper[4698]: I0127 15:04:23.010560 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vdwj4\" (UniqueName: \"kubernetes.io/projected/44bedc2a-851d-4a49-830b-0339cda60684-kube-api-access-vdwj4\") on node \"crc\" DevicePath \"\"" Jan 27 15:04:23 crc kubenswrapper[4698]: I0127 15:04:23.010932 4698 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/44bedc2a-851d-4a49-830b-0339cda60684-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 15:04:23 crc kubenswrapper[4698]: I0127 15:04:23.083107 4698 generic.go:334] "Generic (PLEG): container finished" podID="44bedc2a-851d-4a49-830b-0339cda60684" containerID="f21e80e14b92e83529742a9e52e45a69a608ce5325adca208bc8ec4702d9f4a5" exitCode=0 Jan 27 15:04:23 crc kubenswrapper[4698]: I0127 15:04:23.083206 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-wncht" Jan 27 15:04:23 crc kubenswrapper[4698]: I0127 15:04:23.083189 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wncht" event={"ID":"44bedc2a-851d-4a49-830b-0339cda60684","Type":"ContainerDied","Data":"f21e80e14b92e83529742a9e52e45a69a608ce5325adca208bc8ec4702d9f4a5"} Jan 27 15:04:23 crc kubenswrapper[4698]: I0127 15:04:23.083255 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wncht" event={"ID":"44bedc2a-851d-4a49-830b-0339cda60684","Type":"ContainerDied","Data":"50fc93ac107fab6412c44833df38070c2d5f48179318a1c18bcd9c86d42648f2"} Jan 27 15:04:23 crc kubenswrapper[4698]: I0127 15:04:23.083309 4698 scope.go:117] "RemoveContainer" containerID="f21e80e14b92e83529742a9e52e45a69a608ce5325adca208bc8ec4702d9f4a5" Jan 27 15:04:23 crc kubenswrapper[4698]: I0127 15:04:23.111849 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-wncht"] Jan 27 15:04:23 crc kubenswrapper[4698]: I0127 15:04:23.113228 4698 scope.go:117] "RemoveContainer" containerID="baf460045e9a1d8505e44a658eb9be9a1d404e4edd7052a7d33098ab2f1d90aa" Jan 27 15:04:23 crc kubenswrapper[4698]: I0127 15:04:23.120828 4698 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-wncht"] Jan 27 15:04:23 crc kubenswrapper[4698]: I0127 15:04:23.135578 4698 scope.go:117] "RemoveContainer" containerID="64dc24b0c0b3d347d0f7ab940559b29a67e1754ef5e6c421e61acf1cfcd5f339" Jan 27 15:04:23 crc kubenswrapper[4698]: I0127 15:04:23.180928 4698 scope.go:117] "RemoveContainer" containerID="f21e80e14b92e83529742a9e52e45a69a608ce5325adca208bc8ec4702d9f4a5" Jan 27 15:04:23 crc kubenswrapper[4698]: E0127 15:04:23.181467 4698 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f21e80e14b92e83529742a9e52e45a69a608ce5325adca208bc8ec4702d9f4a5\": container with ID starting with f21e80e14b92e83529742a9e52e45a69a608ce5325adca208bc8ec4702d9f4a5 not found: ID does not exist" containerID="f21e80e14b92e83529742a9e52e45a69a608ce5325adca208bc8ec4702d9f4a5" Jan 27 15:04:23 crc kubenswrapper[4698]: I0127 15:04:23.181513 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f21e80e14b92e83529742a9e52e45a69a608ce5325adca208bc8ec4702d9f4a5"} err="failed to get container status 
\"f21e80e14b92e83529742a9e52e45a69a608ce5325adca208bc8ec4702d9f4a5\": rpc error: code = NotFound desc = could not find container \"f21e80e14b92e83529742a9e52e45a69a608ce5325adca208bc8ec4702d9f4a5\": container with ID starting with f21e80e14b92e83529742a9e52e45a69a608ce5325adca208bc8ec4702d9f4a5 not found: ID does not exist" Jan 27 15:04:23 crc kubenswrapper[4698]: I0127 15:04:23.181542 4698 scope.go:117] "RemoveContainer" containerID="baf460045e9a1d8505e44a658eb9be9a1d404e4edd7052a7d33098ab2f1d90aa" Jan 27 15:04:23 crc kubenswrapper[4698]: E0127 15:04:23.181942 4698 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"baf460045e9a1d8505e44a658eb9be9a1d404e4edd7052a7d33098ab2f1d90aa\": container with ID starting with baf460045e9a1d8505e44a658eb9be9a1d404e4edd7052a7d33098ab2f1d90aa not found: ID does not exist" containerID="baf460045e9a1d8505e44a658eb9be9a1d404e4edd7052a7d33098ab2f1d90aa" Jan 27 15:04:23 crc kubenswrapper[4698]: I0127 15:04:23.181980 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"baf460045e9a1d8505e44a658eb9be9a1d404e4edd7052a7d33098ab2f1d90aa"} err="failed to get container status \"baf460045e9a1d8505e44a658eb9be9a1d404e4edd7052a7d33098ab2f1d90aa\": rpc error: code = NotFound desc = could not find container \"baf460045e9a1d8505e44a658eb9be9a1d404e4edd7052a7d33098ab2f1d90aa\": container with ID starting with baf460045e9a1d8505e44a658eb9be9a1d404e4edd7052a7d33098ab2f1d90aa not found: ID does not exist" Jan 27 15:04:23 crc kubenswrapper[4698]: I0127 15:04:23.182009 4698 scope.go:117] "RemoveContainer" containerID="64dc24b0c0b3d347d0f7ab940559b29a67e1754ef5e6c421e61acf1cfcd5f339" Jan 27 15:04:23 crc kubenswrapper[4698]: E0127 15:04:23.182544 4698 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"64dc24b0c0b3d347d0f7ab940559b29a67e1754ef5e6c421e61acf1cfcd5f339\": container with ID starting with 64dc24b0c0b3d347d0f7ab940559b29a67e1754ef5e6c421e61acf1cfcd5f339 not found: ID does not exist" containerID="64dc24b0c0b3d347d0f7ab940559b29a67e1754ef5e6c421e61acf1cfcd5f339" Jan 27 15:04:23 crc kubenswrapper[4698]: I0127 15:04:23.182572 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"64dc24b0c0b3d347d0f7ab940559b29a67e1754ef5e6c421e61acf1cfcd5f339"} err="failed to get container status \"64dc24b0c0b3d347d0f7ab940559b29a67e1754ef5e6c421e61acf1cfcd5f339\": rpc error: code = NotFound desc = could not find container \"64dc24b0c0b3d347d0f7ab940559b29a67e1754ef5e6c421e61acf1cfcd5f339\": container with ID starting with 64dc24b0c0b3d347d0f7ab940559b29a67e1754ef5e6c421e61acf1cfcd5f339 not found: ID does not exist" Jan 27 15:04:25 crc kubenswrapper[4698]: I0127 15:04:25.005116 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44bedc2a-851d-4a49-830b-0339cda60684" path="/var/lib/kubelet/pods/44bedc2a-851d-4a49-830b-0339cda60684/volumes" Jan 27 15:04:27 crc kubenswrapper[4698]: I0127 15:04:27.451864 4698 patch_prober.go:28] interesting pod/machine-config-daemon-ndrd6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 15:04:27 crc kubenswrapper[4698]: I0127 15:04:27.452176 4698 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-ndrd6" podUID="3e403fc5-7005-474c-8c75-b7906b481677" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 15:04:27 crc kubenswrapper[4698]: I0127 15:04:27.452222 4698 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-ndrd6" Jan 27 15:04:27 crc kubenswrapper[4698]: I0127 15:04:27.453100 4698 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"cfb3ddd3f31bb32b30aa65cbbe04fb40e1a0fd8b8faea20785560325760fdc02"} pod="openshift-machine-config-operator/machine-config-daemon-ndrd6" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 27 15:04:27 crc kubenswrapper[4698]: I0127 15:04:27.453157 4698 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-ndrd6" podUID="3e403fc5-7005-474c-8c75-b7906b481677" containerName="machine-config-daemon" containerID="cri-o://cfb3ddd3f31bb32b30aa65cbbe04fb40e1a0fd8b8faea20785560325760fdc02" gracePeriod=600 Jan 27 15:04:28 crc kubenswrapper[4698]: I0127 15:04:28.141483 4698 generic.go:334] "Generic (PLEG): container finished" podID="3e403fc5-7005-474c-8c75-b7906b481677" containerID="cfb3ddd3f31bb32b30aa65cbbe04fb40e1a0fd8b8faea20785560325760fdc02" exitCode=0 Jan 27 15:04:28 crc kubenswrapper[4698]: I0127 15:04:28.141851 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ndrd6" event={"ID":"3e403fc5-7005-474c-8c75-b7906b481677","Type":"ContainerDied","Data":"cfb3ddd3f31bb32b30aa65cbbe04fb40e1a0fd8b8faea20785560325760fdc02"} Jan 27 15:04:28 crc kubenswrapper[4698]: I0127 15:04:28.141890 4698 scope.go:117] "RemoveContainer" containerID="b5e738b87b0ce5279fe16e0f062d8436b14ec328939703048a41953bc1584a16" Jan 27 15:04:29 crc kubenswrapper[4698]: I0127 15:04:29.046089 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-sync-vp87x"] Jan 27 15:04:29 crc kubenswrapper[4698]: I0127 15:04:29.059018 4698 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-sync-vp87x"] Jan 27 15:04:29 crc kubenswrapper[4698]: I0127 15:04:29.151161 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ndrd6" event={"ID":"3e403fc5-7005-474c-8c75-b7906b481677","Type":"ContainerStarted","Data":"5c1ae6b1cb3d3c05ff1607e2f7102b1797b674db87b62b13ac1c5e96a538f6b8"} Jan 27 15:04:31 crc kubenswrapper[4698]: I0127 15:04:31.003006 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="992034d3-1c4d-4e83-9641-12543dd3df24" path="/var/lib/kubelet/pods/992034d3-1c4d-4e83-9641-12543dd3df24/volumes" Jan 27 15:05:17 crc kubenswrapper[4698]: I0127 15:05:17.121521 4698 scope.go:117] "RemoveContainer" containerID="123d4d06a0f8addc043f78310758be0fb0de464dcf972f4437ef480c85eff7a4" Jan 27 15:06:00 crc kubenswrapper[4698]: I0127 15:06:00.065984 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-x8m8s"] Jan 27 15:06:00 crc kubenswrapper[4698]: I0127 15:06:00.076179 4698 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-x8m8s"] Jan 27 15:06:01 crc kubenswrapper[4698]: I0127 
15:06:01.005177 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9a268a3b-da75-4a08-a9a3-b097f2066a27" path="/var/lib/kubelet/pods/9a268a3b-da75-4a08-a9a3-b097f2066a27/volumes" Jan 27 15:06:03 crc kubenswrapper[4698]: I0127 15:06:03.485299 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-7gdv5"] Jan 27 15:06:03 crc kubenswrapper[4698]: E0127 15:06:03.486072 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="44bedc2a-851d-4a49-830b-0339cda60684" containerName="registry-server" Jan 27 15:06:03 crc kubenswrapper[4698]: I0127 15:06:03.486090 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="44bedc2a-851d-4a49-830b-0339cda60684" containerName="registry-server" Jan 27 15:06:03 crc kubenswrapper[4698]: E0127 15:06:03.486117 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="44bedc2a-851d-4a49-830b-0339cda60684" containerName="extract-utilities" Jan 27 15:06:03 crc kubenswrapper[4698]: I0127 15:06:03.486125 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="44bedc2a-851d-4a49-830b-0339cda60684" containerName="extract-utilities" Jan 27 15:06:03 crc kubenswrapper[4698]: E0127 15:06:03.486144 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="44bedc2a-851d-4a49-830b-0339cda60684" containerName="extract-content" Jan 27 15:06:03 crc kubenswrapper[4698]: I0127 15:06:03.486152 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="44bedc2a-851d-4a49-830b-0339cda60684" containerName="extract-content" Jan 27 15:06:03 crc kubenswrapper[4698]: I0127 15:06:03.486369 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="44bedc2a-851d-4a49-830b-0339cda60684" containerName="registry-server" Jan 27 15:06:03 crc kubenswrapper[4698]: I0127 15:06:03.489929 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-7gdv5" Jan 27 15:06:03 crc kubenswrapper[4698]: I0127 15:06:03.501276 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-7gdv5"] Jan 27 15:06:03 crc kubenswrapper[4698]: I0127 15:06:03.639209 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m4gt4\" (UniqueName: \"kubernetes.io/projected/1f227fb9-9746-4596-9b2c-a38f3cd113fd-kube-api-access-m4gt4\") pod \"certified-operators-7gdv5\" (UID: \"1f227fb9-9746-4596-9b2c-a38f3cd113fd\") " pod="openshift-marketplace/certified-operators-7gdv5" Jan 27 15:06:03 crc kubenswrapper[4698]: I0127 15:06:03.639307 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1f227fb9-9746-4596-9b2c-a38f3cd113fd-catalog-content\") pod \"certified-operators-7gdv5\" (UID: \"1f227fb9-9746-4596-9b2c-a38f3cd113fd\") " pod="openshift-marketplace/certified-operators-7gdv5" Jan 27 15:06:03 crc kubenswrapper[4698]: I0127 15:06:03.639792 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1f227fb9-9746-4596-9b2c-a38f3cd113fd-utilities\") pod \"certified-operators-7gdv5\" (UID: \"1f227fb9-9746-4596-9b2c-a38f3cd113fd\") " pod="openshift-marketplace/certified-operators-7gdv5" Jan 27 15:06:03 crc kubenswrapper[4698]: I0127 15:06:03.742015 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m4gt4\" (UniqueName: \"kubernetes.io/projected/1f227fb9-9746-4596-9b2c-a38f3cd113fd-kube-api-access-m4gt4\") pod \"certified-operators-7gdv5\" (UID: \"1f227fb9-9746-4596-9b2c-a38f3cd113fd\") " pod="openshift-marketplace/certified-operators-7gdv5" Jan 27 15:06:03 crc kubenswrapper[4698]: I0127 15:06:03.742107 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1f227fb9-9746-4596-9b2c-a38f3cd113fd-catalog-content\") pod \"certified-operators-7gdv5\" (UID: \"1f227fb9-9746-4596-9b2c-a38f3cd113fd\") " pod="openshift-marketplace/certified-operators-7gdv5" Jan 27 15:06:03 crc kubenswrapper[4698]: I0127 15:06:03.742166 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1f227fb9-9746-4596-9b2c-a38f3cd113fd-utilities\") pod \"certified-operators-7gdv5\" (UID: \"1f227fb9-9746-4596-9b2c-a38f3cd113fd\") " pod="openshift-marketplace/certified-operators-7gdv5" Jan 27 15:06:03 crc kubenswrapper[4698]: I0127 15:06:03.742616 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1f227fb9-9746-4596-9b2c-a38f3cd113fd-utilities\") pod \"certified-operators-7gdv5\" (UID: \"1f227fb9-9746-4596-9b2c-a38f3cd113fd\") " pod="openshift-marketplace/certified-operators-7gdv5" Jan 27 15:06:03 crc kubenswrapper[4698]: I0127 15:06:03.743189 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1f227fb9-9746-4596-9b2c-a38f3cd113fd-catalog-content\") pod \"certified-operators-7gdv5\" (UID: \"1f227fb9-9746-4596-9b2c-a38f3cd113fd\") " pod="openshift-marketplace/certified-operators-7gdv5" Jan 27 15:06:03 crc kubenswrapper[4698]: I0127 15:06:03.767537 4698 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-m4gt4\" (UniqueName: \"kubernetes.io/projected/1f227fb9-9746-4596-9b2c-a38f3cd113fd-kube-api-access-m4gt4\") pod \"certified-operators-7gdv5\" (UID: \"1f227fb9-9746-4596-9b2c-a38f3cd113fd\") " pod="openshift-marketplace/certified-operators-7gdv5" Jan 27 15:06:03 crc kubenswrapper[4698]: I0127 15:06:03.809531 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7gdv5" Jan 27 15:06:04 crc kubenswrapper[4698]: I0127 15:06:04.358628 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-7gdv5"] Jan 27 15:06:05 crc kubenswrapper[4698]: I0127 15:06:05.059617 4698 generic.go:334] "Generic (PLEG): container finished" podID="1f227fb9-9746-4596-9b2c-a38f3cd113fd" containerID="41b3f49396b73e9dea4c5a10015e426f4674006b044b1a77a3b2e1581f5aa39e" exitCode=0 Jan 27 15:06:05 crc kubenswrapper[4698]: I0127 15:06:05.059848 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7gdv5" event={"ID":"1f227fb9-9746-4596-9b2c-a38f3cd113fd","Type":"ContainerDied","Data":"41b3f49396b73e9dea4c5a10015e426f4674006b044b1a77a3b2e1581f5aa39e"} Jan 27 15:06:05 crc kubenswrapper[4698]: I0127 15:06:05.060227 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7gdv5" event={"ID":"1f227fb9-9746-4596-9b2c-a38f3cd113fd","Type":"ContainerStarted","Data":"ff04e4bcc42f1bb7aead6a67b0a353857218846e8d50ac6b30315b0b2cafdc18"} Jan 27 15:06:05 crc kubenswrapper[4698]: I0127 15:06:05.063005 4698 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 27 15:06:06 crc kubenswrapper[4698]: I0127 15:06:06.072740 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7gdv5" event={"ID":"1f227fb9-9746-4596-9b2c-a38f3cd113fd","Type":"ContainerStarted","Data":"0d5459c97790c6d8d8699a1e3f0f22db1af717b51226208f654670f9cd0d6ce9"} Jan 27 15:06:09 crc kubenswrapper[4698]: I0127 15:06:09.102612 4698 generic.go:334] "Generic (PLEG): container finished" podID="1f227fb9-9746-4596-9b2c-a38f3cd113fd" containerID="0d5459c97790c6d8d8699a1e3f0f22db1af717b51226208f654670f9cd0d6ce9" exitCode=0 Jan 27 15:06:09 crc kubenswrapper[4698]: I0127 15:06:09.102685 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7gdv5" event={"ID":"1f227fb9-9746-4596-9b2c-a38f3cd113fd","Type":"ContainerDied","Data":"0d5459c97790c6d8d8699a1e3f0f22db1af717b51226208f654670f9cd0d6ce9"} Jan 27 15:06:10 crc kubenswrapper[4698]: I0127 15:06:10.116031 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7gdv5" event={"ID":"1f227fb9-9746-4596-9b2c-a38f3cd113fd","Type":"ContainerStarted","Data":"427527b2ff28605ad5c8d5449fe5d869dd4844b0f4d366462982e10795e0b02a"} Jan 27 15:06:10 crc kubenswrapper[4698]: I0127 15:06:10.144076 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-7gdv5" podStartSLOduration=2.59920527 podStartE2EDuration="7.144050961s" podCreationTimestamp="2026-01-27 15:06:03 +0000 UTC" firstStartedPulling="2026-01-27 15:06:05.06235372 +0000 UTC m=+2220.739131185" lastFinishedPulling="2026-01-27 15:06:09.607199411 +0000 UTC m=+2225.283976876" observedRunningTime="2026-01-27 15:06:10.134861529 +0000 UTC m=+2225.811638984" watchObservedRunningTime="2026-01-27 
15:06:10.144050961 +0000 UTC m=+2225.820828426" Jan 27 15:06:13 crc kubenswrapper[4698]: I0127 15:06:13.810433 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-7gdv5" Jan 27 15:06:13 crc kubenswrapper[4698]: I0127 15:06:13.810833 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-7gdv5" Jan 27 15:06:13 crc kubenswrapper[4698]: I0127 15:06:13.857455 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-7gdv5" Jan 27 15:06:14 crc kubenswrapper[4698]: I0127 15:06:14.191516 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-7gdv5" Jan 27 15:06:14 crc kubenswrapper[4698]: I0127 15:06:14.242869 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-7gdv5"] Jan 27 15:06:16 crc kubenswrapper[4698]: I0127 15:06:16.165466 4698 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-7gdv5" podUID="1f227fb9-9746-4596-9b2c-a38f3cd113fd" containerName="registry-server" containerID="cri-o://427527b2ff28605ad5c8d5449fe5d869dd4844b0f4d366462982e10795e0b02a" gracePeriod=2 Jan 27 15:06:16 crc kubenswrapper[4698]: I0127 15:06:16.665806 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7gdv5" Jan 27 15:06:16 crc kubenswrapper[4698]: I0127 15:06:16.818351 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1f227fb9-9746-4596-9b2c-a38f3cd113fd-catalog-content\") pod \"1f227fb9-9746-4596-9b2c-a38f3cd113fd\" (UID: \"1f227fb9-9746-4596-9b2c-a38f3cd113fd\") " Jan 27 15:06:16 crc kubenswrapper[4698]: I0127 15:06:16.818421 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m4gt4\" (UniqueName: \"kubernetes.io/projected/1f227fb9-9746-4596-9b2c-a38f3cd113fd-kube-api-access-m4gt4\") pod \"1f227fb9-9746-4596-9b2c-a38f3cd113fd\" (UID: \"1f227fb9-9746-4596-9b2c-a38f3cd113fd\") " Jan 27 15:06:16 crc kubenswrapper[4698]: I0127 15:06:16.818547 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1f227fb9-9746-4596-9b2c-a38f3cd113fd-utilities\") pod \"1f227fb9-9746-4596-9b2c-a38f3cd113fd\" (UID: \"1f227fb9-9746-4596-9b2c-a38f3cd113fd\") " Jan 27 15:06:16 crc kubenswrapper[4698]: I0127 15:06:16.819502 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1f227fb9-9746-4596-9b2c-a38f3cd113fd-utilities" (OuterVolumeSpecName: "utilities") pod "1f227fb9-9746-4596-9b2c-a38f3cd113fd" (UID: "1f227fb9-9746-4596-9b2c-a38f3cd113fd"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 15:06:16 crc kubenswrapper[4698]: I0127 15:06:16.826007 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1f227fb9-9746-4596-9b2c-a38f3cd113fd-kube-api-access-m4gt4" (OuterVolumeSpecName: "kube-api-access-m4gt4") pod "1f227fb9-9746-4596-9b2c-a38f3cd113fd" (UID: "1f227fb9-9746-4596-9b2c-a38f3cd113fd"). InnerVolumeSpecName "kube-api-access-m4gt4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 15:06:16 crc kubenswrapper[4698]: I0127 15:06:16.874594 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1f227fb9-9746-4596-9b2c-a38f3cd113fd-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1f227fb9-9746-4596-9b2c-a38f3cd113fd" (UID: "1f227fb9-9746-4596-9b2c-a38f3cd113fd"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 15:06:16 crc kubenswrapper[4698]: I0127 15:06:16.921786 4698 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1f227fb9-9746-4596-9b2c-a38f3cd113fd-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 15:06:16 crc kubenswrapper[4698]: I0127 15:06:16.921839 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m4gt4\" (UniqueName: \"kubernetes.io/projected/1f227fb9-9746-4596-9b2c-a38f3cd113fd-kube-api-access-m4gt4\") on node \"crc\" DevicePath \"\"" Jan 27 15:06:16 crc kubenswrapper[4698]: I0127 15:06:16.921855 4698 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1f227fb9-9746-4596-9b2c-a38f3cd113fd-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 15:06:17 crc kubenswrapper[4698]: I0127 15:06:17.176723 4698 generic.go:334] "Generic (PLEG): container finished" podID="1f227fb9-9746-4596-9b2c-a38f3cd113fd" containerID="427527b2ff28605ad5c8d5449fe5d869dd4844b0f4d366462982e10795e0b02a" exitCode=0 Jan 27 15:06:17 crc kubenswrapper[4698]: I0127 15:06:17.176775 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7gdv5" event={"ID":"1f227fb9-9746-4596-9b2c-a38f3cd113fd","Type":"ContainerDied","Data":"427527b2ff28605ad5c8d5449fe5d869dd4844b0f4d366462982e10795e0b02a"} Jan 27 15:06:17 crc kubenswrapper[4698]: I0127 15:06:17.176807 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7gdv5" event={"ID":"1f227fb9-9746-4596-9b2c-a38f3cd113fd","Type":"ContainerDied","Data":"ff04e4bcc42f1bb7aead6a67b0a353857218846e8d50ac6b30315b0b2cafdc18"} Jan 27 15:06:17 crc kubenswrapper[4698]: I0127 15:06:17.176828 4698 scope.go:117] "RemoveContainer" containerID="427527b2ff28605ad5c8d5449fe5d869dd4844b0f4d366462982e10795e0b02a" Jan 27 15:06:17 crc kubenswrapper[4698]: I0127 15:06:17.177031 4698 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-7gdv5" Jan 27 15:06:17 crc kubenswrapper[4698]: I0127 15:06:17.197713 4698 scope.go:117] "RemoveContainer" containerID="157b60d7a17461e1c42c81e8fd2f48839e2e141d834579077f1f226aff7c96da" Jan 27 15:06:17 crc kubenswrapper[4698]: I0127 15:06:17.207896 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-7gdv5"] Jan 27 15:06:17 crc kubenswrapper[4698]: I0127 15:06:17.216347 4698 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-7gdv5"] Jan 27 15:06:17 crc kubenswrapper[4698]: I0127 15:06:17.219219 4698 scope.go:117] "RemoveContainer" containerID="0d5459c97790c6d8d8699a1e3f0f22db1af717b51226208f654670f9cd0d6ce9" Jan 27 15:06:17 crc kubenswrapper[4698]: I0127 15:06:17.293516 4698 scope.go:117] "RemoveContainer" containerID="41b3f49396b73e9dea4c5a10015e426f4674006b044b1a77a3b2e1581f5aa39e" Jan 27 15:06:17 crc kubenswrapper[4698]: I0127 15:06:17.321223 4698 scope.go:117] "RemoveContainer" containerID="427527b2ff28605ad5c8d5449fe5d869dd4844b0f4d366462982e10795e0b02a" Jan 27 15:06:17 crc kubenswrapper[4698]: E0127 15:06:17.322495 4698 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"427527b2ff28605ad5c8d5449fe5d869dd4844b0f4d366462982e10795e0b02a\": container with ID starting with 427527b2ff28605ad5c8d5449fe5d869dd4844b0f4d366462982e10795e0b02a not found: ID does not exist" containerID="427527b2ff28605ad5c8d5449fe5d869dd4844b0f4d366462982e10795e0b02a" Jan 27 15:06:17 crc kubenswrapper[4698]: I0127 15:06:17.322542 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"427527b2ff28605ad5c8d5449fe5d869dd4844b0f4d366462982e10795e0b02a"} err="failed to get container status \"427527b2ff28605ad5c8d5449fe5d869dd4844b0f4d366462982e10795e0b02a\": rpc error: code = NotFound desc = could not find container \"427527b2ff28605ad5c8d5449fe5d869dd4844b0f4d366462982e10795e0b02a\": container with ID starting with 427527b2ff28605ad5c8d5449fe5d869dd4844b0f4d366462982e10795e0b02a not found: ID does not exist" Jan 27 15:06:17 crc kubenswrapper[4698]: I0127 15:06:17.322573 4698 scope.go:117] "RemoveContainer" containerID="0d5459c97790c6d8d8699a1e3f0f22db1af717b51226208f654670f9cd0d6ce9" Jan 27 15:06:17 crc kubenswrapper[4698]: E0127 15:06:17.323175 4698 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0d5459c97790c6d8d8699a1e3f0f22db1af717b51226208f654670f9cd0d6ce9\": container with ID starting with 0d5459c97790c6d8d8699a1e3f0f22db1af717b51226208f654670f9cd0d6ce9 not found: ID does not exist" containerID="0d5459c97790c6d8d8699a1e3f0f22db1af717b51226208f654670f9cd0d6ce9" Jan 27 15:06:17 crc kubenswrapper[4698]: I0127 15:06:17.323211 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0d5459c97790c6d8d8699a1e3f0f22db1af717b51226208f654670f9cd0d6ce9"} err="failed to get container status \"0d5459c97790c6d8d8699a1e3f0f22db1af717b51226208f654670f9cd0d6ce9\": rpc error: code = NotFound desc = could not find container \"0d5459c97790c6d8d8699a1e3f0f22db1af717b51226208f654670f9cd0d6ce9\": container with ID starting with 0d5459c97790c6d8d8699a1e3f0f22db1af717b51226208f654670f9cd0d6ce9 not found: ID does not exist" Jan 27 15:06:17 crc kubenswrapper[4698]: I0127 15:06:17.323230 4698 scope.go:117] "RemoveContainer" 
containerID="41b3f49396b73e9dea4c5a10015e426f4674006b044b1a77a3b2e1581f5aa39e" Jan 27 15:06:17 crc kubenswrapper[4698]: E0127 15:06:17.324467 4698 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"41b3f49396b73e9dea4c5a10015e426f4674006b044b1a77a3b2e1581f5aa39e\": container with ID starting with 41b3f49396b73e9dea4c5a10015e426f4674006b044b1a77a3b2e1581f5aa39e not found: ID does not exist" containerID="41b3f49396b73e9dea4c5a10015e426f4674006b044b1a77a3b2e1581f5aa39e" Jan 27 15:06:17 crc kubenswrapper[4698]: I0127 15:06:17.324498 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"41b3f49396b73e9dea4c5a10015e426f4674006b044b1a77a3b2e1581f5aa39e"} err="failed to get container status \"41b3f49396b73e9dea4c5a10015e426f4674006b044b1a77a3b2e1581f5aa39e\": rpc error: code = NotFound desc = could not find container \"41b3f49396b73e9dea4c5a10015e426f4674006b044b1a77a3b2e1581f5aa39e\": container with ID starting with 41b3f49396b73e9dea4c5a10015e426f4674006b044b1a77a3b2e1581f5aa39e not found: ID does not exist" Jan 27 15:06:19 crc kubenswrapper[4698]: I0127 15:06:19.008567 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1f227fb9-9746-4596-9b2c-a38f3cd113fd" path="/var/lib/kubelet/pods/1f227fb9-9746-4596-9b2c-a38f3cd113fd/volumes" Jan 27 15:06:31 crc kubenswrapper[4698]: I0127 15:06:31.251202 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-9bv2r"] Jan 27 15:06:31 crc kubenswrapper[4698]: E0127 15:06:31.252349 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1f227fb9-9746-4596-9b2c-a38f3cd113fd" containerName="extract-utilities" Jan 27 15:06:31 crc kubenswrapper[4698]: I0127 15:06:31.252365 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="1f227fb9-9746-4596-9b2c-a38f3cd113fd" containerName="extract-utilities" Jan 27 15:06:31 crc kubenswrapper[4698]: E0127 15:06:31.252400 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1f227fb9-9746-4596-9b2c-a38f3cd113fd" containerName="extract-content" Jan 27 15:06:31 crc kubenswrapper[4698]: I0127 15:06:31.252406 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="1f227fb9-9746-4596-9b2c-a38f3cd113fd" containerName="extract-content" Jan 27 15:06:31 crc kubenswrapper[4698]: E0127 15:06:31.252422 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1f227fb9-9746-4596-9b2c-a38f3cd113fd" containerName="registry-server" Jan 27 15:06:31 crc kubenswrapper[4698]: I0127 15:06:31.252428 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="1f227fb9-9746-4596-9b2c-a38f3cd113fd" containerName="registry-server" Jan 27 15:06:31 crc kubenswrapper[4698]: I0127 15:06:31.252624 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="1f227fb9-9746-4596-9b2c-a38f3cd113fd" containerName="registry-server" Jan 27 15:06:31 crc kubenswrapper[4698]: I0127 15:06:31.254198 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9bv2r" Jan 27 15:06:31 crc kubenswrapper[4698]: I0127 15:06:31.265073 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-9bv2r"] Jan 27 15:06:31 crc kubenswrapper[4698]: I0127 15:06:31.341529 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-znrc9\" (UniqueName: \"kubernetes.io/projected/bc436018-3da7-4bfb-b380-9e54447a86af-kube-api-access-znrc9\") pod \"redhat-marketplace-9bv2r\" (UID: \"bc436018-3da7-4bfb-b380-9e54447a86af\") " pod="openshift-marketplace/redhat-marketplace-9bv2r" Jan 27 15:06:31 crc kubenswrapper[4698]: I0127 15:06:31.341773 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bc436018-3da7-4bfb-b380-9e54447a86af-catalog-content\") pod \"redhat-marketplace-9bv2r\" (UID: \"bc436018-3da7-4bfb-b380-9e54447a86af\") " pod="openshift-marketplace/redhat-marketplace-9bv2r" Jan 27 15:06:31 crc kubenswrapper[4698]: I0127 15:06:31.341922 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bc436018-3da7-4bfb-b380-9e54447a86af-utilities\") pod \"redhat-marketplace-9bv2r\" (UID: \"bc436018-3da7-4bfb-b380-9e54447a86af\") " pod="openshift-marketplace/redhat-marketplace-9bv2r" Jan 27 15:06:31 crc kubenswrapper[4698]: I0127 15:06:31.443867 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bc436018-3da7-4bfb-b380-9e54447a86af-utilities\") pod \"redhat-marketplace-9bv2r\" (UID: \"bc436018-3da7-4bfb-b380-9e54447a86af\") " pod="openshift-marketplace/redhat-marketplace-9bv2r" Jan 27 15:06:31 crc kubenswrapper[4698]: I0127 15:06:31.444120 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-znrc9\" (UniqueName: \"kubernetes.io/projected/bc436018-3da7-4bfb-b380-9e54447a86af-kube-api-access-znrc9\") pod \"redhat-marketplace-9bv2r\" (UID: \"bc436018-3da7-4bfb-b380-9e54447a86af\") " pod="openshift-marketplace/redhat-marketplace-9bv2r" Jan 27 15:06:31 crc kubenswrapper[4698]: I0127 15:06:31.444173 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bc436018-3da7-4bfb-b380-9e54447a86af-catalog-content\") pod \"redhat-marketplace-9bv2r\" (UID: \"bc436018-3da7-4bfb-b380-9e54447a86af\") " pod="openshift-marketplace/redhat-marketplace-9bv2r" Jan 27 15:06:31 crc kubenswrapper[4698]: I0127 15:06:31.444465 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bc436018-3da7-4bfb-b380-9e54447a86af-utilities\") pod \"redhat-marketplace-9bv2r\" (UID: \"bc436018-3da7-4bfb-b380-9e54447a86af\") " pod="openshift-marketplace/redhat-marketplace-9bv2r" Jan 27 15:06:31 crc kubenswrapper[4698]: I0127 15:06:31.444552 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bc436018-3da7-4bfb-b380-9e54447a86af-catalog-content\") pod \"redhat-marketplace-9bv2r\" (UID: \"bc436018-3da7-4bfb-b380-9e54447a86af\") " pod="openshift-marketplace/redhat-marketplace-9bv2r" Jan 27 15:06:31 crc kubenswrapper[4698]: I0127 15:06:31.469107 4698 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-znrc9\" (UniqueName: \"kubernetes.io/projected/bc436018-3da7-4bfb-b380-9e54447a86af-kube-api-access-znrc9\") pod \"redhat-marketplace-9bv2r\" (UID: \"bc436018-3da7-4bfb-b380-9e54447a86af\") " pod="openshift-marketplace/redhat-marketplace-9bv2r" Jan 27 15:06:31 crc kubenswrapper[4698]: I0127 15:06:31.585154 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9bv2r" Jan 27 15:06:32 crc kubenswrapper[4698]: I0127 15:06:32.102769 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-9bv2r"] Jan 27 15:06:32 crc kubenswrapper[4698]: I0127 15:06:32.313428 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9bv2r" event={"ID":"bc436018-3da7-4bfb-b380-9e54447a86af","Type":"ContainerStarted","Data":"11779bf1e7f27f16e5c8ded6bab054d086bc55c0c1cdab3001334605a59216f2"} Jan 27 15:06:33 crc kubenswrapper[4698]: I0127 15:06:33.326018 4698 generic.go:334] "Generic (PLEG): container finished" podID="bc436018-3da7-4bfb-b380-9e54447a86af" containerID="e56c33332dda11abd6522d544ef628bb0f02d4a9d3e3019c11fd46e8db9368ce" exitCode=0 Jan 27 15:06:33 crc kubenswrapper[4698]: I0127 15:06:33.326092 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9bv2r" event={"ID":"bc436018-3da7-4bfb-b380-9e54447a86af","Type":"ContainerDied","Data":"e56c33332dda11abd6522d544ef628bb0f02d4a9d3e3019c11fd46e8db9368ce"} Jan 27 15:06:35 crc kubenswrapper[4698]: I0127 15:06:35.347431 4698 generic.go:334] "Generic (PLEG): container finished" podID="bc436018-3da7-4bfb-b380-9e54447a86af" containerID="8235751c7b64706bbcd485a595aa9a3e9b3960c9428455af0e2276fa3de98fec" exitCode=0 Jan 27 15:06:35 crc kubenswrapper[4698]: I0127 15:06:35.347932 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9bv2r" event={"ID":"bc436018-3da7-4bfb-b380-9e54447a86af","Type":"ContainerDied","Data":"8235751c7b64706bbcd485a595aa9a3e9b3960c9428455af0e2276fa3de98fec"} Jan 27 15:06:37 crc kubenswrapper[4698]: I0127 15:06:37.051942 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-cell-mapping-vgm7r"] Jan 27 15:06:37 crc kubenswrapper[4698]: I0127 15:06:37.067525 4698 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-cell-mapping-vgm7r"] Jan 27 15:06:38 crc kubenswrapper[4698]: I0127 15:06:38.386415 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9bv2r" event={"ID":"bc436018-3da7-4bfb-b380-9e54447a86af","Type":"ContainerStarted","Data":"619894d730aaef0188fe9c8fcc0a4c6178acae5b8c68e5a0e96f951e98b3a54e"} Jan 27 15:06:38 crc kubenswrapper[4698]: I0127 15:06:38.432529 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-9bv2r" podStartSLOduration=3.332911736 podStartE2EDuration="7.432508339s" podCreationTimestamp="2026-01-27 15:06:31 +0000 UTC" firstStartedPulling="2026-01-27 15:06:33.32898117 +0000 UTC m=+2249.005758655" lastFinishedPulling="2026-01-27 15:06:37.428577793 +0000 UTC m=+2253.105355258" observedRunningTime="2026-01-27 15:06:38.403813112 +0000 UTC m=+2254.080590597" watchObservedRunningTime="2026-01-27 15:06:38.432508339 +0000 UTC m=+2254.109285804" Jan 27 15:06:39 crc kubenswrapper[4698]: I0127 15:06:39.003953 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="c865f75e-a196-4b4c-ba96-383654e3c295" path="/var/lib/kubelet/pods/c865f75e-a196-4b4c-ba96-383654e3c295/volumes" Jan 27 15:06:41 crc kubenswrapper[4698]: I0127 15:06:41.586005 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-9bv2r" Jan 27 15:06:41 crc kubenswrapper[4698]: I0127 15:06:41.587062 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-9bv2r" Jan 27 15:06:41 crc kubenswrapper[4698]: I0127 15:06:41.634747 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-9bv2r" Jan 27 15:06:42 crc kubenswrapper[4698]: I0127 15:06:42.478497 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-9bv2r" Jan 27 15:06:43 crc kubenswrapper[4698]: I0127 15:06:43.237681 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-9bv2r"] Jan 27 15:06:44 crc kubenswrapper[4698]: I0127 15:06:44.446267 4698 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-9bv2r" podUID="bc436018-3da7-4bfb-b380-9e54447a86af" containerName="registry-server" containerID="cri-o://619894d730aaef0188fe9c8fcc0a4c6178acae5b8c68e5a0e96f951e98b3a54e" gracePeriod=2 Jan 27 15:06:44 crc kubenswrapper[4698]: I0127 15:06:44.912192 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9bv2r" Jan 27 15:06:45 crc kubenswrapper[4698]: I0127 15:06:45.027762 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bc436018-3da7-4bfb-b380-9e54447a86af-utilities\") pod \"bc436018-3da7-4bfb-b380-9e54447a86af\" (UID: \"bc436018-3da7-4bfb-b380-9e54447a86af\") " Jan 27 15:06:45 crc kubenswrapper[4698]: I0127 15:06:45.027990 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bc436018-3da7-4bfb-b380-9e54447a86af-catalog-content\") pod \"bc436018-3da7-4bfb-b380-9e54447a86af\" (UID: \"bc436018-3da7-4bfb-b380-9e54447a86af\") " Jan 27 15:06:45 crc kubenswrapper[4698]: I0127 15:06:45.028085 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-znrc9\" (UniqueName: \"kubernetes.io/projected/bc436018-3da7-4bfb-b380-9e54447a86af-kube-api-access-znrc9\") pod \"bc436018-3da7-4bfb-b380-9e54447a86af\" (UID: \"bc436018-3da7-4bfb-b380-9e54447a86af\") " Jan 27 15:06:45 crc kubenswrapper[4698]: I0127 15:06:45.028835 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc436018-3da7-4bfb-b380-9e54447a86af-utilities" (OuterVolumeSpecName: "utilities") pod "bc436018-3da7-4bfb-b380-9e54447a86af" (UID: "bc436018-3da7-4bfb-b380-9e54447a86af"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 15:06:45 crc kubenswrapper[4698]: I0127 15:06:45.035070 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc436018-3da7-4bfb-b380-9e54447a86af-kube-api-access-znrc9" (OuterVolumeSpecName: "kube-api-access-znrc9") pod "bc436018-3da7-4bfb-b380-9e54447a86af" (UID: "bc436018-3da7-4bfb-b380-9e54447a86af"). InnerVolumeSpecName "kube-api-access-znrc9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 15:06:45 crc kubenswrapper[4698]: I0127 15:06:45.060592 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc436018-3da7-4bfb-b380-9e54447a86af-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "bc436018-3da7-4bfb-b380-9e54447a86af" (UID: "bc436018-3da7-4bfb-b380-9e54447a86af"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 15:06:45 crc kubenswrapper[4698]: I0127 15:06:45.132021 4698 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bc436018-3da7-4bfb-b380-9e54447a86af-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 15:06:45 crc kubenswrapper[4698]: I0127 15:06:45.132073 4698 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bc436018-3da7-4bfb-b380-9e54447a86af-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 15:06:45 crc kubenswrapper[4698]: I0127 15:06:45.132089 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-znrc9\" (UniqueName: \"kubernetes.io/projected/bc436018-3da7-4bfb-b380-9e54447a86af-kube-api-access-znrc9\") on node \"crc\" DevicePath \"\"" Jan 27 15:06:45 crc kubenswrapper[4698]: I0127 15:06:45.457888 4698 generic.go:334] "Generic (PLEG): container finished" podID="bc436018-3da7-4bfb-b380-9e54447a86af" containerID="619894d730aaef0188fe9c8fcc0a4c6178acae5b8c68e5a0e96f951e98b3a54e" exitCode=0 Jan 27 15:06:45 crc kubenswrapper[4698]: I0127 15:06:45.457953 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9bv2r" event={"ID":"bc436018-3da7-4bfb-b380-9e54447a86af","Type":"ContainerDied","Data":"619894d730aaef0188fe9c8fcc0a4c6178acae5b8c68e5a0e96f951e98b3a54e"} Jan 27 15:06:45 crc kubenswrapper[4698]: I0127 15:06:45.458003 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9bv2r" event={"ID":"bc436018-3da7-4bfb-b380-9e54447a86af","Type":"ContainerDied","Data":"11779bf1e7f27f16e5c8ded6bab054d086bc55c0c1cdab3001334605a59216f2"} Jan 27 15:06:45 crc kubenswrapper[4698]: I0127 15:06:45.458010 4698 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9bv2r" Jan 27 15:06:45 crc kubenswrapper[4698]: I0127 15:06:45.458026 4698 scope.go:117] "RemoveContainer" containerID="619894d730aaef0188fe9c8fcc0a4c6178acae5b8c68e5a0e96f951e98b3a54e" Jan 27 15:06:45 crc kubenswrapper[4698]: I0127 15:06:45.496490 4698 scope.go:117] "RemoveContainer" containerID="8235751c7b64706bbcd485a595aa9a3e9b3960c9428455af0e2276fa3de98fec" Jan 27 15:06:45 crc kubenswrapper[4698]: I0127 15:06:45.510442 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-9bv2r"] Jan 27 15:06:45 crc kubenswrapper[4698]: I0127 15:06:45.521011 4698 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-9bv2r"] Jan 27 15:06:45 crc kubenswrapper[4698]: I0127 15:06:45.531533 4698 scope.go:117] "RemoveContainer" containerID="e56c33332dda11abd6522d544ef628bb0f02d4a9d3e3019c11fd46e8db9368ce" Jan 27 15:06:45 crc kubenswrapper[4698]: I0127 15:06:45.598019 4698 scope.go:117] "RemoveContainer" containerID="619894d730aaef0188fe9c8fcc0a4c6178acae5b8c68e5a0e96f951e98b3a54e" Jan 27 15:06:45 crc kubenswrapper[4698]: E0127 15:06:45.598450 4698 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"619894d730aaef0188fe9c8fcc0a4c6178acae5b8c68e5a0e96f951e98b3a54e\": container with ID starting with 619894d730aaef0188fe9c8fcc0a4c6178acae5b8c68e5a0e96f951e98b3a54e not found: ID does not exist" containerID="619894d730aaef0188fe9c8fcc0a4c6178acae5b8c68e5a0e96f951e98b3a54e" Jan 27 15:06:45 crc kubenswrapper[4698]: I0127 15:06:45.598484 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"619894d730aaef0188fe9c8fcc0a4c6178acae5b8c68e5a0e96f951e98b3a54e"} err="failed to get container status \"619894d730aaef0188fe9c8fcc0a4c6178acae5b8c68e5a0e96f951e98b3a54e\": rpc error: code = NotFound desc = could not find container \"619894d730aaef0188fe9c8fcc0a4c6178acae5b8c68e5a0e96f951e98b3a54e\": container with ID starting with 619894d730aaef0188fe9c8fcc0a4c6178acae5b8c68e5a0e96f951e98b3a54e not found: ID does not exist" Jan 27 15:06:45 crc kubenswrapper[4698]: I0127 15:06:45.598505 4698 scope.go:117] "RemoveContainer" containerID="8235751c7b64706bbcd485a595aa9a3e9b3960c9428455af0e2276fa3de98fec" Jan 27 15:06:45 crc kubenswrapper[4698]: E0127 15:06:45.598787 4698 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8235751c7b64706bbcd485a595aa9a3e9b3960c9428455af0e2276fa3de98fec\": container with ID starting with 8235751c7b64706bbcd485a595aa9a3e9b3960c9428455af0e2276fa3de98fec not found: ID does not exist" containerID="8235751c7b64706bbcd485a595aa9a3e9b3960c9428455af0e2276fa3de98fec" Jan 27 15:06:45 crc kubenswrapper[4698]: I0127 15:06:45.599211 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8235751c7b64706bbcd485a595aa9a3e9b3960c9428455af0e2276fa3de98fec"} err="failed to get container status \"8235751c7b64706bbcd485a595aa9a3e9b3960c9428455af0e2276fa3de98fec\": rpc error: code = NotFound desc = could not find container \"8235751c7b64706bbcd485a595aa9a3e9b3960c9428455af0e2276fa3de98fec\": container with ID starting with 8235751c7b64706bbcd485a595aa9a3e9b3960c9428455af0e2276fa3de98fec not found: ID does not exist" Jan 27 15:06:45 crc kubenswrapper[4698]: I0127 15:06:45.599236 4698 scope.go:117] "RemoveContainer" 
containerID="e56c33332dda11abd6522d544ef628bb0f02d4a9d3e3019c11fd46e8db9368ce" Jan 27 15:06:45 crc kubenswrapper[4698]: E0127 15:06:45.599665 4698 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e56c33332dda11abd6522d544ef628bb0f02d4a9d3e3019c11fd46e8db9368ce\": container with ID starting with e56c33332dda11abd6522d544ef628bb0f02d4a9d3e3019c11fd46e8db9368ce not found: ID does not exist" containerID="e56c33332dda11abd6522d544ef628bb0f02d4a9d3e3019c11fd46e8db9368ce" Jan 27 15:06:45 crc kubenswrapper[4698]: I0127 15:06:45.599722 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e56c33332dda11abd6522d544ef628bb0f02d4a9d3e3019c11fd46e8db9368ce"} err="failed to get container status \"e56c33332dda11abd6522d544ef628bb0f02d4a9d3e3019c11fd46e8db9368ce\": rpc error: code = NotFound desc = could not find container \"e56c33332dda11abd6522d544ef628bb0f02d4a9d3e3019c11fd46e8db9368ce\": container with ID starting with e56c33332dda11abd6522d544ef628bb0f02d4a9d3e3019c11fd46e8db9368ce not found: ID does not exist" Jan 27 15:06:47 crc kubenswrapper[4698]: I0127 15:06:47.022620 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc436018-3da7-4bfb-b380-9e54447a86af" path="/var/lib/kubelet/pods/bc436018-3da7-4bfb-b380-9e54447a86af/volumes" Jan 27 15:06:57 crc kubenswrapper[4698]: I0127 15:06:57.452130 4698 patch_prober.go:28] interesting pod/machine-config-daemon-ndrd6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 15:06:57 crc kubenswrapper[4698]: I0127 15:06:57.452677 4698 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ndrd6" podUID="3e403fc5-7005-474c-8c75-b7906b481677" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 15:06:58 crc kubenswrapper[4698]: I0127 15:06:58.049958 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-85rx4"] Jan 27 15:06:58 crc kubenswrapper[4698]: I0127 15:06:58.059091 4698 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-85rx4"] Jan 27 15:06:59 crc kubenswrapper[4698]: I0127 15:06:59.004668 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="24589df3-de69-4037-a263-2c08e46fc8ce" path="/var/lib/kubelet/pods/24589df3-de69-4037-a263-2c08e46fc8ce/volumes" Jan 27 15:07:17 crc kubenswrapper[4698]: I0127 15:07:17.283936 4698 scope.go:117] "RemoveContainer" containerID="746bd03d270eda154fe1289f278408ec545c697eed0de8744ca89b34b7eee904" Jan 27 15:07:17 crc kubenswrapper[4698]: I0127 15:07:17.325846 4698 scope.go:117] "RemoveContainer" containerID="b279f31de2d88d810b9d3b00bccd2c9b249ab8c4f36e1205b3db42a12dec02ee" Jan 27 15:07:21 crc kubenswrapper[4698]: I0127 15:07:21.050761 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-cell-mapping-hf2ls"] Jan 27 15:07:21 crc kubenswrapper[4698]: I0127 15:07:21.070497 4698 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-cell-mapping-hf2ls"] Jan 27 15:07:23 crc kubenswrapper[4698]: I0127 15:07:23.006176 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="d02bd3e6-3943-4d72-a596-ad3b1ca55805" path="/var/lib/kubelet/pods/d02bd3e6-3943-4d72-a596-ad3b1ca55805/volumes" Jan 27 15:07:27 crc kubenswrapper[4698]: I0127 15:07:27.452106 4698 patch_prober.go:28] interesting pod/machine-config-daemon-ndrd6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 15:07:27 crc kubenswrapper[4698]: I0127 15:07:27.453248 4698 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ndrd6" podUID="3e403fc5-7005-474c-8c75-b7906b481677" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 15:07:57 crc kubenswrapper[4698]: I0127 15:07:57.452517 4698 patch_prober.go:28] interesting pod/machine-config-daemon-ndrd6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 15:07:57 crc kubenswrapper[4698]: I0127 15:07:57.453056 4698 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ndrd6" podUID="3e403fc5-7005-474c-8c75-b7906b481677" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 15:07:57 crc kubenswrapper[4698]: I0127 15:07:57.453100 4698 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-ndrd6" Jan 27 15:07:57 crc kubenswrapper[4698]: I0127 15:07:57.453852 4698 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"5c1ae6b1cb3d3c05ff1607e2f7102b1797b674db87b62b13ac1c5e96a538f6b8"} pod="openshift-machine-config-operator/machine-config-daemon-ndrd6" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 27 15:07:57 crc kubenswrapper[4698]: I0127 15:07:57.453908 4698 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-ndrd6" podUID="3e403fc5-7005-474c-8c75-b7906b481677" containerName="machine-config-daemon" containerID="cri-o://5c1ae6b1cb3d3c05ff1607e2f7102b1797b674db87b62b13ac1c5e96a538f6b8" gracePeriod=600 Jan 27 15:07:57 crc kubenswrapper[4698]: E0127 15:07:57.657556 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ndrd6_openshift-machine-config-operator(3e403fc5-7005-474c-8c75-b7906b481677)\"" pod="openshift-machine-config-operator/machine-config-daemon-ndrd6" podUID="3e403fc5-7005-474c-8c75-b7906b481677" Jan 27 15:07:58 crc kubenswrapper[4698]: I0127 15:07:58.144628 4698 generic.go:334] "Generic (PLEG): container finished" podID="3e403fc5-7005-474c-8c75-b7906b481677" containerID="5c1ae6b1cb3d3c05ff1607e2f7102b1797b674db87b62b13ac1c5e96a538f6b8" exitCode=0 Jan 27 15:07:58 crc kubenswrapper[4698]: I0127 15:07:58.144696 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-ndrd6" event={"ID":"3e403fc5-7005-474c-8c75-b7906b481677","Type":"ContainerDied","Data":"5c1ae6b1cb3d3c05ff1607e2f7102b1797b674db87b62b13ac1c5e96a538f6b8"} Jan 27 15:07:58 crc kubenswrapper[4698]: I0127 15:07:58.144740 4698 scope.go:117] "RemoveContainer" containerID="cfb3ddd3f31bb32b30aa65cbbe04fb40e1a0fd8b8faea20785560325760fdc02" Jan 27 15:07:58 crc kubenswrapper[4698]: I0127 15:07:58.145491 4698 scope.go:117] "RemoveContainer" containerID="5c1ae6b1cb3d3c05ff1607e2f7102b1797b674db87b62b13ac1c5e96a538f6b8" Jan 27 15:07:58 crc kubenswrapper[4698]: E0127 15:07:58.145788 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ndrd6_openshift-machine-config-operator(3e403fc5-7005-474c-8c75-b7906b481677)\"" pod="openshift-machine-config-operator/machine-config-daemon-ndrd6" podUID="3e403fc5-7005-474c-8c75-b7906b481677" Jan 27 15:08:10 crc kubenswrapper[4698]: I0127 15:08:10.992973 4698 scope.go:117] "RemoveContainer" containerID="5c1ae6b1cb3d3c05ff1607e2f7102b1797b674db87b62b13ac1c5e96a538f6b8" Jan 27 15:08:10 crc kubenswrapper[4698]: E0127 15:08:10.994052 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ndrd6_openshift-machine-config-operator(3e403fc5-7005-474c-8c75-b7906b481677)\"" pod="openshift-machine-config-operator/machine-config-daemon-ndrd6" podUID="3e403fc5-7005-474c-8c75-b7906b481677" Jan 27 15:08:17 crc kubenswrapper[4698]: I0127 15:08:17.538199 4698 scope.go:117] "RemoveContainer" containerID="12c7a9f705d12e6fb43fab18d38df89dc45d0ea718b4bd098e536c0b5407f07e" Jan 27 15:08:23 crc kubenswrapper[4698]: I0127 15:08:23.992918 4698 scope.go:117] "RemoveContainer" containerID="5c1ae6b1cb3d3c05ff1607e2f7102b1797b674db87b62b13ac1c5e96a538f6b8" Jan 27 15:08:23 crc kubenswrapper[4698]: E0127 15:08:23.993818 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ndrd6_openshift-machine-config-operator(3e403fc5-7005-474c-8c75-b7906b481677)\"" pod="openshift-machine-config-operator/machine-config-daemon-ndrd6" podUID="3e403fc5-7005-474c-8c75-b7906b481677" Jan 27 15:08:35 crc kubenswrapper[4698]: I0127 15:08:35.992183 4698 scope.go:117] "RemoveContainer" containerID="5c1ae6b1cb3d3c05ff1607e2f7102b1797b674db87b62b13ac1c5e96a538f6b8" Jan 27 15:08:35 crc kubenswrapper[4698]: E0127 15:08:35.992865 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ndrd6_openshift-machine-config-operator(3e403fc5-7005-474c-8c75-b7906b481677)\"" pod="openshift-machine-config-operator/machine-config-daemon-ndrd6" podUID="3e403fc5-7005-474c-8c75-b7906b481677" Jan 27 15:08:46 crc kubenswrapper[4698]: I0127 15:08:46.992727 4698 scope.go:117] "RemoveContainer" containerID="5c1ae6b1cb3d3c05ff1607e2f7102b1797b674db87b62b13ac1c5e96a538f6b8" Jan 27 15:08:46 crc kubenswrapper[4698]: E0127 15:08:46.993514 4698 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ndrd6_openshift-machine-config-operator(3e403fc5-7005-474c-8c75-b7906b481677)\"" pod="openshift-machine-config-operator/machine-config-daemon-ndrd6" podUID="3e403fc5-7005-474c-8c75-b7906b481677" Jan 27 15:08:59 crc kubenswrapper[4698]: I0127 15:08:59.991661 4698 scope.go:117] "RemoveContainer" containerID="5c1ae6b1cb3d3c05ff1607e2f7102b1797b674db87b62b13ac1c5e96a538f6b8" Jan 27 15:08:59 crc kubenswrapper[4698]: E0127 15:08:59.992439 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ndrd6_openshift-machine-config-operator(3e403fc5-7005-474c-8c75-b7906b481677)\"" pod="openshift-machine-config-operator/machine-config-daemon-ndrd6" podUID="3e403fc5-7005-474c-8c75-b7906b481677" Jan 27 15:09:15 crc kubenswrapper[4698]: I0127 15:09:14.999438 4698 scope.go:117] "RemoveContainer" containerID="5c1ae6b1cb3d3c05ff1607e2f7102b1797b674db87b62b13ac1c5e96a538f6b8" Jan 27 15:09:15 crc kubenswrapper[4698]: E0127 15:09:15.000301 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ndrd6_openshift-machine-config-operator(3e403fc5-7005-474c-8c75-b7906b481677)\"" pod="openshift-machine-config-operator/machine-config-daemon-ndrd6" podUID="3e403fc5-7005-474c-8c75-b7906b481677" Jan 27 15:09:26 crc kubenswrapper[4698]: I0127 15:09:26.992584 4698 scope.go:117] "RemoveContainer" containerID="5c1ae6b1cb3d3c05ff1607e2f7102b1797b674db87b62b13ac1c5e96a538f6b8" Jan 27 15:09:26 crc kubenswrapper[4698]: E0127 15:09:26.994571 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ndrd6_openshift-machine-config-operator(3e403fc5-7005-474c-8c75-b7906b481677)\"" pod="openshift-machine-config-operator/machine-config-daemon-ndrd6" podUID="3e403fc5-7005-474c-8c75-b7906b481677" Jan 27 15:09:40 crc kubenswrapper[4698]: I0127 15:09:40.992048 4698 scope.go:117] "RemoveContainer" containerID="5c1ae6b1cb3d3c05ff1607e2f7102b1797b674db87b62b13ac1c5e96a538f6b8" Jan 27 15:09:40 crc kubenswrapper[4698]: E0127 15:09:40.992766 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ndrd6_openshift-machine-config-operator(3e403fc5-7005-474c-8c75-b7906b481677)\"" pod="openshift-machine-config-operator/machine-config-daemon-ndrd6" podUID="3e403fc5-7005-474c-8c75-b7906b481677" Jan 27 15:09:53 crc kubenswrapper[4698]: I0127 15:09:53.992531 4698 scope.go:117] "RemoveContainer" containerID="5c1ae6b1cb3d3c05ff1607e2f7102b1797b674db87b62b13ac1c5e96a538f6b8" Jan 27 15:09:53 crc kubenswrapper[4698]: E0127 15:09:53.993332 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-ndrd6_openshift-machine-config-operator(3e403fc5-7005-474c-8c75-b7906b481677)\"" pod="openshift-machine-config-operator/machine-config-daemon-ndrd6" podUID="3e403fc5-7005-474c-8c75-b7906b481677" Jan 27 15:10:08 crc kubenswrapper[4698]: I0127 15:10:08.993271 4698 scope.go:117] "RemoveContainer" containerID="5c1ae6b1cb3d3c05ff1607e2f7102b1797b674db87b62b13ac1c5e96a538f6b8" Jan 27 15:10:08 crc kubenswrapper[4698]: E0127 15:10:08.994147 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ndrd6_openshift-machine-config-operator(3e403fc5-7005-474c-8c75-b7906b481677)\"" pod="openshift-machine-config-operator/machine-config-daemon-ndrd6" podUID="3e403fc5-7005-474c-8c75-b7906b481677" Jan 27 15:10:21 crc kubenswrapper[4698]: I0127 15:10:21.629152 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-bxlpd"] Jan 27 15:10:21 crc kubenswrapper[4698]: E0127 15:10:21.630189 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bc436018-3da7-4bfb-b380-9e54447a86af" containerName="registry-server" Jan 27 15:10:21 crc kubenswrapper[4698]: I0127 15:10:21.630205 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="bc436018-3da7-4bfb-b380-9e54447a86af" containerName="registry-server" Jan 27 15:10:21 crc kubenswrapper[4698]: E0127 15:10:21.630248 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bc436018-3da7-4bfb-b380-9e54447a86af" containerName="extract-content" Jan 27 15:10:21 crc kubenswrapper[4698]: I0127 15:10:21.630256 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="bc436018-3da7-4bfb-b380-9e54447a86af" containerName="extract-content" Jan 27 15:10:21 crc kubenswrapper[4698]: E0127 15:10:21.630273 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bc436018-3da7-4bfb-b380-9e54447a86af" containerName="extract-utilities" Jan 27 15:10:21 crc kubenswrapper[4698]: I0127 15:10:21.630281 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="bc436018-3da7-4bfb-b380-9e54447a86af" containerName="extract-utilities" Jan 27 15:10:21 crc kubenswrapper[4698]: I0127 15:10:21.630513 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="bc436018-3da7-4bfb-b380-9e54447a86af" containerName="registry-server" Jan 27 15:10:21 crc kubenswrapper[4698]: I0127 15:10:21.636216 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-bxlpd" Jan 27 15:10:21 crc kubenswrapper[4698]: I0127 15:10:21.667810 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-bxlpd"] Jan 27 15:10:21 crc kubenswrapper[4698]: I0127 15:10:21.749363 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/99eb4030-4950-4797-bcf5-039069e7e51b-catalog-content\") pod \"redhat-operators-bxlpd\" (UID: \"99eb4030-4950-4797-bcf5-039069e7e51b\") " pod="openshift-marketplace/redhat-operators-bxlpd" Jan 27 15:10:21 crc kubenswrapper[4698]: I0127 15:10:21.749460 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zhwff\" (UniqueName: \"kubernetes.io/projected/99eb4030-4950-4797-bcf5-039069e7e51b-kube-api-access-zhwff\") pod \"redhat-operators-bxlpd\" (UID: \"99eb4030-4950-4797-bcf5-039069e7e51b\") " pod="openshift-marketplace/redhat-operators-bxlpd" Jan 27 15:10:21 crc kubenswrapper[4698]: I0127 15:10:21.749660 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/99eb4030-4950-4797-bcf5-039069e7e51b-utilities\") pod \"redhat-operators-bxlpd\" (UID: \"99eb4030-4950-4797-bcf5-039069e7e51b\") " pod="openshift-marketplace/redhat-operators-bxlpd" Jan 27 15:10:21 crc kubenswrapper[4698]: I0127 15:10:21.851928 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/99eb4030-4950-4797-bcf5-039069e7e51b-utilities\") pod \"redhat-operators-bxlpd\" (UID: \"99eb4030-4950-4797-bcf5-039069e7e51b\") " pod="openshift-marketplace/redhat-operators-bxlpd" Jan 27 15:10:21 crc kubenswrapper[4698]: I0127 15:10:21.852091 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/99eb4030-4950-4797-bcf5-039069e7e51b-catalog-content\") pod \"redhat-operators-bxlpd\" (UID: \"99eb4030-4950-4797-bcf5-039069e7e51b\") " pod="openshift-marketplace/redhat-operators-bxlpd" Jan 27 15:10:21 crc kubenswrapper[4698]: I0127 15:10:21.852142 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zhwff\" (UniqueName: \"kubernetes.io/projected/99eb4030-4950-4797-bcf5-039069e7e51b-kube-api-access-zhwff\") pod \"redhat-operators-bxlpd\" (UID: \"99eb4030-4950-4797-bcf5-039069e7e51b\") " pod="openshift-marketplace/redhat-operators-bxlpd" Jan 27 15:10:21 crc kubenswrapper[4698]: I0127 15:10:21.853122 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/99eb4030-4950-4797-bcf5-039069e7e51b-catalog-content\") pod \"redhat-operators-bxlpd\" (UID: \"99eb4030-4950-4797-bcf5-039069e7e51b\") " pod="openshift-marketplace/redhat-operators-bxlpd" Jan 27 15:10:21 crc kubenswrapper[4698]: I0127 15:10:21.853336 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/99eb4030-4950-4797-bcf5-039069e7e51b-utilities\") pod \"redhat-operators-bxlpd\" (UID: \"99eb4030-4950-4797-bcf5-039069e7e51b\") " pod="openshift-marketplace/redhat-operators-bxlpd" Jan 27 15:10:21 crc kubenswrapper[4698]: I0127 15:10:21.881704 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-zhwff\" (UniqueName: \"kubernetes.io/projected/99eb4030-4950-4797-bcf5-039069e7e51b-kube-api-access-zhwff\") pod \"redhat-operators-bxlpd\" (UID: \"99eb4030-4950-4797-bcf5-039069e7e51b\") " pod="openshift-marketplace/redhat-operators-bxlpd" Jan 27 15:10:21 crc kubenswrapper[4698]: I0127 15:10:21.966209 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-bxlpd" Jan 27 15:10:21 crc kubenswrapper[4698]: I0127 15:10:21.993212 4698 scope.go:117] "RemoveContainer" containerID="5c1ae6b1cb3d3c05ff1607e2f7102b1797b674db87b62b13ac1c5e96a538f6b8" Jan 27 15:10:21 crc kubenswrapper[4698]: E0127 15:10:21.993559 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ndrd6_openshift-machine-config-operator(3e403fc5-7005-474c-8c75-b7906b481677)\"" pod="openshift-machine-config-operator/machine-config-daemon-ndrd6" podUID="3e403fc5-7005-474c-8c75-b7906b481677" Jan 27 15:10:22 crc kubenswrapper[4698]: I0127 15:10:22.491451 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-bxlpd"] Jan 27 15:10:22 crc kubenswrapper[4698]: I0127 15:10:22.676252 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bxlpd" event={"ID":"99eb4030-4950-4797-bcf5-039069e7e51b","Type":"ContainerStarted","Data":"7eb18e55fb6fdc875741957a28ae6a825aea14981781aef21193f6aebcd1cfca"} Jan 27 15:10:23 crc kubenswrapper[4698]: I0127 15:10:23.687763 4698 generic.go:334] "Generic (PLEG): container finished" podID="99eb4030-4950-4797-bcf5-039069e7e51b" containerID="73123e094d9cecfb623b1a3aa71898d1d1e5b01898bc437aa56244f9734bef8e" exitCode=0 Jan 27 15:10:23 crc kubenswrapper[4698]: I0127 15:10:23.687808 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bxlpd" event={"ID":"99eb4030-4950-4797-bcf5-039069e7e51b","Type":"ContainerDied","Data":"73123e094d9cecfb623b1a3aa71898d1d1e5b01898bc437aa56244f9734bef8e"} Jan 27 15:10:24 crc kubenswrapper[4698]: I0127 15:10:24.703888 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bxlpd" event={"ID":"99eb4030-4950-4797-bcf5-039069e7e51b","Type":"ContainerStarted","Data":"038f100ce91fdc24dc7495aa8bb7e462299f02df22ed67ff7a63be7f583a3013"} Jan 27 15:10:25 crc kubenswrapper[4698]: E0127 15:10:25.500077 4698 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod99eb4030_4950_4797_bcf5_039069e7e51b.slice/crio-conmon-038f100ce91fdc24dc7495aa8bb7e462299f02df22ed67ff7a63be7f583a3013.scope\": RecentStats: unable to find data in memory cache]" Jan 27 15:10:25 crc kubenswrapper[4698]: I0127 15:10:25.718789 4698 generic.go:334] "Generic (PLEG): container finished" podID="99eb4030-4950-4797-bcf5-039069e7e51b" containerID="038f100ce91fdc24dc7495aa8bb7e462299f02df22ed67ff7a63be7f583a3013" exitCode=0 Jan 27 15:10:25 crc kubenswrapper[4698]: I0127 15:10:25.718869 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bxlpd" event={"ID":"99eb4030-4950-4797-bcf5-039069e7e51b","Type":"ContainerDied","Data":"038f100ce91fdc24dc7495aa8bb7e462299f02df22ed67ff7a63be7f583a3013"} Jan 27 15:10:31 crc kubenswrapper[4698]: I0127 
15:10:31.768220 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bxlpd" event={"ID":"99eb4030-4950-4797-bcf5-039069e7e51b","Type":"ContainerStarted","Data":"2db9f73af8f8c05ee7d124b805368b46dbafcb69a99e57f4fe9ac3c328f75d60"} Jan 27 15:10:31 crc kubenswrapper[4698]: I0127 15:10:31.786125 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-bxlpd" podStartSLOduration=3.5648714200000002 podStartE2EDuration="10.786104722s" podCreationTimestamp="2026-01-27 15:10:21 +0000 UTC" firstStartedPulling="2026-01-27 15:10:23.68990018 +0000 UTC m=+2479.366677645" lastFinishedPulling="2026-01-27 15:10:30.911133482 +0000 UTC m=+2486.587910947" observedRunningTime="2026-01-27 15:10:31.78448584 +0000 UTC m=+2487.461263315" watchObservedRunningTime="2026-01-27 15:10:31.786104722 +0000 UTC m=+2487.462882197" Jan 27 15:10:31 crc kubenswrapper[4698]: I0127 15:10:31.967476 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-bxlpd" Jan 27 15:10:31 crc kubenswrapper[4698]: I0127 15:10:31.967522 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-bxlpd" Jan 27 15:10:33 crc kubenswrapper[4698]: I0127 15:10:33.012421 4698 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-bxlpd" podUID="99eb4030-4950-4797-bcf5-039069e7e51b" containerName="registry-server" probeResult="failure" output=< Jan 27 15:10:33 crc kubenswrapper[4698]: timeout: failed to connect service ":50051" within 1s Jan 27 15:10:33 crc kubenswrapper[4698]: > Jan 27 15:10:36 crc kubenswrapper[4698]: I0127 15:10:36.992584 4698 scope.go:117] "RemoveContainer" containerID="5c1ae6b1cb3d3c05ff1607e2f7102b1797b674db87b62b13ac1c5e96a538f6b8" Jan 27 15:10:36 crc kubenswrapper[4698]: E0127 15:10:36.993163 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ndrd6_openshift-machine-config-operator(3e403fc5-7005-474c-8c75-b7906b481677)\"" pod="openshift-machine-config-operator/machine-config-daemon-ndrd6" podUID="3e403fc5-7005-474c-8c75-b7906b481677" Jan 27 15:10:42 crc kubenswrapper[4698]: I0127 15:10:42.023514 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-bxlpd" Jan 27 15:10:42 crc kubenswrapper[4698]: I0127 15:10:42.077296 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-bxlpd" Jan 27 15:10:42 crc kubenswrapper[4698]: I0127 15:10:42.261197 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-bxlpd"] Jan 27 15:10:43 crc kubenswrapper[4698]: I0127 15:10:43.886969 4698 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-bxlpd" podUID="99eb4030-4950-4797-bcf5-039069e7e51b" containerName="registry-server" containerID="cri-o://2db9f73af8f8c05ee7d124b805368b46dbafcb69a99e57f4fe9ac3c328f75d60" gracePeriod=2 Jan 27 15:10:44 crc kubenswrapper[4698]: I0127 15:10:44.555305 4698 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-bxlpd" Jan 27 15:10:44 crc kubenswrapper[4698]: I0127 15:10:44.741498 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zhwff\" (UniqueName: \"kubernetes.io/projected/99eb4030-4950-4797-bcf5-039069e7e51b-kube-api-access-zhwff\") pod \"99eb4030-4950-4797-bcf5-039069e7e51b\" (UID: \"99eb4030-4950-4797-bcf5-039069e7e51b\") " Jan 27 15:10:44 crc kubenswrapper[4698]: I0127 15:10:44.741555 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/99eb4030-4950-4797-bcf5-039069e7e51b-utilities\") pod \"99eb4030-4950-4797-bcf5-039069e7e51b\" (UID: \"99eb4030-4950-4797-bcf5-039069e7e51b\") " Jan 27 15:10:44 crc kubenswrapper[4698]: I0127 15:10:44.741757 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/99eb4030-4950-4797-bcf5-039069e7e51b-catalog-content\") pod \"99eb4030-4950-4797-bcf5-039069e7e51b\" (UID: \"99eb4030-4950-4797-bcf5-039069e7e51b\") " Jan 27 15:10:44 crc kubenswrapper[4698]: I0127 15:10:44.743787 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/99eb4030-4950-4797-bcf5-039069e7e51b-utilities" (OuterVolumeSpecName: "utilities") pod "99eb4030-4950-4797-bcf5-039069e7e51b" (UID: "99eb4030-4950-4797-bcf5-039069e7e51b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 15:10:44 crc kubenswrapper[4698]: I0127 15:10:44.763257 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/99eb4030-4950-4797-bcf5-039069e7e51b-kube-api-access-zhwff" (OuterVolumeSpecName: "kube-api-access-zhwff") pod "99eb4030-4950-4797-bcf5-039069e7e51b" (UID: "99eb4030-4950-4797-bcf5-039069e7e51b"). InnerVolumeSpecName "kube-api-access-zhwff". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 15:10:44 crc kubenswrapper[4698]: I0127 15:10:44.844174 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zhwff\" (UniqueName: \"kubernetes.io/projected/99eb4030-4950-4797-bcf5-039069e7e51b-kube-api-access-zhwff\") on node \"crc\" DevicePath \"\"" Jan 27 15:10:44 crc kubenswrapper[4698]: I0127 15:10:44.844240 4698 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/99eb4030-4950-4797-bcf5-039069e7e51b-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 15:10:44 crc kubenswrapper[4698]: I0127 15:10:44.861949 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/99eb4030-4950-4797-bcf5-039069e7e51b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "99eb4030-4950-4797-bcf5-039069e7e51b" (UID: "99eb4030-4950-4797-bcf5-039069e7e51b"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 15:10:44 crc kubenswrapper[4698]: I0127 15:10:44.904898 4698 generic.go:334] "Generic (PLEG): container finished" podID="99eb4030-4950-4797-bcf5-039069e7e51b" containerID="2db9f73af8f8c05ee7d124b805368b46dbafcb69a99e57f4fe9ac3c328f75d60" exitCode=0 Jan 27 15:10:44 crc kubenswrapper[4698]: I0127 15:10:44.904961 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bxlpd" event={"ID":"99eb4030-4950-4797-bcf5-039069e7e51b","Type":"ContainerDied","Data":"2db9f73af8f8c05ee7d124b805368b46dbafcb69a99e57f4fe9ac3c328f75d60"} Jan 27 15:10:44 crc kubenswrapper[4698]: I0127 15:10:44.904999 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bxlpd" event={"ID":"99eb4030-4950-4797-bcf5-039069e7e51b","Type":"ContainerDied","Data":"7eb18e55fb6fdc875741957a28ae6a825aea14981781aef21193f6aebcd1cfca"} Jan 27 15:10:44 crc kubenswrapper[4698]: I0127 15:10:44.905053 4698 scope.go:117] "RemoveContainer" containerID="2db9f73af8f8c05ee7d124b805368b46dbafcb69a99e57f4fe9ac3c328f75d60" Jan 27 15:10:44 crc kubenswrapper[4698]: I0127 15:10:44.905565 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-bxlpd" Jan 27 15:10:44 crc kubenswrapper[4698]: I0127 15:10:44.946951 4698 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/99eb4030-4950-4797-bcf5-039069e7e51b-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 15:10:44 crc kubenswrapper[4698]: I0127 15:10:44.948102 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-bxlpd"] Jan 27 15:10:44 crc kubenswrapper[4698]: I0127 15:10:44.949138 4698 scope.go:117] "RemoveContainer" containerID="038f100ce91fdc24dc7495aa8bb7e462299f02df22ed67ff7a63be7f583a3013" Jan 27 15:10:44 crc kubenswrapper[4698]: I0127 15:10:44.961939 4698 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-bxlpd"] Jan 27 15:10:44 crc kubenswrapper[4698]: I0127 15:10:44.995944 4698 scope.go:117] "RemoveContainer" containerID="73123e094d9cecfb623b1a3aa71898d1d1e5b01898bc437aa56244f9734bef8e" Jan 27 15:10:45 crc kubenswrapper[4698]: I0127 15:10:45.010497 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="99eb4030-4950-4797-bcf5-039069e7e51b" path="/var/lib/kubelet/pods/99eb4030-4950-4797-bcf5-039069e7e51b/volumes" Jan 27 15:10:45 crc kubenswrapper[4698]: I0127 15:10:45.030624 4698 scope.go:117] "RemoveContainer" containerID="2db9f73af8f8c05ee7d124b805368b46dbafcb69a99e57f4fe9ac3c328f75d60" Jan 27 15:10:45 crc kubenswrapper[4698]: E0127 15:10:45.031258 4698 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2db9f73af8f8c05ee7d124b805368b46dbafcb69a99e57f4fe9ac3c328f75d60\": container with ID starting with 2db9f73af8f8c05ee7d124b805368b46dbafcb69a99e57f4fe9ac3c328f75d60 not found: ID does not exist" containerID="2db9f73af8f8c05ee7d124b805368b46dbafcb69a99e57f4fe9ac3c328f75d60" Jan 27 15:10:45 crc kubenswrapper[4698]: I0127 15:10:45.031325 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2db9f73af8f8c05ee7d124b805368b46dbafcb69a99e57f4fe9ac3c328f75d60"} err="failed to get container status \"2db9f73af8f8c05ee7d124b805368b46dbafcb69a99e57f4fe9ac3c328f75d60\": rpc error: code = NotFound desc 
= could not find container \"2db9f73af8f8c05ee7d124b805368b46dbafcb69a99e57f4fe9ac3c328f75d60\": container with ID starting with 2db9f73af8f8c05ee7d124b805368b46dbafcb69a99e57f4fe9ac3c328f75d60 not found: ID does not exist" Jan 27 15:10:45 crc kubenswrapper[4698]: I0127 15:10:45.031357 4698 scope.go:117] "RemoveContainer" containerID="038f100ce91fdc24dc7495aa8bb7e462299f02df22ed67ff7a63be7f583a3013" Jan 27 15:10:45 crc kubenswrapper[4698]: E0127 15:10:45.031810 4698 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"038f100ce91fdc24dc7495aa8bb7e462299f02df22ed67ff7a63be7f583a3013\": container with ID starting with 038f100ce91fdc24dc7495aa8bb7e462299f02df22ed67ff7a63be7f583a3013 not found: ID does not exist" containerID="038f100ce91fdc24dc7495aa8bb7e462299f02df22ed67ff7a63be7f583a3013" Jan 27 15:10:45 crc kubenswrapper[4698]: I0127 15:10:45.031846 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"038f100ce91fdc24dc7495aa8bb7e462299f02df22ed67ff7a63be7f583a3013"} err="failed to get container status \"038f100ce91fdc24dc7495aa8bb7e462299f02df22ed67ff7a63be7f583a3013\": rpc error: code = NotFound desc = could not find container \"038f100ce91fdc24dc7495aa8bb7e462299f02df22ed67ff7a63be7f583a3013\": container with ID starting with 038f100ce91fdc24dc7495aa8bb7e462299f02df22ed67ff7a63be7f583a3013 not found: ID does not exist" Jan 27 15:10:45 crc kubenswrapper[4698]: I0127 15:10:45.031862 4698 scope.go:117] "RemoveContainer" containerID="73123e094d9cecfb623b1a3aa71898d1d1e5b01898bc437aa56244f9734bef8e" Jan 27 15:10:45 crc kubenswrapper[4698]: E0127 15:10:45.032221 4698 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"73123e094d9cecfb623b1a3aa71898d1d1e5b01898bc437aa56244f9734bef8e\": container with ID starting with 73123e094d9cecfb623b1a3aa71898d1d1e5b01898bc437aa56244f9734bef8e not found: ID does not exist" containerID="73123e094d9cecfb623b1a3aa71898d1d1e5b01898bc437aa56244f9734bef8e" Jan 27 15:10:45 crc kubenswrapper[4698]: I0127 15:10:45.032243 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"73123e094d9cecfb623b1a3aa71898d1d1e5b01898bc437aa56244f9734bef8e"} err="failed to get container status \"73123e094d9cecfb623b1a3aa71898d1d1e5b01898bc437aa56244f9734bef8e\": rpc error: code = NotFound desc = could not find container \"73123e094d9cecfb623b1a3aa71898d1d1e5b01898bc437aa56244f9734bef8e\": container with ID starting with 73123e094d9cecfb623b1a3aa71898d1d1e5b01898bc437aa56244f9734bef8e not found: ID does not exist" Jan 27 15:10:49 crc kubenswrapper[4698]: I0127 15:10:49.993546 4698 scope.go:117] "RemoveContainer" containerID="5c1ae6b1cb3d3c05ff1607e2f7102b1797b674db87b62b13ac1c5e96a538f6b8" Jan 27 15:10:49 crc kubenswrapper[4698]: E0127 15:10:49.994598 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ndrd6_openshift-machine-config-operator(3e403fc5-7005-474c-8c75-b7906b481677)\"" pod="openshift-machine-config-operator/machine-config-daemon-ndrd6" podUID="3e403fc5-7005-474c-8c75-b7906b481677" Jan 27 15:11:01 crc kubenswrapper[4698]: I0127 15:11:01.992715 4698 scope.go:117] "RemoveContainer" containerID="5c1ae6b1cb3d3c05ff1607e2f7102b1797b674db87b62b13ac1c5e96a538f6b8" Jan 
27 15:11:01 crc kubenswrapper[4698]: E0127 15:11:01.993616 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ndrd6_openshift-machine-config-operator(3e403fc5-7005-474c-8c75-b7906b481677)\"" pod="openshift-machine-config-operator/machine-config-daemon-ndrd6" podUID="3e403fc5-7005-474c-8c75-b7906b481677" Jan 27 15:11:12 crc kubenswrapper[4698]: I0127 15:11:12.994259 4698 scope.go:117] "RemoveContainer" containerID="5c1ae6b1cb3d3c05ff1607e2f7102b1797b674db87b62b13ac1c5e96a538f6b8" Jan 27 15:11:12 crc kubenswrapper[4698]: E0127 15:11:12.995823 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ndrd6_openshift-machine-config-operator(3e403fc5-7005-474c-8c75-b7906b481677)\"" pod="openshift-machine-config-operator/machine-config-daemon-ndrd6" podUID="3e403fc5-7005-474c-8c75-b7906b481677" Jan 27 15:11:26 crc kubenswrapper[4698]: I0127 15:11:26.991950 4698 scope.go:117] "RemoveContainer" containerID="5c1ae6b1cb3d3c05ff1607e2f7102b1797b674db87b62b13ac1c5e96a538f6b8" Jan 27 15:11:26 crc kubenswrapper[4698]: E0127 15:11:26.992688 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ndrd6_openshift-machine-config-operator(3e403fc5-7005-474c-8c75-b7906b481677)\"" pod="openshift-machine-config-operator/machine-config-daemon-ndrd6" podUID="3e403fc5-7005-474c-8c75-b7906b481677" Jan 27 15:11:41 crc kubenswrapper[4698]: I0127 15:11:41.992607 4698 scope.go:117] "RemoveContainer" containerID="5c1ae6b1cb3d3c05ff1607e2f7102b1797b674db87b62b13ac1c5e96a538f6b8" Jan 27 15:11:41 crc kubenswrapper[4698]: E0127 15:11:41.993526 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ndrd6_openshift-machine-config-operator(3e403fc5-7005-474c-8c75-b7906b481677)\"" pod="openshift-machine-config-operator/machine-config-daemon-ndrd6" podUID="3e403fc5-7005-474c-8c75-b7906b481677" Jan 27 15:11:56 crc kubenswrapper[4698]: I0127 15:11:56.992501 4698 scope.go:117] "RemoveContainer" containerID="5c1ae6b1cb3d3c05ff1607e2f7102b1797b674db87b62b13ac1c5e96a538f6b8" Jan 27 15:11:56 crc kubenswrapper[4698]: E0127 15:11:56.997145 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ndrd6_openshift-machine-config-operator(3e403fc5-7005-474c-8c75-b7906b481677)\"" pod="openshift-machine-config-operator/machine-config-daemon-ndrd6" podUID="3e403fc5-7005-474c-8c75-b7906b481677" Jan 27 15:12:11 crc kubenswrapper[4698]: I0127 15:12:11.015421 4698 scope.go:117] "RemoveContainer" containerID="5c1ae6b1cb3d3c05ff1607e2f7102b1797b674db87b62b13ac1c5e96a538f6b8" Jan 27 15:12:11 crc kubenswrapper[4698]: E0127 15:12:11.016548 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with 
CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ndrd6_openshift-machine-config-operator(3e403fc5-7005-474c-8c75-b7906b481677)\"" pod="openshift-machine-config-operator/machine-config-daemon-ndrd6" podUID="3e403fc5-7005-474c-8c75-b7906b481677" Jan 27 15:12:25 crc kubenswrapper[4698]: I0127 15:12:24.999611 4698 scope.go:117] "RemoveContainer" containerID="5c1ae6b1cb3d3c05ff1607e2f7102b1797b674db87b62b13ac1c5e96a538f6b8" Jan 27 15:12:25 crc kubenswrapper[4698]: E0127 15:12:25.000537 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ndrd6_openshift-machine-config-operator(3e403fc5-7005-474c-8c75-b7906b481677)\"" pod="openshift-machine-config-operator/machine-config-daemon-ndrd6" podUID="3e403fc5-7005-474c-8c75-b7906b481677" Jan 27 15:12:38 crc kubenswrapper[4698]: I0127 15:12:38.993475 4698 scope.go:117] "RemoveContainer" containerID="5c1ae6b1cb3d3c05ff1607e2f7102b1797b674db87b62b13ac1c5e96a538f6b8" Jan 27 15:12:38 crc kubenswrapper[4698]: E0127 15:12:38.995044 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ndrd6_openshift-machine-config-operator(3e403fc5-7005-474c-8c75-b7906b481677)\"" pod="openshift-machine-config-operator/machine-config-daemon-ndrd6" podUID="3e403fc5-7005-474c-8c75-b7906b481677" Jan 27 15:12:53 crc kubenswrapper[4698]: I0127 15:12:53.992998 4698 scope.go:117] "RemoveContainer" containerID="5c1ae6b1cb3d3c05ff1607e2f7102b1797b674db87b62b13ac1c5e96a538f6b8" Jan 27 15:12:53 crc kubenswrapper[4698]: E0127 15:12:53.994114 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ndrd6_openshift-machine-config-operator(3e403fc5-7005-474c-8c75-b7906b481677)\"" pod="openshift-machine-config-operator/machine-config-daemon-ndrd6" podUID="3e403fc5-7005-474c-8c75-b7906b481677" Jan 27 15:13:05 crc kubenswrapper[4698]: I0127 15:13:05.000209 4698 scope.go:117] "RemoveContainer" containerID="5c1ae6b1cb3d3c05ff1607e2f7102b1797b674db87b62b13ac1c5e96a538f6b8" Jan 27 15:13:06 crc kubenswrapper[4698]: I0127 15:13:06.198429 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ndrd6" event={"ID":"3e403fc5-7005-474c-8c75-b7906b481677","Type":"ContainerStarted","Data":"7eff6a6e737295ad2ff376f63f70c687e9126403e6876a676d534315663a38ca"} Jan 27 15:15:00 crc kubenswrapper[4698]: I0127 15:15:00.156616 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29492115-nrql9"] Jan 27 15:15:00 crc kubenswrapper[4698]: E0127 15:15:00.157502 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="99eb4030-4950-4797-bcf5-039069e7e51b" containerName="extract-utilities" Jan 27 15:15:00 crc kubenswrapper[4698]: I0127 15:15:00.157515 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="99eb4030-4950-4797-bcf5-039069e7e51b" containerName="extract-utilities" Jan 27 15:15:00 crc kubenswrapper[4698]: E0127 15:15:00.157539 4698 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="99eb4030-4950-4797-bcf5-039069e7e51b" containerName="registry-server" Jan 27 15:15:00 crc kubenswrapper[4698]: I0127 15:15:00.157545 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="99eb4030-4950-4797-bcf5-039069e7e51b" containerName="registry-server" Jan 27 15:15:00 crc kubenswrapper[4698]: E0127 15:15:00.157555 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="99eb4030-4950-4797-bcf5-039069e7e51b" containerName="extract-content" Jan 27 15:15:00 crc kubenswrapper[4698]: I0127 15:15:00.157561 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="99eb4030-4950-4797-bcf5-039069e7e51b" containerName="extract-content" Jan 27 15:15:00 crc kubenswrapper[4698]: I0127 15:15:00.157765 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="99eb4030-4950-4797-bcf5-039069e7e51b" containerName="registry-server" Jan 27 15:15:00 crc kubenswrapper[4698]: I0127 15:15:00.158475 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29492115-nrql9" Jan 27 15:15:00 crc kubenswrapper[4698]: I0127 15:15:00.165846 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 27 15:15:00 crc kubenswrapper[4698]: I0127 15:15:00.165880 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 27 15:15:00 crc kubenswrapper[4698]: I0127 15:15:00.174992 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29492115-nrql9"] Jan 27 15:15:00 crc kubenswrapper[4698]: I0127 15:15:00.274764 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-24jx9\" (UniqueName: \"kubernetes.io/projected/20e3c825-fe5e-4a75-b0ce-7134bc91a87e-kube-api-access-24jx9\") pod \"collect-profiles-29492115-nrql9\" (UID: \"20e3c825-fe5e-4a75-b0ce-7134bc91a87e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492115-nrql9" Jan 27 15:15:00 crc kubenswrapper[4698]: I0127 15:15:00.275031 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/20e3c825-fe5e-4a75-b0ce-7134bc91a87e-secret-volume\") pod \"collect-profiles-29492115-nrql9\" (UID: \"20e3c825-fe5e-4a75-b0ce-7134bc91a87e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492115-nrql9" Jan 27 15:15:00 crc kubenswrapper[4698]: I0127 15:15:00.275063 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/20e3c825-fe5e-4a75-b0ce-7134bc91a87e-config-volume\") pod \"collect-profiles-29492115-nrql9\" (UID: \"20e3c825-fe5e-4a75-b0ce-7134bc91a87e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492115-nrql9" Jan 27 15:15:00 crc kubenswrapper[4698]: I0127 15:15:00.376823 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/20e3c825-fe5e-4a75-b0ce-7134bc91a87e-secret-volume\") pod \"collect-profiles-29492115-nrql9\" (UID: \"20e3c825-fe5e-4a75-b0ce-7134bc91a87e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492115-nrql9" Jan 27 15:15:00 crc kubenswrapper[4698]: I0127 15:15:00.376904 4698 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/20e3c825-fe5e-4a75-b0ce-7134bc91a87e-config-volume\") pod \"collect-profiles-29492115-nrql9\" (UID: \"20e3c825-fe5e-4a75-b0ce-7134bc91a87e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492115-nrql9" Jan 27 15:15:00 crc kubenswrapper[4698]: I0127 15:15:00.376981 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-24jx9\" (UniqueName: \"kubernetes.io/projected/20e3c825-fe5e-4a75-b0ce-7134bc91a87e-kube-api-access-24jx9\") pod \"collect-profiles-29492115-nrql9\" (UID: \"20e3c825-fe5e-4a75-b0ce-7134bc91a87e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492115-nrql9" Jan 27 15:15:00 crc kubenswrapper[4698]: I0127 15:15:00.378407 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/20e3c825-fe5e-4a75-b0ce-7134bc91a87e-config-volume\") pod \"collect-profiles-29492115-nrql9\" (UID: \"20e3c825-fe5e-4a75-b0ce-7134bc91a87e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492115-nrql9" Jan 27 15:15:00 crc kubenswrapper[4698]: I0127 15:15:00.386437 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/20e3c825-fe5e-4a75-b0ce-7134bc91a87e-secret-volume\") pod \"collect-profiles-29492115-nrql9\" (UID: \"20e3c825-fe5e-4a75-b0ce-7134bc91a87e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492115-nrql9" Jan 27 15:15:00 crc kubenswrapper[4698]: I0127 15:15:00.398557 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-24jx9\" (UniqueName: \"kubernetes.io/projected/20e3c825-fe5e-4a75-b0ce-7134bc91a87e-kube-api-access-24jx9\") pod \"collect-profiles-29492115-nrql9\" (UID: \"20e3c825-fe5e-4a75-b0ce-7134bc91a87e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492115-nrql9" Jan 27 15:15:00 crc kubenswrapper[4698]: I0127 15:15:00.490298 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29492115-nrql9" Jan 27 15:15:00 crc kubenswrapper[4698]: I0127 15:15:00.991968 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29492115-nrql9"] Jan 27 15:15:01 crc kubenswrapper[4698]: I0127 15:15:01.267568 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29492115-nrql9" event={"ID":"20e3c825-fe5e-4a75-b0ce-7134bc91a87e","Type":"ContainerStarted","Data":"05292ef455532463d7d59b039187ad65596e628fe4c88ea1c803f406580c3c70"} Jan 27 15:15:01 crc kubenswrapper[4698]: I0127 15:15:01.268848 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29492115-nrql9" event={"ID":"20e3c825-fe5e-4a75-b0ce-7134bc91a87e","Type":"ContainerStarted","Data":"69bc9e36873107034cd8bd2bb4d453d0ffeb1a73e1a09e09305faf9ffe427a67"} Jan 27 15:15:01 crc kubenswrapper[4698]: I0127 15:15:01.294254 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29492115-nrql9" podStartSLOduration=1.294207424 podStartE2EDuration="1.294207424s" podCreationTimestamp="2026-01-27 15:15:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 15:15:01.284675724 +0000 UTC m=+2756.961453219" watchObservedRunningTime="2026-01-27 15:15:01.294207424 +0000 UTC m=+2756.970984889" Jan 27 15:15:02 crc kubenswrapper[4698]: I0127 15:15:02.280357 4698 generic.go:334] "Generic (PLEG): container finished" podID="20e3c825-fe5e-4a75-b0ce-7134bc91a87e" containerID="05292ef455532463d7d59b039187ad65596e628fe4c88ea1c803f406580c3c70" exitCode=0 Jan 27 15:15:02 crc kubenswrapper[4698]: I0127 15:15:02.280468 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29492115-nrql9" event={"ID":"20e3c825-fe5e-4a75-b0ce-7134bc91a87e","Type":"ContainerDied","Data":"05292ef455532463d7d59b039187ad65596e628fe4c88ea1c803f406580c3c70"} Jan 27 15:15:03 crc kubenswrapper[4698]: I0127 15:15:03.704287 4698 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29492115-nrql9" Jan 27 15:15:03 crc kubenswrapper[4698]: I0127 15:15:03.759918 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/20e3c825-fe5e-4a75-b0ce-7134bc91a87e-secret-volume\") pod \"20e3c825-fe5e-4a75-b0ce-7134bc91a87e\" (UID: \"20e3c825-fe5e-4a75-b0ce-7134bc91a87e\") " Jan 27 15:15:03 crc kubenswrapper[4698]: I0127 15:15:03.760016 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-24jx9\" (UniqueName: \"kubernetes.io/projected/20e3c825-fe5e-4a75-b0ce-7134bc91a87e-kube-api-access-24jx9\") pod \"20e3c825-fe5e-4a75-b0ce-7134bc91a87e\" (UID: \"20e3c825-fe5e-4a75-b0ce-7134bc91a87e\") " Jan 27 15:15:03 crc kubenswrapper[4698]: I0127 15:15:03.760293 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/20e3c825-fe5e-4a75-b0ce-7134bc91a87e-config-volume\") pod \"20e3c825-fe5e-4a75-b0ce-7134bc91a87e\" (UID: \"20e3c825-fe5e-4a75-b0ce-7134bc91a87e\") " Jan 27 15:15:03 crc kubenswrapper[4698]: I0127 15:15:03.761246 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/20e3c825-fe5e-4a75-b0ce-7134bc91a87e-config-volume" (OuterVolumeSpecName: "config-volume") pod "20e3c825-fe5e-4a75-b0ce-7134bc91a87e" (UID: "20e3c825-fe5e-4a75-b0ce-7134bc91a87e"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 15:15:03 crc kubenswrapper[4698]: I0127 15:15:03.768261 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20e3c825-fe5e-4a75-b0ce-7134bc91a87e-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "20e3c825-fe5e-4a75-b0ce-7134bc91a87e" (UID: "20e3c825-fe5e-4a75-b0ce-7134bc91a87e"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 15:15:03 crc kubenswrapper[4698]: I0127 15:15:03.768487 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20e3c825-fe5e-4a75-b0ce-7134bc91a87e-kube-api-access-24jx9" (OuterVolumeSpecName: "kube-api-access-24jx9") pod "20e3c825-fe5e-4a75-b0ce-7134bc91a87e" (UID: "20e3c825-fe5e-4a75-b0ce-7134bc91a87e"). InnerVolumeSpecName "kube-api-access-24jx9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 15:15:03 crc kubenswrapper[4698]: I0127 15:15:03.863340 4698 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/20e3c825-fe5e-4a75-b0ce-7134bc91a87e-config-volume\") on node \"crc\" DevicePath \"\"" Jan 27 15:15:03 crc kubenswrapper[4698]: I0127 15:15:03.863376 4698 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/20e3c825-fe5e-4a75-b0ce-7134bc91a87e-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 27 15:15:03 crc kubenswrapper[4698]: I0127 15:15:03.863385 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-24jx9\" (UniqueName: \"kubernetes.io/projected/20e3c825-fe5e-4a75-b0ce-7134bc91a87e-kube-api-access-24jx9\") on node \"crc\" DevicePath \"\"" Jan 27 15:15:04 crc kubenswrapper[4698]: I0127 15:15:04.306881 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29492115-nrql9" event={"ID":"20e3c825-fe5e-4a75-b0ce-7134bc91a87e","Type":"ContainerDied","Data":"69bc9e36873107034cd8bd2bb4d453d0ffeb1a73e1a09e09305faf9ffe427a67"} Jan 27 15:15:04 crc kubenswrapper[4698]: I0127 15:15:04.306921 4698 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="69bc9e36873107034cd8bd2bb4d453d0ffeb1a73e1a09e09305faf9ffe427a67" Jan 27 15:15:04 crc kubenswrapper[4698]: I0127 15:15:04.306928 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29492115-nrql9" Jan 27 15:15:04 crc kubenswrapper[4698]: I0127 15:15:04.375389 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29492070-z2sp4"] Jan 27 15:15:04 crc kubenswrapper[4698]: I0127 15:15:04.386825 4698 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29492070-z2sp4"] Jan 27 15:15:05 crc kubenswrapper[4698]: I0127 15:15:05.063828 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bc4ed9d-df5c-4eee-82d1-d8c68e7bb3ff" path="/var/lib/kubelet/pods/4bc4ed9d-df5c-4eee-82d1-d8c68e7bb3ff/volumes" Jan 27 15:15:17 crc kubenswrapper[4698]: I0127 15:15:17.763367 4698 scope.go:117] "RemoveContainer" containerID="16fe7d071419be8091a09eea3ba007ffe94a3b20a3999fe4213f622b0287d995" Jan 27 15:15:27 crc kubenswrapper[4698]: I0127 15:15:27.443666 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-b4nxf"] Jan 27 15:15:27 crc kubenswrapper[4698]: E0127 15:15:27.448945 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="20e3c825-fe5e-4a75-b0ce-7134bc91a87e" containerName="collect-profiles" Jan 27 15:15:27 crc kubenswrapper[4698]: I0127 15:15:27.448985 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="20e3c825-fe5e-4a75-b0ce-7134bc91a87e" containerName="collect-profiles" Jan 27 15:15:27 crc kubenswrapper[4698]: I0127 15:15:27.449226 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="20e3c825-fe5e-4a75-b0ce-7134bc91a87e" containerName="collect-profiles" Jan 27 15:15:27 crc kubenswrapper[4698]: I0127 15:15:27.451234 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-b4nxf" Jan 27 15:15:27 crc kubenswrapper[4698]: I0127 15:15:27.452326 4698 patch_prober.go:28] interesting pod/machine-config-daemon-ndrd6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 15:15:27 crc kubenswrapper[4698]: I0127 15:15:27.452392 4698 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ndrd6" podUID="3e403fc5-7005-474c-8c75-b7906b481677" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 15:15:27 crc kubenswrapper[4698]: I0127 15:15:27.457421 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-b4nxf"] Jan 27 15:15:27 crc kubenswrapper[4698]: I0127 15:15:27.599417 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zfvvr\" (UniqueName: \"kubernetes.io/projected/a5f349eb-148c-4a1e-ab29-e13ebf8cd802-kube-api-access-zfvvr\") pod \"community-operators-b4nxf\" (UID: \"a5f349eb-148c-4a1e-ab29-e13ebf8cd802\") " pod="openshift-marketplace/community-operators-b4nxf" Jan 27 15:15:27 crc kubenswrapper[4698]: I0127 15:15:27.599623 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a5f349eb-148c-4a1e-ab29-e13ebf8cd802-utilities\") pod \"community-operators-b4nxf\" (UID: \"a5f349eb-148c-4a1e-ab29-e13ebf8cd802\") " pod="openshift-marketplace/community-operators-b4nxf" Jan 27 15:15:27 crc kubenswrapper[4698]: I0127 15:15:27.599697 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a5f349eb-148c-4a1e-ab29-e13ebf8cd802-catalog-content\") pod \"community-operators-b4nxf\" (UID: \"a5f349eb-148c-4a1e-ab29-e13ebf8cd802\") " pod="openshift-marketplace/community-operators-b4nxf" Jan 27 15:15:27 crc kubenswrapper[4698]: I0127 15:15:27.701306 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a5f349eb-148c-4a1e-ab29-e13ebf8cd802-utilities\") pod \"community-operators-b4nxf\" (UID: \"a5f349eb-148c-4a1e-ab29-e13ebf8cd802\") " pod="openshift-marketplace/community-operators-b4nxf" Jan 27 15:15:27 crc kubenswrapper[4698]: I0127 15:15:27.701402 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a5f349eb-148c-4a1e-ab29-e13ebf8cd802-catalog-content\") pod \"community-operators-b4nxf\" (UID: \"a5f349eb-148c-4a1e-ab29-e13ebf8cd802\") " pod="openshift-marketplace/community-operators-b4nxf" Jan 27 15:15:27 crc kubenswrapper[4698]: I0127 15:15:27.701470 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zfvvr\" (UniqueName: \"kubernetes.io/projected/a5f349eb-148c-4a1e-ab29-e13ebf8cd802-kube-api-access-zfvvr\") pod \"community-operators-b4nxf\" (UID: \"a5f349eb-148c-4a1e-ab29-e13ebf8cd802\") " pod="openshift-marketplace/community-operators-b4nxf" Jan 27 15:15:27 crc kubenswrapper[4698]: I0127 15:15:27.701978 4698 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a5f349eb-148c-4a1e-ab29-e13ebf8cd802-utilities\") pod \"community-operators-b4nxf\" (UID: \"a5f349eb-148c-4a1e-ab29-e13ebf8cd802\") " pod="openshift-marketplace/community-operators-b4nxf" Jan 27 15:15:27 crc kubenswrapper[4698]: I0127 15:15:27.703420 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a5f349eb-148c-4a1e-ab29-e13ebf8cd802-catalog-content\") pod \"community-operators-b4nxf\" (UID: \"a5f349eb-148c-4a1e-ab29-e13ebf8cd802\") " pod="openshift-marketplace/community-operators-b4nxf" Jan 27 15:15:27 crc kubenswrapper[4698]: I0127 15:15:27.726435 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zfvvr\" (UniqueName: \"kubernetes.io/projected/a5f349eb-148c-4a1e-ab29-e13ebf8cd802-kube-api-access-zfvvr\") pod \"community-operators-b4nxf\" (UID: \"a5f349eb-148c-4a1e-ab29-e13ebf8cd802\") " pod="openshift-marketplace/community-operators-b4nxf" Jan 27 15:15:27 crc kubenswrapper[4698]: I0127 15:15:27.792235 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-b4nxf" Jan 27 15:15:28 crc kubenswrapper[4698]: I0127 15:15:28.434622 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-b4nxf"] Jan 27 15:15:28 crc kubenswrapper[4698]: I0127 15:15:28.537500 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-b4nxf" event={"ID":"a5f349eb-148c-4a1e-ab29-e13ebf8cd802","Type":"ContainerStarted","Data":"b582ec9c169fce87222dc59facf7c2dfcf86999c4fccfd06367dcd3de3378f3d"} Jan 27 15:15:29 crc kubenswrapper[4698]: I0127 15:15:29.550127 4698 generic.go:334] "Generic (PLEG): container finished" podID="a5f349eb-148c-4a1e-ab29-e13ebf8cd802" containerID="48dd36cf30fc5a1ca0d290b17732a99a4818179347cfe2fc81c0758d04e07aba" exitCode=0 Jan 27 15:15:29 crc kubenswrapper[4698]: I0127 15:15:29.550279 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-b4nxf" event={"ID":"a5f349eb-148c-4a1e-ab29-e13ebf8cd802","Type":"ContainerDied","Data":"48dd36cf30fc5a1ca0d290b17732a99a4818179347cfe2fc81c0758d04e07aba"} Jan 27 15:15:29 crc kubenswrapper[4698]: I0127 15:15:29.553189 4698 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 27 15:15:31 crc kubenswrapper[4698]: I0127 15:15:31.574435 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-b4nxf" event={"ID":"a5f349eb-148c-4a1e-ab29-e13ebf8cd802","Type":"ContainerStarted","Data":"9c7ffcd3d478d12c695c719ebbb1ed7daddf5fa423ae062f4453c5b4b75c8bf3"} Jan 27 15:15:34 crc kubenswrapper[4698]: I0127 15:15:34.603256 4698 generic.go:334] "Generic (PLEG): container finished" podID="a5f349eb-148c-4a1e-ab29-e13ebf8cd802" containerID="9c7ffcd3d478d12c695c719ebbb1ed7daddf5fa423ae062f4453c5b4b75c8bf3" exitCode=0 Jan 27 15:15:34 crc kubenswrapper[4698]: I0127 15:15:34.603345 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-b4nxf" event={"ID":"a5f349eb-148c-4a1e-ab29-e13ebf8cd802","Type":"ContainerDied","Data":"9c7ffcd3d478d12c695c719ebbb1ed7daddf5fa423ae062f4453c5b4b75c8bf3"} Jan 27 15:15:36 crc kubenswrapper[4698]: I0127 15:15:36.633161 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/community-operators-b4nxf" event={"ID":"a5f349eb-148c-4a1e-ab29-e13ebf8cd802","Type":"ContainerStarted","Data":"f04e65619d2b3d52cb053cb099d8f818f77aaed0cab4c4e541e71aa68ec1779c"} Jan 27 15:15:36 crc kubenswrapper[4698]: I0127 15:15:36.656896 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-b4nxf" podStartSLOduration=3.667012829 podStartE2EDuration="9.656876448s" podCreationTimestamp="2026-01-27 15:15:27 +0000 UTC" firstStartedPulling="2026-01-27 15:15:29.552862599 +0000 UTC m=+2785.229640054" lastFinishedPulling="2026-01-27 15:15:35.542726208 +0000 UTC m=+2791.219503673" observedRunningTime="2026-01-27 15:15:36.64901517 +0000 UTC m=+2792.325792655" watchObservedRunningTime="2026-01-27 15:15:36.656876448 +0000 UTC m=+2792.333653913" Jan 27 15:15:37 crc kubenswrapper[4698]: I0127 15:15:37.792703 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-b4nxf" Jan 27 15:15:37 crc kubenswrapper[4698]: I0127 15:15:37.793056 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-b4nxf" Jan 27 15:15:37 crc kubenswrapper[4698]: I0127 15:15:37.867591 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-b4nxf" Jan 27 15:15:47 crc kubenswrapper[4698]: I0127 15:15:47.843202 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-b4nxf" Jan 27 15:15:47 crc kubenswrapper[4698]: I0127 15:15:47.899159 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-b4nxf"] Jan 27 15:15:48 crc kubenswrapper[4698]: I0127 15:15:48.737399 4698 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-b4nxf" podUID="a5f349eb-148c-4a1e-ab29-e13ebf8cd802" containerName="registry-server" containerID="cri-o://f04e65619d2b3d52cb053cb099d8f818f77aaed0cab4c4e541e71aa68ec1779c" gracePeriod=2 Jan 27 15:15:49 crc kubenswrapper[4698]: I0127 15:15:49.264766 4698 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-b4nxf" Jan 27 15:15:49 crc kubenswrapper[4698]: I0127 15:15:49.344704 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zfvvr\" (UniqueName: \"kubernetes.io/projected/a5f349eb-148c-4a1e-ab29-e13ebf8cd802-kube-api-access-zfvvr\") pod \"a5f349eb-148c-4a1e-ab29-e13ebf8cd802\" (UID: \"a5f349eb-148c-4a1e-ab29-e13ebf8cd802\") " Jan 27 15:15:49 crc kubenswrapper[4698]: I0127 15:15:49.344827 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a5f349eb-148c-4a1e-ab29-e13ebf8cd802-catalog-content\") pod \"a5f349eb-148c-4a1e-ab29-e13ebf8cd802\" (UID: \"a5f349eb-148c-4a1e-ab29-e13ebf8cd802\") " Jan 27 15:15:49 crc kubenswrapper[4698]: I0127 15:15:49.345010 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a5f349eb-148c-4a1e-ab29-e13ebf8cd802-utilities\") pod \"a5f349eb-148c-4a1e-ab29-e13ebf8cd802\" (UID: \"a5f349eb-148c-4a1e-ab29-e13ebf8cd802\") " Jan 27 15:15:49 crc kubenswrapper[4698]: I0127 15:15:49.345986 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a5f349eb-148c-4a1e-ab29-e13ebf8cd802-utilities" (OuterVolumeSpecName: "utilities") pod "a5f349eb-148c-4a1e-ab29-e13ebf8cd802" (UID: "a5f349eb-148c-4a1e-ab29-e13ebf8cd802"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 15:15:49 crc kubenswrapper[4698]: I0127 15:15:49.350721 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a5f349eb-148c-4a1e-ab29-e13ebf8cd802-kube-api-access-zfvvr" (OuterVolumeSpecName: "kube-api-access-zfvvr") pod "a5f349eb-148c-4a1e-ab29-e13ebf8cd802" (UID: "a5f349eb-148c-4a1e-ab29-e13ebf8cd802"). InnerVolumeSpecName "kube-api-access-zfvvr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 15:15:49 crc kubenswrapper[4698]: I0127 15:15:49.400929 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a5f349eb-148c-4a1e-ab29-e13ebf8cd802-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a5f349eb-148c-4a1e-ab29-e13ebf8cd802" (UID: "a5f349eb-148c-4a1e-ab29-e13ebf8cd802"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 15:15:49 crc kubenswrapper[4698]: I0127 15:15:49.447335 4698 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a5f349eb-148c-4a1e-ab29-e13ebf8cd802-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 15:15:49 crc kubenswrapper[4698]: I0127 15:15:49.447375 4698 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a5f349eb-148c-4a1e-ab29-e13ebf8cd802-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 15:15:49 crc kubenswrapper[4698]: I0127 15:15:49.447385 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zfvvr\" (UniqueName: \"kubernetes.io/projected/a5f349eb-148c-4a1e-ab29-e13ebf8cd802-kube-api-access-zfvvr\") on node \"crc\" DevicePath \"\"" Jan 27 15:15:49 crc kubenswrapper[4698]: I0127 15:15:49.756835 4698 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-b4nxf" Jan 27 15:15:49 crc kubenswrapper[4698]: I0127 15:15:49.756780 4698 generic.go:334] "Generic (PLEG): container finished" podID="a5f349eb-148c-4a1e-ab29-e13ebf8cd802" containerID="f04e65619d2b3d52cb053cb099d8f818f77aaed0cab4c4e541e71aa68ec1779c" exitCode=0 Jan 27 15:15:49 crc kubenswrapper[4698]: I0127 15:15:49.756863 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-b4nxf" event={"ID":"a5f349eb-148c-4a1e-ab29-e13ebf8cd802","Type":"ContainerDied","Data":"f04e65619d2b3d52cb053cb099d8f818f77aaed0cab4c4e541e71aa68ec1779c"} Jan 27 15:15:49 crc kubenswrapper[4698]: I0127 15:15:49.757317 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-b4nxf" event={"ID":"a5f349eb-148c-4a1e-ab29-e13ebf8cd802","Type":"ContainerDied","Data":"b582ec9c169fce87222dc59facf7c2dfcf86999c4fccfd06367dcd3de3378f3d"} Jan 27 15:15:49 crc kubenswrapper[4698]: I0127 15:15:49.757366 4698 scope.go:117] "RemoveContainer" containerID="f04e65619d2b3d52cb053cb099d8f818f77aaed0cab4c4e541e71aa68ec1779c" Jan 27 15:15:49 crc kubenswrapper[4698]: I0127 15:15:49.793383 4698 scope.go:117] "RemoveContainer" containerID="9c7ffcd3d478d12c695c719ebbb1ed7daddf5fa423ae062f4453c5b4b75c8bf3" Jan 27 15:15:49 crc kubenswrapper[4698]: I0127 15:15:49.793570 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-b4nxf"] Jan 27 15:15:49 crc kubenswrapper[4698]: I0127 15:15:49.804463 4698 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-b4nxf"] Jan 27 15:15:49 crc kubenswrapper[4698]: I0127 15:15:49.816728 4698 scope.go:117] "RemoveContainer" containerID="48dd36cf30fc5a1ca0d290b17732a99a4818179347cfe2fc81c0758d04e07aba" Jan 27 15:15:49 crc kubenswrapper[4698]: I0127 15:15:49.864384 4698 scope.go:117] "RemoveContainer" containerID="f04e65619d2b3d52cb053cb099d8f818f77aaed0cab4c4e541e71aa68ec1779c" Jan 27 15:15:49 crc kubenswrapper[4698]: E0127 15:15:49.865099 4698 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f04e65619d2b3d52cb053cb099d8f818f77aaed0cab4c4e541e71aa68ec1779c\": container with ID starting with f04e65619d2b3d52cb053cb099d8f818f77aaed0cab4c4e541e71aa68ec1779c not found: ID does not exist" containerID="f04e65619d2b3d52cb053cb099d8f818f77aaed0cab4c4e541e71aa68ec1779c" Jan 27 15:15:49 crc kubenswrapper[4698]: I0127 15:15:49.865166 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f04e65619d2b3d52cb053cb099d8f818f77aaed0cab4c4e541e71aa68ec1779c"} err="failed to get container status \"f04e65619d2b3d52cb053cb099d8f818f77aaed0cab4c4e541e71aa68ec1779c\": rpc error: code = NotFound desc = could not find container \"f04e65619d2b3d52cb053cb099d8f818f77aaed0cab4c4e541e71aa68ec1779c\": container with ID starting with f04e65619d2b3d52cb053cb099d8f818f77aaed0cab4c4e541e71aa68ec1779c not found: ID does not exist" Jan 27 15:15:49 crc kubenswrapper[4698]: I0127 15:15:49.865209 4698 scope.go:117] "RemoveContainer" containerID="9c7ffcd3d478d12c695c719ebbb1ed7daddf5fa423ae062f4453c5b4b75c8bf3" Jan 27 15:15:49 crc kubenswrapper[4698]: E0127 15:15:49.865864 4698 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9c7ffcd3d478d12c695c719ebbb1ed7daddf5fa423ae062f4453c5b4b75c8bf3\": container with ID 
starting with 9c7ffcd3d478d12c695c719ebbb1ed7daddf5fa423ae062f4453c5b4b75c8bf3 not found: ID does not exist" containerID="9c7ffcd3d478d12c695c719ebbb1ed7daddf5fa423ae062f4453c5b4b75c8bf3" Jan 27 15:15:49 crc kubenswrapper[4698]: I0127 15:15:49.865920 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9c7ffcd3d478d12c695c719ebbb1ed7daddf5fa423ae062f4453c5b4b75c8bf3"} err="failed to get container status \"9c7ffcd3d478d12c695c719ebbb1ed7daddf5fa423ae062f4453c5b4b75c8bf3\": rpc error: code = NotFound desc = could not find container \"9c7ffcd3d478d12c695c719ebbb1ed7daddf5fa423ae062f4453c5b4b75c8bf3\": container with ID starting with 9c7ffcd3d478d12c695c719ebbb1ed7daddf5fa423ae062f4453c5b4b75c8bf3 not found: ID does not exist" Jan 27 15:15:49 crc kubenswrapper[4698]: I0127 15:15:49.865961 4698 scope.go:117] "RemoveContainer" containerID="48dd36cf30fc5a1ca0d290b17732a99a4818179347cfe2fc81c0758d04e07aba" Jan 27 15:15:49 crc kubenswrapper[4698]: E0127 15:15:49.866362 4698 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"48dd36cf30fc5a1ca0d290b17732a99a4818179347cfe2fc81c0758d04e07aba\": container with ID starting with 48dd36cf30fc5a1ca0d290b17732a99a4818179347cfe2fc81c0758d04e07aba not found: ID does not exist" containerID="48dd36cf30fc5a1ca0d290b17732a99a4818179347cfe2fc81c0758d04e07aba" Jan 27 15:15:49 crc kubenswrapper[4698]: I0127 15:15:49.866398 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"48dd36cf30fc5a1ca0d290b17732a99a4818179347cfe2fc81c0758d04e07aba"} err="failed to get container status \"48dd36cf30fc5a1ca0d290b17732a99a4818179347cfe2fc81c0758d04e07aba\": rpc error: code = NotFound desc = could not find container \"48dd36cf30fc5a1ca0d290b17732a99a4818179347cfe2fc81c0758d04e07aba\": container with ID starting with 48dd36cf30fc5a1ca0d290b17732a99a4818179347cfe2fc81c0758d04e07aba not found: ID does not exist" Jan 27 15:15:51 crc kubenswrapper[4698]: I0127 15:15:51.003367 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a5f349eb-148c-4a1e-ab29-e13ebf8cd802" path="/var/lib/kubelet/pods/a5f349eb-148c-4a1e-ab29-e13ebf8cd802/volumes" Jan 27 15:15:57 crc kubenswrapper[4698]: I0127 15:15:57.452094 4698 patch_prober.go:28] interesting pod/machine-config-daemon-ndrd6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 15:15:57 crc kubenswrapper[4698]: I0127 15:15:57.452601 4698 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ndrd6" podUID="3e403fc5-7005-474c-8c75-b7906b481677" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 15:16:27 crc kubenswrapper[4698]: I0127 15:16:27.451963 4698 patch_prober.go:28] interesting pod/machine-config-daemon-ndrd6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 15:16:27 crc kubenswrapper[4698]: I0127 15:16:27.452542 4698 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-ndrd6" podUID="3e403fc5-7005-474c-8c75-b7906b481677" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 15:16:27 crc kubenswrapper[4698]: I0127 15:16:27.452597 4698 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-ndrd6" Jan 27 15:16:27 crc kubenswrapper[4698]: I0127 15:16:27.453791 4698 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"7eff6a6e737295ad2ff376f63f70c687e9126403e6876a676d534315663a38ca"} pod="openshift-machine-config-operator/machine-config-daemon-ndrd6" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 27 15:16:27 crc kubenswrapper[4698]: I0127 15:16:27.453860 4698 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-ndrd6" podUID="3e403fc5-7005-474c-8c75-b7906b481677" containerName="machine-config-daemon" containerID="cri-o://7eff6a6e737295ad2ff376f63f70c687e9126403e6876a676d534315663a38ca" gracePeriod=600 Jan 27 15:16:28 crc kubenswrapper[4698]: I0127 15:16:28.124452 4698 generic.go:334] "Generic (PLEG): container finished" podID="3e403fc5-7005-474c-8c75-b7906b481677" containerID="7eff6a6e737295ad2ff376f63f70c687e9126403e6876a676d534315663a38ca" exitCode=0 Jan 27 15:16:28 crc kubenswrapper[4698]: I0127 15:16:28.124732 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ndrd6" event={"ID":"3e403fc5-7005-474c-8c75-b7906b481677","Type":"ContainerDied","Data":"7eff6a6e737295ad2ff376f63f70c687e9126403e6876a676d534315663a38ca"} Jan 27 15:16:28 crc kubenswrapper[4698]: I0127 15:16:28.125249 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ndrd6" event={"ID":"3e403fc5-7005-474c-8c75-b7906b481677","Type":"ContainerStarted","Data":"14b47c46b0de7c0ecaaaead15a2dd02e8412e1aba78cdbca1c442b1f8a50e507"} Jan 27 15:16:28 crc kubenswrapper[4698]: I0127 15:16:28.125330 4698 scope.go:117] "RemoveContainer" containerID="5c1ae6b1cb3d3c05ff1607e2f7102b1797b674db87b62b13ac1c5e96a538f6b8" Jan 27 15:17:21 crc kubenswrapper[4698]: I0127 15:17:21.511734 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-6jxgx"] Jan 27 15:17:21 crc kubenswrapper[4698]: E0127 15:17:21.512791 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a5f349eb-148c-4a1e-ab29-e13ebf8cd802" containerName="extract-utilities" Jan 27 15:17:21 crc kubenswrapper[4698]: I0127 15:17:21.512811 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="a5f349eb-148c-4a1e-ab29-e13ebf8cd802" containerName="extract-utilities" Jan 27 15:17:21 crc kubenswrapper[4698]: E0127 15:17:21.512822 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a5f349eb-148c-4a1e-ab29-e13ebf8cd802" containerName="extract-content" Jan 27 15:17:21 crc kubenswrapper[4698]: I0127 15:17:21.512830 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="a5f349eb-148c-4a1e-ab29-e13ebf8cd802" containerName="extract-content" Jan 27 15:17:21 crc kubenswrapper[4698]: E0127 15:17:21.512846 4698 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="a5f349eb-148c-4a1e-ab29-e13ebf8cd802" containerName="registry-server" Jan 27 15:17:21 crc kubenswrapper[4698]: I0127 15:17:21.512852 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="a5f349eb-148c-4a1e-ab29-e13ebf8cd802" containerName="registry-server" Jan 27 15:17:21 crc kubenswrapper[4698]: I0127 15:17:21.513061 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="a5f349eb-148c-4a1e-ab29-e13ebf8cd802" containerName="registry-server" Jan 27 15:17:21 crc kubenswrapper[4698]: I0127 15:17:21.523435 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6jxgx" Jan 27 15:17:21 crc kubenswrapper[4698]: I0127 15:17:21.539312 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-6jxgx"] Jan 27 15:17:21 crc kubenswrapper[4698]: I0127 15:17:21.645232 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/97d98bc2-d97c-4a52-bced-c98e8923c828-catalog-content\") pod \"redhat-marketplace-6jxgx\" (UID: \"97d98bc2-d97c-4a52-bced-c98e8923c828\") " pod="openshift-marketplace/redhat-marketplace-6jxgx" Jan 27 15:17:21 crc kubenswrapper[4698]: I0127 15:17:21.645330 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/97d98bc2-d97c-4a52-bced-c98e8923c828-utilities\") pod \"redhat-marketplace-6jxgx\" (UID: \"97d98bc2-d97c-4a52-bced-c98e8923c828\") " pod="openshift-marketplace/redhat-marketplace-6jxgx" Jan 27 15:17:21 crc kubenswrapper[4698]: I0127 15:17:21.645356 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lbdv4\" (UniqueName: \"kubernetes.io/projected/97d98bc2-d97c-4a52-bced-c98e8923c828-kube-api-access-lbdv4\") pod \"redhat-marketplace-6jxgx\" (UID: \"97d98bc2-d97c-4a52-bced-c98e8923c828\") " pod="openshift-marketplace/redhat-marketplace-6jxgx" Jan 27 15:17:21 crc kubenswrapper[4698]: I0127 15:17:21.747537 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/97d98bc2-d97c-4a52-bced-c98e8923c828-catalog-content\") pod \"redhat-marketplace-6jxgx\" (UID: \"97d98bc2-d97c-4a52-bced-c98e8923c828\") " pod="openshift-marketplace/redhat-marketplace-6jxgx" Jan 27 15:17:21 crc kubenswrapper[4698]: I0127 15:17:21.747708 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/97d98bc2-d97c-4a52-bced-c98e8923c828-utilities\") pod \"redhat-marketplace-6jxgx\" (UID: \"97d98bc2-d97c-4a52-bced-c98e8923c828\") " pod="openshift-marketplace/redhat-marketplace-6jxgx" Jan 27 15:17:21 crc kubenswrapper[4698]: I0127 15:17:21.747745 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lbdv4\" (UniqueName: \"kubernetes.io/projected/97d98bc2-d97c-4a52-bced-c98e8923c828-kube-api-access-lbdv4\") pod \"redhat-marketplace-6jxgx\" (UID: \"97d98bc2-d97c-4a52-bced-c98e8923c828\") " pod="openshift-marketplace/redhat-marketplace-6jxgx" Jan 27 15:17:21 crc kubenswrapper[4698]: I0127 15:17:21.749182 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/97d98bc2-d97c-4a52-bced-c98e8923c828-catalog-content\") pod \"redhat-marketplace-6jxgx\" (UID: 
\"97d98bc2-d97c-4a52-bced-c98e8923c828\") " pod="openshift-marketplace/redhat-marketplace-6jxgx" Jan 27 15:17:21 crc kubenswrapper[4698]: I0127 15:17:21.749291 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/97d98bc2-d97c-4a52-bced-c98e8923c828-utilities\") pod \"redhat-marketplace-6jxgx\" (UID: \"97d98bc2-d97c-4a52-bced-c98e8923c828\") " pod="openshift-marketplace/redhat-marketplace-6jxgx" Jan 27 15:17:21 crc kubenswrapper[4698]: I0127 15:17:21.783182 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lbdv4\" (UniqueName: \"kubernetes.io/projected/97d98bc2-d97c-4a52-bced-c98e8923c828-kube-api-access-lbdv4\") pod \"redhat-marketplace-6jxgx\" (UID: \"97d98bc2-d97c-4a52-bced-c98e8923c828\") " pod="openshift-marketplace/redhat-marketplace-6jxgx" Jan 27 15:17:21 crc kubenswrapper[4698]: I0127 15:17:21.874821 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6jxgx" Jan 27 15:17:22 crc kubenswrapper[4698]: I0127 15:17:22.367286 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-6jxgx"] Jan 27 15:17:22 crc kubenswrapper[4698]: I0127 15:17:22.767358 4698 generic.go:334] "Generic (PLEG): container finished" podID="97d98bc2-d97c-4a52-bced-c98e8923c828" containerID="cad4ba30e493661cfa8046f2757c053ef382446d37cc53cc9b34d2cc5c844e28" exitCode=0 Jan 27 15:17:22 crc kubenswrapper[4698]: I0127 15:17:22.767430 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6jxgx" event={"ID":"97d98bc2-d97c-4a52-bced-c98e8923c828","Type":"ContainerDied","Data":"cad4ba30e493661cfa8046f2757c053ef382446d37cc53cc9b34d2cc5c844e28"} Jan 27 15:17:22 crc kubenswrapper[4698]: I0127 15:17:22.767459 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6jxgx" event={"ID":"97d98bc2-d97c-4a52-bced-c98e8923c828","Type":"ContainerStarted","Data":"76fa99c0facb18ca21e80108da8a526c7f35c83a920b22d8fe4b78c47cc94579"} Jan 27 15:17:24 crc kubenswrapper[4698]: I0127 15:17:24.805970 4698 generic.go:334] "Generic (PLEG): container finished" podID="97d98bc2-d97c-4a52-bced-c98e8923c828" containerID="de6b0ebae7ca8932ad787fcf5b2839a2fadbe326c50598e662e0b9218f2f7fc2" exitCode=0 Jan 27 15:17:24 crc kubenswrapper[4698]: I0127 15:17:24.806123 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6jxgx" event={"ID":"97d98bc2-d97c-4a52-bced-c98e8923c828","Type":"ContainerDied","Data":"de6b0ebae7ca8932ad787fcf5b2839a2fadbe326c50598e662e0b9218f2f7fc2"} Jan 27 15:17:25 crc kubenswrapper[4698]: I0127 15:17:25.820610 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6jxgx" event={"ID":"97d98bc2-d97c-4a52-bced-c98e8923c828","Type":"ContainerStarted","Data":"08c26d28a16d0417ef35bbd6e04565e699ceba2dd4fd966cbc8db589676b00c3"} Jan 27 15:17:25 crc kubenswrapper[4698]: I0127 15:17:25.842797 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-6jxgx" podStartSLOduration=2.294998716 podStartE2EDuration="4.842764731s" podCreationTimestamp="2026-01-27 15:17:21 +0000 UTC" firstStartedPulling="2026-01-27 15:17:22.769538512 +0000 UTC m=+2898.446315977" lastFinishedPulling="2026-01-27 15:17:25.317304527 +0000 UTC m=+2900.994081992" observedRunningTime="2026-01-27 15:17:25.83854907 
+0000 UTC m=+2901.515326565" watchObservedRunningTime="2026-01-27 15:17:25.842764731 +0000 UTC m=+2901.519542196" Jan 27 15:17:31 crc kubenswrapper[4698]: I0127 15:17:31.875224 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-6jxgx" Jan 27 15:17:31 crc kubenswrapper[4698]: I0127 15:17:31.875658 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-6jxgx" Jan 27 15:17:31 crc kubenswrapper[4698]: I0127 15:17:31.928437 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-6jxgx" Jan 27 15:17:32 crc kubenswrapper[4698]: I0127 15:17:32.940796 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-6jxgx" Jan 27 15:17:33 crc kubenswrapper[4698]: I0127 15:17:33.008760 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-6jxgx"] Jan 27 15:17:34 crc kubenswrapper[4698]: I0127 15:17:34.913514 4698 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-6jxgx" podUID="97d98bc2-d97c-4a52-bced-c98e8923c828" containerName="registry-server" containerID="cri-o://08c26d28a16d0417ef35bbd6e04565e699ceba2dd4fd966cbc8db589676b00c3" gracePeriod=2 Jan 27 15:17:36 crc kubenswrapper[4698]: I0127 15:17:36.934946 4698 generic.go:334] "Generic (PLEG): container finished" podID="97d98bc2-d97c-4a52-bced-c98e8923c828" containerID="08c26d28a16d0417ef35bbd6e04565e699ceba2dd4fd966cbc8db589676b00c3" exitCode=0 Jan 27 15:17:36 crc kubenswrapper[4698]: I0127 15:17:36.935165 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6jxgx" event={"ID":"97d98bc2-d97c-4a52-bced-c98e8923c828","Type":"ContainerDied","Data":"08c26d28a16d0417ef35bbd6e04565e699ceba2dd4fd966cbc8db589676b00c3"} Jan 27 15:17:36 crc kubenswrapper[4698]: I0127 15:17:36.936877 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6jxgx" event={"ID":"97d98bc2-d97c-4a52-bced-c98e8923c828","Type":"ContainerDied","Data":"76fa99c0facb18ca21e80108da8a526c7f35c83a920b22d8fe4b78c47cc94579"} Jan 27 15:17:36 crc kubenswrapper[4698]: I0127 15:17:36.936958 4698 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="76fa99c0facb18ca21e80108da8a526c7f35c83a920b22d8fe4b78c47cc94579" Jan 27 15:17:37 crc kubenswrapper[4698]: I0127 15:17:37.022380 4698 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6jxgx" Jan 27 15:17:37 crc kubenswrapper[4698]: I0127 15:17:37.204238 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/97d98bc2-d97c-4a52-bced-c98e8923c828-utilities\") pod \"97d98bc2-d97c-4a52-bced-c98e8923c828\" (UID: \"97d98bc2-d97c-4a52-bced-c98e8923c828\") " Jan 27 15:17:37 crc kubenswrapper[4698]: I0127 15:17:37.204318 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lbdv4\" (UniqueName: \"kubernetes.io/projected/97d98bc2-d97c-4a52-bced-c98e8923c828-kube-api-access-lbdv4\") pod \"97d98bc2-d97c-4a52-bced-c98e8923c828\" (UID: \"97d98bc2-d97c-4a52-bced-c98e8923c828\") " Jan 27 15:17:37 crc kubenswrapper[4698]: I0127 15:17:37.204587 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/97d98bc2-d97c-4a52-bced-c98e8923c828-catalog-content\") pod \"97d98bc2-d97c-4a52-bced-c98e8923c828\" (UID: \"97d98bc2-d97c-4a52-bced-c98e8923c828\") " Jan 27 15:17:37 crc kubenswrapper[4698]: I0127 15:17:37.205383 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/97d98bc2-d97c-4a52-bced-c98e8923c828-utilities" (OuterVolumeSpecName: "utilities") pod "97d98bc2-d97c-4a52-bced-c98e8923c828" (UID: "97d98bc2-d97c-4a52-bced-c98e8923c828"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 15:17:37 crc kubenswrapper[4698]: I0127 15:17:37.213039 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/97d98bc2-d97c-4a52-bced-c98e8923c828-kube-api-access-lbdv4" (OuterVolumeSpecName: "kube-api-access-lbdv4") pod "97d98bc2-d97c-4a52-bced-c98e8923c828" (UID: "97d98bc2-d97c-4a52-bced-c98e8923c828"). InnerVolumeSpecName "kube-api-access-lbdv4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 15:17:37 crc kubenswrapper[4698]: I0127 15:17:37.228811 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/97d98bc2-d97c-4a52-bced-c98e8923c828-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "97d98bc2-d97c-4a52-bced-c98e8923c828" (UID: "97d98bc2-d97c-4a52-bced-c98e8923c828"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 15:17:37 crc kubenswrapper[4698]: I0127 15:17:37.307240 4698 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/97d98bc2-d97c-4a52-bced-c98e8923c828-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 15:17:37 crc kubenswrapper[4698]: I0127 15:17:37.307584 4698 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/97d98bc2-d97c-4a52-bced-c98e8923c828-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 15:17:37 crc kubenswrapper[4698]: I0127 15:17:37.307597 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lbdv4\" (UniqueName: \"kubernetes.io/projected/97d98bc2-d97c-4a52-bced-c98e8923c828-kube-api-access-lbdv4\") on node \"crc\" DevicePath \"\"" Jan 27 15:17:37 crc kubenswrapper[4698]: I0127 15:17:37.945584 4698 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6jxgx" Jan 27 15:17:37 crc kubenswrapper[4698]: I0127 15:17:37.985528 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-6jxgx"] Jan 27 15:17:37 crc kubenswrapper[4698]: I0127 15:17:37.999858 4698 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-6jxgx"] Jan 27 15:17:39 crc kubenswrapper[4698]: I0127 15:17:39.006862 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="97d98bc2-d97c-4a52-bced-c98e8923c828" path="/var/lib/kubelet/pods/97d98bc2-d97c-4a52-bced-c98e8923c828/volumes" Jan 27 15:18:27 crc kubenswrapper[4698]: I0127 15:18:27.452071 4698 patch_prober.go:28] interesting pod/machine-config-daemon-ndrd6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 15:18:27 crc kubenswrapper[4698]: I0127 15:18:27.452611 4698 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ndrd6" podUID="3e403fc5-7005-474c-8c75-b7906b481677" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 15:18:57 crc kubenswrapper[4698]: I0127 15:18:57.452606 4698 patch_prober.go:28] interesting pod/machine-config-daemon-ndrd6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 15:18:57 crc kubenswrapper[4698]: I0127 15:18:57.453231 4698 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ndrd6" podUID="3e403fc5-7005-474c-8c75-b7906b481677" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 15:19:27 crc kubenswrapper[4698]: I0127 15:19:27.452197 4698 patch_prober.go:28] interesting pod/machine-config-daemon-ndrd6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 15:19:27 crc kubenswrapper[4698]: I0127 15:19:27.454022 4698 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ndrd6" podUID="3e403fc5-7005-474c-8c75-b7906b481677" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 15:19:27 crc kubenswrapper[4698]: I0127 15:19:27.454150 4698 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-ndrd6" Jan 27 15:19:27 crc kubenswrapper[4698]: I0127 15:19:27.455374 4698 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"14b47c46b0de7c0ecaaaead15a2dd02e8412e1aba78cdbca1c442b1f8a50e507"} pod="openshift-machine-config-operator/machine-config-daemon-ndrd6" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" 
Jan 27 15:19:27 crc kubenswrapper[4698]: I0127 15:19:27.455444 4698 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-ndrd6" podUID="3e403fc5-7005-474c-8c75-b7906b481677" containerName="machine-config-daemon" containerID="cri-o://14b47c46b0de7c0ecaaaead15a2dd02e8412e1aba78cdbca1c442b1f8a50e507" gracePeriod=600 Jan 27 15:19:27 crc kubenswrapper[4698]: E0127 15:19:27.590208 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ndrd6_openshift-machine-config-operator(3e403fc5-7005-474c-8c75-b7906b481677)\"" pod="openshift-machine-config-operator/machine-config-daemon-ndrd6" podUID="3e403fc5-7005-474c-8c75-b7906b481677" Jan 27 15:19:28 crc kubenswrapper[4698]: I0127 15:19:28.063290 4698 generic.go:334] "Generic (PLEG): container finished" podID="3e403fc5-7005-474c-8c75-b7906b481677" containerID="14b47c46b0de7c0ecaaaead15a2dd02e8412e1aba78cdbca1c442b1f8a50e507" exitCode=0 Jan 27 15:19:28 crc kubenswrapper[4698]: I0127 15:19:28.063375 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ndrd6" event={"ID":"3e403fc5-7005-474c-8c75-b7906b481677","Type":"ContainerDied","Data":"14b47c46b0de7c0ecaaaead15a2dd02e8412e1aba78cdbca1c442b1f8a50e507"} Jan 27 15:19:28 crc kubenswrapper[4698]: I0127 15:19:28.063415 4698 scope.go:117] "RemoveContainer" containerID="7eff6a6e737295ad2ff376f63f70c687e9126403e6876a676d534315663a38ca" Jan 27 15:19:28 crc kubenswrapper[4698]: I0127 15:19:28.064474 4698 scope.go:117] "RemoveContainer" containerID="14b47c46b0de7c0ecaaaead15a2dd02e8412e1aba78cdbca1c442b1f8a50e507" Jan 27 15:19:28 crc kubenswrapper[4698]: E0127 15:19:28.064800 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ndrd6_openshift-machine-config-operator(3e403fc5-7005-474c-8c75-b7906b481677)\"" pod="openshift-machine-config-operator/machine-config-daemon-ndrd6" podUID="3e403fc5-7005-474c-8c75-b7906b481677" Jan 27 15:19:38 crc kubenswrapper[4698]: I0127 15:19:38.993589 4698 scope.go:117] "RemoveContainer" containerID="14b47c46b0de7c0ecaaaead15a2dd02e8412e1aba78cdbca1c442b1f8a50e507" Jan 27 15:19:38 crc kubenswrapper[4698]: E0127 15:19:38.994361 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ndrd6_openshift-machine-config-operator(3e403fc5-7005-474c-8c75-b7906b481677)\"" pod="openshift-machine-config-operator/machine-config-daemon-ndrd6" podUID="3e403fc5-7005-474c-8c75-b7906b481677" Jan 27 15:19:53 crc kubenswrapper[4698]: I0127 15:19:53.993027 4698 scope.go:117] "RemoveContainer" containerID="14b47c46b0de7c0ecaaaead15a2dd02e8412e1aba78cdbca1c442b1f8a50e507" Jan 27 15:19:53 crc kubenswrapper[4698]: E0127 15:19:53.993950 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-ndrd6_openshift-machine-config-operator(3e403fc5-7005-474c-8c75-b7906b481677)\"" pod="openshift-machine-config-operator/machine-config-daemon-ndrd6" podUID="3e403fc5-7005-474c-8c75-b7906b481677" Jan 27 15:20:07 crc kubenswrapper[4698]: I0127 15:20:07.993082 4698 scope.go:117] "RemoveContainer" containerID="14b47c46b0de7c0ecaaaead15a2dd02e8412e1aba78cdbca1c442b1f8a50e507" Jan 27 15:20:07 crc kubenswrapper[4698]: E0127 15:20:07.994034 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ndrd6_openshift-machine-config-operator(3e403fc5-7005-474c-8c75-b7906b481677)\"" pod="openshift-machine-config-operator/machine-config-daemon-ndrd6" podUID="3e403fc5-7005-474c-8c75-b7906b481677" Jan 27 15:20:19 crc kubenswrapper[4698]: I0127 15:20:19.992622 4698 scope.go:117] "RemoveContainer" containerID="14b47c46b0de7c0ecaaaead15a2dd02e8412e1aba78cdbca1c442b1f8a50e507" Jan 27 15:20:19 crc kubenswrapper[4698]: E0127 15:20:19.993593 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ndrd6_openshift-machine-config-operator(3e403fc5-7005-474c-8c75-b7906b481677)\"" pod="openshift-machine-config-operator/machine-config-daemon-ndrd6" podUID="3e403fc5-7005-474c-8c75-b7906b481677" Jan 27 15:20:32 crc kubenswrapper[4698]: I0127 15:20:32.992692 4698 scope.go:117] "RemoveContainer" containerID="14b47c46b0de7c0ecaaaead15a2dd02e8412e1aba78cdbca1c442b1f8a50e507" Jan 27 15:20:32 crc kubenswrapper[4698]: E0127 15:20:32.993653 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ndrd6_openshift-machine-config-operator(3e403fc5-7005-474c-8c75-b7906b481677)\"" pod="openshift-machine-config-operator/machine-config-daemon-ndrd6" podUID="3e403fc5-7005-474c-8c75-b7906b481677" Jan 27 15:20:37 crc kubenswrapper[4698]: I0127 15:20:37.469609 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-phn9l"] Jan 27 15:20:37 crc kubenswrapper[4698]: E0127 15:20:37.471307 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="97d98bc2-d97c-4a52-bced-c98e8923c828" containerName="registry-server" Jan 27 15:20:37 crc kubenswrapper[4698]: I0127 15:20:37.471325 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="97d98bc2-d97c-4a52-bced-c98e8923c828" containerName="registry-server" Jan 27 15:20:37 crc kubenswrapper[4698]: E0127 15:20:37.471370 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="97d98bc2-d97c-4a52-bced-c98e8923c828" containerName="extract-content" Jan 27 15:20:37 crc kubenswrapper[4698]: I0127 15:20:37.471377 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="97d98bc2-d97c-4a52-bced-c98e8923c828" containerName="extract-content" Jan 27 15:20:37 crc kubenswrapper[4698]: E0127 15:20:37.471408 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="97d98bc2-d97c-4a52-bced-c98e8923c828" containerName="extract-utilities" Jan 27 15:20:37 crc kubenswrapper[4698]: I0127 15:20:37.471415 4698 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="97d98bc2-d97c-4a52-bced-c98e8923c828" containerName="extract-utilities" Jan 27 15:20:37 crc kubenswrapper[4698]: I0127 15:20:37.471672 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="97d98bc2-d97c-4a52-bced-c98e8923c828" containerName="registry-server" Jan 27 15:20:37 crc kubenswrapper[4698]: I0127 15:20:37.473712 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-phn9l" Jan 27 15:20:37 crc kubenswrapper[4698]: I0127 15:20:37.480296 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-phn9l"] Jan 27 15:20:37 crc kubenswrapper[4698]: I0127 15:20:37.501112 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ef385339-a8d9-40de-a928-a82b5c7d014b-utilities\") pod \"redhat-operators-phn9l\" (UID: \"ef385339-a8d9-40de-a928-a82b5c7d014b\") " pod="openshift-marketplace/redhat-operators-phn9l" Jan 27 15:20:37 crc kubenswrapper[4698]: I0127 15:20:37.501173 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ef385339-a8d9-40de-a928-a82b5c7d014b-catalog-content\") pod \"redhat-operators-phn9l\" (UID: \"ef385339-a8d9-40de-a928-a82b5c7d014b\") " pod="openshift-marketplace/redhat-operators-phn9l" Jan 27 15:20:37 crc kubenswrapper[4698]: I0127 15:20:37.501384 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z8wxx\" (UniqueName: \"kubernetes.io/projected/ef385339-a8d9-40de-a928-a82b5c7d014b-kube-api-access-z8wxx\") pod \"redhat-operators-phn9l\" (UID: \"ef385339-a8d9-40de-a928-a82b5c7d014b\") " pod="openshift-marketplace/redhat-operators-phn9l" Jan 27 15:20:37 crc kubenswrapper[4698]: I0127 15:20:37.602981 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z8wxx\" (UniqueName: \"kubernetes.io/projected/ef385339-a8d9-40de-a928-a82b5c7d014b-kube-api-access-z8wxx\") pod \"redhat-operators-phn9l\" (UID: \"ef385339-a8d9-40de-a928-a82b5c7d014b\") " pod="openshift-marketplace/redhat-operators-phn9l" Jan 27 15:20:37 crc kubenswrapper[4698]: I0127 15:20:37.603067 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ef385339-a8d9-40de-a928-a82b5c7d014b-utilities\") pod \"redhat-operators-phn9l\" (UID: \"ef385339-a8d9-40de-a928-a82b5c7d014b\") " pod="openshift-marketplace/redhat-operators-phn9l" Jan 27 15:20:37 crc kubenswrapper[4698]: I0127 15:20:37.603104 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ef385339-a8d9-40de-a928-a82b5c7d014b-catalog-content\") pod \"redhat-operators-phn9l\" (UID: \"ef385339-a8d9-40de-a928-a82b5c7d014b\") " pod="openshift-marketplace/redhat-operators-phn9l" Jan 27 15:20:37 crc kubenswrapper[4698]: I0127 15:20:37.603620 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ef385339-a8d9-40de-a928-a82b5c7d014b-utilities\") pod \"redhat-operators-phn9l\" (UID: \"ef385339-a8d9-40de-a928-a82b5c7d014b\") " pod="openshift-marketplace/redhat-operators-phn9l" Jan 27 15:20:37 crc kubenswrapper[4698]: I0127 15:20:37.603672 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ef385339-a8d9-40de-a928-a82b5c7d014b-catalog-content\") pod \"redhat-operators-phn9l\" (UID: \"ef385339-a8d9-40de-a928-a82b5c7d014b\") " pod="openshift-marketplace/redhat-operators-phn9l" Jan 27 15:20:37 crc kubenswrapper[4698]: I0127 15:20:37.635572 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z8wxx\" (UniqueName: \"kubernetes.io/projected/ef385339-a8d9-40de-a928-a82b5c7d014b-kube-api-access-z8wxx\") pod \"redhat-operators-phn9l\" (UID: \"ef385339-a8d9-40de-a928-a82b5c7d014b\") " pod="openshift-marketplace/redhat-operators-phn9l" Jan 27 15:20:37 crc kubenswrapper[4698]: I0127 15:20:37.809364 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-phn9l" Jan 27 15:20:38 crc kubenswrapper[4698]: I0127 15:20:38.299273 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-phn9l"] Jan 27 15:20:38 crc kubenswrapper[4698]: W0127 15:20:38.311596 4698 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podef385339_a8d9_40de_a928_a82b5c7d014b.slice/crio-a45131bfaacd5aec3598caed8128f7226e433e4bc3473be1e5ce65650d8346e4 WatchSource:0}: Error finding container a45131bfaacd5aec3598caed8128f7226e433e4bc3473be1e5ce65650d8346e4: Status 404 returned error can't find the container with id a45131bfaacd5aec3598caed8128f7226e433e4bc3473be1e5ce65650d8346e4 Jan 27 15:20:38 crc kubenswrapper[4698]: I0127 15:20:38.705430 4698 generic.go:334] "Generic (PLEG): container finished" podID="ef385339-a8d9-40de-a928-a82b5c7d014b" containerID="da46c35ec0c56f491ab4982cd410a9f2c1ea7d55a2f89e845a33506cd4d5359d" exitCode=0 Jan 27 15:20:38 crc kubenswrapper[4698]: I0127 15:20:38.705480 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-phn9l" event={"ID":"ef385339-a8d9-40de-a928-a82b5c7d014b","Type":"ContainerDied","Data":"da46c35ec0c56f491ab4982cd410a9f2c1ea7d55a2f89e845a33506cd4d5359d"} Jan 27 15:20:38 crc kubenswrapper[4698]: I0127 15:20:38.705527 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-phn9l" event={"ID":"ef385339-a8d9-40de-a928-a82b5c7d014b","Type":"ContainerStarted","Data":"a45131bfaacd5aec3598caed8128f7226e433e4bc3473be1e5ce65650d8346e4"} Jan 27 15:20:38 crc kubenswrapper[4698]: I0127 15:20:38.707846 4698 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 27 15:20:40 crc kubenswrapper[4698]: I0127 15:20:40.724575 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-phn9l" event={"ID":"ef385339-a8d9-40de-a928-a82b5c7d014b","Type":"ContainerStarted","Data":"69755e68bd5ab074018d20cb26360caf2fe8cdfa55c1961739661f7c84aee220"} Jan 27 15:20:41 crc kubenswrapper[4698]: I0127 15:20:41.734093 4698 generic.go:334] "Generic (PLEG): container finished" podID="ef385339-a8d9-40de-a928-a82b5c7d014b" containerID="69755e68bd5ab074018d20cb26360caf2fe8cdfa55c1961739661f7c84aee220" exitCode=0 Jan 27 15:20:41 crc kubenswrapper[4698]: I0127 15:20:41.734213 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-phn9l" event={"ID":"ef385339-a8d9-40de-a928-a82b5c7d014b","Type":"ContainerDied","Data":"69755e68bd5ab074018d20cb26360caf2fe8cdfa55c1961739661f7c84aee220"} Jan 27 15:20:43 crc kubenswrapper[4698]: I0127 
15:20:43.755558 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-phn9l" event={"ID":"ef385339-a8d9-40de-a928-a82b5c7d014b","Type":"ContainerStarted","Data":"eee34c80ef4635ba2e137d1444a3fe84b9e677e34f558f0efffde4f60eac0a1e"} Jan 27 15:20:43 crc kubenswrapper[4698]: I0127 15:20:43.779298 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-phn9l" podStartSLOduration=2.774050927 podStartE2EDuration="6.779274143s" podCreationTimestamp="2026-01-27 15:20:37 +0000 UTC" firstStartedPulling="2026-01-27 15:20:38.707527128 +0000 UTC m=+3094.384304603" lastFinishedPulling="2026-01-27 15:20:42.712750354 +0000 UTC m=+3098.389527819" observedRunningTime="2026-01-27 15:20:43.773590832 +0000 UTC m=+3099.450368307" watchObservedRunningTime="2026-01-27 15:20:43.779274143 +0000 UTC m=+3099.456051598" Jan 27 15:20:46 crc kubenswrapper[4698]: I0127 15:20:46.997260 4698 scope.go:117] "RemoveContainer" containerID="14b47c46b0de7c0ecaaaead15a2dd02e8412e1aba78cdbca1c442b1f8a50e507" Jan 27 15:20:46 crc kubenswrapper[4698]: E0127 15:20:46.997908 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ndrd6_openshift-machine-config-operator(3e403fc5-7005-474c-8c75-b7906b481677)\"" pod="openshift-machine-config-operator/machine-config-daemon-ndrd6" podUID="3e403fc5-7005-474c-8c75-b7906b481677" Jan 27 15:20:47 crc kubenswrapper[4698]: I0127 15:20:47.221873 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-4vwjp"] Jan 27 15:20:47 crc kubenswrapper[4698]: I0127 15:20:47.224017 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-4vwjp" Jan 27 15:20:47 crc kubenswrapper[4698]: I0127 15:20:47.235998 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-4vwjp"] Jan 27 15:20:47 crc kubenswrapper[4698]: I0127 15:20:47.325214 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aa2f397b-a5ac-44a6-a871-40721a4a4d47-utilities\") pod \"certified-operators-4vwjp\" (UID: \"aa2f397b-a5ac-44a6-a871-40721a4a4d47\") " pod="openshift-marketplace/certified-operators-4vwjp" Jan 27 15:20:47 crc kubenswrapper[4698]: I0127 15:20:47.325760 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aa2f397b-a5ac-44a6-a871-40721a4a4d47-catalog-content\") pod \"certified-operators-4vwjp\" (UID: \"aa2f397b-a5ac-44a6-a871-40721a4a4d47\") " pod="openshift-marketplace/certified-operators-4vwjp" Jan 27 15:20:47 crc kubenswrapper[4698]: I0127 15:20:47.325802 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-px5mc\" (UniqueName: \"kubernetes.io/projected/aa2f397b-a5ac-44a6-a871-40721a4a4d47-kube-api-access-px5mc\") pod \"certified-operators-4vwjp\" (UID: \"aa2f397b-a5ac-44a6-a871-40721a4a4d47\") " pod="openshift-marketplace/certified-operators-4vwjp" Jan 27 15:20:47 crc kubenswrapper[4698]: I0127 15:20:47.428188 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aa2f397b-a5ac-44a6-a871-40721a4a4d47-catalog-content\") pod \"certified-operators-4vwjp\" (UID: \"aa2f397b-a5ac-44a6-a871-40721a4a4d47\") " pod="openshift-marketplace/certified-operators-4vwjp" Jan 27 15:20:47 crc kubenswrapper[4698]: I0127 15:20:47.428273 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-px5mc\" (UniqueName: \"kubernetes.io/projected/aa2f397b-a5ac-44a6-a871-40721a4a4d47-kube-api-access-px5mc\") pod \"certified-operators-4vwjp\" (UID: \"aa2f397b-a5ac-44a6-a871-40721a4a4d47\") " pod="openshift-marketplace/certified-operators-4vwjp" Jan 27 15:20:47 crc kubenswrapper[4698]: I0127 15:20:47.428371 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aa2f397b-a5ac-44a6-a871-40721a4a4d47-utilities\") pod \"certified-operators-4vwjp\" (UID: \"aa2f397b-a5ac-44a6-a871-40721a4a4d47\") " pod="openshift-marketplace/certified-operators-4vwjp" Jan 27 15:20:47 crc kubenswrapper[4698]: I0127 15:20:47.429798 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aa2f397b-a5ac-44a6-a871-40721a4a4d47-catalog-content\") pod \"certified-operators-4vwjp\" (UID: \"aa2f397b-a5ac-44a6-a871-40721a4a4d47\") " pod="openshift-marketplace/certified-operators-4vwjp" Jan 27 15:20:47 crc kubenswrapper[4698]: I0127 15:20:47.429867 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aa2f397b-a5ac-44a6-a871-40721a4a4d47-utilities\") pod \"certified-operators-4vwjp\" (UID: \"aa2f397b-a5ac-44a6-a871-40721a4a4d47\") " pod="openshift-marketplace/certified-operators-4vwjp" Jan 27 15:20:47 crc kubenswrapper[4698]: I0127 15:20:47.450620 4698 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-px5mc\" (UniqueName: \"kubernetes.io/projected/aa2f397b-a5ac-44a6-a871-40721a4a4d47-kube-api-access-px5mc\") pod \"certified-operators-4vwjp\" (UID: \"aa2f397b-a5ac-44a6-a871-40721a4a4d47\") " pod="openshift-marketplace/certified-operators-4vwjp" Jan 27 15:20:47 crc kubenswrapper[4698]: I0127 15:20:47.544414 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-4vwjp" Jan 27 15:20:47 crc kubenswrapper[4698]: I0127 15:20:47.809933 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-phn9l" Jan 27 15:20:47 crc kubenswrapper[4698]: I0127 15:20:47.810257 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-phn9l" Jan 27 15:20:48 crc kubenswrapper[4698]: I0127 15:20:48.078828 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-4vwjp"] Jan 27 15:20:48 crc kubenswrapper[4698]: I0127 15:20:48.800852 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4vwjp" event={"ID":"aa2f397b-a5ac-44a6-a871-40721a4a4d47","Type":"ContainerStarted","Data":"4a863edbf78b94a737a6bffb7cdea7bc890e791152b3001106999750d71efad5"} Jan 27 15:20:48 crc kubenswrapper[4698]: I0127 15:20:48.879575 4698 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-phn9l" podUID="ef385339-a8d9-40de-a928-a82b5c7d014b" containerName="registry-server" probeResult="failure" output=< Jan 27 15:20:48 crc kubenswrapper[4698]: timeout: failed to connect service ":50051" within 1s Jan 27 15:20:48 crc kubenswrapper[4698]: > Jan 27 15:20:49 crc kubenswrapper[4698]: I0127 15:20:49.813356 4698 generic.go:334] "Generic (PLEG): container finished" podID="aa2f397b-a5ac-44a6-a871-40721a4a4d47" containerID="df1966b25e641257bd127b1206a7d789418bace2a7e5556c787cd63d4c4e01a7" exitCode=0 Jan 27 15:20:49 crc kubenswrapper[4698]: I0127 15:20:49.813525 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4vwjp" event={"ID":"aa2f397b-a5ac-44a6-a871-40721a4a4d47","Type":"ContainerDied","Data":"df1966b25e641257bd127b1206a7d789418bace2a7e5556c787cd63d4c4e01a7"} Jan 27 15:20:51 crc kubenswrapper[4698]: I0127 15:20:51.833747 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4vwjp" event={"ID":"aa2f397b-a5ac-44a6-a871-40721a4a4d47","Type":"ContainerStarted","Data":"aaa4edbbd6c49a01e07f2aff45f51650e26ba85fc6abaf78c9284e06b28f33cd"} Jan 27 15:20:55 crc kubenswrapper[4698]: I0127 15:20:55.871340 4698 generic.go:334] "Generic (PLEG): container finished" podID="aa2f397b-a5ac-44a6-a871-40721a4a4d47" containerID="aaa4edbbd6c49a01e07f2aff45f51650e26ba85fc6abaf78c9284e06b28f33cd" exitCode=0 Jan 27 15:20:55 crc kubenswrapper[4698]: I0127 15:20:55.871471 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4vwjp" event={"ID":"aa2f397b-a5ac-44a6-a871-40721a4a4d47","Type":"ContainerDied","Data":"aaa4edbbd6c49a01e07f2aff45f51650e26ba85fc6abaf78c9284e06b28f33cd"} Jan 27 15:20:56 crc kubenswrapper[4698]: I0127 15:20:56.885869 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4vwjp" 
event={"ID":"aa2f397b-a5ac-44a6-a871-40721a4a4d47","Type":"ContainerStarted","Data":"b059ccef07b5c26ccc880e99fbe958022c5e7b83decf0c38fdb0c604b77b48a0"} Jan 27 15:20:56 crc kubenswrapper[4698]: I0127 15:20:56.915482 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-4vwjp" podStartSLOduration=3.303581233 podStartE2EDuration="9.91545814s" podCreationTimestamp="2026-01-27 15:20:47 +0000 UTC" firstStartedPulling="2026-01-27 15:20:49.816279289 +0000 UTC m=+3105.493056754" lastFinishedPulling="2026-01-27 15:20:56.428156196 +0000 UTC m=+3112.104933661" observedRunningTime="2026-01-27 15:20:56.908053074 +0000 UTC m=+3112.584830559" watchObservedRunningTime="2026-01-27 15:20:56.91545814 +0000 UTC m=+3112.592235615" Jan 27 15:20:57 crc kubenswrapper[4698]: I0127 15:20:57.545111 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-4vwjp" Jan 27 15:20:57 crc kubenswrapper[4698]: I0127 15:20:57.545477 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-4vwjp" Jan 27 15:20:57 crc kubenswrapper[4698]: I0127 15:20:57.861491 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-phn9l" Jan 27 15:20:57 crc kubenswrapper[4698]: I0127 15:20:57.915164 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-phn9l" Jan 27 15:20:58 crc kubenswrapper[4698]: I0127 15:20:58.589988 4698 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-4vwjp" podUID="aa2f397b-a5ac-44a6-a871-40721a4a4d47" containerName="registry-server" probeResult="failure" output=< Jan 27 15:20:58 crc kubenswrapper[4698]: timeout: failed to connect service ":50051" within 1s Jan 27 15:20:58 crc kubenswrapper[4698]: > Jan 27 15:20:58 crc kubenswrapper[4698]: I0127 15:20:58.992309 4698 scope.go:117] "RemoveContainer" containerID="14b47c46b0de7c0ecaaaead15a2dd02e8412e1aba78cdbca1c442b1f8a50e507" Jan 27 15:20:58 crc kubenswrapper[4698]: E0127 15:20:58.992663 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ndrd6_openshift-machine-config-operator(3e403fc5-7005-474c-8c75-b7906b481677)\"" pod="openshift-machine-config-operator/machine-config-daemon-ndrd6" podUID="3e403fc5-7005-474c-8c75-b7906b481677" Jan 27 15:20:59 crc kubenswrapper[4698]: I0127 15:20:59.013725 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-phn9l"] Jan 27 15:20:59 crc kubenswrapper[4698]: I0127 15:20:59.013956 4698 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-phn9l" podUID="ef385339-a8d9-40de-a928-a82b5c7d014b" containerName="registry-server" containerID="cri-o://eee34c80ef4635ba2e137d1444a3fe84b9e677e34f558f0efffde4f60eac0a1e" gracePeriod=2 Jan 27 15:20:59 crc kubenswrapper[4698]: I0127 15:20:59.493361 4698 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-phn9l" Jan 27 15:20:59 crc kubenswrapper[4698]: I0127 15:20:59.595972 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ef385339-a8d9-40de-a928-a82b5c7d014b-utilities\") pod \"ef385339-a8d9-40de-a928-a82b5c7d014b\" (UID: \"ef385339-a8d9-40de-a928-a82b5c7d014b\") " Jan 27 15:20:59 crc kubenswrapper[4698]: I0127 15:20:59.596195 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ef385339-a8d9-40de-a928-a82b5c7d014b-catalog-content\") pod \"ef385339-a8d9-40de-a928-a82b5c7d014b\" (UID: \"ef385339-a8d9-40de-a928-a82b5c7d014b\") " Jan 27 15:20:59 crc kubenswrapper[4698]: I0127 15:20:59.596245 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z8wxx\" (UniqueName: \"kubernetes.io/projected/ef385339-a8d9-40de-a928-a82b5c7d014b-kube-api-access-z8wxx\") pod \"ef385339-a8d9-40de-a928-a82b5c7d014b\" (UID: \"ef385339-a8d9-40de-a928-a82b5c7d014b\") " Jan 27 15:20:59 crc kubenswrapper[4698]: I0127 15:20:59.597286 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ef385339-a8d9-40de-a928-a82b5c7d014b-utilities" (OuterVolumeSpecName: "utilities") pod "ef385339-a8d9-40de-a928-a82b5c7d014b" (UID: "ef385339-a8d9-40de-a928-a82b5c7d014b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 15:20:59 crc kubenswrapper[4698]: I0127 15:20:59.602753 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ef385339-a8d9-40de-a928-a82b5c7d014b-kube-api-access-z8wxx" (OuterVolumeSpecName: "kube-api-access-z8wxx") pod "ef385339-a8d9-40de-a928-a82b5c7d014b" (UID: "ef385339-a8d9-40de-a928-a82b5c7d014b"). InnerVolumeSpecName "kube-api-access-z8wxx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 15:20:59 crc kubenswrapper[4698]: I0127 15:20:59.698293 4698 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ef385339-a8d9-40de-a928-a82b5c7d014b-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 15:20:59 crc kubenswrapper[4698]: I0127 15:20:59.698340 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z8wxx\" (UniqueName: \"kubernetes.io/projected/ef385339-a8d9-40de-a928-a82b5c7d014b-kube-api-access-z8wxx\") on node \"crc\" DevicePath \"\"" Jan 27 15:20:59 crc kubenswrapper[4698]: I0127 15:20:59.710206 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ef385339-a8d9-40de-a928-a82b5c7d014b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ef385339-a8d9-40de-a928-a82b5c7d014b" (UID: "ef385339-a8d9-40de-a928-a82b5c7d014b"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 15:20:59 crc kubenswrapper[4698]: I0127 15:20:59.799989 4698 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ef385339-a8d9-40de-a928-a82b5c7d014b-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 15:20:59 crc kubenswrapper[4698]: I0127 15:20:59.914528 4698 generic.go:334] "Generic (PLEG): container finished" podID="ef385339-a8d9-40de-a928-a82b5c7d014b" containerID="eee34c80ef4635ba2e137d1444a3fe84b9e677e34f558f0efffde4f60eac0a1e" exitCode=0 Jan 27 15:20:59 crc kubenswrapper[4698]: I0127 15:20:59.914670 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-phn9l" Jan 27 15:20:59 crc kubenswrapper[4698]: I0127 15:20:59.914614 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-phn9l" event={"ID":"ef385339-a8d9-40de-a928-a82b5c7d014b","Type":"ContainerDied","Data":"eee34c80ef4635ba2e137d1444a3fe84b9e677e34f558f0efffde4f60eac0a1e"} Jan 27 15:20:59 crc kubenswrapper[4698]: I0127 15:20:59.914728 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-phn9l" event={"ID":"ef385339-a8d9-40de-a928-a82b5c7d014b","Type":"ContainerDied","Data":"a45131bfaacd5aec3598caed8128f7226e433e4bc3473be1e5ce65650d8346e4"} Jan 27 15:20:59 crc kubenswrapper[4698]: I0127 15:20:59.914748 4698 scope.go:117] "RemoveContainer" containerID="eee34c80ef4635ba2e137d1444a3fe84b9e677e34f558f0efffde4f60eac0a1e" Jan 27 15:20:59 crc kubenswrapper[4698]: I0127 15:20:59.934087 4698 scope.go:117] "RemoveContainer" containerID="69755e68bd5ab074018d20cb26360caf2fe8cdfa55c1961739661f7c84aee220" Jan 27 15:20:59 crc kubenswrapper[4698]: I0127 15:20:59.953194 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-phn9l"] Jan 27 15:20:59 crc kubenswrapper[4698]: I0127 15:20:59.961388 4698 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-phn9l"] Jan 27 15:20:59 crc kubenswrapper[4698]: I0127 15:20:59.974089 4698 scope.go:117] "RemoveContainer" containerID="da46c35ec0c56f491ab4982cd410a9f2c1ea7d55a2f89e845a33506cd4d5359d" Jan 27 15:21:00 crc kubenswrapper[4698]: I0127 15:21:00.012379 4698 scope.go:117] "RemoveContainer" containerID="eee34c80ef4635ba2e137d1444a3fe84b9e677e34f558f0efffde4f60eac0a1e" Jan 27 15:21:00 crc kubenswrapper[4698]: E0127 15:21:00.012770 4698 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"eee34c80ef4635ba2e137d1444a3fe84b9e677e34f558f0efffde4f60eac0a1e\": container with ID starting with eee34c80ef4635ba2e137d1444a3fe84b9e677e34f558f0efffde4f60eac0a1e not found: ID does not exist" containerID="eee34c80ef4635ba2e137d1444a3fe84b9e677e34f558f0efffde4f60eac0a1e" Jan 27 15:21:00 crc kubenswrapper[4698]: I0127 15:21:00.012803 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eee34c80ef4635ba2e137d1444a3fe84b9e677e34f558f0efffde4f60eac0a1e"} err="failed to get container status \"eee34c80ef4635ba2e137d1444a3fe84b9e677e34f558f0efffde4f60eac0a1e\": rpc error: code = NotFound desc = could not find container \"eee34c80ef4635ba2e137d1444a3fe84b9e677e34f558f0efffde4f60eac0a1e\": container with ID starting with eee34c80ef4635ba2e137d1444a3fe84b9e677e34f558f0efffde4f60eac0a1e not found: ID does not exist" Jan 27 15:21:00 crc 
kubenswrapper[4698]: I0127 15:21:00.012828 4698 scope.go:117] "RemoveContainer" containerID="69755e68bd5ab074018d20cb26360caf2fe8cdfa55c1961739661f7c84aee220" Jan 27 15:21:00 crc kubenswrapper[4698]: E0127 15:21:00.013057 4698 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"69755e68bd5ab074018d20cb26360caf2fe8cdfa55c1961739661f7c84aee220\": container with ID starting with 69755e68bd5ab074018d20cb26360caf2fe8cdfa55c1961739661f7c84aee220 not found: ID does not exist" containerID="69755e68bd5ab074018d20cb26360caf2fe8cdfa55c1961739661f7c84aee220" Jan 27 15:21:00 crc kubenswrapper[4698]: I0127 15:21:00.013084 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"69755e68bd5ab074018d20cb26360caf2fe8cdfa55c1961739661f7c84aee220"} err="failed to get container status \"69755e68bd5ab074018d20cb26360caf2fe8cdfa55c1961739661f7c84aee220\": rpc error: code = NotFound desc = could not find container \"69755e68bd5ab074018d20cb26360caf2fe8cdfa55c1961739661f7c84aee220\": container with ID starting with 69755e68bd5ab074018d20cb26360caf2fe8cdfa55c1961739661f7c84aee220 not found: ID does not exist" Jan 27 15:21:00 crc kubenswrapper[4698]: I0127 15:21:00.013101 4698 scope.go:117] "RemoveContainer" containerID="da46c35ec0c56f491ab4982cd410a9f2c1ea7d55a2f89e845a33506cd4d5359d" Jan 27 15:21:00 crc kubenswrapper[4698]: E0127 15:21:00.013351 4698 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"da46c35ec0c56f491ab4982cd410a9f2c1ea7d55a2f89e845a33506cd4d5359d\": container with ID starting with da46c35ec0c56f491ab4982cd410a9f2c1ea7d55a2f89e845a33506cd4d5359d not found: ID does not exist" containerID="da46c35ec0c56f491ab4982cd410a9f2c1ea7d55a2f89e845a33506cd4d5359d" Jan 27 15:21:00 crc kubenswrapper[4698]: I0127 15:21:00.013374 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"da46c35ec0c56f491ab4982cd410a9f2c1ea7d55a2f89e845a33506cd4d5359d"} err="failed to get container status \"da46c35ec0c56f491ab4982cd410a9f2c1ea7d55a2f89e845a33506cd4d5359d\": rpc error: code = NotFound desc = could not find container \"da46c35ec0c56f491ab4982cd410a9f2c1ea7d55a2f89e845a33506cd4d5359d\": container with ID starting with da46c35ec0c56f491ab4982cd410a9f2c1ea7d55a2f89e845a33506cd4d5359d not found: ID does not exist" Jan 27 15:21:01 crc kubenswrapper[4698]: I0127 15:21:01.004940 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ef385339-a8d9-40de-a928-a82b5c7d014b" path="/var/lib/kubelet/pods/ef385339-a8d9-40de-a928-a82b5c7d014b/volumes" Jan 27 15:21:07 crc kubenswrapper[4698]: I0127 15:21:07.590428 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-4vwjp" Jan 27 15:21:07 crc kubenswrapper[4698]: I0127 15:21:07.647325 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-4vwjp" Jan 27 15:21:08 crc kubenswrapper[4698]: I0127 15:21:08.660912 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-4vwjp"] Jan 27 15:21:08 crc kubenswrapper[4698]: I0127 15:21:08.992497 4698 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-4vwjp" podUID="aa2f397b-a5ac-44a6-a871-40721a4a4d47" containerName="registry-server" 
containerID="cri-o://b059ccef07b5c26ccc880e99fbe958022c5e7b83decf0c38fdb0c604b77b48a0" gracePeriod=2 Jan 27 15:21:09 crc kubenswrapper[4698]: I0127 15:21:09.427743 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-4vwjp" Jan 27 15:21:09 crc kubenswrapper[4698]: I0127 15:21:09.604789 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aa2f397b-a5ac-44a6-a871-40721a4a4d47-catalog-content\") pod \"aa2f397b-a5ac-44a6-a871-40721a4a4d47\" (UID: \"aa2f397b-a5ac-44a6-a871-40721a4a4d47\") " Jan 27 15:21:09 crc kubenswrapper[4698]: I0127 15:21:09.604914 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aa2f397b-a5ac-44a6-a871-40721a4a4d47-utilities\") pod \"aa2f397b-a5ac-44a6-a871-40721a4a4d47\" (UID: \"aa2f397b-a5ac-44a6-a871-40721a4a4d47\") " Jan 27 15:21:09 crc kubenswrapper[4698]: I0127 15:21:09.604959 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-px5mc\" (UniqueName: \"kubernetes.io/projected/aa2f397b-a5ac-44a6-a871-40721a4a4d47-kube-api-access-px5mc\") pod \"aa2f397b-a5ac-44a6-a871-40721a4a4d47\" (UID: \"aa2f397b-a5ac-44a6-a871-40721a4a4d47\") " Jan 27 15:21:09 crc kubenswrapper[4698]: I0127 15:21:09.606077 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/aa2f397b-a5ac-44a6-a871-40721a4a4d47-utilities" (OuterVolumeSpecName: "utilities") pod "aa2f397b-a5ac-44a6-a871-40721a4a4d47" (UID: "aa2f397b-a5ac-44a6-a871-40721a4a4d47"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 15:21:09 crc kubenswrapper[4698]: I0127 15:21:09.611749 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aa2f397b-a5ac-44a6-a871-40721a4a4d47-kube-api-access-px5mc" (OuterVolumeSpecName: "kube-api-access-px5mc") pod "aa2f397b-a5ac-44a6-a871-40721a4a4d47" (UID: "aa2f397b-a5ac-44a6-a871-40721a4a4d47"). InnerVolumeSpecName "kube-api-access-px5mc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 15:21:09 crc kubenswrapper[4698]: I0127 15:21:09.649880 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/aa2f397b-a5ac-44a6-a871-40721a4a4d47-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "aa2f397b-a5ac-44a6-a871-40721a4a4d47" (UID: "aa2f397b-a5ac-44a6-a871-40721a4a4d47"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 15:21:09 crc kubenswrapper[4698]: I0127 15:21:09.706945 4698 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aa2f397b-a5ac-44a6-a871-40721a4a4d47-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 15:21:09 crc kubenswrapper[4698]: I0127 15:21:09.706980 4698 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aa2f397b-a5ac-44a6-a871-40721a4a4d47-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 15:21:09 crc kubenswrapper[4698]: I0127 15:21:09.706989 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-px5mc\" (UniqueName: \"kubernetes.io/projected/aa2f397b-a5ac-44a6-a871-40721a4a4d47-kube-api-access-px5mc\") on node \"crc\" DevicePath \"\"" Jan 27 15:21:10 crc kubenswrapper[4698]: I0127 15:21:10.004711 4698 generic.go:334] "Generic (PLEG): container finished" podID="aa2f397b-a5ac-44a6-a871-40721a4a4d47" containerID="b059ccef07b5c26ccc880e99fbe958022c5e7b83decf0c38fdb0c604b77b48a0" exitCode=0 Jan 27 15:21:10 crc kubenswrapper[4698]: I0127 15:21:10.004766 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-4vwjp" Jan 27 15:21:10 crc kubenswrapper[4698]: I0127 15:21:10.004771 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4vwjp" event={"ID":"aa2f397b-a5ac-44a6-a871-40721a4a4d47","Type":"ContainerDied","Data":"b059ccef07b5c26ccc880e99fbe958022c5e7b83decf0c38fdb0c604b77b48a0"} Jan 27 15:21:10 crc kubenswrapper[4698]: I0127 15:21:10.004842 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4vwjp" event={"ID":"aa2f397b-a5ac-44a6-a871-40721a4a4d47","Type":"ContainerDied","Data":"4a863edbf78b94a737a6bffb7cdea7bc890e791152b3001106999750d71efad5"} Jan 27 15:21:10 crc kubenswrapper[4698]: I0127 15:21:10.004861 4698 scope.go:117] "RemoveContainer" containerID="b059ccef07b5c26ccc880e99fbe958022c5e7b83decf0c38fdb0c604b77b48a0" Jan 27 15:21:10 crc kubenswrapper[4698]: I0127 15:21:10.028938 4698 scope.go:117] "RemoveContainer" containerID="aaa4edbbd6c49a01e07f2aff45f51650e26ba85fc6abaf78c9284e06b28f33cd" Jan 27 15:21:10 crc kubenswrapper[4698]: I0127 15:21:10.050261 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-4vwjp"] Jan 27 15:21:10 crc kubenswrapper[4698]: I0127 15:21:10.060124 4698 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-4vwjp"] Jan 27 15:21:10 crc kubenswrapper[4698]: I0127 15:21:10.067740 4698 scope.go:117] "RemoveContainer" containerID="df1966b25e641257bd127b1206a7d789418bace2a7e5556c787cd63d4c4e01a7" Jan 27 15:21:10 crc kubenswrapper[4698]: I0127 15:21:10.105862 4698 scope.go:117] "RemoveContainer" containerID="b059ccef07b5c26ccc880e99fbe958022c5e7b83decf0c38fdb0c604b77b48a0" Jan 27 15:21:10 crc kubenswrapper[4698]: E0127 15:21:10.106451 4698 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b059ccef07b5c26ccc880e99fbe958022c5e7b83decf0c38fdb0c604b77b48a0\": container with ID starting with b059ccef07b5c26ccc880e99fbe958022c5e7b83decf0c38fdb0c604b77b48a0 not found: ID does not exist" containerID="b059ccef07b5c26ccc880e99fbe958022c5e7b83decf0c38fdb0c604b77b48a0" Jan 27 15:21:10 crc kubenswrapper[4698]: I0127 15:21:10.106509 
4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b059ccef07b5c26ccc880e99fbe958022c5e7b83decf0c38fdb0c604b77b48a0"} err="failed to get container status \"b059ccef07b5c26ccc880e99fbe958022c5e7b83decf0c38fdb0c604b77b48a0\": rpc error: code = NotFound desc = could not find container \"b059ccef07b5c26ccc880e99fbe958022c5e7b83decf0c38fdb0c604b77b48a0\": container with ID starting with b059ccef07b5c26ccc880e99fbe958022c5e7b83decf0c38fdb0c604b77b48a0 not found: ID does not exist" Jan 27 15:21:10 crc kubenswrapper[4698]: I0127 15:21:10.106540 4698 scope.go:117] "RemoveContainer" containerID="aaa4edbbd6c49a01e07f2aff45f51650e26ba85fc6abaf78c9284e06b28f33cd" Jan 27 15:21:10 crc kubenswrapper[4698]: E0127 15:21:10.106951 4698 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"aaa4edbbd6c49a01e07f2aff45f51650e26ba85fc6abaf78c9284e06b28f33cd\": container with ID starting with aaa4edbbd6c49a01e07f2aff45f51650e26ba85fc6abaf78c9284e06b28f33cd not found: ID does not exist" containerID="aaa4edbbd6c49a01e07f2aff45f51650e26ba85fc6abaf78c9284e06b28f33cd" Jan 27 15:21:10 crc kubenswrapper[4698]: I0127 15:21:10.106988 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"aaa4edbbd6c49a01e07f2aff45f51650e26ba85fc6abaf78c9284e06b28f33cd"} err="failed to get container status \"aaa4edbbd6c49a01e07f2aff45f51650e26ba85fc6abaf78c9284e06b28f33cd\": rpc error: code = NotFound desc = could not find container \"aaa4edbbd6c49a01e07f2aff45f51650e26ba85fc6abaf78c9284e06b28f33cd\": container with ID starting with aaa4edbbd6c49a01e07f2aff45f51650e26ba85fc6abaf78c9284e06b28f33cd not found: ID does not exist" Jan 27 15:21:10 crc kubenswrapper[4698]: I0127 15:21:10.107011 4698 scope.go:117] "RemoveContainer" containerID="df1966b25e641257bd127b1206a7d789418bace2a7e5556c787cd63d4c4e01a7" Jan 27 15:21:10 crc kubenswrapper[4698]: E0127 15:21:10.107552 4698 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"df1966b25e641257bd127b1206a7d789418bace2a7e5556c787cd63d4c4e01a7\": container with ID starting with df1966b25e641257bd127b1206a7d789418bace2a7e5556c787cd63d4c4e01a7 not found: ID does not exist" containerID="df1966b25e641257bd127b1206a7d789418bace2a7e5556c787cd63d4c4e01a7" Jan 27 15:21:10 crc kubenswrapper[4698]: I0127 15:21:10.107587 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"df1966b25e641257bd127b1206a7d789418bace2a7e5556c787cd63d4c4e01a7"} err="failed to get container status \"df1966b25e641257bd127b1206a7d789418bace2a7e5556c787cd63d4c4e01a7\": rpc error: code = NotFound desc = could not find container \"df1966b25e641257bd127b1206a7d789418bace2a7e5556c787cd63d4c4e01a7\": container with ID starting with df1966b25e641257bd127b1206a7d789418bace2a7e5556c787cd63d4c4e01a7 not found: ID does not exist" Jan 27 15:21:11 crc kubenswrapper[4698]: I0127 15:21:11.004302 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="aa2f397b-a5ac-44a6-a871-40721a4a4d47" path="/var/lib/kubelet/pods/aa2f397b-a5ac-44a6-a871-40721a4a4d47/volumes" Jan 27 15:21:11 crc kubenswrapper[4698]: I0127 15:21:11.992286 4698 scope.go:117] "RemoveContainer" containerID="14b47c46b0de7c0ecaaaead15a2dd02e8412e1aba78cdbca1c442b1f8a50e507" Jan 27 15:21:11 crc kubenswrapper[4698]: E0127 15:21:11.992560 4698 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ndrd6_openshift-machine-config-operator(3e403fc5-7005-474c-8c75-b7906b481677)\"" pod="openshift-machine-config-operator/machine-config-daemon-ndrd6" podUID="3e403fc5-7005-474c-8c75-b7906b481677" Jan 27 15:21:26 crc kubenswrapper[4698]: I0127 15:21:26.991675 4698 scope.go:117] "RemoveContainer" containerID="14b47c46b0de7c0ecaaaead15a2dd02e8412e1aba78cdbca1c442b1f8a50e507" Jan 27 15:21:26 crc kubenswrapper[4698]: E0127 15:21:26.992389 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ndrd6_openshift-machine-config-operator(3e403fc5-7005-474c-8c75-b7906b481677)\"" pod="openshift-machine-config-operator/machine-config-daemon-ndrd6" podUID="3e403fc5-7005-474c-8c75-b7906b481677" Jan 27 15:21:41 crc kubenswrapper[4698]: I0127 15:21:41.993234 4698 scope.go:117] "RemoveContainer" containerID="14b47c46b0de7c0ecaaaead15a2dd02e8412e1aba78cdbca1c442b1f8a50e507" Jan 27 15:21:41 crc kubenswrapper[4698]: E0127 15:21:41.994132 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ndrd6_openshift-machine-config-operator(3e403fc5-7005-474c-8c75-b7906b481677)\"" pod="openshift-machine-config-operator/machine-config-daemon-ndrd6" podUID="3e403fc5-7005-474c-8c75-b7906b481677" Jan 27 15:21:56 crc kubenswrapper[4698]: I0127 15:21:56.991743 4698 scope.go:117] "RemoveContainer" containerID="14b47c46b0de7c0ecaaaead15a2dd02e8412e1aba78cdbca1c442b1f8a50e507" Jan 27 15:21:56 crc kubenswrapper[4698]: E0127 15:21:56.992619 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ndrd6_openshift-machine-config-operator(3e403fc5-7005-474c-8c75-b7906b481677)\"" pod="openshift-machine-config-operator/machine-config-daemon-ndrd6" podUID="3e403fc5-7005-474c-8c75-b7906b481677" Jan 27 15:22:10 crc kubenswrapper[4698]: I0127 15:22:10.993885 4698 scope.go:117] "RemoveContainer" containerID="14b47c46b0de7c0ecaaaead15a2dd02e8412e1aba78cdbca1c442b1f8a50e507" Jan 27 15:22:10 crc kubenswrapper[4698]: E0127 15:22:10.994715 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ndrd6_openshift-machine-config-operator(3e403fc5-7005-474c-8c75-b7906b481677)\"" pod="openshift-machine-config-operator/machine-config-daemon-ndrd6" podUID="3e403fc5-7005-474c-8c75-b7906b481677" Jan 27 15:22:25 crc kubenswrapper[4698]: I0127 15:22:24.999446 4698 scope.go:117] "RemoveContainer" containerID="14b47c46b0de7c0ecaaaead15a2dd02e8412e1aba78cdbca1c442b1f8a50e507" Jan 27 15:22:25 crc kubenswrapper[4698]: E0127 15:22:25.000452 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-ndrd6_openshift-machine-config-operator(3e403fc5-7005-474c-8c75-b7906b481677)\"" pod="openshift-machine-config-operator/machine-config-daemon-ndrd6" podUID="3e403fc5-7005-474c-8c75-b7906b481677" Jan 27 15:22:38 crc kubenswrapper[4698]: I0127 15:22:38.992915 4698 scope.go:117] "RemoveContainer" containerID="14b47c46b0de7c0ecaaaead15a2dd02e8412e1aba78cdbca1c442b1f8a50e507" Jan 27 15:22:38 crc kubenswrapper[4698]: E0127 15:22:38.993672 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ndrd6_openshift-machine-config-operator(3e403fc5-7005-474c-8c75-b7906b481677)\"" pod="openshift-machine-config-operator/machine-config-daemon-ndrd6" podUID="3e403fc5-7005-474c-8c75-b7906b481677" Jan 27 15:22:49 crc kubenswrapper[4698]: I0127 15:22:49.992777 4698 scope.go:117] "RemoveContainer" containerID="14b47c46b0de7c0ecaaaead15a2dd02e8412e1aba78cdbca1c442b1f8a50e507" Jan 27 15:22:49 crc kubenswrapper[4698]: E0127 15:22:49.993713 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ndrd6_openshift-machine-config-operator(3e403fc5-7005-474c-8c75-b7906b481677)\"" pod="openshift-machine-config-operator/machine-config-daemon-ndrd6" podUID="3e403fc5-7005-474c-8c75-b7906b481677" Jan 27 15:23:03 crc kubenswrapper[4698]: I0127 15:23:03.992831 4698 scope.go:117] "RemoveContainer" containerID="14b47c46b0de7c0ecaaaead15a2dd02e8412e1aba78cdbca1c442b1f8a50e507" Jan 27 15:23:03 crc kubenswrapper[4698]: E0127 15:23:03.993695 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ndrd6_openshift-machine-config-operator(3e403fc5-7005-474c-8c75-b7906b481677)\"" pod="openshift-machine-config-operator/machine-config-daemon-ndrd6" podUID="3e403fc5-7005-474c-8c75-b7906b481677" Jan 27 15:23:15 crc kubenswrapper[4698]: I0127 15:23:15.000296 4698 scope.go:117] "RemoveContainer" containerID="14b47c46b0de7c0ecaaaead15a2dd02e8412e1aba78cdbca1c442b1f8a50e507" Jan 27 15:23:15 crc kubenswrapper[4698]: E0127 15:23:15.001157 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ndrd6_openshift-machine-config-operator(3e403fc5-7005-474c-8c75-b7906b481677)\"" pod="openshift-machine-config-operator/machine-config-daemon-ndrd6" podUID="3e403fc5-7005-474c-8c75-b7906b481677" Jan 27 15:23:26 crc kubenswrapper[4698]: I0127 15:23:26.991912 4698 scope.go:117] "RemoveContainer" containerID="14b47c46b0de7c0ecaaaead15a2dd02e8412e1aba78cdbca1c442b1f8a50e507" Jan 27 15:23:26 crc kubenswrapper[4698]: E0127 15:23:26.992693 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ndrd6_openshift-machine-config-operator(3e403fc5-7005-474c-8c75-b7906b481677)\"" pod="openshift-machine-config-operator/machine-config-daemon-ndrd6" 
podUID="3e403fc5-7005-474c-8c75-b7906b481677" Jan 27 15:23:40 crc kubenswrapper[4698]: I0127 15:23:40.993333 4698 scope.go:117] "RemoveContainer" containerID="14b47c46b0de7c0ecaaaead15a2dd02e8412e1aba78cdbca1c442b1f8a50e507" Jan 27 15:23:40 crc kubenswrapper[4698]: E0127 15:23:40.997776 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ndrd6_openshift-machine-config-operator(3e403fc5-7005-474c-8c75-b7906b481677)\"" pod="openshift-machine-config-operator/machine-config-daemon-ndrd6" podUID="3e403fc5-7005-474c-8c75-b7906b481677" Jan 27 15:23:51 crc kubenswrapper[4698]: I0127 15:23:51.993825 4698 scope.go:117] "RemoveContainer" containerID="14b47c46b0de7c0ecaaaead15a2dd02e8412e1aba78cdbca1c442b1f8a50e507" Jan 27 15:23:51 crc kubenswrapper[4698]: E0127 15:23:51.995243 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ndrd6_openshift-machine-config-operator(3e403fc5-7005-474c-8c75-b7906b481677)\"" pod="openshift-machine-config-operator/machine-config-daemon-ndrd6" podUID="3e403fc5-7005-474c-8c75-b7906b481677" Jan 27 15:24:05 crc kubenswrapper[4698]: I0127 15:24:05.992793 4698 scope.go:117] "RemoveContainer" containerID="14b47c46b0de7c0ecaaaead15a2dd02e8412e1aba78cdbca1c442b1f8a50e507" Jan 27 15:24:05 crc kubenswrapper[4698]: E0127 15:24:05.993697 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ndrd6_openshift-machine-config-operator(3e403fc5-7005-474c-8c75-b7906b481677)\"" pod="openshift-machine-config-operator/machine-config-daemon-ndrd6" podUID="3e403fc5-7005-474c-8c75-b7906b481677" Jan 27 15:24:18 crc kubenswrapper[4698]: I0127 15:24:18.042409 4698 scope.go:117] "RemoveContainer" containerID="de6b0ebae7ca8932ad787fcf5b2839a2fadbe326c50598e662e0b9218f2f7fc2" Jan 27 15:24:18 crc kubenswrapper[4698]: I0127 15:24:18.063596 4698 scope.go:117] "RemoveContainer" containerID="cad4ba30e493661cfa8046f2757c053ef382446d37cc53cc9b34d2cc5c844e28" Jan 27 15:24:18 crc kubenswrapper[4698]: I0127 15:24:18.118785 4698 scope.go:117] "RemoveContainer" containerID="08c26d28a16d0417ef35bbd6e04565e699ceba2dd4fd966cbc8db589676b00c3" Jan 27 15:24:20 crc kubenswrapper[4698]: I0127 15:24:20.992162 4698 scope.go:117] "RemoveContainer" containerID="14b47c46b0de7c0ecaaaead15a2dd02e8412e1aba78cdbca1c442b1f8a50e507" Jan 27 15:24:20 crc kubenswrapper[4698]: E0127 15:24:20.992758 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ndrd6_openshift-machine-config-operator(3e403fc5-7005-474c-8c75-b7906b481677)\"" pod="openshift-machine-config-operator/machine-config-daemon-ndrd6" podUID="3e403fc5-7005-474c-8c75-b7906b481677" Jan 27 15:24:32 crc kubenswrapper[4698]: I0127 15:24:32.992501 4698 scope.go:117] "RemoveContainer" containerID="14b47c46b0de7c0ecaaaead15a2dd02e8412e1aba78cdbca1c442b1f8a50e507" Jan 27 15:24:33 crc kubenswrapper[4698]: I0127 15:24:33.843863 4698 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ndrd6" event={"ID":"3e403fc5-7005-474c-8c75-b7906b481677","Type":"ContainerStarted","Data":"e45bb499790ebcaabfa047fa4277b6f16d070419b5faecb3c750a7caac950671"} Jan 27 15:26:29 crc kubenswrapper[4698]: I0127 15:26:29.190582 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-rldtx"] Jan 27 15:26:29 crc kubenswrapper[4698]: E0127 15:26:29.191649 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aa2f397b-a5ac-44a6-a871-40721a4a4d47" containerName="registry-server" Jan 27 15:26:29 crc kubenswrapper[4698]: I0127 15:26:29.191664 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="aa2f397b-a5ac-44a6-a871-40721a4a4d47" containerName="registry-server" Jan 27 15:26:29 crc kubenswrapper[4698]: E0127 15:26:29.191697 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ef385339-a8d9-40de-a928-a82b5c7d014b" containerName="extract-content" Jan 27 15:26:29 crc kubenswrapper[4698]: I0127 15:26:29.191704 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="ef385339-a8d9-40de-a928-a82b5c7d014b" containerName="extract-content" Jan 27 15:26:29 crc kubenswrapper[4698]: E0127 15:26:29.191720 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aa2f397b-a5ac-44a6-a871-40721a4a4d47" containerName="extract-utilities" Jan 27 15:26:29 crc kubenswrapper[4698]: I0127 15:26:29.191729 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="aa2f397b-a5ac-44a6-a871-40721a4a4d47" containerName="extract-utilities" Jan 27 15:26:29 crc kubenswrapper[4698]: E0127 15:26:29.191742 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ef385339-a8d9-40de-a928-a82b5c7d014b" containerName="extract-utilities" Jan 27 15:26:29 crc kubenswrapper[4698]: I0127 15:26:29.191749 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="ef385339-a8d9-40de-a928-a82b5c7d014b" containerName="extract-utilities" Jan 27 15:26:29 crc kubenswrapper[4698]: E0127 15:26:29.191760 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aa2f397b-a5ac-44a6-a871-40721a4a4d47" containerName="extract-content" Jan 27 15:26:29 crc kubenswrapper[4698]: I0127 15:26:29.191768 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="aa2f397b-a5ac-44a6-a871-40721a4a4d47" containerName="extract-content" Jan 27 15:26:29 crc kubenswrapper[4698]: E0127 15:26:29.191788 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ef385339-a8d9-40de-a928-a82b5c7d014b" containerName="registry-server" Jan 27 15:26:29 crc kubenswrapper[4698]: I0127 15:26:29.191795 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="ef385339-a8d9-40de-a928-a82b5c7d014b" containerName="registry-server" Jan 27 15:26:29 crc kubenswrapper[4698]: I0127 15:26:29.191985 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="aa2f397b-a5ac-44a6-a871-40721a4a4d47" containerName="registry-server" Jan 27 15:26:29 crc kubenswrapper[4698]: I0127 15:26:29.192020 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="ef385339-a8d9-40de-a928-a82b5c7d014b" containerName="registry-server" Jan 27 15:26:29 crc kubenswrapper[4698]: I0127 15:26:29.193800 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-rldtx" Jan 27 15:26:29 crc kubenswrapper[4698]: I0127 15:26:29.205523 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-rldtx"] Jan 27 15:26:29 crc kubenswrapper[4698]: I0127 15:26:29.376281 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p5fmj\" (UniqueName: \"kubernetes.io/projected/34959216-61cc-4fa9-aaba-f5a72255ffd1-kube-api-access-p5fmj\") pod \"community-operators-rldtx\" (UID: \"34959216-61cc-4fa9-aaba-f5a72255ffd1\") " pod="openshift-marketplace/community-operators-rldtx" Jan 27 15:26:29 crc kubenswrapper[4698]: I0127 15:26:29.376484 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/34959216-61cc-4fa9-aaba-f5a72255ffd1-catalog-content\") pod \"community-operators-rldtx\" (UID: \"34959216-61cc-4fa9-aaba-f5a72255ffd1\") " pod="openshift-marketplace/community-operators-rldtx" Jan 27 15:26:29 crc kubenswrapper[4698]: I0127 15:26:29.376842 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/34959216-61cc-4fa9-aaba-f5a72255ffd1-utilities\") pod \"community-operators-rldtx\" (UID: \"34959216-61cc-4fa9-aaba-f5a72255ffd1\") " pod="openshift-marketplace/community-operators-rldtx" Jan 27 15:26:29 crc kubenswrapper[4698]: I0127 15:26:29.478623 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/34959216-61cc-4fa9-aaba-f5a72255ffd1-catalog-content\") pod \"community-operators-rldtx\" (UID: \"34959216-61cc-4fa9-aaba-f5a72255ffd1\") " pod="openshift-marketplace/community-operators-rldtx" Jan 27 15:26:29 crc kubenswrapper[4698]: I0127 15:26:29.478769 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/34959216-61cc-4fa9-aaba-f5a72255ffd1-utilities\") pod \"community-operators-rldtx\" (UID: \"34959216-61cc-4fa9-aaba-f5a72255ffd1\") " pod="openshift-marketplace/community-operators-rldtx" Jan 27 15:26:29 crc kubenswrapper[4698]: I0127 15:26:29.478818 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p5fmj\" (UniqueName: \"kubernetes.io/projected/34959216-61cc-4fa9-aaba-f5a72255ffd1-kube-api-access-p5fmj\") pod \"community-operators-rldtx\" (UID: \"34959216-61cc-4fa9-aaba-f5a72255ffd1\") " pod="openshift-marketplace/community-operators-rldtx" Jan 27 15:26:29 crc kubenswrapper[4698]: I0127 15:26:29.479585 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/34959216-61cc-4fa9-aaba-f5a72255ffd1-catalog-content\") pod \"community-operators-rldtx\" (UID: \"34959216-61cc-4fa9-aaba-f5a72255ffd1\") " pod="openshift-marketplace/community-operators-rldtx" Jan 27 15:26:29 crc kubenswrapper[4698]: I0127 15:26:29.479756 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/34959216-61cc-4fa9-aaba-f5a72255ffd1-utilities\") pod \"community-operators-rldtx\" (UID: \"34959216-61cc-4fa9-aaba-f5a72255ffd1\") " pod="openshift-marketplace/community-operators-rldtx" Jan 27 15:26:29 crc kubenswrapper[4698]: I0127 15:26:29.513788 4698 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-p5fmj\" (UniqueName: \"kubernetes.io/projected/34959216-61cc-4fa9-aaba-f5a72255ffd1-kube-api-access-p5fmj\") pod \"community-operators-rldtx\" (UID: \"34959216-61cc-4fa9-aaba-f5a72255ffd1\") " pod="openshift-marketplace/community-operators-rldtx" Jan 27 15:26:29 crc kubenswrapper[4698]: I0127 15:26:29.531201 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-rldtx" Jan 27 15:26:30 crc kubenswrapper[4698]: I0127 15:26:30.150967 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-rldtx"] Jan 27 15:26:30 crc kubenswrapper[4698]: I0127 15:26:30.936797 4698 generic.go:334] "Generic (PLEG): container finished" podID="34959216-61cc-4fa9-aaba-f5a72255ffd1" containerID="636094de85cd01719ecebde8e7e0630c103299712094bbf2e4ae320855a11ab2" exitCode=0 Jan 27 15:26:30 crc kubenswrapper[4698]: I0127 15:26:30.937167 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rldtx" event={"ID":"34959216-61cc-4fa9-aaba-f5a72255ffd1","Type":"ContainerDied","Data":"636094de85cd01719ecebde8e7e0630c103299712094bbf2e4ae320855a11ab2"} Jan 27 15:26:30 crc kubenswrapper[4698]: I0127 15:26:30.937376 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rldtx" event={"ID":"34959216-61cc-4fa9-aaba-f5a72255ffd1","Type":"ContainerStarted","Data":"4171731b96059f601e886d0380abb78b1b239c6e07e15ea2c6fab48e18bf901c"} Jan 27 15:26:30 crc kubenswrapper[4698]: I0127 15:26:30.941880 4698 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 27 15:26:31 crc kubenswrapper[4698]: I0127 15:26:31.951565 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rldtx" event={"ID":"34959216-61cc-4fa9-aaba-f5a72255ffd1","Type":"ContainerStarted","Data":"0f7747bf213a54a6733c856e012af109275c6c04bd0376fcbd5795630fc1dfa7"} Jan 27 15:26:32 crc kubenswrapper[4698]: I0127 15:26:32.963154 4698 generic.go:334] "Generic (PLEG): container finished" podID="34959216-61cc-4fa9-aaba-f5a72255ffd1" containerID="0f7747bf213a54a6733c856e012af109275c6c04bd0376fcbd5795630fc1dfa7" exitCode=0 Jan 27 15:26:32 crc kubenswrapper[4698]: I0127 15:26:32.963236 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rldtx" event={"ID":"34959216-61cc-4fa9-aaba-f5a72255ffd1","Type":"ContainerDied","Data":"0f7747bf213a54a6733c856e012af109275c6c04bd0376fcbd5795630fc1dfa7"} Jan 27 15:26:33 crc kubenswrapper[4698]: I0127 15:26:33.975135 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rldtx" event={"ID":"34959216-61cc-4fa9-aaba-f5a72255ffd1","Type":"ContainerStarted","Data":"33530522fd4242cbc8f0ede2dd4901102a8afa3e0c0dfad5ebaa95a8927681ab"} Jan 27 15:26:34 crc kubenswrapper[4698]: I0127 15:26:34.012765 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-rldtx" podStartSLOduration=2.544060203 podStartE2EDuration="5.012717753s" podCreationTimestamp="2026-01-27 15:26:29 +0000 UTC" firstStartedPulling="2026-01-27 15:26:30.941126365 +0000 UTC m=+3446.617903830" lastFinishedPulling="2026-01-27 15:26:33.409783915 +0000 UTC m=+3449.086561380" observedRunningTime="2026-01-27 15:26:33.99931182 +0000 UTC m=+3449.676089285" watchObservedRunningTime="2026-01-27 
15:26:34.012717753 +0000 UTC m=+3449.689495238" Jan 27 15:26:39 crc kubenswrapper[4698]: I0127 15:26:39.532302 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-rldtx" Jan 27 15:26:39 crc kubenswrapper[4698]: I0127 15:26:39.532682 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-rldtx" Jan 27 15:26:39 crc kubenswrapper[4698]: I0127 15:26:39.579359 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-rldtx" Jan 27 15:26:40 crc kubenswrapper[4698]: I0127 15:26:40.072085 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-rldtx" Jan 27 15:26:40 crc kubenswrapper[4698]: I0127 15:26:40.133568 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-rldtx"] Jan 27 15:26:42 crc kubenswrapper[4698]: I0127 15:26:42.043538 4698 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-rldtx" podUID="34959216-61cc-4fa9-aaba-f5a72255ffd1" containerName="registry-server" containerID="cri-o://33530522fd4242cbc8f0ede2dd4901102a8afa3e0c0dfad5ebaa95a8927681ab" gracePeriod=2 Jan 27 15:26:42 crc kubenswrapper[4698]: I0127 15:26:42.601313 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-rldtx" Jan 27 15:26:42 crc kubenswrapper[4698]: I0127 15:26:42.707437 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/34959216-61cc-4fa9-aaba-f5a72255ffd1-utilities\") pod \"34959216-61cc-4fa9-aaba-f5a72255ffd1\" (UID: \"34959216-61cc-4fa9-aaba-f5a72255ffd1\") " Jan 27 15:26:42 crc kubenswrapper[4698]: I0127 15:26:42.707527 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/34959216-61cc-4fa9-aaba-f5a72255ffd1-catalog-content\") pod \"34959216-61cc-4fa9-aaba-f5a72255ffd1\" (UID: \"34959216-61cc-4fa9-aaba-f5a72255ffd1\") " Jan 27 15:26:42 crc kubenswrapper[4698]: I0127 15:26:42.707568 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p5fmj\" (UniqueName: \"kubernetes.io/projected/34959216-61cc-4fa9-aaba-f5a72255ffd1-kube-api-access-p5fmj\") pod \"34959216-61cc-4fa9-aaba-f5a72255ffd1\" (UID: \"34959216-61cc-4fa9-aaba-f5a72255ffd1\") " Jan 27 15:26:42 crc kubenswrapper[4698]: I0127 15:26:42.708534 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/34959216-61cc-4fa9-aaba-f5a72255ffd1-utilities" (OuterVolumeSpecName: "utilities") pod "34959216-61cc-4fa9-aaba-f5a72255ffd1" (UID: "34959216-61cc-4fa9-aaba-f5a72255ffd1"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 15:26:42 crc kubenswrapper[4698]: I0127 15:26:42.715111 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/34959216-61cc-4fa9-aaba-f5a72255ffd1-kube-api-access-p5fmj" (OuterVolumeSpecName: "kube-api-access-p5fmj") pod "34959216-61cc-4fa9-aaba-f5a72255ffd1" (UID: "34959216-61cc-4fa9-aaba-f5a72255ffd1"). InnerVolumeSpecName "kube-api-access-p5fmj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 15:26:42 crc kubenswrapper[4698]: I0127 15:26:42.810212 4698 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/34959216-61cc-4fa9-aaba-f5a72255ffd1-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 15:26:42 crc kubenswrapper[4698]: I0127 15:26:42.810252 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p5fmj\" (UniqueName: \"kubernetes.io/projected/34959216-61cc-4fa9-aaba-f5a72255ffd1-kube-api-access-p5fmj\") on node \"crc\" DevicePath \"\"" Jan 27 15:26:43 crc kubenswrapper[4698]: I0127 15:26:43.057829 4698 generic.go:334] "Generic (PLEG): container finished" podID="34959216-61cc-4fa9-aaba-f5a72255ffd1" containerID="33530522fd4242cbc8f0ede2dd4901102a8afa3e0c0dfad5ebaa95a8927681ab" exitCode=0 Jan 27 15:26:43 crc kubenswrapper[4698]: I0127 15:26:43.057919 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rldtx" event={"ID":"34959216-61cc-4fa9-aaba-f5a72255ffd1","Type":"ContainerDied","Data":"33530522fd4242cbc8f0ede2dd4901102a8afa3e0c0dfad5ebaa95a8927681ab"} Jan 27 15:26:43 crc kubenswrapper[4698]: I0127 15:26:43.059037 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rldtx" event={"ID":"34959216-61cc-4fa9-aaba-f5a72255ffd1","Type":"ContainerDied","Data":"4171731b96059f601e886d0380abb78b1b239c6e07e15ea2c6fab48e18bf901c"} Jan 27 15:26:43 crc kubenswrapper[4698]: I0127 15:26:43.059069 4698 scope.go:117] "RemoveContainer" containerID="33530522fd4242cbc8f0ede2dd4901102a8afa3e0c0dfad5ebaa95a8927681ab" Jan 27 15:26:43 crc kubenswrapper[4698]: I0127 15:26:43.057949 4698 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-rldtx" Jan 27 15:26:43 crc kubenswrapper[4698]: I0127 15:26:43.080717 4698 scope.go:117] "RemoveContainer" containerID="0f7747bf213a54a6733c856e012af109275c6c04bd0376fcbd5795630fc1dfa7" Jan 27 15:26:43 crc kubenswrapper[4698]: I0127 15:26:43.106251 4698 scope.go:117] "RemoveContainer" containerID="636094de85cd01719ecebde8e7e0630c103299712094bbf2e4ae320855a11ab2" Jan 27 15:26:43 crc kubenswrapper[4698]: I0127 15:26:43.160212 4698 scope.go:117] "RemoveContainer" containerID="33530522fd4242cbc8f0ede2dd4901102a8afa3e0c0dfad5ebaa95a8927681ab" Jan 27 15:26:43 crc kubenswrapper[4698]: E0127 15:26:43.160718 4698 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"33530522fd4242cbc8f0ede2dd4901102a8afa3e0c0dfad5ebaa95a8927681ab\": container with ID starting with 33530522fd4242cbc8f0ede2dd4901102a8afa3e0c0dfad5ebaa95a8927681ab not found: ID does not exist" containerID="33530522fd4242cbc8f0ede2dd4901102a8afa3e0c0dfad5ebaa95a8927681ab" Jan 27 15:26:43 crc kubenswrapper[4698]: I0127 15:26:43.160760 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"33530522fd4242cbc8f0ede2dd4901102a8afa3e0c0dfad5ebaa95a8927681ab"} err="failed to get container status \"33530522fd4242cbc8f0ede2dd4901102a8afa3e0c0dfad5ebaa95a8927681ab\": rpc error: code = NotFound desc = could not find container \"33530522fd4242cbc8f0ede2dd4901102a8afa3e0c0dfad5ebaa95a8927681ab\": container with ID starting with 33530522fd4242cbc8f0ede2dd4901102a8afa3e0c0dfad5ebaa95a8927681ab not found: ID does not exist" Jan 27 15:26:43 crc kubenswrapper[4698]: I0127 15:26:43.160785 4698 scope.go:117] "RemoveContainer" containerID="0f7747bf213a54a6733c856e012af109275c6c04bd0376fcbd5795630fc1dfa7" Jan 27 15:26:43 crc kubenswrapper[4698]: E0127 15:26:43.161042 4698 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0f7747bf213a54a6733c856e012af109275c6c04bd0376fcbd5795630fc1dfa7\": container with ID starting with 0f7747bf213a54a6733c856e012af109275c6c04bd0376fcbd5795630fc1dfa7 not found: ID does not exist" containerID="0f7747bf213a54a6733c856e012af109275c6c04bd0376fcbd5795630fc1dfa7" Jan 27 15:26:43 crc kubenswrapper[4698]: I0127 15:26:43.161080 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0f7747bf213a54a6733c856e012af109275c6c04bd0376fcbd5795630fc1dfa7"} err="failed to get container status \"0f7747bf213a54a6733c856e012af109275c6c04bd0376fcbd5795630fc1dfa7\": rpc error: code = NotFound desc = could not find container \"0f7747bf213a54a6733c856e012af109275c6c04bd0376fcbd5795630fc1dfa7\": container with ID starting with 0f7747bf213a54a6733c856e012af109275c6c04bd0376fcbd5795630fc1dfa7 not found: ID does not exist" Jan 27 15:26:43 crc kubenswrapper[4698]: I0127 15:26:43.161102 4698 scope.go:117] "RemoveContainer" containerID="636094de85cd01719ecebde8e7e0630c103299712094bbf2e4ae320855a11ab2" Jan 27 15:26:43 crc kubenswrapper[4698]: E0127 15:26:43.161360 4698 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"636094de85cd01719ecebde8e7e0630c103299712094bbf2e4ae320855a11ab2\": container with ID starting with 636094de85cd01719ecebde8e7e0630c103299712094bbf2e4ae320855a11ab2 not found: ID does not exist" containerID="636094de85cd01719ecebde8e7e0630c103299712094bbf2e4ae320855a11ab2" 
Jan 27 15:26:43 crc kubenswrapper[4698]: I0127 15:26:43.161397 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"636094de85cd01719ecebde8e7e0630c103299712094bbf2e4ae320855a11ab2"} err="failed to get container status \"636094de85cd01719ecebde8e7e0630c103299712094bbf2e4ae320855a11ab2\": rpc error: code = NotFound desc = could not find container \"636094de85cd01719ecebde8e7e0630c103299712094bbf2e4ae320855a11ab2\": container with ID starting with 636094de85cd01719ecebde8e7e0630c103299712094bbf2e4ae320855a11ab2 not found: ID does not exist"
Jan 27 15:26:43 crc kubenswrapper[4698]: I0127 15:26:43.878357 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/34959216-61cc-4fa9-aaba-f5a72255ffd1-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "34959216-61cc-4fa9-aaba-f5a72255ffd1" (UID: "34959216-61cc-4fa9-aaba-f5a72255ffd1"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 27 15:26:43 crc kubenswrapper[4698]: I0127 15:26:43.936658 4698 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/34959216-61cc-4fa9-aaba-f5a72255ffd1-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 27 15:26:44 crc kubenswrapper[4698]: I0127 15:26:44.001018 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-rldtx"]
Jan 27 15:26:44 crc kubenswrapper[4698]: I0127 15:26:44.011446 4698 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-rldtx"]
Jan 27 15:26:45 crc kubenswrapper[4698]: I0127 15:26:45.003931 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="34959216-61cc-4fa9-aaba-f5a72255ffd1" path="/var/lib/kubelet/pods/34959216-61cc-4fa9-aaba-f5a72255ffd1/volumes"
Jan 27 15:26:57 crc kubenswrapper[4698]: I0127 15:26:57.452811 4698 patch_prober.go:28] interesting pod/machine-config-daemon-ndrd6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 27 15:26:57 crc kubenswrapper[4698]: I0127 15:26:57.454028 4698 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ndrd6" podUID="3e403fc5-7005-474c-8c75-b7906b481677" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 27 15:27:27 crc kubenswrapper[4698]: I0127 15:27:27.453355 4698 patch_prober.go:28] interesting pod/machine-config-daemon-ndrd6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 27 15:27:27 crc kubenswrapper[4698]: I0127 15:27:27.454284 4698 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ndrd6" podUID="3e403fc5-7005-474c-8c75-b7906b481677" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 27 15:27:31 crc kubenswrapper[4698]: I0127 15:27:31.065896 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-mtz5b"]
Jan 27 15:27:31 crc kubenswrapper[4698]: E0127 15:27:31.066999 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="34959216-61cc-4fa9-aaba-f5a72255ffd1" containerName="extract-content"
Jan 27 15:27:31 crc kubenswrapper[4698]: I0127 15:27:31.067018 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="34959216-61cc-4fa9-aaba-f5a72255ffd1" containerName="extract-content"
Jan 27 15:27:31 crc kubenswrapper[4698]: E0127 15:27:31.067039 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="34959216-61cc-4fa9-aaba-f5a72255ffd1" containerName="registry-server"
Jan 27 15:27:31 crc kubenswrapper[4698]: I0127 15:27:31.067046 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="34959216-61cc-4fa9-aaba-f5a72255ffd1" containerName="registry-server"
Jan 27 15:27:31 crc kubenswrapper[4698]: E0127 15:27:31.067086 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="34959216-61cc-4fa9-aaba-f5a72255ffd1" containerName="extract-utilities"
Jan 27 15:27:31 crc kubenswrapper[4698]: I0127 15:27:31.067098 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="34959216-61cc-4fa9-aaba-f5a72255ffd1" containerName="extract-utilities"
Jan 27 15:27:31 crc kubenswrapper[4698]: I0127 15:27:31.067345 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="34959216-61cc-4fa9-aaba-f5a72255ffd1" containerName="registry-server"
Jan 27 15:27:31 crc kubenswrapper[4698]: I0127 15:27:31.069200 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mtz5b"
Jan 27 15:27:31 crc kubenswrapper[4698]: I0127 15:27:31.080630 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-mtz5b"]
Jan 27 15:27:31 crc kubenswrapper[4698]: I0127 15:27:31.141295 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aff53f1b-9a19-46b5-9332-3fa6a8401b0f-catalog-content\") pod \"redhat-marketplace-mtz5b\" (UID: \"aff53f1b-9a19-46b5-9332-3fa6a8401b0f\") " pod="openshift-marketplace/redhat-marketplace-mtz5b"
Jan 27 15:27:31 crc kubenswrapper[4698]: I0127 15:27:31.141353 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aff53f1b-9a19-46b5-9332-3fa6a8401b0f-utilities\") pod \"redhat-marketplace-mtz5b\" (UID: \"aff53f1b-9a19-46b5-9332-3fa6a8401b0f\") " pod="openshift-marketplace/redhat-marketplace-mtz5b"
Jan 27 15:27:31 crc kubenswrapper[4698]: I0127 15:27:31.141394 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xhdj2\" (UniqueName: \"kubernetes.io/projected/aff53f1b-9a19-46b5-9332-3fa6a8401b0f-kube-api-access-xhdj2\") pod \"redhat-marketplace-mtz5b\" (UID: \"aff53f1b-9a19-46b5-9332-3fa6a8401b0f\") " pod="openshift-marketplace/redhat-marketplace-mtz5b"
Jan 27 15:27:31 crc kubenswrapper[4698]: I0127 15:27:31.243383 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aff53f1b-9a19-46b5-9332-3fa6a8401b0f-catalog-content\") pod \"redhat-marketplace-mtz5b\" (UID: \"aff53f1b-9a19-46b5-9332-3fa6a8401b0f\") " pod="openshift-marketplace/redhat-marketplace-mtz5b"
Jan 27 15:27:31 crc kubenswrapper[4698]: I0127 15:27:31.243468 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aff53f1b-9a19-46b5-9332-3fa6a8401b0f-utilities\") pod \"redhat-marketplace-mtz5b\" (UID: \"aff53f1b-9a19-46b5-9332-3fa6a8401b0f\") " pod="openshift-marketplace/redhat-marketplace-mtz5b"
Jan 27 15:27:31 crc kubenswrapper[4698]: I0127 15:27:31.243524 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xhdj2\" (UniqueName: \"kubernetes.io/projected/aff53f1b-9a19-46b5-9332-3fa6a8401b0f-kube-api-access-xhdj2\") pod \"redhat-marketplace-mtz5b\" (UID: \"aff53f1b-9a19-46b5-9332-3fa6a8401b0f\") " pod="openshift-marketplace/redhat-marketplace-mtz5b"
Jan 27 15:27:31 crc kubenswrapper[4698]: I0127 15:27:31.243996 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aff53f1b-9a19-46b5-9332-3fa6a8401b0f-catalog-content\") pod \"redhat-marketplace-mtz5b\" (UID: \"aff53f1b-9a19-46b5-9332-3fa6a8401b0f\") " pod="openshift-marketplace/redhat-marketplace-mtz5b"
Jan 27 15:27:31 crc kubenswrapper[4698]: I0127 15:27:31.244046 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aff53f1b-9a19-46b5-9332-3fa6a8401b0f-utilities\") pod \"redhat-marketplace-mtz5b\" (UID: \"aff53f1b-9a19-46b5-9332-3fa6a8401b0f\") " pod="openshift-marketplace/redhat-marketplace-mtz5b"
Jan 27 15:27:31 crc kubenswrapper[4698]: I0127 15:27:31.266310 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xhdj2\" (UniqueName: \"kubernetes.io/projected/aff53f1b-9a19-46b5-9332-3fa6a8401b0f-kube-api-access-xhdj2\") pod \"redhat-marketplace-mtz5b\" (UID: \"aff53f1b-9a19-46b5-9332-3fa6a8401b0f\") " pod="openshift-marketplace/redhat-marketplace-mtz5b"
Jan 27 15:27:31 crc kubenswrapper[4698]: I0127 15:27:31.392181 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mtz5b"
Jan 27 15:27:31 crc kubenswrapper[4698]: I0127 15:27:31.861926 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-mtz5b"]
Jan 27 15:27:32 crc kubenswrapper[4698]: I0127 15:27:32.149502 4698 generic.go:334] "Generic (PLEG): container finished" podID="aff53f1b-9a19-46b5-9332-3fa6a8401b0f" containerID="915a494a25a92d6132c67300cf2b83dd3181a7c95bc2df94aeb4f34a587fd2e4" exitCode=0
Jan 27 15:27:32 crc kubenswrapper[4698]: I0127 15:27:32.149555 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mtz5b" event={"ID":"aff53f1b-9a19-46b5-9332-3fa6a8401b0f","Type":"ContainerDied","Data":"915a494a25a92d6132c67300cf2b83dd3181a7c95bc2df94aeb4f34a587fd2e4"}
Jan 27 15:27:32 crc kubenswrapper[4698]: I0127 15:27:32.149595 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mtz5b" event={"ID":"aff53f1b-9a19-46b5-9332-3fa6a8401b0f","Type":"ContainerStarted","Data":"fd1a9d89236de86a10cbcec364dc921ef93d4cc4f436c7e523a7b2f6807f116d"}
Jan 27 15:27:33 crc kubenswrapper[4698]: I0127 15:27:33.162732 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mtz5b" event={"ID":"aff53f1b-9a19-46b5-9332-3fa6a8401b0f","Type":"ContainerStarted","Data":"2c38621dac036670095a7e688f4c166006765bc0bc77ce2bfeb125b9c805a376"}
Jan 27 15:27:34 crc kubenswrapper[4698]: I0127 15:27:34.173051 4698 generic.go:334] "Generic (PLEG): container finished" podID="aff53f1b-9a19-46b5-9332-3fa6a8401b0f" containerID="2c38621dac036670095a7e688f4c166006765bc0bc77ce2bfeb125b9c805a376" exitCode=0
Jan 27 15:27:34 crc kubenswrapper[4698]: I0127 15:27:34.173107 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mtz5b" event={"ID":"aff53f1b-9a19-46b5-9332-3fa6a8401b0f","Type":"ContainerDied","Data":"2c38621dac036670095a7e688f4c166006765bc0bc77ce2bfeb125b9c805a376"}
Jan 27 15:27:35 crc kubenswrapper[4698]: I0127 15:27:35.183585 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mtz5b" event={"ID":"aff53f1b-9a19-46b5-9332-3fa6a8401b0f","Type":"ContainerStarted","Data":"5c763d6a7902cc2f5fe45e30cf2ffe9ed627eff03c8603836f3898918efde7ee"}
Jan 27 15:27:35 crc kubenswrapper[4698]: I0127 15:27:35.202671 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-mtz5b" podStartSLOduration=1.531345654 podStartE2EDuration="4.202630911s" podCreationTimestamp="2026-01-27 15:27:31 +0000 UTC" firstStartedPulling="2026-01-27 15:27:32.15184063 +0000 UTC m=+3507.828618095" lastFinishedPulling="2026-01-27 15:27:34.823125887 +0000 UTC m=+3510.499903352" observedRunningTime="2026-01-27 15:27:35.199464908 +0000 UTC m=+3510.876242383" watchObservedRunningTime="2026-01-27 15:27:35.202630911 +0000 UTC m=+3510.879408376"
Jan 27 15:27:41 crc kubenswrapper[4698]: I0127 15:27:41.392706 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-mtz5b"
Jan 27 15:27:41 crc kubenswrapper[4698]: I0127 15:27:41.393290 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-mtz5b"
Jan 27 15:27:41 crc kubenswrapper[4698]: I0127 15:27:41.446512 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-mtz5b"
Jan 27 15:27:42 crc kubenswrapper[4698]: I0127 15:27:42.301339 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-mtz5b"
Jan 27 15:27:42 crc kubenswrapper[4698]: I0127 15:27:42.355393 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-mtz5b"]
Jan 27 15:27:44 crc kubenswrapper[4698]: I0127 15:27:44.267281 4698 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-mtz5b" podUID="aff53f1b-9a19-46b5-9332-3fa6a8401b0f" containerName="registry-server" containerID="cri-o://5c763d6a7902cc2f5fe45e30cf2ffe9ed627eff03c8603836f3898918efde7ee" gracePeriod=2
Jan 27 15:27:44 crc kubenswrapper[4698]: I0127 15:27:44.771883 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mtz5b"
Jan 27 15:27:44 crc kubenswrapper[4698]: I0127 15:27:44.856450 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aff53f1b-9a19-46b5-9332-3fa6a8401b0f-catalog-content\") pod \"aff53f1b-9a19-46b5-9332-3fa6a8401b0f\" (UID: \"aff53f1b-9a19-46b5-9332-3fa6a8401b0f\") "
Jan 27 15:27:44 crc kubenswrapper[4698]: I0127 15:27:44.856502 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aff53f1b-9a19-46b5-9332-3fa6a8401b0f-utilities\") pod \"aff53f1b-9a19-46b5-9332-3fa6a8401b0f\" (UID: \"aff53f1b-9a19-46b5-9332-3fa6a8401b0f\") "
Jan 27 15:27:44 crc kubenswrapper[4698]: I0127 15:27:44.856585 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xhdj2\" (UniqueName: \"kubernetes.io/projected/aff53f1b-9a19-46b5-9332-3fa6a8401b0f-kube-api-access-xhdj2\") pod \"aff53f1b-9a19-46b5-9332-3fa6a8401b0f\" (UID: \"aff53f1b-9a19-46b5-9332-3fa6a8401b0f\") "
Jan 27 15:27:44 crc kubenswrapper[4698]: I0127 15:27:44.859037 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/aff53f1b-9a19-46b5-9332-3fa6a8401b0f-utilities" (OuterVolumeSpecName: "utilities") pod "aff53f1b-9a19-46b5-9332-3fa6a8401b0f" (UID: "aff53f1b-9a19-46b5-9332-3fa6a8401b0f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 27 15:27:44 crc kubenswrapper[4698]: I0127 15:27:44.863978 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aff53f1b-9a19-46b5-9332-3fa6a8401b0f-kube-api-access-xhdj2" (OuterVolumeSpecName: "kube-api-access-xhdj2") pod "aff53f1b-9a19-46b5-9332-3fa6a8401b0f" (UID: "aff53f1b-9a19-46b5-9332-3fa6a8401b0f"). InnerVolumeSpecName "kube-api-access-xhdj2". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 15:27:44 crc kubenswrapper[4698]: I0127 15:27:44.882788 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/aff53f1b-9a19-46b5-9332-3fa6a8401b0f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "aff53f1b-9a19-46b5-9332-3fa6a8401b0f" (UID: "aff53f1b-9a19-46b5-9332-3fa6a8401b0f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 27 15:27:44 crc kubenswrapper[4698]: I0127 15:27:44.959419 4698 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aff53f1b-9a19-46b5-9332-3fa6a8401b0f-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 27 15:27:44 crc kubenswrapper[4698]: I0127 15:27:44.959481 4698 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aff53f1b-9a19-46b5-9332-3fa6a8401b0f-utilities\") on node \"crc\" DevicePath \"\""
Jan 27 15:27:44 crc kubenswrapper[4698]: I0127 15:27:44.959497 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xhdj2\" (UniqueName: \"kubernetes.io/projected/aff53f1b-9a19-46b5-9332-3fa6a8401b0f-kube-api-access-xhdj2\") on node \"crc\" DevicePath \"\""
Jan 27 15:27:45 crc kubenswrapper[4698]: I0127 15:27:45.312238 4698 generic.go:334] "Generic (PLEG): container finished" podID="aff53f1b-9a19-46b5-9332-3fa6a8401b0f" containerID="5c763d6a7902cc2f5fe45e30cf2ffe9ed627eff03c8603836f3898918efde7ee" exitCode=0
Jan 27 15:27:45 crc kubenswrapper[4698]: I0127 15:27:45.312315 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mtz5b" event={"ID":"aff53f1b-9a19-46b5-9332-3fa6a8401b0f","Type":"ContainerDied","Data":"5c763d6a7902cc2f5fe45e30cf2ffe9ed627eff03c8603836f3898918efde7ee"}
Jan 27 15:27:45 crc kubenswrapper[4698]: I0127 15:27:45.312375 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mtz5b" event={"ID":"aff53f1b-9a19-46b5-9332-3fa6a8401b0f","Type":"ContainerDied","Data":"fd1a9d89236de86a10cbcec364dc921ef93d4cc4f436c7e523a7b2f6807f116d"}
Jan 27 15:27:45 crc kubenswrapper[4698]: I0127 15:27:45.312395 4698 scope.go:117] "RemoveContainer" containerID="5c763d6a7902cc2f5fe45e30cf2ffe9ed627eff03c8603836f3898918efde7ee"
Jan 27 15:27:45 crc kubenswrapper[4698]: I0127 15:27:45.312607 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mtz5b"
Jan 27 15:27:45 crc kubenswrapper[4698]: I0127 15:27:45.336026 4698 scope.go:117] "RemoveContainer" containerID="2c38621dac036670095a7e688f4c166006765bc0bc77ce2bfeb125b9c805a376"
Jan 27 15:27:45 crc kubenswrapper[4698]: I0127 15:27:45.350176 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-mtz5b"]
Jan 27 15:27:45 crc kubenswrapper[4698]: I0127 15:27:45.360051 4698 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-mtz5b"]
Jan 27 15:27:45 crc kubenswrapper[4698]: I0127 15:27:45.363927 4698 scope.go:117] "RemoveContainer" containerID="915a494a25a92d6132c67300cf2b83dd3181a7c95bc2df94aeb4f34a587fd2e4"
Jan 27 15:27:45 crc kubenswrapper[4698]: I0127 15:27:45.436256 4698 scope.go:117] "RemoveContainer" containerID="5c763d6a7902cc2f5fe45e30cf2ffe9ed627eff03c8603836f3898918efde7ee"
Jan 27 15:27:45 crc kubenswrapper[4698]: E0127 15:27:45.436797 4698 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5c763d6a7902cc2f5fe45e30cf2ffe9ed627eff03c8603836f3898918efde7ee\": container with ID starting with 5c763d6a7902cc2f5fe45e30cf2ffe9ed627eff03c8603836f3898918efde7ee not found: ID does not exist" containerID="5c763d6a7902cc2f5fe45e30cf2ffe9ed627eff03c8603836f3898918efde7ee"
Jan 27 15:27:45 crc kubenswrapper[4698]: I0127 15:27:45.436835 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5c763d6a7902cc2f5fe45e30cf2ffe9ed627eff03c8603836f3898918efde7ee"} err="failed to get container status \"5c763d6a7902cc2f5fe45e30cf2ffe9ed627eff03c8603836f3898918efde7ee\": rpc error: code = NotFound desc = could not find container \"5c763d6a7902cc2f5fe45e30cf2ffe9ed627eff03c8603836f3898918efde7ee\": container with ID starting with 5c763d6a7902cc2f5fe45e30cf2ffe9ed627eff03c8603836f3898918efde7ee not found: ID does not exist"
Jan 27 15:27:45 crc kubenswrapper[4698]: I0127 15:27:45.436867 4698 scope.go:117] "RemoveContainer" containerID="2c38621dac036670095a7e688f4c166006765bc0bc77ce2bfeb125b9c805a376"
Jan 27 15:27:45 crc kubenswrapper[4698]: E0127 15:27:45.437221 4698 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2c38621dac036670095a7e688f4c166006765bc0bc77ce2bfeb125b9c805a376\": container with ID starting with 2c38621dac036670095a7e688f4c166006765bc0bc77ce2bfeb125b9c805a376 not found: ID does not exist" containerID="2c38621dac036670095a7e688f4c166006765bc0bc77ce2bfeb125b9c805a376"
Jan 27 15:27:45 crc kubenswrapper[4698]: I0127 15:27:45.437254 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2c38621dac036670095a7e688f4c166006765bc0bc77ce2bfeb125b9c805a376"} err="failed to get container status \"2c38621dac036670095a7e688f4c166006765bc0bc77ce2bfeb125b9c805a376\": rpc error: code = NotFound desc = could not find container \"2c38621dac036670095a7e688f4c166006765bc0bc77ce2bfeb125b9c805a376\": container with ID starting with 2c38621dac036670095a7e688f4c166006765bc0bc77ce2bfeb125b9c805a376 not found: ID does not exist"
Jan 27 15:27:45 crc kubenswrapper[4698]: I0127 15:27:45.437274 4698 scope.go:117] "RemoveContainer" containerID="915a494a25a92d6132c67300cf2b83dd3181a7c95bc2df94aeb4f34a587fd2e4"
Jan 27 15:27:45 crc kubenswrapper[4698]: E0127 15:27:45.437553 4698 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"915a494a25a92d6132c67300cf2b83dd3181a7c95bc2df94aeb4f34a587fd2e4\": container with ID starting with 915a494a25a92d6132c67300cf2b83dd3181a7c95bc2df94aeb4f34a587fd2e4 not found: ID does not exist" containerID="915a494a25a92d6132c67300cf2b83dd3181a7c95bc2df94aeb4f34a587fd2e4"
Jan 27 15:27:45 crc kubenswrapper[4698]: I0127 15:27:45.437585 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"915a494a25a92d6132c67300cf2b83dd3181a7c95bc2df94aeb4f34a587fd2e4"} err="failed to get container status \"915a494a25a92d6132c67300cf2b83dd3181a7c95bc2df94aeb4f34a587fd2e4\": rpc error: code = NotFound desc = could not find container \"915a494a25a92d6132c67300cf2b83dd3181a7c95bc2df94aeb4f34a587fd2e4\": container with ID starting with 915a494a25a92d6132c67300cf2b83dd3181a7c95bc2df94aeb4f34a587fd2e4 not found: ID does not exist"
Jan 27 15:27:47 crc kubenswrapper[4698]: I0127 15:27:47.003631 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="aff53f1b-9a19-46b5-9332-3fa6a8401b0f" path="/var/lib/kubelet/pods/aff53f1b-9a19-46b5-9332-3fa6a8401b0f/volumes"
Jan 27 15:27:57 crc kubenswrapper[4698]: I0127 15:27:57.452269 4698 patch_prober.go:28] interesting pod/machine-config-daemon-ndrd6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 27 15:27:57 crc kubenswrapper[4698]: I0127 15:27:57.452904 4698 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ndrd6" podUID="3e403fc5-7005-474c-8c75-b7906b481677" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 27 15:27:57 crc kubenswrapper[4698]: I0127 15:27:57.452960 4698 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-ndrd6"
Jan 27 15:27:57 crc kubenswrapper[4698]: I0127 15:27:57.453850 4698 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"e45bb499790ebcaabfa047fa4277b6f16d070419b5faecb3c750a7caac950671"} pod="openshift-machine-config-operator/machine-config-daemon-ndrd6" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 27 15:27:57 crc kubenswrapper[4698]: I0127 15:27:57.453932 4698 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-ndrd6" podUID="3e403fc5-7005-474c-8c75-b7906b481677" containerName="machine-config-daemon" containerID="cri-o://e45bb499790ebcaabfa047fa4277b6f16d070419b5faecb3c750a7caac950671" gracePeriod=600
Jan 27 15:27:58 crc kubenswrapper[4698]: I0127 15:27:58.436323 4698 generic.go:334] "Generic (PLEG): container finished" podID="3e403fc5-7005-474c-8c75-b7906b481677" containerID="e45bb499790ebcaabfa047fa4277b6f16d070419b5faecb3c750a7caac950671" exitCode=0
Jan 27 15:27:58 crc kubenswrapper[4698]: I0127 15:27:58.436541 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ndrd6" event={"ID":"3e403fc5-7005-474c-8c75-b7906b481677","Type":"ContainerDied","Data":"e45bb499790ebcaabfa047fa4277b6f16d070419b5faecb3c750a7caac950671"}
Jan 27 15:27:58 crc kubenswrapper[4698]: I0127 15:27:58.436973 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ndrd6" event={"ID":"3e403fc5-7005-474c-8c75-b7906b481677","Type":"ContainerStarted","Data":"dae5e3c59114f911468df8553d052a2c9114f0c5defcd8a1e8e483efe8a4ea29"}
Jan 27 15:27:58 crc kubenswrapper[4698]: I0127 15:27:58.436998 4698 scope.go:117] "RemoveContainer" containerID="14b47c46b0de7c0ecaaaead15a2dd02e8412e1aba78cdbca1c442b1f8a50e507"
Jan 27 15:29:57 crc kubenswrapper[4698]: I0127 15:29:57.452136 4698 patch_prober.go:28] interesting pod/machine-config-daemon-ndrd6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 27 15:29:57 crc kubenswrapper[4698]: I0127 15:29:57.452925 4698 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ndrd6" podUID="3e403fc5-7005-474c-8c75-b7906b481677" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 27 15:30:00 crc kubenswrapper[4698]: I0127 15:30:00.144333 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29492130-ftzxs"]
Jan 27 15:30:00 crc kubenswrapper[4698]: E0127 15:30:00.145112 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aff53f1b-9a19-46b5-9332-3fa6a8401b0f" containerName="registry-server"
Jan 27 15:30:00 crc kubenswrapper[4698]: I0127 15:30:00.145132 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="aff53f1b-9a19-46b5-9332-3fa6a8401b0f" containerName="registry-server"
Jan 27 15:30:00 crc kubenswrapper[4698]: E0127 15:30:00.145165 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aff53f1b-9a19-46b5-9332-3fa6a8401b0f" containerName="extract-utilities"
Jan 27 15:30:00 crc kubenswrapper[4698]: I0127 15:30:00.145173 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="aff53f1b-9a19-46b5-9332-3fa6a8401b0f" containerName="extract-utilities"
Jan 27 15:30:00 crc kubenswrapper[4698]: E0127 15:30:00.145186 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aff53f1b-9a19-46b5-9332-3fa6a8401b0f" containerName="extract-content"
Jan 27 15:30:00 crc kubenswrapper[4698]: I0127 15:30:00.145192 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="aff53f1b-9a19-46b5-9332-3fa6a8401b0f" containerName="extract-content"
Jan 27 15:30:00 crc kubenswrapper[4698]: I0127 15:30:00.145401 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="aff53f1b-9a19-46b5-9332-3fa6a8401b0f" containerName="registry-server"
Jan 27 15:30:00 crc kubenswrapper[4698]: I0127 15:30:00.146276 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29492130-ftzxs"
Jan 27 15:30:00 crc kubenswrapper[4698]: I0127 15:30:00.151002 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Jan 27 15:30:00 crc kubenswrapper[4698]: I0127 15:30:00.151395 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t"
Jan 27 15:30:00 crc kubenswrapper[4698]: I0127 15:30:00.155981 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29492130-ftzxs"]
Jan 27 15:30:00 crc kubenswrapper[4698]: I0127 15:30:00.162559 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a227f17d-5670-4d8c-94c3-7300676f2a71-secret-volume\") pod \"collect-profiles-29492130-ftzxs\" (UID: \"a227f17d-5670-4d8c-94c3-7300676f2a71\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492130-ftzxs"
Jan 27 15:30:00 crc kubenswrapper[4698]: I0127 15:30:00.162650 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a227f17d-5670-4d8c-94c3-7300676f2a71-config-volume\") pod \"collect-profiles-29492130-ftzxs\" (UID: \"a227f17d-5670-4d8c-94c3-7300676f2a71\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492130-ftzxs"
Jan 27 15:30:00 crc kubenswrapper[4698]: I0127 15:30:00.162750 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f67vj\" (UniqueName: \"kubernetes.io/projected/a227f17d-5670-4d8c-94c3-7300676f2a71-kube-api-access-f67vj\") pod \"collect-profiles-29492130-ftzxs\" (UID: \"a227f17d-5670-4d8c-94c3-7300676f2a71\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492130-ftzxs"
Jan 27 15:30:00 crc kubenswrapper[4698]: I0127 15:30:00.265323 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a227f17d-5670-4d8c-94c3-7300676f2a71-secret-volume\") pod \"collect-profiles-29492130-ftzxs\" (UID: \"a227f17d-5670-4d8c-94c3-7300676f2a71\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492130-ftzxs"
Jan 27 15:30:00 crc kubenswrapper[4698]: I0127 15:30:00.266298 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a227f17d-5670-4d8c-94c3-7300676f2a71-config-volume\") pod \"collect-profiles-29492130-ftzxs\" (UID: \"a227f17d-5670-4d8c-94c3-7300676f2a71\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492130-ftzxs"
Jan 27 15:30:00 crc kubenswrapper[4698]: I0127 15:30:00.266356 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a227f17d-5670-4d8c-94c3-7300676f2a71-config-volume\") pod \"collect-profiles-29492130-ftzxs\" (UID: \"a227f17d-5670-4d8c-94c3-7300676f2a71\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492130-ftzxs"
Jan 27 15:30:00 crc kubenswrapper[4698]: I0127 15:30:00.266397 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f67vj\" (UniqueName: \"kubernetes.io/projected/a227f17d-5670-4d8c-94c3-7300676f2a71-kube-api-access-f67vj\") pod \"collect-profiles-29492130-ftzxs\" (UID: \"a227f17d-5670-4d8c-94c3-7300676f2a71\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492130-ftzxs"
Jan 27 15:30:00 crc kubenswrapper[4698]: I0127 15:30:00.272421 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a227f17d-5670-4d8c-94c3-7300676f2a71-secret-volume\") pod \"collect-profiles-29492130-ftzxs\" (UID: \"a227f17d-5670-4d8c-94c3-7300676f2a71\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492130-ftzxs"
Jan 27 15:30:00 crc kubenswrapper[4698]: I0127 15:30:00.284314 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f67vj\" (UniqueName: \"kubernetes.io/projected/a227f17d-5670-4d8c-94c3-7300676f2a71-kube-api-access-f67vj\") pod \"collect-profiles-29492130-ftzxs\" (UID: \"a227f17d-5670-4d8c-94c3-7300676f2a71\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492130-ftzxs"
Jan 27 15:30:00 crc kubenswrapper[4698]: I0127 15:30:00.472173 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29492130-ftzxs"
Jan 27 15:30:00 crc kubenswrapper[4698]: I0127 15:30:00.914111 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29492130-ftzxs"]
Jan 27 15:30:01 crc kubenswrapper[4698]: I0127 15:30:01.531114 4698 generic.go:334] "Generic (PLEG): container finished" podID="a227f17d-5670-4d8c-94c3-7300676f2a71" containerID="fd556c91961bbce6dcca61ccfc6a9fa86b00786aad55d41143d5ae7c396649c2" exitCode=0
Jan 27 15:30:01 crc kubenswrapper[4698]: I0127 15:30:01.531309 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29492130-ftzxs" event={"ID":"a227f17d-5670-4d8c-94c3-7300676f2a71","Type":"ContainerDied","Data":"fd556c91961bbce6dcca61ccfc6a9fa86b00786aad55d41143d5ae7c396649c2"}
Jan 27 15:30:01 crc kubenswrapper[4698]: I0127 15:30:01.531678 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29492130-ftzxs" event={"ID":"a227f17d-5670-4d8c-94c3-7300676f2a71","Type":"ContainerStarted","Data":"6909eedb7bd8374ccb5b6bed5bf46b8d205e9d2659314066cb5f982f0c0483e0"}
Jan 27 15:30:02 crc kubenswrapper[4698]: I0127 15:30:02.903673 4698 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29492130-ftzxs" Jan 27 15:30:02 crc kubenswrapper[4698]: I0127 15:30:02.918545 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a227f17d-5670-4d8c-94c3-7300676f2a71-config-volume\") pod \"a227f17d-5670-4d8c-94c3-7300676f2a71\" (UID: \"a227f17d-5670-4d8c-94c3-7300676f2a71\") " Jan 27 15:30:02 crc kubenswrapper[4698]: I0127 15:30:02.918615 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f67vj\" (UniqueName: \"kubernetes.io/projected/a227f17d-5670-4d8c-94c3-7300676f2a71-kube-api-access-f67vj\") pod \"a227f17d-5670-4d8c-94c3-7300676f2a71\" (UID: \"a227f17d-5670-4d8c-94c3-7300676f2a71\") " Jan 27 15:30:02 crc kubenswrapper[4698]: I0127 15:30:02.919308 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a227f17d-5670-4d8c-94c3-7300676f2a71-config-volume" (OuterVolumeSpecName: "config-volume") pod "a227f17d-5670-4d8c-94c3-7300676f2a71" (UID: "a227f17d-5670-4d8c-94c3-7300676f2a71"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 15:30:02 crc kubenswrapper[4698]: I0127 15:30:02.934904 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a227f17d-5670-4d8c-94c3-7300676f2a71-kube-api-access-f67vj" (OuterVolumeSpecName: "kube-api-access-f67vj") pod "a227f17d-5670-4d8c-94c3-7300676f2a71" (UID: "a227f17d-5670-4d8c-94c3-7300676f2a71"). InnerVolumeSpecName "kube-api-access-f67vj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 15:30:03 crc kubenswrapper[4698]: I0127 15:30:03.020789 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a227f17d-5670-4d8c-94c3-7300676f2a71-secret-volume\") pod \"a227f17d-5670-4d8c-94c3-7300676f2a71\" (UID: \"a227f17d-5670-4d8c-94c3-7300676f2a71\") " Jan 27 15:30:03 crc kubenswrapper[4698]: I0127 15:30:03.022113 4698 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a227f17d-5670-4d8c-94c3-7300676f2a71-config-volume\") on node \"crc\" DevicePath \"\"" Jan 27 15:30:03 crc kubenswrapper[4698]: I0127 15:30:03.022335 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f67vj\" (UniqueName: \"kubernetes.io/projected/a227f17d-5670-4d8c-94c3-7300676f2a71-kube-api-access-f67vj\") on node \"crc\" DevicePath \"\"" Jan 27 15:30:03 crc kubenswrapper[4698]: I0127 15:30:03.028799 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a227f17d-5670-4d8c-94c3-7300676f2a71-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "a227f17d-5670-4d8c-94c3-7300676f2a71" (UID: "a227f17d-5670-4d8c-94c3-7300676f2a71"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 15:30:03 crc kubenswrapper[4698]: I0127 15:30:03.124658 4698 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a227f17d-5670-4d8c-94c3-7300676f2a71-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 27 15:30:03 crc kubenswrapper[4698]: I0127 15:30:03.560728 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29492130-ftzxs" event={"ID":"a227f17d-5670-4d8c-94c3-7300676f2a71","Type":"ContainerDied","Data":"6909eedb7bd8374ccb5b6bed5bf46b8d205e9d2659314066cb5f982f0c0483e0"} Jan 27 15:30:03 crc kubenswrapper[4698]: I0127 15:30:03.560772 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29492130-ftzxs" Jan 27 15:30:03 crc kubenswrapper[4698]: I0127 15:30:03.560786 4698 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6909eedb7bd8374ccb5b6bed5bf46b8d205e9d2659314066cb5f982f0c0483e0" Jan 27 15:30:03 crc kubenswrapper[4698]: I0127 15:30:03.997841 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29492085-hstxp"] Jan 27 15:30:04 crc kubenswrapper[4698]: I0127 15:30:04.008322 4698 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29492085-hstxp"] Jan 27 15:30:05 crc kubenswrapper[4698]: I0127 15:30:05.495441 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d605254d-214f-423e-a9d6-504e1c8ccf43" path="/var/lib/kubelet/pods/d605254d-214f-423e-a9d6-504e1c8ccf43/volumes" Jan 27 15:30:18 crc kubenswrapper[4698]: I0127 15:30:18.320705 4698 scope.go:117] "RemoveContainer" containerID="c53d8b7433a4b58bb9c5554f564e96ed81a8af00131944699e1dd827b5d5a81c" Jan 27 15:30:27 crc kubenswrapper[4698]: I0127 15:30:27.451796 4698 patch_prober.go:28] interesting pod/machine-config-daemon-ndrd6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 15:30:27 crc kubenswrapper[4698]: I0127 15:30:27.452416 4698 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ndrd6" podUID="3e403fc5-7005-474c-8c75-b7906b481677" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 15:30:57 crc kubenswrapper[4698]: I0127 15:30:57.452037 4698 patch_prober.go:28] interesting pod/machine-config-daemon-ndrd6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 15:30:57 crc kubenswrapper[4698]: I0127 15:30:57.452655 4698 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ndrd6" podUID="3e403fc5-7005-474c-8c75-b7906b481677" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 15:30:57 crc kubenswrapper[4698]: I0127 15:30:57.452723 4698 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" 
pod="openshift-machine-config-operator/machine-config-daemon-ndrd6" Jan 27 15:30:57 crc kubenswrapper[4698]: I0127 15:30:57.453780 4698 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"dae5e3c59114f911468df8553d052a2c9114f0c5defcd8a1e8e483efe8a4ea29"} pod="openshift-machine-config-operator/machine-config-daemon-ndrd6" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 27 15:30:57 crc kubenswrapper[4698]: I0127 15:30:57.453843 4698 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-ndrd6" podUID="3e403fc5-7005-474c-8c75-b7906b481677" containerName="machine-config-daemon" containerID="cri-o://dae5e3c59114f911468df8553d052a2c9114f0c5defcd8a1e8e483efe8a4ea29" gracePeriod=600 Jan 27 15:30:57 crc kubenswrapper[4698]: E0127 15:30:57.590007 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ndrd6_openshift-machine-config-operator(3e403fc5-7005-474c-8c75-b7906b481677)\"" pod="openshift-machine-config-operator/machine-config-daemon-ndrd6" podUID="3e403fc5-7005-474c-8c75-b7906b481677" Jan 27 15:30:57 crc kubenswrapper[4698]: E0127 15:30:57.606450 4698 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3e403fc5_7005_474c_8c75_b7906b481677.slice/crio-conmon-dae5e3c59114f911468df8553d052a2c9114f0c5defcd8a1e8e483efe8a4ea29.scope\": RecentStats: unable to find data in memory cache]" Jan 27 15:30:58 crc kubenswrapper[4698]: I0127 15:30:58.002981 4698 generic.go:334] "Generic (PLEG): container finished" podID="3e403fc5-7005-474c-8c75-b7906b481677" containerID="dae5e3c59114f911468df8553d052a2c9114f0c5defcd8a1e8e483efe8a4ea29" exitCode=0 Jan 27 15:30:58 crc kubenswrapper[4698]: I0127 15:30:58.003027 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ndrd6" event={"ID":"3e403fc5-7005-474c-8c75-b7906b481677","Type":"ContainerDied","Data":"dae5e3c59114f911468df8553d052a2c9114f0c5defcd8a1e8e483efe8a4ea29"} Jan 27 15:30:58 crc kubenswrapper[4698]: I0127 15:30:58.003062 4698 scope.go:117] "RemoveContainer" containerID="e45bb499790ebcaabfa047fa4277b6f16d070419b5faecb3c750a7caac950671" Jan 27 15:30:58 crc kubenswrapper[4698]: I0127 15:30:58.003784 4698 scope.go:117] "RemoveContainer" containerID="dae5e3c59114f911468df8553d052a2c9114f0c5defcd8a1e8e483efe8a4ea29" Jan 27 15:30:58 crc kubenswrapper[4698]: E0127 15:30:58.004047 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ndrd6_openshift-machine-config-operator(3e403fc5-7005-474c-8c75-b7906b481677)\"" pod="openshift-machine-config-operator/machine-config-daemon-ndrd6" podUID="3e403fc5-7005-474c-8c75-b7906b481677" Jan 27 15:31:09 crc kubenswrapper[4698]: I0127 15:31:09.992294 4698 scope.go:117] "RemoveContainer" containerID="dae5e3c59114f911468df8553d052a2c9114f0c5defcd8a1e8e483efe8a4ea29" Jan 27 15:31:09 crc kubenswrapper[4698]: E0127 15:31:09.993091 4698 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ndrd6_openshift-machine-config-operator(3e403fc5-7005-474c-8c75-b7906b481677)\"" pod="openshift-machine-config-operator/machine-config-daemon-ndrd6" podUID="3e403fc5-7005-474c-8c75-b7906b481677" Jan 27 15:31:16 crc kubenswrapper[4698]: I0127 15:31:16.157504 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-jzfhw"] Jan 27 15:31:16 crc kubenswrapper[4698]: E0127 15:31:16.158403 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a227f17d-5670-4d8c-94c3-7300676f2a71" containerName="collect-profiles" Jan 27 15:31:16 crc kubenswrapper[4698]: I0127 15:31:16.158415 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="a227f17d-5670-4d8c-94c3-7300676f2a71" containerName="collect-profiles" Jan 27 15:31:16 crc kubenswrapper[4698]: I0127 15:31:16.158692 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="a227f17d-5670-4d8c-94c3-7300676f2a71" containerName="collect-profiles" Jan 27 15:31:16 crc kubenswrapper[4698]: I0127 15:31:16.160491 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-jzfhw" Jan 27 15:31:16 crc kubenswrapper[4698]: I0127 15:31:16.168522 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-jzfhw"] Jan 27 15:31:16 crc kubenswrapper[4698]: I0127 15:31:16.298844 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j6tzj\" (UniqueName: \"kubernetes.io/projected/93396d92-9abb-4aed-93a4-f47ab2a13e4c-kube-api-access-j6tzj\") pod \"redhat-operators-jzfhw\" (UID: \"93396d92-9abb-4aed-93a4-f47ab2a13e4c\") " pod="openshift-marketplace/redhat-operators-jzfhw" Jan 27 15:31:16 crc kubenswrapper[4698]: I0127 15:31:16.299211 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/93396d92-9abb-4aed-93a4-f47ab2a13e4c-utilities\") pod \"redhat-operators-jzfhw\" (UID: \"93396d92-9abb-4aed-93a4-f47ab2a13e4c\") " pod="openshift-marketplace/redhat-operators-jzfhw" Jan 27 15:31:16 crc kubenswrapper[4698]: I0127 15:31:16.299395 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/93396d92-9abb-4aed-93a4-f47ab2a13e4c-catalog-content\") pod \"redhat-operators-jzfhw\" (UID: \"93396d92-9abb-4aed-93a4-f47ab2a13e4c\") " pod="openshift-marketplace/redhat-operators-jzfhw" Jan 27 15:31:16 crc kubenswrapper[4698]: I0127 15:31:16.400831 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j6tzj\" (UniqueName: \"kubernetes.io/projected/93396d92-9abb-4aed-93a4-f47ab2a13e4c-kube-api-access-j6tzj\") pod \"redhat-operators-jzfhw\" (UID: \"93396d92-9abb-4aed-93a4-f47ab2a13e4c\") " pod="openshift-marketplace/redhat-operators-jzfhw" Jan 27 15:31:16 crc kubenswrapper[4698]: I0127 15:31:16.400887 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/93396d92-9abb-4aed-93a4-f47ab2a13e4c-utilities\") pod \"redhat-operators-jzfhw\" (UID: \"93396d92-9abb-4aed-93a4-f47ab2a13e4c\") " pod="openshift-marketplace/redhat-operators-jzfhw" Jan 27 15:31:16 crc 
kubenswrapper[4698]: I0127 15:31:16.400940 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/93396d92-9abb-4aed-93a4-f47ab2a13e4c-catalog-content\") pod \"redhat-operators-jzfhw\" (UID: \"93396d92-9abb-4aed-93a4-f47ab2a13e4c\") " pod="openshift-marketplace/redhat-operators-jzfhw" Jan 27 15:31:16 crc kubenswrapper[4698]: I0127 15:31:16.401803 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/93396d92-9abb-4aed-93a4-f47ab2a13e4c-catalog-content\") pod \"redhat-operators-jzfhw\" (UID: \"93396d92-9abb-4aed-93a4-f47ab2a13e4c\") " pod="openshift-marketplace/redhat-operators-jzfhw" Jan 27 15:31:16 crc kubenswrapper[4698]: I0127 15:31:16.402200 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/93396d92-9abb-4aed-93a4-f47ab2a13e4c-utilities\") pod \"redhat-operators-jzfhw\" (UID: \"93396d92-9abb-4aed-93a4-f47ab2a13e4c\") " pod="openshift-marketplace/redhat-operators-jzfhw" Jan 27 15:31:16 crc kubenswrapper[4698]: I0127 15:31:16.422118 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j6tzj\" (UniqueName: \"kubernetes.io/projected/93396d92-9abb-4aed-93a4-f47ab2a13e4c-kube-api-access-j6tzj\") pod \"redhat-operators-jzfhw\" (UID: \"93396d92-9abb-4aed-93a4-f47ab2a13e4c\") " pod="openshift-marketplace/redhat-operators-jzfhw" Jan 27 15:31:16 crc kubenswrapper[4698]: I0127 15:31:16.486587 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-jzfhw" Jan 27 15:31:17 crc kubenswrapper[4698]: I0127 15:31:17.017492 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-jzfhw"] Jan 27 15:31:17 crc kubenswrapper[4698]: I0127 15:31:17.169720 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jzfhw" event={"ID":"93396d92-9abb-4aed-93a4-f47ab2a13e4c","Type":"ContainerStarted","Data":"d0d2f8cdca21d63d23825a44740a145d92ab29b251d00a9c88ed2d033e8c5da9"} Jan 27 15:31:18 crc kubenswrapper[4698]: I0127 15:31:18.181915 4698 generic.go:334] "Generic (PLEG): container finished" podID="93396d92-9abb-4aed-93a4-f47ab2a13e4c" containerID="4acac5886afab3dce1ef74f2704dd118804c966f39925dbfefc683c0830852e8" exitCode=0 Jan 27 15:31:18 crc kubenswrapper[4698]: I0127 15:31:18.182065 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jzfhw" event={"ID":"93396d92-9abb-4aed-93a4-f47ab2a13e4c","Type":"ContainerDied","Data":"4acac5886afab3dce1ef74f2704dd118804c966f39925dbfefc683c0830852e8"} Jan 27 15:31:20 crc kubenswrapper[4698]: I0127 15:31:20.200359 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jzfhw" event={"ID":"93396d92-9abb-4aed-93a4-f47ab2a13e4c","Type":"ContainerStarted","Data":"6ceb95173cbe13a8c9a7000c515c2946b3eb154cc8cd44737d982bbf5b1edd8c"} Jan 27 15:31:21 crc kubenswrapper[4698]: I0127 15:31:21.208981 4698 generic.go:334] "Generic (PLEG): container finished" podID="93396d92-9abb-4aed-93a4-f47ab2a13e4c" containerID="6ceb95173cbe13a8c9a7000c515c2946b3eb154cc8cd44737d982bbf5b1edd8c" exitCode=0 Jan 27 15:31:21 crc kubenswrapper[4698]: I0127 15:31:21.209024 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jzfhw" 
event={"ID":"93396d92-9abb-4aed-93a4-f47ab2a13e4c","Type":"ContainerDied","Data":"6ceb95173cbe13a8c9a7000c515c2946b3eb154cc8cd44737d982bbf5b1edd8c"} Jan 27 15:31:22 crc kubenswrapper[4698]: I0127 15:31:22.993120 4698 scope.go:117] "RemoveContainer" containerID="dae5e3c59114f911468df8553d052a2c9114f0c5defcd8a1e8e483efe8a4ea29" Jan 27 15:31:22 crc kubenswrapper[4698]: E0127 15:31:22.993938 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ndrd6_openshift-machine-config-operator(3e403fc5-7005-474c-8c75-b7906b481677)\"" pod="openshift-machine-config-operator/machine-config-daemon-ndrd6" podUID="3e403fc5-7005-474c-8c75-b7906b481677" Jan 27 15:31:25 crc kubenswrapper[4698]: I0127 15:31:25.251134 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jzfhw" event={"ID":"93396d92-9abb-4aed-93a4-f47ab2a13e4c","Type":"ContainerStarted","Data":"98127f3311e7e4c3519975407a1f82be211172e7d923a502f3f77fc25a82f7a4"} Jan 27 15:31:25 crc kubenswrapper[4698]: I0127 15:31:25.276472 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-jzfhw" podStartSLOduration=3.364206423 podStartE2EDuration="9.276449471s" podCreationTimestamp="2026-01-27 15:31:16 +0000 UTC" firstStartedPulling="2026-01-27 15:31:18.186174137 +0000 UTC m=+3733.862951602" lastFinishedPulling="2026-01-27 15:31:24.098417185 +0000 UTC m=+3739.775194650" observedRunningTime="2026-01-27 15:31:25.269943789 +0000 UTC m=+3740.946721294" watchObservedRunningTime="2026-01-27 15:31:25.276449471 +0000 UTC m=+3740.953226956" Jan 27 15:31:26 crc kubenswrapper[4698]: I0127 15:31:26.487176 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-jzfhw" Jan 27 15:31:26 crc kubenswrapper[4698]: I0127 15:31:26.487514 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-jzfhw" Jan 27 15:31:27 crc kubenswrapper[4698]: I0127 15:31:27.531976 4698 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-jzfhw" podUID="93396d92-9abb-4aed-93a4-f47ab2a13e4c" containerName="registry-server" probeResult="failure" output=< Jan 27 15:31:27 crc kubenswrapper[4698]: timeout: failed to connect service ":50051" within 1s Jan 27 15:31:27 crc kubenswrapper[4698]: > Jan 27 15:31:36 crc kubenswrapper[4698]: I0127 15:31:36.533982 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-jzfhw" Jan 27 15:31:36 crc kubenswrapper[4698]: I0127 15:31:36.578924 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-jzfhw" Jan 27 15:31:36 crc kubenswrapper[4698]: I0127 15:31:36.774100 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-jzfhw"] Jan 27 15:31:37 crc kubenswrapper[4698]: I0127 15:31:37.993534 4698 scope.go:117] "RemoveContainer" containerID="dae5e3c59114f911468df8553d052a2c9114f0c5defcd8a1e8e483efe8a4ea29" Jan 27 15:31:37 crc kubenswrapper[4698]: E0127 15:31:37.995010 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-ndrd6_openshift-machine-config-operator(3e403fc5-7005-474c-8c75-b7906b481677)\"" pod="openshift-machine-config-operator/machine-config-daemon-ndrd6" podUID="3e403fc5-7005-474c-8c75-b7906b481677" Jan 27 15:31:38 crc kubenswrapper[4698]: I0127 15:31:38.349761 4698 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-jzfhw" podUID="93396d92-9abb-4aed-93a4-f47ab2a13e4c" containerName="registry-server" containerID="cri-o://98127f3311e7e4c3519975407a1f82be211172e7d923a502f3f77fc25a82f7a4" gracePeriod=2 Jan 27 15:31:38 crc kubenswrapper[4698]: I0127 15:31:38.823438 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-jzfhw" Jan 27 15:31:38 crc kubenswrapper[4698]: I0127 15:31:38.912037 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/93396d92-9abb-4aed-93a4-f47ab2a13e4c-catalog-content\") pod \"93396d92-9abb-4aed-93a4-f47ab2a13e4c\" (UID: \"93396d92-9abb-4aed-93a4-f47ab2a13e4c\") " Jan 27 15:31:38 crc kubenswrapper[4698]: I0127 15:31:38.912326 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j6tzj\" (UniqueName: \"kubernetes.io/projected/93396d92-9abb-4aed-93a4-f47ab2a13e4c-kube-api-access-j6tzj\") pod \"93396d92-9abb-4aed-93a4-f47ab2a13e4c\" (UID: \"93396d92-9abb-4aed-93a4-f47ab2a13e4c\") " Jan 27 15:31:38 crc kubenswrapper[4698]: I0127 15:31:38.912360 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/93396d92-9abb-4aed-93a4-f47ab2a13e4c-utilities\") pod \"93396d92-9abb-4aed-93a4-f47ab2a13e4c\" (UID: \"93396d92-9abb-4aed-93a4-f47ab2a13e4c\") " Jan 27 15:31:38 crc kubenswrapper[4698]: I0127 15:31:38.913456 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/93396d92-9abb-4aed-93a4-f47ab2a13e4c-utilities" (OuterVolumeSpecName: "utilities") pod "93396d92-9abb-4aed-93a4-f47ab2a13e4c" (UID: "93396d92-9abb-4aed-93a4-f47ab2a13e4c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 15:31:38 crc kubenswrapper[4698]: I0127 15:31:38.923075 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/93396d92-9abb-4aed-93a4-f47ab2a13e4c-kube-api-access-j6tzj" (OuterVolumeSpecName: "kube-api-access-j6tzj") pod "93396d92-9abb-4aed-93a4-f47ab2a13e4c" (UID: "93396d92-9abb-4aed-93a4-f47ab2a13e4c"). InnerVolumeSpecName "kube-api-access-j6tzj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 15:31:39 crc kubenswrapper[4698]: I0127 15:31:39.014795 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j6tzj\" (UniqueName: \"kubernetes.io/projected/93396d92-9abb-4aed-93a4-f47ab2a13e4c-kube-api-access-j6tzj\") on node \"crc\" DevicePath \"\"" Jan 27 15:31:39 crc kubenswrapper[4698]: I0127 15:31:39.014827 4698 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/93396d92-9abb-4aed-93a4-f47ab2a13e4c-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 15:31:39 crc kubenswrapper[4698]: I0127 15:31:39.034417 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/93396d92-9abb-4aed-93a4-f47ab2a13e4c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "93396d92-9abb-4aed-93a4-f47ab2a13e4c" (UID: "93396d92-9abb-4aed-93a4-f47ab2a13e4c"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 15:31:39 crc kubenswrapper[4698]: I0127 15:31:39.117002 4698 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/93396d92-9abb-4aed-93a4-f47ab2a13e4c-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 15:31:39 crc kubenswrapper[4698]: I0127 15:31:39.364498 4698 generic.go:334] "Generic (PLEG): container finished" podID="93396d92-9abb-4aed-93a4-f47ab2a13e4c" containerID="98127f3311e7e4c3519975407a1f82be211172e7d923a502f3f77fc25a82f7a4" exitCode=0 Jan 27 15:31:39 crc kubenswrapper[4698]: I0127 15:31:39.364555 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jzfhw" event={"ID":"93396d92-9abb-4aed-93a4-f47ab2a13e4c","Type":"ContainerDied","Data":"98127f3311e7e4c3519975407a1f82be211172e7d923a502f3f77fc25a82f7a4"} Jan 27 15:31:39 crc kubenswrapper[4698]: I0127 15:31:39.364571 4698 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-jzfhw" Jan 27 15:31:39 crc kubenswrapper[4698]: I0127 15:31:39.364602 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jzfhw" event={"ID":"93396d92-9abb-4aed-93a4-f47ab2a13e4c","Type":"ContainerDied","Data":"d0d2f8cdca21d63d23825a44740a145d92ab29b251d00a9c88ed2d033e8c5da9"} Jan 27 15:31:39 crc kubenswrapper[4698]: I0127 15:31:39.364680 4698 scope.go:117] "RemoveContainer" containerID="98127f3311e7e4c3519975407a1f82be211172e7d923a502f3f77fc25a82f7a4" Jan 27 15:31:39 crc kubenswrapper[4698]: I0127 15:31:39.384977 4698 scope.go:117] "RemoveContainer" containerID="6ceb95173cbe13a8c9a7000c515c2946b3eb154cc8cd44737d982bbf5b1edd8c" Jan 27 15:31:39 crc kubenswrapper[4698]: I0127 15:31:39.399502 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-jzfhw"] Jan 27 15:31:39 crc kubenswrapper[4698]: I0127 15:31:39.408678 4698 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-jzfhw"] Jan 27 15:31:39 crc kubenswrapper[4698]: I0127 15:31:39.432074 4698 scope.go:117] "RemoveContainer" containerID="4acac5886afab3dce1ef74f2704dd118804c966f39925dbfefc683c0830852e8" Jan 27 15:31:39 crc kubenswrapper[4698]: I0127 15:31:39.458874 4698 scope.go:117] "RemoveContainer" containerID="98127f3311e7e4c3519975407a1f82be211172e7d923a502f3f77fc25a82f7a4" Jan 27 15:31:39 crc kubenswrapper[4698]: E0127 15:31:39.459626 4698 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"98127f3311e7e4c3519975407a1f82be211172e7d923a502f3f77fc25a82f7a4\": container with ID starting with 98127f3311e7e4c3519975407a1f82be211172e7d923a502f3f77fc25a82f7a4 not found: ID does not exist" containerID="98127f3311e7e4c3519975407a1f82be211172e7d923a502f3f77fc25a82f7a4" Jan 27 15:31:39 crc kubenswrapper[4698]: I0127 15:31:39.459748 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"98127f3311e7e4c3519975407a1f82be211172e7d923a502f3f77fc25a82f7a4"} err="failed to get container status \"98127f3311e7e4c3519975407a1f82be211172e7d923a502f3f77fc25a82f7a4\": rpc error: code = NotFound desc = could not find container \"98127f3311e7e4c3519975407a1f82be211172e7d923a502f3f77fc25a82f7a4\": container with ID starting with 98127f3311e7e4c3519975407a1f82be211172e7d923a502f3f77fc25a82f7a4 not found: ID does not exist" Jan 27 15:31:39 crc kubenswrapper[4698]: I0127 15:31:39.459770 4698 scope.go:117] "RemoveContainer" containerID="6ceb95173cbe13a8c9a7000c515c2946b3eb154cc8cd44737d982bbf5b1edd8c" Jan 27 15:31:39 crc kubenswrapper[4698]: E0127 15:31:39.460254 4698 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6ceb95173cbe13a8c9a7000c515c2946b3eb154cc8cd44737d982bbf5b1edd8c\": container with ID starting with 6ceb95173cbe13a8c9a7000c515c2946b3eb154cc8cd44737d982bbf5b1edd8c not found: ID does not exist" containerID="6ceb95173cbe13a8c9a7000c515c2946b3eb154cc8cd44737d982bbf5b1edd8c" Jan 27 15:31:39 crc kubenswrapper[4698]: I0127 15:31:39.460275 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6ceb95173cbe13a8c9a7000c515c2946b3eb154cc8cd44737d982bbf5b1edd8c"} err="failed to get container status \"6ceb95173cbe13a8c9a7000c515c2946b3eb154cc8cd44737d982bbf5b1edd8c\": rpc error: code = NotFound desc = could not find container 
\"6ceb95173cbe13a8c9a7000c515c2946b3eb154cc8cd44737d982bbf5b1edd8c\": container with ID starting with 6ceb95173cbe13a8c9a7000c515c2946b3eb154cc8cd44737d982bbf5b1edd8c not found: ID does not exist" Jan 27 15:31:39 crc kubenswrapper[4698]: I0127 15:31:39.460287 4698 scope.go:117] "RemoveContainer" containerID="4acac5886afab3dce1ef74f2704dd118804c966f39925dbfefc683c0830852e8" Jan 27 15:31:39 crc kubenswrapper[4698]: E0127 15:31:39.460526 4698 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4acac5886afab3dce1ef74f2704dd118804c966f39925dbfefc683c0830852e8\": container with ID starting with 4acac5886afab3dce1ef74f2704dd118804c966f39925dbfefc683c0830852e8 not found: ID does not exist" containerID="4acac5886afab3dce1ef74f2704dd118804c966f39925dbfefc683c0830852e8" Jan 27 15:31:39 crc kubenswrapper[4698]: I0127 15:31:39.460548 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4acac5886afab3dce1ef74f2704dd118804c966f39925dbfefc683c0830852e8"} err="failed to get container status \"4acac5886afab3dce1ef74f2704dd118804c966f39925dbfefc683c0830852e8\": rpc error: code = NotFound desc = could not find container \"4acac5886afab3dce1ef74f2704dd118804c966f39925dbfefc683c0830852e8\": container with ID starting with 4acac5886afab3dce1ef74f2704dd118804c966f39925dbfefc683c0830852e8 not found: ID does not exist" Jan 27 15:31:41 crc kubenswrapper[4698]: I0127 15:31:41.003347 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="93396d92-9abb-4aed-93a4-f47ab2a13e4c" path="/var/lib/kubelet/pods/93396d92-9abb-4aed-93a4-f47ab2a13e4c/volumes" Jan 27 15:31:50 crc kubenswrapper[4698]: I0127 15:31:50.992258 4698 scope.go:117] "RemoveContainer" containerID="dae5e3c59114f911468df8553d052a2c9114f0c5defcd8a1e8e483efe8a4ea29" Jan 27 15:31:50 crc kubenswrapper[4698]: E0127 15:31:50.993404 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ndrd6_openshift-machine-config-operator(3e403fc5-7005-474c-8c75-b7906b481677)\"" pod="openshift-machine-config-operator/machine-config-daemon-ndrd6" podUID="3e403fc5-7005-474c-8c75-b7906b481677" Jan 27 15:32:01 crc kubenswrapper[4698]: I0127 15:32:01.991929 4698 scope.go:117] "RemoveContainer" containerID="dae5e3c59114f911468df8553d052a2c9114f0c5defcd8a1e8e483efe8a4ea29" Jan 27 15:32:01 crc kubenswrapper[4698]: E0127 15:32:01.992579 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ndrd6_openshift-machine-config-operator(3e403fc5-7005-474c-8c75-b7906b481677)\"" pod="openshift-machine-config-operator/machine-config-daemon-ndrd6" podUID="3e403fc5-7005-474c-8c75-b7906b481677" Jan 27 15:32:13 crc kubenswrapper[4698]: I0127 15:32:13.993739 4698 scope.go:117] "RemoveContainer" containerID="dae5e3c59114f911468df8553d052a2c9114f0c5defcd8a1e8e483efe8a4ea29" Jan 27 15:32:13 crc kubenswrapper[4698]: E0127 15:32:13.994999 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-ndrd6_openshift-machine-config-operator(3e403fc5-7005-474c-8c75-b7906b481677)\"" pod="openshift-machine-config-operator/machine-config-daemon-ndrd6" podUID="3e403fc5-7005-474c-8c75-b7906b481677" Jan 27 15:32:28 crc kubenswrapper[4698]: I0127 15:32:28.994932 4698 scope.go:117] "RemoveContainer" containerID="dae5e3c59114f911468df8553d052a2c9114f0c5defcd8a1e8e483efe8a4ea29" Jan 27 15:32:28 crc kubenswrapper[4698]: E0127 15:32:28.996793 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ndrd6_openshift-machine-config-operator(3e403fc5-7005-474c-8c75-b7906b481677)\"" pod="openshift-machine-config-operator/machine-config-daemon-ndrd6" podUID="3e403fc5-7005-474c-8c75-b7906b481677" Jan 27 15:32:43 crc kubenswrapper[4698]: I0127 15:32:43.992272 4698 scope.go:117] "RemoveContainer" containerID="dae5e3c59114f911468df8553d052a2c9114f0c5defcd8a1e8e483efe8a4ea29" Jan 27 15:32:43 crc kubenswrapper[4698]: E0127 15:32:43.993056 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ndrd6_openshift-machine-config-operator(3e403fc5-7005-474c-8c75-b7906b481677)\"" pod="openshift-machine-config-operator/machine-config-daemon-ndrd6" podUID="3e403fc5-7005-474c-8c75-b7906b481677" Jan 27 15:32:56 crc kubenswrapper[4698]: I0127 15:32:56.993011 4698 scope.go:117] "RemoveContainer" containerID="dae5e3c59114f911468df8553d052a2c9114f0c5defcd8a1e8e483efe8a4ea29" Jan 27 15:32:56 crc kubenswrapper[4698]: E0127 15:32:56.995306 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ndrd6_openshift-machine-config-operator(3e403fc5-7005-474c-8c75-b7906b481677)\"" pod="openshift-machine-config-operator/machine-config-daemon-ndrd6" podUID="3e403fc5-7005-474c-8c75-b7906b481677" Jan 27 15:33:07 crc kubenswrapper[4698]: I0127 15:33:07.992568 4698 scope.go:117] "RemoveContainer" containerID="dae5e3c59114f911468df8553d052a2c9114f0c5defcd8a1e8e483efe8a4ea29" Jan 27 15:33:07 crc kubenswrapper[4698]: E0127 15:33:07.993585 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ndrd6_openshift-machine-config-operator(3e403fc5-7005-474c-8c75-b7906b481677)\"" pod="openshift-machine-config-operator/machine-config-daemon-ndrd6" podUID="3e403fc5-7005-474c-8c75-b7906b481677" Jan 27 15:33:18 crc kubenswrapper[4698]: I0127 15:33:18.992919 4698 scope.go:117] "RemoveContainer" containerID="dae5e3c59114f911468df8553d052a2c9114f0c5defcd8a1e8e483efe8a4ea29" Jan 27 15:33:18 crc kubenswrapper[4698]: E0127 15:33:18.993782 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ndrd6_openshift-machine-config-operator(3e403fc5-7005-474c-8c75-b7906b481677)\"" pod="openshift-machine-config-operator/machine-config-daemon-ndrd6" 
podUID="3e403fc5-7005-474c-8c75-b7906b481677" Jan 27 15:33:33 crc kubenswrapper[4698]: I0127 15:33:33.992975 4698 scope.go:117] "RemoveContainer" containerID="dae5e3c59114f911468df8553d052a2c9114f0c5defcd8a1e8e483efe8a4ea29" Jan 27 15:33:33 crc kubenswrapper[4698]: E0127 15:33:33.994102 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ndrd6_openshift-machine-config-operator(3e403fc5-7005-474c-8c75-b7906b481677)\"" pod="openshift-machine-config-operator/machine-config-daemon-ndrd6" podUID="3e403fc5-7005-474c-8c75-b7906b481677" Jan 27 15:33:45 crc kubenswrapper[4698]: I0127 15:33:45.992140 4698 scope.go:117] "RemoveContainer" containerID="dae5e3c59114f911468df8553d052a2c9114f0c5defcd8a1e8e483efe8a4ea29" Jan 27 15:33:45 crc kubenswrapper[4698]: E0127 15:33:45.993036 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ndrd6_openshift-machine-config-operator(3e403fc5-7005-474c-8c75-b7906b481677)\"" pod="openshift-machine-config-operator/machine-config-daemon-ndrd6" podUID="3e403fc5-7005-474c-8c75-b7906b481677" Jan 27 15:34:00 crc kubenswrapper[4698]: I0127 15:34:00.993198 4698 scope.go:117] "RemoveContainer" containerID="dae5e3c59114f911468df8553d052a2c9114f0c5defcd8a1e8e483efe8a4ea29" Jan 27 15:34:00 crc kubenswrapper[4698]: E0127 15:34:00.993928 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ndrd6_openshift-machine-config-operator(3e403fc5-7005-474c-8c75-b7906b481677)\"" pod="openshift-machine-config-operator/machine-config-daemon-ndrd6" podUID="3e403fc5-7005-474c-8c75-b7906b481677" Jan 27 15:34:13 crc kubenswrapper[4698]: I0127 15:34:13.993133 4698 scope.go:117] "RemoveContainer" containerID="dae5e3c59114f911468df8553d052a2c9114f0c5defcd8a1e8e483efe8a4ea29" Jan 27 15:34:13 crc kubenswrapper[4698]: E0127 15:34:13.994013 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ndrd6_openshift-machine-config-operator(3e403fc5-7005-474c-8c75-b7906b481677)\"" pod="openshift-machine-config-operator/machine-config-daemon-ndrd6" podUID="3e403fc5-7005-474c-8c75-b7906b481677" Jan 27 15:34:26 crc kubenswrapper[4698]: I0127 15:34:26.992536 4698 scope.go:117] "RemoveContainer" containerID="dae5e3c59114f911468df8553d052a2c9114f0c5defcd8a1e8e483efe8a4ea29" Jan 27 15:34:26 crc kubenswrapper[4698]: E0127 15:34:26.993314 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ndrd6_openshift-machine-config-operator(3e403fc5-7005-474c-8c75-b7906b481677)\"" pod="openshift-machine-config-operator/machine-config-daemon-ndrd6" podUID="3e403fc5-7005-474c-8c75-b7906b481677" Jan 27 15:34:37 crc kubenswrapper[4698]: I0127 15:34:37.992973 4698 scope.go:117] "RemoveContainer" 
containerID="dae5e3c59114f911468df8553d052a2c9114f0c5defcd8a1e8e483efe8a4ea29" Jan 27 15:34:37 crc kubenswrapper[4698]: E0127 15:34:37.994006 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ndrd6_openshift-machine-config-operator(3e403fc5-7005-474c-8c75-b7906b481677)\"" pod="openshift-machine-config-operator/machine-config-daemon-ndrd6" podUID="3e403fc5-7005-474c-8c75-b7906b481677" Jan 27 15:34:52 crc kubenswrapper[4698]: I0127 15:34:52.993609 4698 scope.go:117] "RemoveContainer" containerID="dae5e3c59114f911468df8553d052a2c9114f0c5defcd8a1e8e483efe8a4ea29" Jan 27 15:34:52 crc kubenswrapper[4698]: E0127 15:34:52.994412 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ndrd6_openshift-machine-config-operator(3e403fc5-7005-474c-8c75-b7906b481677)\"" pod="openshift-machine-config-operator/machine-config-daemon-ndrd6" podUID="3e403fc5-7005-474c-8c75-b7906b481677" Jan 27 15:35:05 crc kubenswrapper[4698]: I0127 15:35:05.005041 4698 scope.go:117] "RemoveContainer" containerID="dae5e3c59114f911468df8553d052a2c9114f0c5defcd8a1e8e483efe8a4ea29" Jan 27 15:35:05 crc kubenswrapper[4698]: E0127 15:35:05.005732 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ndrd6_openshift-machine-config-operator(3e403fc5-7005-474c-8c75-b7906b481677)\"" pod="openshift-machine-config-operator/machine-config-daemon-ndrd6" podUID="3e403fc5-7005-474c-8c75-b7906b481677" Jan 27 15:35:19 crc kubenswrapper[4698]: I0127 15:35:19.992208 4698 scope.go:117] "RemoveContainer" containerID="dae5e3c59114f911468df8553d052a2c9114f0c5defcd8a1e8e483efe8a4ea29" Jan 27 15:35:19 crc kubenswrapper[4698]: E0127 15:35:19.993225 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ndrd6_openshift-machine-config-operator(3e403fc5-7005-474c-8c75-b7906b481677)\"" pod="openshift-machine-config-operator/machine-config-daemon-ndrd6" podUID="3e403fc5-7005-474c-8c75-b7906b481677" Jan 27 15:35:30 crc kubenswrapper[4698]: I0127 15:35:30.992595 4698 scope.go:117] "RemoveContainer" containerID="dae5e3c59114f911468df8553d052a2c9114f0c5defcd8a1e8e483efe8a4ea29" Jan 27 15:35:30 crc kubenswrapper[4698]: E0127 15:35:30.993866 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ndrd6_openshift-machine-config-operator(3e403fc5-7005-474c-8c75-b7906b481677)\"" pod="openshift-machine-config-operator/machine-config-daemon-ndrd6" podUID="3e403fc5-7005-474c-8c75-b7906b481677" Jan 27 15:35:45 crc kubenswrapper[4698]: I0127 15:35:45.992794 4698 scope.go:117] "RemoveContainer" containerID="dae5e3c59114f911468df8553d052a2c9114f0c5defcd8a1e8e483efe8a4ea29" Jan 27 15:35:45 crc kubenswrapper[4698]: E0127 15:35:45.993792 4698 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ndrd6_openshift-machine-config-operator(3e403fc5-7005-474c-8c75-b7906b481677)\"" pod="openshift-machine-config-operator/machine-config-daemon-ndrd6" podUID="3e403fc5-7005-474c-8c75-b7906b481677" Jan 27 15:35:57 crc kubenswrapper[4698]: I0127 15:35:57.992952 4698 scope.go:117] "RemoveContainer" containerID="dae5e3c59114f911468df8553d052a2c9114f0c5defcd8a1e8e483efe8a4ea29" Jan 27 15:35:58 crc kubenswrapper[4698]: I0127 15:35:58.715099 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ndrd6" event={"ID":"3e403fc5-7005-474c-8c75-b7906b481677","Type":"ContainerStarted","Data":"c3c99fc65dd6707b96677a0c5e922864d9b7f447d7ff5802b3944061d58f6b87"} Jan 27 15:35:58 crc kubenswrapper[4698]: I0127 15:35:58.867936 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-vqq8j"] Jan 27 15:35:58 crc kubenswrapper[4698]: E0127 15:35:58.869504 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="93396d92-9abb-4aed-93a4-f47ab2a13e4c" containerName="registry-server" Jan 27 15:35:58 crc kubenswrapper[4698]: I0127 15:35:58.869595 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="93396d92-9abb-4aed-93a4-f47ab2a13e4c" containerName="registry-server" Jan 27 15:35:58 crc kubenswrapper[4698]: E0127 15:35:58.869972 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="93396d92-9abb-4aed-93a4-f47ab2a13e4c" containerName="extract-content" Jan 27 15:35:58 crc kubenswrapper[4698]: I0127 15:35:58.870046 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="93396d92-9abb-4aed-93a4-f47ab2a13e4c" containerName="extract-content" Jan 27 15:35:58 crc kubenswrapper[4698]: E0127 15:35:58.870120 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="93396d92-9abb-4aed-93a4-f47ab2a13e4c" containerName="extract-utilities" Jan 27 15:35:58 crc kubenswrapper[4698]: I0127 15:35:58.870178 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="93396d92-9abb-4aed-93a4-f47ab2a13e4c" containerName="extract-utilities" Jan 27 15:35:58 crc kubenswrapper[4698]: I0127 15:35:58.870430 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="93396d92-9abb-4aed-93a4-f47ab2a13e4c" containerName="registry-server" Jan 27 15:35:58 crc kubenswrapper[4698]: I0127 15:35:58.872471 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-vqq8j" Jan 27 15:35:58 crc kubenswrapper[4698]: I0127 15:35:58.902012 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-vqq8j"] Jan 27 15:35:58 crc kubenswrapper[4698]: I0127 15:35:58.979683 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/06c013d3-5bb5-4b9e-8590-6d79b2294d13-utilities\") pod \"certified-operators-vqq8j\" (UID: \"06c013d3-5bb5-4b9e-8590-6d79b2294d13\") " pod="openshift-marketplace/certified-operators-vqq8j" Jan 27 15:35:58 crc kubenswrapper[4698]: I0127 15:35:58.979987 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/06c013d3-5bb5-4b9e-8590-6d79b2294d13-catalog-content\") pod \"certified-operators-vqq8j\" (UID: \"06c013d3-5bb5-4b9e-8590-6d79b2294d13\") " pod="openshift-marketplace/certified-operators-vqq8j" Jan 27 15:35:58 crc kubenswrapper[4698]: I0127 15:35:58.980134 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6bhqg\" (UniqueName: \"kubernetes.io/projected/06c013d3-5bb5-4b9e-8590-6d79b2294d13-kube-api-access-6bhqg\") pod \"certified-operators-vqq8j\" (UID: \"06c013d3-5bb5-4b9e-8590-6d79b2294d13\") " pod="openshift-marketplace/certified-operators-vqq8j" Jan 27 15:35:59 crc kubenswrapper[4698]: I0127 15:35:59.082538 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/06c013d3-5bb5-4b9e-8590-6d79b2294d13-catalog-content\") pod \"certified-operators-vqq8j\" (UID: \"06c013d3-5bb5-4b9e-8590-6d79b2294d13\") " pod="openshift-marketplace/certified-operators-vqq8j" Jan 27 15:35:59 crc kubenswrapper[4698]: I0127 15:35:59.082630 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6bhqg\" (UniqueName: \"kubernetes.io/projected/06c013d3-5bb5-4b9e-8590-6d79b2294d13-kube-api-access-6bhqg\") pod \"certified-operators-vqq8j\" (UID: \"06c013d3-5bb5-4b9e-8590-6d79b2294d13\") " pod="openshift-marketplace/certified-operators-vqq8j" Jan 27 15:35:59 crc kubenswrapper[4698]: I0127 15:35:59.082755 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/06c013d3-5bb5-4b9e-8590-6d79b2294d13-utilities\") pod \"certified-operators-vqq8j\" (UID: \"06c013d3-5bb5-4b9e-8590-6d79b2294d13\") " pod="openshift-marketplace/certified-operators-vqq8j" Jan 27 15:35:59 crc kubenswrapper[4698]: I0127 15:35:59.083260 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/06c013d3-5bb5-4b9e-8590-6d79b2294d13-utilities\") pod \"certified-operators-vqq8j\" (UID: \"06c013d3-5bb5-4b9e-8590-6d79b2294d13\") " pod="openshift-marketplace/certified-operators-vqq8j" Jan 27 15:35:59 crc kubenswrapper[4698]: I0127 15:35:59.084549 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/06c013d3-5bb5-4b9e-8590-6d79b2294d13-catalog-content\") pod \"certified-operators-vqq8j\" (UID: \"06c013d3-5bb5-4b9e-8590-6d79b2294d13\") " pod="openshift-marketplace/certified-operators-vqq8j" Jan 27 15:35:59 crc kubenswrapper[4698]: I0127 15:35:59.116609 4698 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-6bhqg\" (UniqueName: \"kubernetes.io/projected/06c013d3-5bb5-4b9e-8590-6d79b2294d13-kube-api-access-6bhqg\") pod \"certified-operators-vqq8j\" (UID: \"06c013d3-5bb5-4b9e-8590-6d79b2294d13\") " pod="openshift-marketplace/certified-operators-vqq8j" Jan 27 15:35:59 crc kubenswrapper[4698]: I0127 15:35:59.201088 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-vqq8j" Jan 27 15:35:59 crc kubenswrapper[4698]: I0127 15:35:59.897442 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-vqq8j"] Jan 27 15:36:00 crc kubenswrapper[4698]: I0127 15:36:00.735830 4698 generic.go:334] "Generic (PLEG): container finished" podID="06c013d3-5bb5-4b9e-8590-6d79b2294d13" containerID="a8c4f8683485fa9b15713a91a3c8724726555a01102cecc3879f69f59a85a714" exitCode=0 Jan 27 15:36:00 crc kubenswrapper[4698]: I0127 15:36:00.735933 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vqq8j" event={"ID":"06c013d3-5bb5-4b9e-8590-6d79b2294d13","Type":"ContainerDied","Data":"a8c4f8683485fa9b15713a91a3c8724726555a01102cecc3879f69f59a85a714"} Jan 27 15:36:00 crc kubenswrapper[4698]: I0127 15:36:00.742803 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vqq8j" event={"ID":"06c013d3-5bb5-4b9e-8590-6d79b2294d13","Type":"ContainerStarted","Data":"7e6c44d082066195005191c1635d889a7b76ca29742098c42203ff4b92ed2190"} Jan 27 15:36:00 crc kubenswrapper[4698]: I0127 15:36:00.738442 4698 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 27 15:36:02 crc kubenswrapper[4698]: I0127 15:36:02.766755 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vqq8j" event={"ID":"06c013d3-5bb5-4b9e-8590-6d79b2294d13","Type":"ContainerStarted","Data":"b5437c363e3157b1d3a586a9ef146f108716930472bf608e5ca3976c50f51d5c"} Jan 27 15:36:05 crc kubenswrapper[4698]: I0127 15:36:05.795353 4698 generic.go:334] "Generic (PLEG): container finished" podID="06c013d3-5bb5-4b9e-8590-6d79b2294d13" containerID="b5437c363e3157b1d3a586a9ef146f108716930472bf608e5ca3976c50f51d5c" exitCode=0 Jan 27 15:36:05 crc kubenswrapper[4698]: I0127 15:36:05.795546 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vqq8j" event={"ID":"06c013d3-5bb5-4b9e-8590-6d79b2294d13","Type":"ContainerDied","Data":"b5437c363e3157b1d3a586a9ef146f108716930472bf608e5ca3976c50f51d5c"} Jan 27 15:36:07 crc kubenswrapper[4698]: I0127 15:36:07.815041 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vqq8j" event={"ID":"06c013d3-5bb5-4b9e-8590-6d79b2294d13","Type":"ContainerStarted","Data":"f749d09e42889772acc5bcf919ebbf5f53ba2de9d3a1a281884a118189a667d5"} Jan 27 15:36:07 crc kubenswrapper[4698]: I0127 15:36:07.839356 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-vqq8j" podStartSLOduration=3.171739837 podStartE2EDuration="9.839332453s" podCreationTimestamp="2026-01-27 15:35:58 +0000 UTC" firstStartedPulling="2026-01-27 15:36:00.738238942 +0000 UTC m=+4016.415016407" lastFinishedPulling="2026-01-27 15:36:07.405831558 +0000 UTC m=+4023.082609023" observedRunningTime="2026-01-27 15:36:07.834899526 +0000 UTC m=+4023.511676981" watchObservedRunningTime="2026-01-27 
15:36:07.839332453 +0000 UTC m=+4023.516109928" Jan 27 15:36:09 crc kubenswrapper[4698]: I0127 15:36:09.201306 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-vqq8j" Jan 27 15:36:09 crc kubenswrapper[4698]: I0127 15:36:09.201593 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-vqq8j" Jan 27 15:36:09 crc kubenswrapper[4698]: I0127 15:36:09.248074 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-vqq8j" Jan 27 15:36:19 crc kubenswrapper[4698]: I0127 15:36:19.248841 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-vqq8j" Jan 27 15:36:19 crc kubenswrapper[4698]: I0127 15:36:19.299840 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-vqq8j"] Jan 27 15:36:19 crc kubenswrapper[4698]: I0127 15:36:19.914046 4698 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-vqq8j" podUID="06c013d3-5bb5-4b9e-8590-6d79b2294d13" containerName="registry-server" containerID="cri-o://f749d09e42889772acc5bcf919ebbf5f53ba2de9d3a1a281884a118189a667d5" gracePeriod=2 Jan 27 15:36:20 crc kubenswrapper[4698]: I0127 15:36:20.522303 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-vqq8j" Jan 27 15:36:20 crc kubenswrapper[4698]: I0127 15:36:20.699217 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/06c013d3-5bb5-4b9e-8590-6d79b2294d13-utilities\") pod \"06c013d3-5bb5-4b9e-8590-6d79b2294d13\" (UID: \"06c013d3-5bb5-4b9e-8590-6d79b2294d13\") " Jan 27 15:36:20 crc kubenswrapper[4698]: I0127 15:36:20.699281 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/06c013d3-5bb5-4b9e-8590-6d79b2294d13-catalog-content\") pod \"06c013d3-5bb5-4b9e-8590-6d79b2294d13\" (UID: \"06c013d3-5bb5-4b9e-8590-6d79b2294d13\") " Jan 27 15:36:20 crc kubenswrapper[4698]: I0127 15:36:20.699573 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6bhqg\" (UniqueName: \"kubernetes.io/projected/06c013d3-5bb5-4b9e-8590-6d79b2294d13-kube-api-access-6bhqg\") pod \"06c013d3-5bb5-4b9e-8590-6d79b2294d13\" (UID: \"06c013d3-5bb5-4b9e-8590-6d79b2294d13\") " Jan 27 15:36:20 crc kubenswrapper[4698]: I0127 15:36:20.700433 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/06c013d3-5bb5-4b9e-8590-6d79b2294d13-utilities" (OuterVolumeSpecName: "utilities") pod "06c013d3-5bb5-4b9e-8590-6d79b2294d13" (UID: "06c013d3-5bb5-4b9e-8590-6d79b2294d13"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 15:36:20 crc kubenswrapper[4698]: I0127 15:36:20.705732 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/06c013d3-5bb5-4b9e-8590-6d79b2294d13-kube-api-access-6bhqg" (OuterVolumeSpecName: "kube-api-access-6bhqg") pod "06c013d3-5bb5-4b9e-8590-6d79b2294d13" (UID: "06c013d3-5bb5-4b9e-8590-6d79b2294d13"). InnerVolumeSpecName "kube-api-access-6bhqg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 15:36:20 crc kubenswrapper[4698]: I0127 15:36:20.754850 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/06c013d3-5bb5-4b9e-8590-6d79b2294d13-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "06c013d3-5bb5-4b9e-8590-6d79b2294d13" (UID: "06c013d3-5bb5-4b9e-8590-6d79b2294d13"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 15:36:20 crc kubenswrapper[4698]: I0127 15:36:20.802367 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6bhqg\" (UniqueName: \"kubernetes.io/projected/06c013d3-5bb5-4b9e-8590-6d79b2294d13-kube-api-access-6bhqg\") on node \"crc\" DevicePath \"\"" Jan 27 15:36:20 crc kubenswrapper[4698]: I0127 15:36:20.802433 4698 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/06c013d3-5bb5-4b9e-8590-6d79b2294d13-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 15:36:20 crc kubenswrapper[4698]: I0127 15:36:20.802446 4698 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/06c013d3-5bb5-4b9e-8590-6d79b2294d13-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 15:36:20 crc kubenswrapper[4698]: I0127 15:36:20.924830 4698 generic.go:334] "Generic (PLEG): container finished" podID="06c013d3-5bb5-4b9e-8590-6d79b2294d13" containerID="f749d09e42889772acc5bcf919ebbf5f53ba2de9d3a1a281884a118189a667d5" exitCode=0 Jan 27 15:36:20 crc kubenswrapper[4698]: I0127 15:36:20.924873 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vqq8j" event={"ID":"06c013d3-5bb5-4b9e-8590-6d79b2294d13","Type":"ContainerDied","Data":"f749d09e42889772acc5bcf919ebbf5f53ba2de9d3a1a281884a118189a667d5"} Jan 27 15:36:20 crc kubenswrapper[4698]: I0127 15:36:20.924903 4698 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-vqq8j" Jan 27 15:36:20 crc kubenswrapper[4698]: I0127 15:36:20.924926 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vqq8j" event={"ID":"06c013d3-5bb5-4b9e-8590-6d79b2294d13","Type":"ContainerDied","Data":"7e6c44d082066195005191c1635d889a7b76ca29742098c42203ff4b92ed2190"} Jan 27 15:36:20 crc kubenswrapper[4698]: I0127 15:36:20.924946 4698 scope.go:117] "RemoveContainer" containerID="f749d09e42889772acc5bcf919ebbf5f53ba2de9d3a1a281884a118189a667d5" Jan 27 15:36:20 crc kubenswrapper[4698]: I0127 15:36:20.948790 4698 scope.go:117] "RemoveContainer" containerID="b5437c363e3157b1d3a586a9ef146f108716930472bf608e5ca3976c50f51d5c" Jan 27 15:36:20 crc kubenswrapper[4698]: I0127 15:36:20.963778 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-vqq8j"] Jan 27 15:36:20 crc kubenswrapper[4698]: I0127 15:36:20.976791 4698 scope.go:117] "RemoveContainer" containerID="a8c4f8683485fa9b15713a91a3c8724726555a01102cecc3879f69f59a85a714" Jan 27 15:36:20 crc kubenswrapper[4698]: I0127 15:36:20.982203 4698 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-vqq8j"] Jan 27 15:36:21 crc kubenswrapper[4698]: I0127 15:36:21.006610 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="06c013d3-5bb5-4b9e-8590-6d79b2294d13" path="/var/lib/kubelet/pods/06c013d3-5bb5-4b9e-8590-6d79b2294d13/volumes" Jan 27 15:36:21 crc kubenswrapper[4698]: I0127 15:36:21.021725 4698 scope.go:117] "RemoveContainer" containerID="f749d09e42889772acc5bcf919ebbf5f53ba2de9d3a1a281884a118189a667d5" Jan 27 15:36:21 crc kubenswrapper[4698]: E0127 15:36:21.022275 4698 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f749d09e42889772acc5bcf919ebbf5f53ba2de9d3a1a281884a118189a667d5\": container with ID starting with f749d09e42889772acc5bcf919ebbf5f53ba2de9d3a1a281884a118189a667d5 not found: ID does not exist" containerID="f749d09e42889772acc5bcf919ebbf5f53ba2de9d3a1a281884a118189a667d5" Jan 27 15:36:21 crc kubenswrapper[4698]: I0127 15:36:21.022315 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f749d09e42889772acc5bcf919ebbf5f53ba2de9d3a1a281884a118189a667d5"} err="failed to get container status \"f749d09e42889772acc5bcf919ebbf5f53ba2de9d3a1a281884a118189a667d5\": rpc error: code = NotFound desc = could not find container \"f749d09e42889772acc5bcf919ebbf5f53ba2de9d3a1a281884a118189a667d5\": container with ID starting with f749d09e42889772acc5bcf919ebbf5f53ba2de9d3a1a281884a118189a667d5 not found: ID does not exist" Jan 27 15:36:21 crc kubenswrapper[4698]: I0127 15:36:21.022342 4698 scope.go:117] "RemoveContainer" containerID="b5437c363e3157b1d3a586a9ef146f108716930472bf608e5ca3976c50f51d5c" Jan 27 15:36:21 crc kubenswrapper[4698]: E0127 15:36:21.022690 4698 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b5437c363e3157b1d3a586a9ef146f108716930472bf608e5ca3976c50f51d5c\": container with ID starting with b5437c363e3157b1d3a586a9ef146f108716930472bf608e5ca3976c50f51d5c not found: ID does not exist" containerID="b5437c363e3157b1d3a586a9ef146f108716930472bf608e5ca3976c50f51d5c" Jan 27 15:36:21 crc kubenswrapper[4698]: I0127 15:36:21.022719 4698 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"b5437c363e3157b1d3a586a9ef146f108716930472bf608e5ca3976c50f51d5c"} err="failed to get container status \"b5437c363e3157b1d3a586a9ef146f108716930472bf608e5ca3976c50f51d5c\": rpc error: code = NotFound desc = could not find container \"b5437c363e3157b1d3a586a9ef146f108716930472bf608e5ca3976c50f51d5c\": container with ID starting with b5437c363e3157b1d3a586a9ef146f108716930472bf608e5ca3976c50f51d5c not found: ID does not exist" Jan 27 15:36:21 crc kubenswrapper[4698]: I0127 15:36:21.022737 4698 scope.go:117] "RemoveContainer" containerID="a8c4f8683485fa9b15713a91a3c8724726555a01102cecc3879f69f59a85a714" Jan 27 15:36:21 crc kubenswrapper[4698]: E0127 15:36:21.023149 4698 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a8c4f8683485fa9b15713a91a3c8724726555a01102cecc3879f69f59a85a714\": container with ID starting with a8c4f8683485fa9b15713a91a3c8724726555a01102cecc3879f69f59a85a714 not found: ID does not exist" containerID="a8c4f8683485fa9b15713a91a3c8724726555a01102cecc3879f69f59a85a714" Jan 27 15:36:21 crc kubenswrapper[4698]: I0127 15:36:21.023174 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a8c4f8683485fa9b15713a91a3c8724726555a01102cecc3879f69f59a85a714"} err="failed to get container status \"a8c4f8683485fa9b15713a91a3c8724726555a01102cecc3879f69f59a85a714\": rpc error: code = NotFound desc = could not find container \"a8c4f8683485fa9b15713a91a3c8724726555a01102cecc3879f69f59a85a714\": container with ID starting with a8c4f8683485fa9b15713a91a3c8724726555a01102cecc3879f69f59a85a714 not found: ID does not exist" Jan 27 15:36:52 crc kubenswrapper[4698]: I0127 15:36:52.864512 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-nxxw7"] Jan 27 15:36:52 crc kubenswrapper[4698]: E0127 15:36:52.865617 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="06c013d3-5bb5-4b9e-8590-6d79b2294d13" containerName="extract-content" Jan 27 15:36:52 crc kubenswrapper[4698]: I0127 15:36:52.865671 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="06c013d3-5bb5-4b9e-8590-6d79b2294d13" containerName="extract-content" Jan 27 15:36:52 crc kubenswrapper[4698]: E0127 15:36:52.865736 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="06c013d3-5bb5-4b9e-8590-6d79b2294d13" containerName="registry-server" Jan 27 15:36:52 crc kubenswrapper[4698]: I0127 15:36:52.865747 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="06c013d3-5bb5-4b9e-8590-6d79b2294d13" containerName="registry-server" Jan 27 15:36:52 crc kubenswrapper[4698]: E0127 15:36:52.865779 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="06c013d3-5bb5-4b9e-8590-6d79b2294d13" containerName="extract-utilities" Jan 27 15:36:52 crc kubenswrapper[4698]: I0127 15:36:52.865788 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="06c013d3-5bb5-4b9e-8590-6d79b2294d13" containerName="extract-utilities" Jan 27 15:36:52 crc kubenswrapper[4698]: I0127 15:36:52.866051 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="06c013d3-5bb5-4b9e-8590-6d79b2294d13" containerName="registry-server" Jan 27 15:36:52 crc kubenswrapper[4698]: I0127 15:36:52.867882 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-nxxw7" Jan 27 15:36:52 crc kubenswrapper[4698]: I0127 15:36:52.874991 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-nxxw7"] Jan 27 15:36:52 crc kubenswrapper[4698]: I0127 15:36:52.970801 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/daef547c-9a3c-477a-9756-a1f0b30941e0-utilities\") pod \"community-operators-nxxw7\" (UID: \"daef547c-9a3c-477a-9756-a1f0b30941e0\") " pod="openshift-marketplace/community-operators-nxxw7" Jan 27 15:36:52 crc kubenswrapper[4698]: I0127 15:36:52.970952 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8fxbr\" (UniqueName: \"kubernetes.io/projected/daef547c-9a3c-477a-9756-a1f0b30941e0-kube-api-access-8fxbr\") pod \"community-operators-nxxw7\" (UID: \"daef547c-9a3c-477a-9756-a1f0b30941e0\") " pod="openshift-marketplace/community-operators-nxxw7" Jan 27 15:36:52 crc kubenswrapper[4698]: I0127 15:36:52.971007 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/daef547c-9a3c-477a-9756-a1f0b30941e0-catalog-content\") pod \"community-operators-nxxw7\" (UID: \"daef547c-9a3c-477a-9756-a1f0b30941e0\") " pod="openshift-marketplace/community-operators-nxxw7" Jan 27 15:36:53 crc kubenswrapper[4698]: I0127 15:36:53.072895 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8fxbr\" (UniqueName: \"kubernetes.io/projected/daef547c-9a3c-477a-9756-a1f0b30941e0-kube-api-access-8fxbr\") pod \"community-operators-nxxw7\" (UID: \"daef547c-9a3c-477a-9756-a1f0b30941e0\") " pod="openshift-marketplace/community-operators-nxxw7" Jan 27 15:36:53 crc kubenswrapper[4698]: I0127 15:36:53.072956 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/daef547c-9a3c-477a-9756-a1f0b30941e0-catalog-content\") pod \"community-operators-nxxw7\" (UID: \"daef547c-9a3c-477a-9756-a1f0b30941e0\") " pod="openshift-marketplace/community-operators-nxxw7" Jan 27 15:36:53 crc kubenswrapper[4698]: I0127 15:36:53.073074 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/daef547c-9a3c-477a-9756-a1f0b30941e0-utilities\") pod \"community-operators-nxxw7\" (UID: \"daef547c-9a3c-477a-9756-a1f0b30941e0\") " pod="openshift-marketplace/community-operators-nxxw7" Jan 27 15:36:53 crc kubenswrapper[4698]: I0127 15:36:53.073533 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/daef547c-9a3c-477a-9756-a1f0b30941e0-utilities\") pod \"community-operators-nxxw7\" (UID: \"daef547c-9a3c-477a-9756-a1f0b30941e0\") " pod="openshift-marketplace/community-operators-nxxw7" Jan 27 15:36:53 crc kubenswrapper[4698]: I0127 15:36:53.073805 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/daef547c-9a3c-477a-9756-a1f0b30941e0-catalog-content\") pod \"community-operators-nxxw7\" (UID: \"daef547c-9a3c-477a-9756-a1f0b30941e0\") " pod="openshift-marketplace/community-operators-nxxw7" Jan 27 15:36:53 crc kubenswrapper[4698]: I0127 15:36:53.095223 4698 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-8fxbr\" (UniqueName: \"kubernetes.io/projected/daef547c-9a3c-477a-9756-a1f0b30941e0-kube-api-access-8fxbr\") pod \"community-operators-nxxw7\" (UID: \"daef547c-9a3c-477a-9756-a1f0b30941e0\") " pod="openshift-marketplace/community-operators-nxxw7" Jan 27 15:36:53 crc kubenswrapper[4698]: I0127 15:36:53.203426 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-nxxw7" Jan 27 15:36:53 crc kubenswrapper[4698]: I0127 15:36:53.775605 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-nxxw7"] Jan 27 15:36:54 crc kubenswrapper[4698]: I0127 15:36:54.214967 4698 generic.go:334] "Generic (PLEG): container finished" podID="daef547c-9a3c-477a-9756-a1f0b30941e0" containerID="1d0d89d17e23309a8e9e41dfd097c530c62947baaddaaa0fb3b85375a506bc4f" exitCode=0 Jan 27 15:36:54 crc kubenswrapper[4698]: I0127 15:36:54.215098 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nxxw7" event={"ID":"daef547c-9a3c-477a-9756-a1f0b30941e0","Type":"ContainerDied","Data":"1d0d89d17e23309a8e9e41dfd097c530c62947baaddaaa0fb3b85375a506bc4f"} Jan 27 15:36:54 crc kubenswrapper[4698]: I0127 15:36:54.215433 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nxxw7" event={"ID":"daef547c-9a3c-477a-9756-a1f0b30941e0","Type":"ContainerStarted","Data":"ecfe981520dc02f0e0569ccc399b8f59c8d35009f3e4daef078656470f686bb9"} Jan 27 15:36:56 crc kubenswrapper[4698]: I0127 15:36:56.239510 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nxxw7" event={"ID":"daef547c-9a3c-477a-9756-a1f0b30941e0","Type":"ContainerStarted","Data":"f6c2d808b087f47e1687bf4721f53b8177cedc09d4123fc08c286a4bd04088ac"} Jan 27 15:36:58 crc kubenswrapper[4698]: I0127 15:36:58.258020 4698 generic.go:334] "Generic (PLEG): container finished" podID="daef547c-9a3c-477a-9756-a1f0b30941e0" containerID="f6c2d808b087f47e1687bf4721f53b8177cedc09d4123fc08c286a4bd04088ac" exitCode=0 Jan 27 15:36:58 crc kubenswrapper[4698]: I0127 15:36:58.258202 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nxxw7" event={"ID":"daef547c-9a3c-477a-9756-a1f0b30941e0","Type":"ContainerDied","Data":"f6c2d808b087f47e1687bf4721f53b8177cedc09d4123fc08c286a4bd04088ac"} Jan 27 15:36:59 crc kubenswrapper[4698]: I0127 15:36:59.268975 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nxxw7" event={"ID":"daef547c-9a3c-477a-9756-a1f0b30941e0","Type":"ContainerStarted","Data":"e61e1a5350c38f2a9b49dd108a04de6be6820684b42227240823bd0534130745"} Jan 27 15:36:59 crc kubenswrapper[4698]: I0127 15:36:59.300830 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-nxxw7" podStartSLOduration=2.678767753 podStartE2EDuration="7.30080481s" podCreationTimestamp="2026-01-27 15:36:52 +0000 UTC" firstStartedPulling="2026-01-27 15:36:54.217974539 +0000 UTC m=+4069.894752014" lastFinishedPulling="2026-01-27 15:36:58.840011606 +0000 UTC m=+4074.516789071" observedRunningTime="2026-01-27 15:36:59.296299732 +0000 UTC m=+4074.973077197" watchObservedRunningTime="2026-01-27 15:36:59.30080481 +0000 UTC m=+4074.977582295" Jan 27 15:37:03 crc kubenswrapper[4698]: I0127 15:37:03.205040 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="unhealthy" pod="openshift-marketplace/community-operators-nxxw7" Jan 27 15:37:03 crc kubenswrapper[4698]: I0127 15:37:03.205610 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-nxxw7" Jan 27 15:37:03 crc kubenswrapper[4698]: I0127 15:37:03.258449 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-nxxw7" Jan 27 15:37:13 crc kubenswrapper[4698]: I0127 15:37:13.855752 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-nxxw7" Jan 27 15:37:13 crc kubenswrapper[4698]: I0127 15:37:13.902462 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-nxxw7"] Jan 27 15:37:14 crc kubenswrapper[4698]: I0127 15:37:14.394899 4698 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-nxxw7" podUID="daef547c-9a3c-477a-9756-a1f0b30941e0" containerName="registry-server" containerID="cri-o://e61e1a5350c38f2a9b49dd108a04de6be6820684b42227240823bd0534130745" gracePeriod=2 Jan 27 15:37:15 crc kubenswrapper[4698]: I0127 15:37:15.418575 4698 generic.go:334] "Generic (PLEG): container finished" podID="daef547c-9a3c-477a-9756-a1f0b30941e0" containerID="e61e1a5350c38f2a9b49dd108a04de6be6820684b42227240823bd0534130745" exitCode=0 Jan 27 15:37:15 crc kubenswrapper[4698]: I0127 15:37:15.418783 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nxxw7" event={"ID":"daef547c-9a3c-477a-9756-a1f0b30941e0","Type":"ContainerDied","Data":"e61e1a5350c38f2a9b49dd108a04de6be6820684b42227240823bd0534130745"} Jan 27 15:37:15 crc kubenswrapper[4698]: I0127 15:37:15.418948 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nxxw7" event={"ID":"daef547c-9a3c-477a-9756-a1f0b30941e0","Type":"ContainerDied","Data":"ecfe981520dc02f0e0569ccc399b8f59c8d35009f3e4daef078656470f686bb9"} Jan 27 15:37:15 crc kubenswrapper[4698]: I0127 15:37:15.418969 4698 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ecfe981520dc02f0e0569ccc399b8f59c8d35009f3e4daef078656470f686bb9" Jan 27 15:37:15 crc kubenswrapper[4698]: I0127 15:37:15.431729 4698 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-nxxw7" Jan 27 15:37:15 crc kubenswrapper[4698]: I0127 15:37:15.554214 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/daef547c-9a3c-477a-9756-a1f0b30941e0-utilities\") pod \"daef547c-9a3c-477a-9756-a1f0b30941e0\" (UID: \"daef547c-9a3c-477a-9756-a1f0b30941e0\") " Jan 27 15:37:15 crc kubenswrapper[4698]: I0127 15:37:15.554336 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8fxbr\" (UniqueName: \"kubernetes.io/projected/daef547c-9a3c-477a-9756-a1f0b30941e0-kube-api-access-8fxbr\") pod \"daef547c-9a3c-477a-9756-a1f0b30941e0\" (UID: \"daef547c-9a3c-477a-9756-a1f0b30941e0\") " Jan 27 15:37:15 crc kubenswrapper[4698]: I0127 15:37:15.554498 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/daef547c-9a3c-477a-9756-a1f0b30941e0-catalog-content\") pod \"daef547c-9a3c-477a-9756-a1f0b30941e0\" (UID: \"daef547c-9a3c-477a-9756-a1f0b30941e0\") " Jan 27 15:37:15 crc kubenswrapper[4698]: I0127 15:37:15.555231 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/daef547c-9a3c-477a-9756-a1f0b30941e0-utilities" (OuterVolumeSpecName: "utilities") pod "daef547c-9a3c-477a-9756-a1f0b30941e0" (UID: "daef547c-9a3c-477a-9756-a1f0b30941e0"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 15:37:15 crc kubenswrapper[4698]: I0127 15:37:15.562402 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/daef547c-9a3c-477a-9756-a1f0b30941e0-kube-api-access-8fxbr" (OuterVolumeSpecName: "kube-api-access-8fxbr") pod "daef547c-9a3c-477a-9756-a1f0b30941e0" (UID: "daef547c-9a3c-477a-9756-a1f0b30941e0"). InnerVolumeSpecName "kube-api-access-8fxbr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 15:37:15 crc kubenswrapper[4698]: I0127 15:37:15.564799 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8fxbr\" (UniqueName: \"kubernetes.io/projected/daef547c-9a3c-477a-9756-a1f0b30941e0-kube-api-access-8fxbr\") on node \"crc\" DevicePath \"\"" Jan 27 15:37:15 crc kubenswrapper[4698]: I0127 15:37:15.564842 4698 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/daef547c-9a3c-477a-9756-a1f0b30941e0-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 15:37:15 crc kubenswrapper[4698]: I0127 15:37:15.609017 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/daef547c-9a3c-477a-9756-a1f0b30941e0-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "daef547c-9a3c-477a-9756-a1f0b30941e0" (UID: "daef547c-9a3c-477a-9756-a1f0b30941e0"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 15:37:15 crc kubenswrapper[4698]: I0127 15:37:15.666482 4698 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/daef547c-9a3c-477a-9756-a1f0b30941e0-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 15:37:16 crc kubenswrapper[4698]: I0127 15:37:16.427482 4698 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-nxxw7" Jan 27 15:37:16 crc kubenswrapper[4698]: I0127 15:37:16.462737 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-nxxw7"] Jan 27 15:37:16 crc kubenswrapper[4698]: I0127 15:37:16.471471 4698 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-nxxw7"] Jan 27 15:37:17 crc kubenswrapper[4698]: I0127 15:37:17.007090 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="daef547c-9a3c-477a-9756-a1f0b30941e0" path="/var/lib/kubelet/pods/daef547c-9a3c-477a-9756-a1f0b30941e0/volumes" Jan 27 15:37:38 crc kubenswrapper[4698]: I0127 15:37:38.580087 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-mhnq7"] Jan 27 15:37:38 crc kubenswrapper[4698]: E0127 15:37:38.580906 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="daef547c-9a3c-477a-9756-a1f0b30941e0" containerName="registry-server" Jan 27 15:37:38 crc kubenswrapper[4698]: I0127 15:37:38.580919 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="daef547c-9a3c-477a-9756-a1f0b30941e0" containerName="registry-server" Jan 27 15:37:38 crc kubenswrapper[4698]: E0127 15:37:38.580943 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="daef547c-9a3c-477a-9756-a1f0b30941e0" containerName="extract-utilities" Jan 27 15:37:38 crc kubenswrapper[4698]: I0127 15:37:38.580950 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="daef547c-9a3c-477a-9756-a1f0b30941e0" containerName="extract-utilities" Jan 27 15:37:38 crc kubenswrapper[4698]: E0127 15:37:38.580963 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="daef547c-9a3c-477a-9756-a1f0b30941e0" containerName="extract-content" Jan 27 15:37:38 crc kubenswrapper[4698]: I0127 15:37:38.580970 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="daef547c-9a3c-477a-9756-a1f0b30941e0" containerName="extract-content" Jan 27 15:37:38 crc kubenswrapper[4698]: I0127 15:37:38.581151 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="daef547c-9a3c-477a-9756-a1f0b30941e0" containerName="registry-server" Jan 27 15:37:38 crc kubenswrapper[4698]: I0127 15:37:38.582708 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mhnq7" Jan 27 15:37:38 crc kubenswrapper[4698]: I0127 15:37:38.592038 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-mhnq7"] Jan 27 15:37:38 crc kubenswrapper[4698]: I0127 15:37:38.653084 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9caa2ae5-ff6a-4d79-86cf-1a4bcf17162c-catalog-content\") pod \"redhat-marketplace-mhnq7\" (UID: \"9caa2ae5-ff6a-4d79-86cf-1a4bcf17162c\") " pod="openshift-marketplace/redhat-marketplace-mhnq7" Jan 27 15:37:38 crc kubenswrapper[4698]: I0127 15:37:38.653448 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f56jb\" (UniqueName: \"kubernetes.io/projected/9caa2ae5-ff6a-4d79-86cf-1a4bcf17162c-kube-api-access-f56jb\") pod \"redhat-marketplace-mhnq7\" (UID: \"9caa2ae5-ff6a-4d79-86cf-1a4bcf17162c\") " pod="openshift-marketplace/redhat-marketplace-mhnq7" Jan 27 15:37:38 crc kubenswrapper[4698]: I0127 15:37:38.653600 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9caa2ae5-ff6a-4d79-86cf-1a4bcf17162c-utilities\") pod \"redhat-marketplace-mhnq7\" (UID: \"9caa2ae5-ff6a-4d79-86cf-1a4bcf17162c\") " pod="openshift-marketplace/redhat-marketplace-mhnq7" Jan 27 15:37:38 crc kubenswrapper[4698]: I0127 15:37:38.755537 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f56jb\" (UniqueName: \"kubernetes.io/projected/9caa2ae5-ff6a-4d79-86cf-1a4bcf17162c-kube-api-access-f56jb\") pod \"redhat-marketplace-mhnq7\" (UID: \"9caa2ae5-ff6a-4d79-86cf-1a4bcf17162c\") " pod="openshift-marketplace/redhat-marketplace-mhnq7" Jan 27 15:37:38 crc kubenswrapper[4698]: I0127 15:37:38.755621 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9caa2ae5-ff6a-4d79-86cf-1a4bcf17162c-utilities\") pod \"redhat-marketplace-mhnq7\" (UID: \"9caa2ae5-ff6a-4d79-86cf-1a4bcf17162c\") " pod="openshift-marketplace/redhat-marketplace-mhnq7" Jan 27 15:37:38 crc kubenswrapper[4698]: I0127 15:37:38.755778 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9caa2ae5-ff6a-4d79-86cf-1a4bcf17162c-catalog-content\") pod \"redhat-marketplace-mhnq7\" (UID: \"9caa2ae5-ff6a-4d79-86cf-1a4bcf17162c\") " pod="openshift-marketplace/redhat-marketplace-mhnq7" Jan 27 15:37:38 crc kubenswrapper[4698]: I0127 15:37:38.756258 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9caa2ae5-ff6a-4d79-86cf-1a4bcf17162c-utilities\") pod \"redhat-marketplace-mhnq7\" (UID: \"9caa2ae5-ff6a-4d79-86cf-1a4bcf17162c\") " pod="openshift-marketplace/redhat-marketplace-mhnq7" Jan 27 15:37:38 crc kubenswrapper[4698]: I0127 15:37:38.756267 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9caa2ae5-ff6a-4d79-86cf-1a4bcf17162c-catalog-content\") pod \"redhat-marketplace-mhnq7\" (UID: \"9caa2ae5-ff6a-4d79-86cf-1a4bcf17162c\") " pod="openshift-marketplace/redhat-marketplace-mhnq7" Jan 27 15:37:38 crc kubenswrapper[4698]: I0127 15:37:38.784790 4698 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-f56jb\" (UniqueName: \"kubernetes.io/projected/9caa2ae5-ff6a-4d79-86cf-1a4bcf17162c-kube-api-access-f56jb\") pod \"redhat-marketplace-mhnq7\" (UID: \"9caa2ae5-ff6a-4d79-86cf-1a4bcf17162c\") " pod="openshift-marketplace/redhat-marketplace-mhnq7" Jan 27 15:37:38 crc kubenswrapper[4698]: I0127 15:37:38.905349 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mhnq7" Jan 27 15:37:39 crc kubenswrapper[4698]: I0127 15:37:39.387206 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-mhnq7"] Jan 27 15:37:39 crc kubenswrapper[4698]: I0127 15:37:39.642383 4698 generic.go:334] "Generic (PLEG): container finished" podID="9caa2ae5-ff6a-4d79-86cf-1a4bcf17162c" containerID="f6ecc8cb6e1f3f5b8a710a3229bb929b71caeaf19e7c367ee8cc90040d03e61f" exitCode=0 Jan 27 15:37:39 crc kubenswrapper[4698]: I0127 15:37:39.642437 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mhnq7" event={"ID":"9caa2ae5-ff6a-4d79-86cf-1a4bcf17162c","Type":"ContainerDied","Data":"f6ecc8cb6e1f3f5b8a710a3229bb929b71caeaf19e7c367ee8cc90040d03e61f"} Jan 27 15:37:39 crc kubenswrapper[4698]: I0127 15:37:39.642469 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mhnq7" event={"ID":"9caa2ae5-ff6a-4d79-86cf-1a4bcf17162c","Type":"ContainerStarted","Data":"5fe1fa4c4fc774efe73d584c925a4f70b5f21b40e3e08b1d689afa10a05ab6bc"} Jan 27 15:37:41 crc kubenswrapper[4698]: I0127 15:37:41.664901 4698 generic.go:334] "Generic (PLEG): container finished" podID="9caa2ae5-ff6a-4d79-86cf-1a4bcf17162c" containerID="82ee342447db69dae1ec04e82774982dae07bba8536c30dddb9cc3e378d566c1" exitCode=0 Jan 27 15:37:41 crc kubenswrapper[4698]: I0127 15:37:41.665104 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mhnq7" event={"ID":"9caa2ae5-ff6a-4d79-86cf-1a4bcf17162c","Type":"ContainerDied","Data":"82ee342447db69dae1ec04e82774982dae07bba8536c30dddb9cc3e378d566c1"} Jan 27 15:37:42 crc kubenswrapper[4698]: I0127 15:37:42.687124 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mhnq7" event={"ID":"9caa2ae5-ff6a-4d79-86cf-1a4bcf17162c","Type":"ContainerStarted","Data":"a865329d6b5e9c489903f411d787fc046024027f90ce01674caf56e82e5308a1"} Jan 27 15:37:42 crc kubenswrapper[4698]: I0127 15:37:42.708544 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-mhnq7" podStartSLOduration=2.150711184 podStartE2EDuration="4.708525572s" podCreationTimestamp="2026-01-27 15:37:38 +0000 UTC" firstStartedPulling="2026-01-27 15:37:39.644578887 +0000 UTC m=+4115.321356362" lastFinishedPulling="2026-01-27 15:37:42.202393285 +0000 UTC m=+4117.879170750" observedRunningTime="2026-01-27 15:37:42.707571708 +0000 UTC m=+4118.384349173" watchObservedRunningTime="2026-01-27 15:37:42.708525572 +0000 UTC m=+4118.385303037" Jan 27 15:37:48 crc kubenswrapper[4698]: I0127 15:37:48.906014 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-mhnq7" Jan 27 15:37:48 crc kubenswrapper[4698]: I0127 15:37:48.907605 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-mhnq7" Jan 27 15:37:48 crc kubenswrapper[4698]: I0127 15:37:48.955278 4698 kubelet.go:2542] "SyncLoop 
(probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-mhnq7" Jan 27 15:37:49 crc kubenswrapper[4698]: I0127 15:37:49.889384 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-mhnq7" Jan 27 15:37:53 crc kubenswrapper[4698]: I0127 15:37:53.756115 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-mhnq7"] Jan 27 15:37:53 crc kubenswrapper[4698]: I0127 15:37:53.757820 4698 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-mhnq7" podUID="9caa2ae5-ff6a-4d79-86cf-1a4bcf17162c" containerName="registry-server" containerID="cri-o://a865329d6b5e9c489903f411d787fc046024027f90ce01674caf56e82e5308a1" gracePeriod=2 Jan 27 15:37:54 crc kubenswrapper[4698]: I0127 15:37:54.313800 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mhnq7" Jan 27 15:37:54 crc kubenswrapper[4698]: I0127 15:37:54.469521 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9caa2ae5-ff6a-4d79-86cf-1a4bcf17162c-catalog-content\") pod \"9caa2ae5-ff6a-4d79-86cf-1a4bcf17162c\" (UID: \"9caa2ae5-ff6a-4d79-86cf-1a4bcf17162c\") " Jan 27 15:37:54 crc kubenswrapper[4698]: I0127 15:37:54.469734 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f56jb\" (UniqueName: \"kubernetes.io/projected/9caa2ae5-ff6a-4d79-86cf-1a4bcf17162c-kube-api-access-f56jb\") pod \"9caa2ae5-ff6a-4d79-86cf-1a4bcf17162c\" (UID: \"9caa2ae5-ff6a-4d79-86cf-1a4bcf17162c\") " Jan 27 15:37:54 crc kubenswrapper[4698]: I0127 15:37:54.469860 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9caa2ae5-ff6a-4d79-86cf-1a4bcf17162c-utilities\") pod \"9caa2ae5-ff6a-4d79-86cf-1a4bcf17162c\" (UID: \"9caa2ae5-ff6a-4d79-86cf-1a4bcf17162c\") " Jan 27 15:37:54 crc kubenswrapper[4698]: I0127 15:37:54.470850 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9caa2ae5-ff6a-4d79-86cf-1a4bcf17162c-utilities" (OuterVolumeSpecName: "utilities") pod "9caa2ae5-ff6a-4d79-86cf-1a4bcf17162c" (UID: "9caa2ae5-ff6a-4d79-86cf-1a4bcf17162c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 15:37:54 crc kubenswrapper[4698]: I0127 15:37:54.478908 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9caa2ae5-ff6a-4d79-86cf-1a4bcf17162c-kube-api-access-f56jb" (OuterVolumeSpecName: "kube-api-access-f56jb") pod "9caa2ae5-ff6a-4d79-86cf-1a4bcf17162c" (UID: "9caa2ae5-ff6a-4d79-86cf-1a4bcf17162c"). InnerVolumeSpecName "kube-api-access-f56jb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 15:37:54 crc kubenswrapper[4698]: I0127 15:37:54.498782 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9caa2ae5-ff6a-4d79-86cf-1a4bcf17162c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9caa2ae5-ff6a-4d79-86cf-1a4bcf17162c" (UID: "9caa2ae5-ff6a-4d79-86cf-1a4bcf17162c"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 15:37:54 crc kubenswrapper[4698]: I0127 15:37:54.573051 4698 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9caa2ae5-ff6a-4d79-86cf-1a4bcf17162c-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 15:37:54 crc kubenswrapper[4698]: I0127 15:37:54.573104 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f56jb\" (UniqueName: \"kubernetes.io/projected/9caa2ae5-ff6a-4d79-86cf-1a4bcf17162c-kube-api-access-f56jb\") on node \"crc\" DevicePath \"\"" Jan 27 15:37:54 crc kubenswrapper[4698]: I0127 15:37:54.573124 4698 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9caa2ae5-ff6a-4d79-86cf-1a4bcf17162c-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 15:37:54 crc kubenswrapper[4698]: I0127 15:37:54.796510 4698 generic.go:334] "Generic (PLEG): container finished" podID="9caa2ae5-ff6a-4d79-86cf-1a4bcf17162c" containerID="a865329d6b5e9c489903f411d787fc046024027f90ce01674caf56e82e5308a1" exitCode=0 Jan 27 15:37:54 crc kubenswrapper[4698]: I0127 15:37:54.796575 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mhnq7" event={"ID":"9caa2ae5-ff6a-4d79-86cf-1a4bcf17162c","Type":"ContainerDied","Data":"a865329d6b5e9c489903f411d787fc046024027f90ce01674caf56e82e5308a1"} Jan 27 15:37:54 crc kubenswrapper[4698]: I0127 15:37:54.796611 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mhnq7" event={"ID":"9caa2ae5-ff6a-4d79-86cf-1a4bcf17162c","Type":"ContainerDied","Data":"5fe1fa4c4fc774efe73d584c925a4f70b5f21b40e3e08b1d689afa10a05ab6bc"} Jan 27 15:37:54 crc kubenswrapper[4698]: I0127 15:37:54.796579 4698 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mhnq7" Jan 27 15:37:54 crc kubenswrapper[4698]: I0127 15:37:54.796660 4698 scope.go:117] "RemoveContainer" containerID="a865329d6b5e9c489903f411d787fc046024027f90ce01674caf56e82e5308a1" Jan 27 15:37:54 crc kubenswrapper[4698]: I0127 15:37:54.833973 4698 scope.go:117] "RemoveContainer" containerID="82ee342447db69dae1ec04e82774982dae07bba8536c30dddb9cc3e378d566c1" Jan 27 15:37:54 crc kubenswrapper[4698]: I0127 15:37:54.839278 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-mhnq7"] Jan 27 15:37:54 crc kubenswrapper[4698]: I0127 15:37:54.849845 4698 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-mhnq7"] Jan 27 15:37:54 crc kubenswrapper[4698]: I0127 15:37:54.862506 4698 scope.go:117] "RemoveContainer" containerID="f6ecc8cb6e1f3f5b8a710a3229bb929b71caeaf19e7c367ee8cc90040d03e61f" Jan 27 15:37:54 crc kubenswrapper[4698]: I0127 15:37:54.922158 4698 scope.go:117] "RemoveContainer" containerID="a865329d6b5e9c489903f411d787fc046024027f90ce01674caf56e82e5308a1" Jan 27 15:37:54 crc kubenswrapper[4698]: E0127 15:37:54.922539 4698 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a865329d6b5e9c489903f411d787fc046024027f90ce01674caf56e82e5308a1\": container with ID starting with a865329d6b5e9c489903f411d787fc046024027f90ce01674caf56e82e5308a1 not found: ID does not exist" containerID="a865329d6b5e9c489903f411d787fc046024027f90ce01674caf56e82e5308a1" Jan 27 15:37:54 crc kubenswrapper[4698]: I0127 15:37:54.922577 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a865329d6b5e9c489903f411d787fc046024027f90ce01674caf56e82e5308a1"} err="failed to get container status \"a865329d6b5e9c489903f411d787fc046024027f90ce01674caf56e82e5308a1\": rpc error: code = NotFound desc = could not find container \"a865329d6b5e9c489903f411d787fc046024027f90ce01674caf56e82e5308a1\": container with ID starting with a865329d6b5e9c489903f411d787fc046024027f90ce01674caf56e82e5308a1 not found: ID does not exist" Jan 27 15:37:54 crc kubenswrapper[4698]: I0127 15:37:54.922600 4698 scope.go:117] "RemoveContainer" containerID="82ee342447db69dae1ec04e82774982dae07bba8536c30dddb9cc3e378d566c1" Jan 27 15:37:54 crc kubenswrapper[4698]: E0127 15:37:54.923069 4698 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"82ee342447db69dae1ec04e82774982dae07bba8536c30dddb9cc3e378d566c1\": container with ID starting with 82ee342447db69dae1ec04e82774982dae07bba8536c30dddb9cc3e378d566c1 not found: ID does not exist" containerID="82ee342447db69dae1ec04e82774982dae07bba8536c30dddb9cc3e378d566c1" Jan 27 15:37:54 crc kubenswrapper[4698]: I0127 15:37:54.923114 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"82ee342447db69dae1ec04e82774982dae07bba8536c30dddb9cc3e378d566c1"} err="failed to get container status \"82ee342447db69dae1ec04e82774982dae07bba8536c30dddb9cc3e378d566c1\": rpc error: code = NotFound desc = could not find container \"82ee342447db69dae1ec04e82774982dae07bba8536c30dddb9cc3e378d566c1\": container with ID starting with 82ee342447db69dae1ec04e82774982dae07bba8536c30dddb9cc3e378d566c1 not found: ID does not exist" Jan 27 15:37:54 crc kubenswrapper[4698]: I0127 15:37:54.923138 4698 scope.go:117] "RemoveContainer" 
containerID="f6ecc8cb6e1f3f5b8a710a3229bb929b71caeaf19e7c367ee8cc90040d03e61f" Jan 27 15:37:54 crc kubenswrapper[4698]: E0127 15:37:54.924378 4698 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f6ecc8cb6e1f3f5b8a710a3229bb929b71caeaf19e7c367ee8cc90040d03e61f\": container with ID starting with f6ecc8cb6e1f3f5b8a710a3229bb929b71caeaf19e7c367ee8cc90040d03e61f not found: ID does not exist" containerID="f6ecc8cb6e1f3f5b8a710a3229bb929b71caeaf19e7c367ee8cc90040d03e61f" Jan 27 15:37:54 crc kubenswrapper[4698]: I0127 15:37:54.924415 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f6ecc8cb6e1f3f5b8a710a3229bb929b71caeaf19e7c367ee8cc90040d03e61f"} err="failed to get container status \"f6ecc8cb6e1f3f5b8a710a3229bb929b71caeaf19e7c367ee8cc90040d03e61f\": rpc error: code = NotFound desc = could not find container \"f6ecc8cb6e1f3f5b8a710a3229bb929b71caeaf19e7c367ee8cc90040d03e61f\": container with ID starting with f6ecc8cb6e1f3f5b8a710a3229bb929b71caeaf19e7c367ee8cc90040d03e61f not found: ID does not exist" Jan 27 15:37:55 crc kubenswrapper[4698]: I0127 15:37:55.003133 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9caa2ae5-ff6a-4d79-86cf-1a4bcf17162c" path="/var/lib/kubelet/pods/9caa2ae5-ff6a-4d79-86cf-1a4bcf17162c/volumes" Jan 27 15:38:27 crc kubenswrapper[4698]: I0127 15:38:27.451380 4698 patch_prober.go:28] interesting pod/machine-config-daemon-ndrd6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 15:38:27 crc kubenswrapper[4698]: I0127 15:38:27.452028 4698 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ndrd6" podUID="3e403fc5-7005-474c-8c75-b7906b481677" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 15:38:57 crc kubenswrapper[4698]: I0127 15:38:57.451263 4698 patch_prober.go:28] interesting pod/machine-config-daemon-ndrd6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 15:38:57 crc kubenswrapper[4698]: I0127 15:38:57.452024 4698 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ndrd6" podUID="3e403fc5-7005-474c-8c75-b7906b481677" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 15:39:27 crc kubenswrapper[4698]: I0127 15:39:27.452004 4698 patch_prober.go:28] interesting pod/machine-config-daemon-ndrd6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 15:39:27 crc kubenswrapper[4698]: I0127 15:39:27.452694 4698 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ndrd6" podUID="3e403fc5-7005-474c-8c75-b7906b481677" containerName="machine-config-daemon" probeResult="failure" output="Get 
\"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 15:39:27 crc kubenswrapper[4698]: I0127 15:39:27.452742 4698 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-ndrd6" Jan 27 15:39:27 crc kubenswrapper[4698]: I0127 15:39:27.453475 4698 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"c3c99fc65dd6707b96677a0c5e922864d9b7f447d7ff5802b3944061d58f6b87"} pod="openshift-machine-config-operator/machine-config-daemon-ndrd6" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 27 15:39:27 crc kubenswrapper[4698]: I0127 15:39:27.453531 4698 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-ndrd6" podUID="3e403fc5-7005-474c-8c75-b7906b481677" containerName="machine-config-daemon" containerID="cri-o://c3c99fc65dd6707b96677a0c5e922864d9b7f447d7ff5802b3944061d58f6b87" gracePeriod=600 Jan 27 15:39:27 crc kubenswrapper[4698]: I0127 15:39:27.723280 4698 generic.go:334] "Generic (PLEG): container finished" podID="3e403fc5-7005-474c-8c75-b7906b481677" containerID="c3c99fc65dd6707b96677a0c5e922864d9b7f447d7ff5802b3944061d58f6b87" exitCode=0 Jan 27 15:39:27 crc kubenswrapper[4698]: I0127 15:39:27.723324 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ndrd6" event={"ID":"3e403fc5-7005-474c-8c75-b7906b481677","Type":"ContainerDied","Data":"c3c99fc65dd6707b96677a0c5e922864d9b7f447d7ff5802b3944061d58f6b87"} Jan 27 15:39:27 crc kubenswrapper[4698]: I0127 15:39:27.723545 4698 scope.go:117] "RemoveContainer" containerID="dae5e3c59114f911468df8553d052a2c9114f0c5defcd8a1e8e483efe8a4ea29" Jan 27 15:39:28 crc kubenswrapper[4698]: I0127 15:39:28.734056 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ndrd6" event={"ID":"3e403fc5-7005-474c-8c75-b7906b481677","Type":"ContainerStarted","Data":"2b99e70ad64def6e0bae90fd33324e0259109fc31b002f7860f0218818ee6a4e"} Jan 27 15:41:27 crc kubenswrapper[4698]: I0127 15:41:27.452026 4698 patch_prober.go:28] interesting pod/machine-config-daemon-ndrd6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 15:41:27 crc kubenswrapper[4698]: I0127 15:41:27.452566 4698 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ndrd6" podUID="3e403fc5-7005-474c-8c75-b7906b481677" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 15:41:57 crc kubenswrapper[4698]: I0127 15:41:57.453141 4698 patch_prober.go:28] interesting pod/machine-config-daemon-ndrd6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 15:41:57 crc kubenswrapper[4698]: I0127 15:41:57.453690 4698 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ndrd6" 
podUID="3e403fc5-7005-474c-8c75-b7906b481677" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 15:41:58 crc kubenswrapper[4698]: I0127 15:41:58.509844 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-dx9n8"] Jan 27 15:41:58 crc kubenswrapper[4698]: E0127 15:41:58.510272 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9caa2ae5-ff6a-4d79-86cf-1a4bcf17162c" containerName="registry-server" Jan 27 15:41:58 crc kubenswrapper[4698]: I0127 15:41:58.510285 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="9caa2ae5-ff6a-4d79-86cf-1a4bcf17162c" containerName="registry-server" Jan 27 15:41:58 crc kubenswrapper[4698]: E0127 15:41:58.510304 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9caa2ae5-ff6a-4d79-86cf-1a4bcf17162c" containerName="extract-content" Jan 27 15:41:58 crc kubenswrapper[4698]: I0127 15:41:58.510310 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="9caa2ae5-ff6a-4d79-86cf-1a4bcf17162c" containerName="extract-content" Jan 27 15:41:58 crc kubenswrapper[4698]: E0127 15:41:58.510327 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9caa2ae5-ff6a-4d79-86cf-1a4bcf17162c" containerName="extract-utilities" Jan 27 15:41:58 crc kubenswrapper[4698]: I0127 15:41:58.510334 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="9caa2ae5-ff6a-4d79-86cf-1a4bcf17162c" containerName="extract-utilities" Jan 27 15:41:58 crc kubenswrapper[4698]: I0127 15:41:58.510509 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="9caa2ae5-ff6a-4d79-86cf-1a4bcf17162c" containerName="registry-server" Jan 27 15:41:58 crc kubenswrapper[4698]: I0127 15:41:58.511877 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-dx9n8" Jan 27 15:41:58 crc kubenswrapper[4698]: I0127 15:41:58.523730 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-dx9n8"] Jan 27 15:41:58 crc kubenswrapper[4698]: I0127 15:41:58.794239 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e37eb15d-93ee-4607-92f4-02457685a69c-catalog-content\") pod \"redhat-operators-dx9n8\" (UID: \"e37eb15d-93ee-4607-92f4-02457685a69c\") " pod="openshift-marketplace/redhat-operators-dx9n8" Jan 27 15:41:58 crc kubenswrapper[4698]: I0127 15:41:58.794310 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qlvz7\" (UniqueName: \"kubernetes.io/projected/e37eb15d-93ee-4607-92f4-02457685a69c-kube-api-access-qlvz7\") pod \"redhat-operators-dx9n8\" (UID: \"e37eb15d-93ee-4607-92f4-02457685a69c\") " pod="openshift-marketplace/redhat-operators-dx9n8" Jan 27 15:41:58 crc kubenswrapper[4698]: I0127 15:41:58.794511 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e37eb15d-93ee-4607-92f4-02457685a69c-utilities\") pod \"redhat-operators-dx9n8\" (UID: \"e37eb15d-93ee-4607-92f4-02457685a69c\") " pod="openshift-marketplace/redhat-operators-dx9n8" Jan 27 15:41:58 crc kubenswrapper[4698]: I0127 15:41:58.896152 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e37eb15d-93ee-4607-92f4-02457685a69c-utilities\") pod \"redhat-operators-dx9n8\" (UID: \"e37eb15d-93ee-4607-92f4-02457685a69c\") " pod="openshift-marketplace/redhat-operators-dx9n8" Jan 27 15:41:58 crc kubenswrapper[4698]: I0127 15:41:58.896328 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e37eb15d-93ee-4607-92f4-02457685a69c-catalog-content\") pod \"redhat-operators-dx9n8\" (UID: \"e37eb15d-93ee-4607-92f4-02457685a69c\") " pod="openshift-marketplace/redhat-operators-dx9n8" Jan 27 15:41:58 crc kubenswrapper[4698]: I0127 15:41:58.896360 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qlvz7\" (UniqueName: \"kubernetes.io/projected/e37eb15d-93ee-4607-92f4-02457685a69c-kube-api-access-qlvz7\") pod \"redhat-operators-dx9n8\" (UID: \"e37eb15d-93ee-4607-92f4-02457685a69c\") " pod="openshift-marketplace/redhat-operators-dx9n8" Jan 27 15:41:58 crc kubenswrapper[4698]: I0127 15:41:58.896761 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e37eb15d-93ee-4607-92f4-02457685a69c-utilities\") pod \"redhat-operators-dx9n8\" (UID: \"e37eb15d-93ee-4607-92f4-02457685a69c\") " pod="openshift-marketplace/redhat-operators-dx9n8" Jan 27 15:41:58 crc kubenswrapper[4698]: I0127 15:41:58.896836 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e37eb15d-93ee-4607-92f4-02457685a69c-catalog-content\") pod \"redhat-operators-dx9n8\" (UID: \"e37eb15d-93ee-4607-92f4-02457685a69c\") " pod="openshift-marketplace/redhat-operators-dx9n8" Jan 27 15:41:58 crc kubenswrapper[4698]: I0127 15:41:58.925029 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-qlvz7\" (UniqueName: \"kubernetes.io/projected/e37eb15d-93ee-4607-92f4-02457685a69c-kube-api-access-qlvz7\") pod \"redhat-operators-dx9n8\" (UID: \"e37eb15d-93ee-4607-92f4-02457685a69c\") " pod="openshift-marketplace/redhat-operators-dx9n8" Jan 27 15:41:59 crc kubenswrapper[4698]: I0127 15:41:59.009539 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-dx9n8" Jan 27 15:41:59 crc kubenswrapper[4698]: I0127 15:41:59.542186 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-dx9n8"] Jan 27 15:42:00 crc kubenswrapper[4698]: I0127 15:42:00.351776 4698 generic.go:334] "Generic (PLEG): container finished" podID="e37eb15d-93ee-4607-92f4-02457685a69c" containerID="64603db78b3735a00dc5194b4a935569f0db823829361835de280552ea84e4e1" exitCode=0 Jan 27 15:42:00 crc kubenswrapper[4698]: I0127 15:42:00.351853 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dx9n8" event={"ID":"e37eb15d-93ee-4607-92f4-02457685a69c","Type":"ContainerDied","Data":"64603db78b3735a00dc5194b4a935569f0db823829361835de280552ea84e4e1"} Jan 27 15:42:00 crc kubenswrapper[4698]: I0127 15:42:00.352096 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dx9n8" event={"ID":"e37eb15d-93ee-4607-92f4-02457685a69c","Type":"ContainerStarted","Data":"f8ddb00cdbda585e6717b6868f7ddfa4f22a84ab914a0d15e054745a3a45fc07"} Jan 27 15:42:00 crc kubenswrapper[4698]: I0127 15:42:00.354017 4698 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 27 15:42:03 crc kubenswrapper[4698]: I0127 15:42:03.377015 4698 generic.go:334] "Generic (PLEG): container finished" podID="e37eb15d-93ee-4607-92f4-02457685a69c" containerID="df2bf9b3c7c646c1261158f869895db40a565b78dfc5df6f1e4cf82904522502" exitCode=0 Jan 27 15:42:03 crc kubenswrapper[4698]: I0127 15:42:03.377057 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dx9n8" event={"ID":"e37eb15d-93ee-4607-92f4-02457685a69c","Type":"ContainerDied","Data":"df2bf9b3c7c646c1261158f869895db40a565b78dfc5df6f1e4cf82904522502"} Jan 27 15:42:05 crc kubenswrapper[4698]: I0127 15:42:05.397751 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dx9n8" event={"ID":"e37eb15d-93ee-4607-92f4-02457685a69c","Type":"ContainerStarted","Data":"180ecf08a24bc0b796a92219bfd08b55194f6c262fae063ff12be5395f39911f"} Jan 27 15:42:05 crc kubenswrapper[4698]: I0127 15:42:05.419368 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-dx9n8" podStartSLOduration=3.70786319 podStartE2EDuration="7.419347721s" podCreationTimestamp="2026-01-27 15:41:58 +0000 UTC" firstStartedPulling="2026-01-27 15:42:00.353824616 +0000 UTC m=+4376.030602081" lastFinishedPulling="2026-01-27 15:42:04.065309147 +0000 UTC m=+4379.742086612" observedRunningTime="2026-01-27 15:42:05.417500162 +0000 UTC m=+4381.094277647" watchObservedRunningTime="2026-01-27 15:42:05.419347721 +0000 UTC m=+4381.096125186" Jan 27 15:42:09 crc kubenswrapper[4698]: I0127 15:42:09.010933 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-dx9n8" Jan 27 15:42:09 crc kubenswrapper[4698]: I0127 15:42:09.011584 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openshift-marketplace/redhat-operators-dx9n8" Jan 27 15:42:10 crc kubenswrapper[4698]: I0127 15:42:10.913234 4698 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-dx9n8" podUID="e37eb15d-93ee-4607-92f4-02457685a69c" containerName="registry-server" probeResult="failure" output=< Jan 27 15:42:10 crc kubenswrapper[4698]: timeout: failed to connect service ":50051" within 1s Jan 27 15:42:10 crc kubenswrapper[4698]: > Jan 27 15:42:19 crc kubenswrapper[4698]: I0127 15:42:19.060069 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-dx9n8" Jan 27 15:42:19 crc kubenswrapper[4698]: I0127 15:42:19.114608 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-dx9n8" Jan 27 15:42:19 crc kubenswrapper[4698]: I0127 15:42:19.299654 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-dx9n8"] Jan 27 15:42:20 crc kubenswrapper[4698]: I0127 15:42:20.552698 4698 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-dx9n8" podUID="e37eb15d-93ee-4607-92f4-02457685a69c" containerName="registry-server" containerID="cri-o://180ecf08a24bc0b796a92219bfd08b55194f6c262fae063ff12be5395f39911f" gracePeriod=2 Jan 27 15:42:21 crc kubenswrapper[4698]: I0127 15:42:21.059579 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-dx9n8" Jan 27 15:42:21 crc kubenswrapper[4698]: I0127 15:42:21.119324 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e37eb15d-93ee-4607-92f4-02457685a69c-catalog-content\") pod \"e37eb15d-93ee-4607-92f4-02457685a69c\" (UID: \"e37eb15d-93ee-4607-92f4-02457685a69c\") " Jan 27 15:42:21 crc kubenswrapper[4698]: I0127 15:42:21.120041 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e37eb15d-93ee-4607-92f4-02457685a69c-utilities\") pod \"e37eb15d-93ee-4607-92f4-02457685a69c\" (UID: \"e37eb15d-93ee-4607-92f4-02457685a69c\") " Jan 27 15:42:21 crc kubenswrapper[4698]: I0127 15:42:21.120172 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qlvz7\" (UniqueName: \"kubernetes.io/projected/e37eb15d-93ee-4607-92f4-02457685a69c-kube-api-access-qlvz7\") pod \"e37eb15d-93ee-4607-92f4-02457685a69c\" (UID: \"e37eb15d-93ee-4607-92f4-02457685a69c\") " Jan 27 15:42:21 crc kubenswrapper[4698]: I0127 15:42:21.121820 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e37eb15d-93ee-4607-92f4-02457685a69c-utilities" (OuterVolumeSpecName: "utilities") pod "e37eb15d-93ee-4607-92f4-02457685a69c" (UID: "e37eb15d-93ee-4607-92f4-02457685a69c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 15:42:21 crc kubenswrapper[4698]: I0127 15:42:21.129021 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e37eb15d-93ee-4607-92f4-02457685a69c-kube-api-access-qlvz7" (OuterVolumeSpecName: "kube-api-access-qlvz7") pod "e37eb15d-93ee-4607-92f4-02457685a69c" (UID: "e37eb15d-93ee-4607-92f4-02457685a69c"). InnerVolumeSpecName "kube-api-access-qlvz7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 15:42:21 crc kubenswrapper[4698]: I0127 15:42:21.222110 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qlvz7\" (UniqueName: \"kubernetes.io/projected/e37eb15d-93ee-4607-92f4-02457685a69c-kube-api-access-qlvz7\") on node \"crc\" DevicePath \"\"" Jan 27 15:42:21 crc kubenswrapper[4698]: I0127 15:42:21.222163 4698 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e37eb15d-93ee-4607-92f4-02457685a69c-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 15:42:21 crc kubenswrapper[4698]: I0127 15:42:21.243607 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e37eb15d-93ee-4607-92f4-02457685a69c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e37eb15d-93ee-4607-92f4-02457685a69c" (UID: "e37eb15d-93ee-4607-92f4-02457685a69c"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 15:42:21 crc kubenswrapper[4698]: I0127 15:42:21.324044 4698 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e37eb15d-93ee-4607-92f4-02457685a69c-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 15:42:21 crc kubenswrapper[4698]: I0127 15:42:21.563032 4698 generic.go:334] "Generic (PLEG): container finished" podID="e37eb15d-93ee-4607-92f4-02457685a69c" containerID="180ecf08a24bc0b796a92219bfd08b55194f6c262fae063ff12be5395f39911f" exitCode=0 Jan 27 15:42:21 crc kubenswrapper[4698]: I0127 15:42:21.563080 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dx9n8" event={"ID":"e37eb15d-93ee-4607-92f4-02457685a69c","Type":"ContainerDied","Data":"180ecf08a24bc0b796a92219bfd08b55194f6c262fae063ff12be5395f39911f"} Jan 27 15:42:21 crc kubenswrapper[4698]: I0127 15:42:21.563114 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dx9n8" event={"ID":"e37eb15d-93ee-4607-92f4-02457685a69c","Type":"ContainerDied","Data":"f8ddb00cdbda585e6717b6868f7ddfa4f22a84ab914a0d15e054745a3a45fc07"} Jan 27 15:42:21 crc kubenswrapper[4698]: I0127 15:42:21.563107 4698 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-dx9n8" Jan 27 15:42:21 crc kubenswrapper[4698]: I0127 15:42:21.563129 4698 scope.go:117] "RemoveContainer" containerID="180ecf08a24bc0b796a92219bfd08b55194f6c262fae063ff12be5395f39911f" Jan 27 15:42:21 crc kubenswrapper[4698]: I0127 15:42:21.602357 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-dx9n8"] Jan 27 15:42:21 crc kubenswrapper[4698]: I0127 15:42:21.605897 4698 scope.go:117] "RemoveContainer" containerID="df2bf9b3c7c646c1261158f869895db40a565b78dfc5df6f1e4cf82904522502" Jan 27 15:42:21 crc kubenswrapper[4698]: I0127 15:42:21.611383 4698 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-dx9n8"] Jan 27 15:42:21 crc kubenswrapper[4698]: I0127 15:42:21.651865 4698 scope.go:117] "RemoveContainer" containerID="64603db78b3735a00dc5194b4a935569f0db823829361835de280552ea84e4e1" Jan 27 15:42:21 crc kubenswrapper[4698]: I0127 15:42:21.704555 4698 scope.go:117] "RemoveContainer" containerID="180ecf08a24bc0b796a92219bfd08b55194f6c262fae063ff12be5395f39911f" Jan 27 15:42:21 crc kubenswrapper[4698]: E0127 15:42:21.705525 4698 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"180ecf08a24bc0b796a92219bfd08b55194f6c262fae063ff12be5395f39911f\": container with ID starting with 180ecf08a24bc0b796a92219bfd08b55194f6c262fae063ff12be5395f39911f not found: ID does not exist" containerID="180ecf08a24bc0b796a92219bfd08b55194f6c262fae063ff12be5395f39911f" Jan 27 15:42:21 crc kubenswrapper[4698]: I0127 15:42:21.705590 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"180ecf08a24bc0b796a92219bfd08b55194f6c262fae063ff12be5395f39911f"} err="failed to get container status \"180ecf08a24bc0b796a92219bfd08b55194f6c262fae063ff12be5395f39911f\": rpc error: code = NotFound desc = could not find container \"180ecf08a24bc0b796a92219bfd08b55194f6c262fae063ff12be5395f39911f\": container with ID starting with 180ecf08a24bc0b796a92219bfd08b55194f6c262fae063ff12be5395f39911f not found: ID does not exist" Jan 27 15:42:21 crc kubenswrapper[4698]: I0127 15:42:21.705624 4698 scope.go:117] "RemoveContainer" containerID="df2bf9b3c7c646c1261158f869895db40a565b78dfc5df6f1e4cf82904522502" Jan 27 15:42:21 crc kubenswrapper[4698]: E0127 15:42:21.705992 4698 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"df2bf9b3c7c646c1261158f869895db40a565b78dfc5df6f1e4cf82904522502\": container with ID starting with df2bf9b3c7c646c1261158f869895db40a565b78dfc5df6f1e4cf82904522502 not found: ID does not exist" containerID="df2bf9b3c7c646c1261158f869895db40a565b78dfc5df6f1e4cf82904522502" Jan 27 15:42:21 crc kubenswrapper[4698]: I0127 15:42:21.706025 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"df2bf9b3c7c646c1261158f869895db40a565b78dfc5df6f1e4cf82904522502"} err="failed to get container status \"df2bf9b3c7c646c1261158f869895db40a565b78dfc5df6f1e4cf82904522502\": rpc error: code = NotFound desc = could not find container \"df2bf9b3c7c646c1261158f869895db40a565b78dfc5df6f1e4cf82904522502\": container with ID starting with df2bf9b3c7c646c1261158f869895db40a565b78dfc5df6f1e4cf82904522502 not found: ID does not exist" Jan 27 15:42:21 crc kubenswrapper[4698]: I0127 15:42:21.706043 4698 scope.go:117] "RemoveContainer" 
containerID="64603db78b3735a00dc5194b4a935569f0db823829361835de280552ea84e4e1" Jan 27 15:42:21 crc kubenswrapper[4698]: E0127 15:42:21.706319 4698 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"64603db78b3735a00dc5194b4a935569f0db823829361835de280552ea84e4e1\": container with ID starting with 64603db78b3735a00dc5194b4a935569f0db823829361835de280552ea84e4e1 not found: ID does not exist" containerID="64603db78b3735a00dc5194b4a935569f0db823829361835de280552ea84e4e1" Jan 27 15:42:21 crc kubenswrapper[4698]: I0127 15:42:21.706349 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"64603db78b3735a00dc5194b4a935569f0db823829361835de280552ea84e4e1"} err="failed to get container status \"64603db78b3735a00dc5194b4a935569f0db823829361835de280552ea84e4e1\": rpc error: code = NotFound desc = could not find container \"64603db78b3735a00dc5194b4a935569f0db823829361835de280552ea84e4e1\": container with ID starting with 64603db78b3735a00dc5194b4a935569f0db823829361835de280552ea84e4e1 not found: ID does not exist" Jan 27 15:42:23 crc kubenswrapper[4698]: I0127 15:42:23.004506 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e37eb15d-93ee-4607-92f4-02457685a69c" path="/var/lib/kubelet/pods/e37eb15d-93ee-4607-92f4-02457685a69c/volumes" Jan 27 15:42:27 crc kubenswrapper[4698]: I0127 15:42:27.451708 4698 patch_prober.go:28] interesting pod/machine-config-daemon-ndrd6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 15:42:27 crc kubenswrapper[4698]: I0127 15:42:27.452166 4698 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ndrd6" podUID="3e403fc5-7005-474c-8c75-b7906b481677" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 15:42:27 crc kubenswrapper[4698]: I0127 15:42:27.452214 4698 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-ndrd6" Jan 27 15:42:27 crc kubenswrapper[4698]: I0127 15:42:27.452742 4698 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"2b99e70ad64def6e0bae90fd33324e0259109fc31b002f7860f0218818ee6a4e"} pod="openshift-machine-config-operator/machine-config-daemon-ndrd6" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 27 15:42:27 crc kubenswrapper[4698]: I0127 15:42:27.452807 4698 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-ndrd6" podUID="3e403fc5-7005-474c-8c75-b7906b481677" containerName="machine-config-daemon" containerID="cri-o://2b99e70ad64def6e0bae90fd33324e0259109fc31b002f7860f0218818ee6a4e" gracePeriod=600 Jan 27 15:42:27 crc kubenswrapper[4698]: E0127 15:42:27.608508 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ndrd6_openshift-machine-config-operator(3e403fc5-7005-474c-8c75-b7906b481677)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-ndrd6" podUID="3e403fc5-7005-474c-8c75-b7906b481677" Jan 27 15:42:27 crc kubenswrapper[4698]: I0127 15:42:27.635283 4698 generic.go:334] "Generic (PLEG): container finished" podID="3e403fc5-7005-474c-8c75-b7906b481677" containerID="2b99e70ad64def6e0bae90fd33324e0259109fc31b002f7860f0218818ee6a4e" exitCode=0 Jan 27 15:42:27 crc kubenswrapper[4698]: I0127 15:42:27.635341 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ndrd6" event={"ID":"3e403fc5-7005-474c-8c75-b7906b481677","Type":"ContainerDied","Data":"2b99e70ad64def6e0bae90fd33324e0259109fc31b002f7860f0218818ee6a4e"} Jan 27 15:42:27 crc kubenswrapper[4698]: I0127 15:42:27.635380 4698 scope.go:117] "RemoveContainer" containerID="c3c99fc65dd6707b96677a0c5e922864d9b7f447d7ff5802b3944061d58f6b87" Jan 27 15:42:27 crc kubenswrapper[4698]: I0127 15:42:27.636472 4698 scope.go:117] "RemoveContainer" containerID="2b99e70ad64def6e0bae90fd33324e0259109fc31b002f7860f0218818ee6a4e" Jan 27 15:42:27 crc kubenswrapper[4698]: E0127 15:42:27.636857 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ndrd6_openshift-machine-config-operator(3e403fc5-7005-474c-8c75-b7906b481677)\"" pod="openshift-machine-config-operator/machine-config-daemon-ndrd6" podUID="3e403fc5-7005-474c-8c75-b7906b481677" Jan 27 15:42:42 crc kubenswrapper[4698]: I0127 15:42:42.992089 4698 scope.go:117] "RemoveContainer" containerID="2b99e70ad64def6e0bae90fd33324e0259109fc31b002f7860f0218818ee6a4e" Jan 27 15:42:42 crc kubenswrapper[4698]: E0127 15:42:42.993037 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ndrd6_openshift-machine-config-operator(3e403fc5-7005-474c-8c75-b7906b481677)\"" pod="openshift-machine-config-operator/machine-config-daemon-ndrd6" podUID="3e403fc5-7005-474c-8c75-b7906b481677" Jan 27 15:42:55 crc kubenswrapper[4698]: I0127 15:42:55.992555 4698 scope.go:117] "RemoveContainer" containerID="2b99e70ad64def6e0bae90fd33324e0259109fc31b002f7860f0218818ee6a4e" Jan 27 15:42:55 crc kubenswrapper[4698]: E0127 15:42:55.993489 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ndrd6_openshift-machine-config-operator(3e403fc5-7005-474c-8c75-b7906b481677)\"" pod="openshift-machine-config-operator/machine-config-daemon-ndrd6" podUID="3e403fc5-7005-474c-8c75-b7906b481677" Jan 27 15:43:10 crc kubenswrapper[4698]: I0127 15:43:10.992574 4698 scope.go:117] "RemoveContainer" containerID="2b99e70ad64def6e0bae90fd33324e0259109fc31b002f7860f0218818ee6a4e" Jan 27 15:43:10 crc kubenswrapper[4698]: E0127 15:43:10.993382 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ndrd6_openshift-machine-config-operator(3e403fc5-7005-474c-8c75-b7906b481677)\"" pod="openshift-machine-config-operator/machine-config-daemon-ndrd6" 
podUID="3e403fc5-7005-474c-8c75-b7906b481677" Jan 27 15:43:18 crc kubenswrapper[4698]: I0127 15:43:18.641575 4698 scope.go:117] "RemoveContainer" containerID="1d0d89d17e23309a8e9e41dfd097c530c62947baaddaaa0fb3b85375a506bc4f" Jan 27 15:43:19 crc kubenswrapper[4698]: I0127 15:43:19.429131 4698 scope.go:117] "RemoveContainer" containerID="f6c2d808b087f47e1687bf4721f53b8177cedc09d4123fc08c286a4bd04088ac" Jan 27 15:43:19 crc kubenswrapper[4698]: I0127 15:43:19.581081 4698 scope.go:117] "RemoveContainer" containerID="e61e1a5350c38f2a9b49dd108a04de6be6820684b42227240823bd0534130745" Jan 27 15:43:24 crc kubenswrapper[4698]: I0127 15:43:24.999069 4698 scope.go:117] "RemoveContainer" containerID="2b99e70ad64def6e0bae90fd33324e0259109fc31b002f7860f0218818ee6a4e" Jan 27 15:43:25 crc kubenswrapper[4698]: E0127 15:43:24.999878 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ndrd6_openshift-machine-config-operator(3e403fc5-7005-474c-8c75-b7906b481677)\"" pod="openshift-machine-config-operator/machine-config-daemon-ndrd6" podUID="3e403fc5-7005-474c-8c75-b7906b481677" Jan 27 15:43:36 crc kubenswrapper[4698]: I0127 15:43:36.992682 4698 scope.go:117] "RemoveContainer" containerID="2b99e70ad64def6e0bae90fd33324e0259109fc31b002f7860f0218818ee6a4e" Jan 27 15:43:36 crc kubenswrapper[4698]: E0127 15:43:36.994242 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ndrd6_openshift-machine-config-operator(3e403fc5-7005-474c-8c75-b7906b481677)\"" pod="openshift-machine-config-operator/machine-config-daemon-ndrd6" podUID="3e403fc5-7005-474c-8c75-b7906b481677" Jan 27 15:43:49 crc kubenswrapper[4698]: I0127 15:43:49.992022 4698 scope.go:117] "RemoveContainer" containerID="2b99e70ad64def6e0bae90fd33324e0259109fc31b002f7860f0218818ee6a4e" Jan 27 15:43:49 crc kubenswrapper[4698]: E0127 15:43:49.992831 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ndrd6_openshift-machine-config-operator(3e403fc5-7005-474c-8c75-b7906b481677)\"" pod="openshift-machine-config-operator/machine-config-daemon-ndrd6" podUID="3e403fc5-7005-474c-8c75-b7906b481677" Jan 27 15:44:02 crc kubenswrapper[4698]: I0127 15:44:02.998910 4698 scope.go:117] "RemoveContainer" containerID="2b99e70ad64def6e0bae90fd33324e0259109fc31b002f7860f0218818ee6a4e" Jan 27 15:44:03 crc kubenswrapper[4698]: E0127 15:44:02.999856 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ndrd6_openshift-machine-config-operator(3e403fc5-7005-474c-8c75-b7906b481677)\"" pod="openshift-machine-config-operator/machine-config-daemon-ndrd6" podUID="3e403fc5-7005-474c-8c75-b7906b481677" Jan 27 15:44:13 crc kubenswrapper[4698]: I0127 15:44:13.992702 4698 scope.go:117] "RemoveContainer" containerID="2b99e70ad64def6e0bae90fd33324e0259109fc31b002f7860f0218818ee6a4e" Jan 27 15:44:13 crc kubenswrapper[4698]: E0127 15:44:13.994250 4698 pod_workers.go:1301] "Error syncing 
pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ndrd6_openshift-machine-config-operator(3e403fc5-7005-474c-8c75-b7906b481677)\"" pod="openshift-machine-config-operator/machine-config-daemon-ndrd6" podUID="3e403fc5-7005-474c-8c75-b7906b481677" Jan 27 15:44:27 crc kubenswrapper[4698]: I0127 15:44:27.993612 4698 scope.go:117] "RemoveContainer" containerID="2b99e70ad64def6e0bae90fd33324e0259109fc31b002f7860f0218818ee6a4e" Jan 27 15:44:27 crc kubenswrapper[4698]: E0127 15:44:27.994743 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ndrd6_openshift-machine-config-operator(3e403fc5-7005-474c-8c75-b7906b481677)\"" pod="openshift-machine-config-operator/machine-config-daemon-ndrd6" podUID="3e403fc5-7005-474c-8c75-b7906b481677" Jan 27 15:44:39 crc kubenswrapper[4698]: I0127 15:44:39.993708 4698 scope.go:117] "RemoveContainer" containerID="2b99e70ad64def6e0bae90fd33324e0259109fc31b002f7860f0218818ee6a4e" Jan 27 15:44:39 crc kubenswrapper[4698]: E0127 15:44:39.996359 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ndrd6_openshift-machine-config-operator(3e403fc5-7005-474c-8c75-b7906b481677)\"" pod="openshift-machine-config-operator/machine-config-daemon-ndrd6" podUID="3e403fc5-7005-474c-8c75-b7906b481677" Jan 27 15:44:51 crc kubenswrapper[4698]: I0127 15:44:51.992578 4698 scope.go:117] "RemoveContainer" containerID="2b99e70ad64def6e0bae90fd33324e0259109fc31b002f7860f0218818ee6a4e" Jan 27 15:44:51 crc kubenswrapper[4698]: E0127 15:44:51.993767 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ndrd6_openshift-machine-config-operator(3e403fc5-7005-474c-8c75-b7906b481677)\"" pod="openshift-machine-config-operator/machine-config-daemon-ndrd6" podUID="3e403fc5-7005-474c-8c75-b7906b481677" Jan 27 15:45:00 crc kubenswrapper[4698]: I0127 15:45:00.215794 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29492145-df6gj"] Jan 27 15:45:00 crc kubenswrapper[4698]: E0127 15:45:00.217125 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e37eb15d-93ee-4607-92f4-02457685a69c" containerName="registry-server" Jan 27 15:45:00 crc kubenswrapper[4698]: I0127 15:45:00.217143 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="e37eb15d-93ee-4607-92f4-02457685a69c" containerName="registry-server" Jan 27 15:45:00 crc kubenswrapper[4698]: E0127 15:45:00.217165 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e37eb15d-93ee-4607-92f4-02457685a69c" containerName="extract-content" Jan 27 15:45:00 crc kubenswrapper[4698]: I0127 15:45:00.217172 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="e37eb15d-93ee-4607-92f4-02457685a69c" containerName="extract-content" Jan 27 15:45:00 crc kubenswrapper[4698]: E0127 15:45:00.217219 4698 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="e37eb15d-93ee-4607-92f4-02457685a69c" containerName="extract-utilities" Jan 27 15:45:00 crc kubenswrapper[4698]: I0127 15:45:00.217227 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="e37eb15d-93ee-4607-92f4-02457685a69c" containerName="extract-utilities" Jan 27 15:45:00 crc kubenswrapper[4698]: I0127 15:45:00.217864 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="e37eb15d-93ee-4607-92f4-02457685a69c" containerName="registry-server" Jan 27 15:45:00 crc kubenswrapper[4698]: I0127 15:45:00.219016 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29492145-df6gj" Jan 27 15:45:00 crc kubenswrapper[4698]: I0127 15:45:00.222809 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 27 15:45:00 crc kubenswrapper[4698]: I0127 15:45:00.223279 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 27 15:45:00 crc kubenswrapper[4698]: I0127 15:45:00.237459 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29492145-df6gj"] Jan 27 15:45:00 crc kubenswrapper[4698]: I0127 15:45:00.287901 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-blmj2\" (UniqueName: \"kubernetes.io/projected/40912b04-ad2e-4915-adb0-99a2170171e3-kube-api-access-blmj2\") pod \"collect-profiles-29492145-df6gj\" (UID: \"40912b04-ad2e-4915-adb0-99a2170171e3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492145-df6gj" Jan 27 15:45:00 crc kubenswrapper[4698]: I0127 15:45:00.288078 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/40912b04-ad2e-4915-adb0-99a2170171e3-config-volume\") pod \"collect-profiles-29492145-df6gj\" (UID: \"40912b04-ad2e-4915-adb0-99a2170171e3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492145-df6gj" Jan 27 15:45:00 crc kubenswrapper[4698]: I0127 15:45:00.288205 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/40912b04-ad2e-4915-adb0-99a2170171e3-secret-volume\") pod \"collect-profiles-29492145-df6gj\" (UID: \"40912b04-ad2e-4915-adb0-99a2170171e3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492145-df6gj" Jan 27 15:45:00 crc kubenswrapper[4698]: I0127 15:45:00.389684 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-blmj2\" (UniqueName: \"kubernetes.io/projected/40912b04-ad2e-4915-adb0-99a2170171e3-kube-api-access-blmj2\") pod \"collect-profiles-29492145-df6gj\" (UID: \"40912b04-ad2e-4915-adb0-99a2170171e3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492145-df6gj" Jan 27 15:45:00 crc kubenswrapper[4698]: I0127 15:45:00.389785 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/40912b04-ad2e-4915-adb0-99a2170171e3-config-volume\") pod \"collect-profiles-29492145-df6gj\" (UID: \"40912b04-ad2e-4915-adb0-99a2170171e3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492145-df6gj" Jan 27 15:45:00 crc kubenswrapper[4698]: I0127 15:45:00.389851 4698 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/40912b04-ad2e-4915-adb0-99a2170171e3-secret-volume\") pod \"collect-profiles-29492145-df6gj\" (UID: \"40912b04-ad2e-4915-adb0-99a2170171e3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492145-df6gj" Jan 27 15:45:00 crc kubenswrapper[4698]: I0127 15:45:00.390850 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/40912b04-ad2e-4915-adb0-99a2170171e3-config-volume\") pod \"collect-profiles-29492145-df6gj\" (UID: \"40912b04-ad2e-4915-adb0-99a2170171e3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492145-df6gj" Jan 27 15:45:00 crc kubenswrapper[4698]: I0127 15:45:00.407407 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/40912b04-ad2e-4915-adb0-99a2170171e3-secret-volume\") pod \"collect-profiles-29492145-df6gj\" (UID: \"40912b04-ad2e-4915-adb0-99a2170171e3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492145-df6gj" Jan 27 15:45:00 crc kubenswrapper[4698]: I0127 15:45:00.447551 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-blmj2\" (UniqueName: \"kubernetes.io/projected/40912b04-ad2e-4915-adb0-99a2170171e3-kube-api-access-blmj2\") pod \"collect-profiles-29492145-df6gj\" (UID: \"40912b04-ad2e-4915-adb0-99a2170171e3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492145-df6gj" Jan 27 15:45:00 crc kubenswrapper[4698]: I0127 15:45:00.552620 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29492145-df6gj" Jan 27 15:45:01 crc kubenswrapper[4698]: I0127 15:45:01.096057 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29492145-df6gj"] Jan 27 15:45:02 crc kubenswrapper[4698]: I0127 15:45:02.016325 4698 generic.go:334] "Generic (PLEG): container finished" podID="40912b04-ad2e-4915-adb0-99a2170171e3" containerID="43a83590b9aba44755de6d1f8c1e5dc81e0b7ca8f8a9365b5dc9091e1aa9070b" exitCode=0 Jan 27 15:45:02 crc kubenswrapper[4698]: I0127 15:45:02.016378 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29492145-df6gj" event={"ID":"40912b04-ad2e-4915-adb0-99a2170171e3","Type":"ContainerDied","Data":"43a83590b9aba44755de6d1f8c1e5dc81e0b7ca8f8a9365b5dc9091e1aa9070b"} Jan 27 15:45:02 crc kubenswrapper[4698]: I0127 15:45:02.016695 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29492145-df6gj" event={"ID":"40912b04-ad2e-4915-adb0-99a2170171e3","Type":"ContainerStarted","Data":"5d292cc375331f56ee9a7ed7a8d098e4e8709b1aaf65d6da9ac68ebebc0cd666"} Jan 27 15:45:02 crc kubenswrapper[4698]: I0127 15:45:02.993309 4698 scope.go:117] "RemoveContainer" containerID="2b99e70ad64def6e0bae90fd33324e0259109fc31b002f7860f0218818ee6a4e" Jan 27 15:45:02 crc kubenswrapper[4698]: E0127 15:45:02.993888 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ndrd6_openshift-machine-config-operator(3e403fc5-7005-474c-8c75-b7906b481677)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-ndrd6" podUID="3e403fc5-7005-474c-8c75-b7906b481677" Jan 27 15:45:03 crc kubenswrapper[4698]: I0127 15:45:03.481274 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29492145-df6gj" Jan 27 15:45:03 crc kubenswrapper[4698]: I0127 15:45:03.569459 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-blmj2\" (UniqueName: \"kubernetes.io/projected/40912b04-ad2e-4915-adb0-99a2170171e3-kube-api-access-blmj2\") pod \"40912b04-ad2e-4915-adb0-99a2170171e3\" (UID: \"40912b04-ad2e-4915-adb0-99a2170171e3\") " Jan 27 15:45:03 crc kubenswrapper[4698]: I0127 15:45:03.569613 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/40912b04-ad2e-4915-adb0-99a2170171e3-secret-volume\") pod \"40912b04-ad2e-4915-adb0-99a2170171e3\" (UID: \"40912b04-ad2e-4915-adb0-99a2170171e3\") " Jan 27 15:45:03 crc kubenswrapper[4698]: I0127 15:45:03.569796 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/40912b04-ad2e-4915-adb0-99a2170171e3-config-volume\") pod \"40912b04-ad2e-4915-adb0-99a2170171e3\" (UID: \"40912b04-ad2e-4915-adb0-99a2170171e3\") " Jan 27 15:45:03 crc kubenswrapper[4698]: I0127 15:45:03.570532 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/40912b04-ad2e-4915-adb0-99a2170171e3-config-volume" (OuterVolumeSpecName: "config-volume") pod "40912b04-ad2e-4915-adb0-99a2170171e3" (UID: "40912b04-ad2e-4915-adb0-99a2170171e3"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 15:45:03 crc kubenswrapper[4698]: I0127 15:45:03.575623 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/40912b04-ad2e-4915-adb0-99a2170171e3-kube-api-access-blmj2" (OuterVolumeSpecName: "kube-api-access-blmj2") pod "40912b04-ad2e-4915-adb0-99a2170171e3" (UID: "40912b04-ad2e-4915-adb0-99a2170171e3"). InnerVolumeSpecName "kube-api-access-blmj2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 15:45:03 crc kubenswrapper[4698]: I0127 15:45:03.579502 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/40912b04-ad2e-4915-adb0-99a2170171e3-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "40912b04-ad2e-4915-adb0-99a2170171e3" (UID: "40912b04-ad2e-4915-adb0-99a2170171e3"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 15:45:03 crc kubenswrapper[4698]: I0127 15:45:03.672444 4698 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/40912b04-ad2e-4915-adb0-99a2170171e3-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 27 15:45:03 crc kubenswrapper[4698]: I0127 15:45:03.672768 4698 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/40912b04-ad2e-4915-adb0-99a2170171e3-config-volume\") on node \"crc\" DevicePath \"\"" Jan 27 15:45:03 crc kubenswrapper[4698]: I0127 15:45:03.672782 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-blmj2\" (UniqueName: \"kubernetes.io/projected/40912b04-ad2e-4915-adb0-99a2170171e3-kube-api-access-blmj2\") on node \"crc\" DevicePath \"\"" Jan 27 15:45:04 crc kubenswrapper[4698]: I0127 15:45:04.038754 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29492145-df6gj" event={"ID":"40912b04-ad2e-4915-adb0-99a2170171e3","Type":"ContainerDied","Data":"5d292cc375331f56ee9a7ed7a8d098e4e8709b1aaf65d6da9ac68ebebc0cd666"} Jan 27 15:45:04 crc kubenswrapper[4698]: I0127 15:45:04.038792 4698 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5d292cc375331f56ee9a7ed7a8d098e4e8709b1aaf65d6da9ac68ebebc0cd666" Jan 27 15:45:04 crc kubenswrapper[4698]: I0127 15:45:04.038916 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29492145-df6gj" Jan 27 15:45:04 crc kubenswrapper[4698]: I0127 15:45:04.568250 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29492100-pk6ts"] Jan 27 15:45:04 crc kubenswrapper[4698]: I0127 15:45:04.598688 4698 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29492100-pk6ts"] Jan 27 15:45:05 crc kubenswrapper[4698]: I0127 15:45:05.008233 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0bb5bc37-29d1-4af4-afb2-cd803fb9e924" path="/var/lib/kubelet/pods/0bb5bc37-29d1-4af4-afb2-cd803fb9e924/volumes" Jan 27 15:45:17 crc kubenswrapper[4698]: I0127 15:45:17.991750 4698 scope.go:117] "RemoveContainer" containerID="2b99e70ad64def6e0bae90fd33324e0259109fc31b002f7860f0218818ee6a4e" Jan 27 15:45:17 crc kubenswrapper[4698]: E0127 15:45:17.992734 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ndrd6_openshift-machine-config-operator(3e403fc5-7005-474c-8c75-b7906b481677)\"" pod="openshift-machine-config-operator/machine-config-daemon-ndrd6" podUID="3e403fc5-7005-474c-8c75-b7906b481677" Jan 27 15:45:19 crc kubenswrapper[4698]: I0127 15:45:19.678906 4698 scope.go:117] "RemoveContainer" containerID="7a49e9cf65ad15137641b5d7d57c569129fd4cb2131ecfe813df437d88c44a70" Jan 27 15:45:29 crc kubenswrapper[4698]: I0127 15:45:29.992407 4698 scope.go:117] "RemoveContainer" containerID="2b99e70ad64def6e0bae90fd33324e0259109fc31b002f7860f0218818ee6a4e" Jan 27 15:45:29 crc kubenswrapper[4698]: E0127 15:45:29.993298 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-ndrd6_openshift-machine-config-operator(3e403fc5-7005-474c-8c75-b7906b481677)\"" pod="openshift-machine-config-operator/machine-config-daemon-ndrd6" podUID="3e403fc5-7005-474c-8c75-b7906b481677" Jan 27 15:45:41 crc kubenswrapper[4698]: I0127 15:45:41.993164 4698 scope.go:117] "RemoveContainer" containerID="2b99e70ad64def6e0bae90fd33324e0259109fc31b002f7860f0218818ee6a4e" Jan 27 15:45:41 crc kubenswrapper[4698]: E0127 15:45:41.993993 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ndrd6_openshift-machine-config-operator(3e403fc5-7005-474c-8c75-b7906b481677)\"" pod="openshift-machine-config-operator/machine-config-daemon-ndrd6" podUID="3e403fc5-7005-474c-8c75-b7906b481677" Jan 27 15:45:55 crc kubenswrapper[4698]: I0127 15:45:55.992554 4698 scope.go:117] "RemoveContainer" containerID="2b99e70ad64def6e0bae90fd33324e0259109fc31b002f7860f0218818ee6a4e" Jan 27 15:45:55 crc kubenswrapper[4698]: E0127 15:45:55.993775 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ndrd6_openshift-machine-config-operator(3e403fc5-7005-474c-8c75-b7906b481677)\"" pod="openshift-machine-config-operator/machine-config-daemon-ndrd6" podUID="3e403fc5-7005-474c-8c75-b7906b481677" Jan 27 15:46:07 crc kubenswrapper[4698]: I0127 15:46:07.992877 4698 scope.go:117] "RemoveContainer" containerID="2b99e70ad64def6e0bae90fd33324e0259109fc31b002f7860f0218818ee6a4e" Jan 27 15:46:07 crc kubenswrapper[4698]: E0127 15:46:07.999376 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ndrd6_openshift-machine-config-operator(3e403fc5-7005-474c-8c75-b7906b481677)\"" pod="openshift-machine-config-operator/machine-config-daemon-ndrd6" podUID="3e403fc5-7005-474c-8c75-b7906b481677" Jan 27 15:46:08 crc kubenswrapper[4698]: I0127 15:46:08.490894 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-czp7w"] Jan 27 15:46:08 crc kubenswrapper[4698]: E0127 15:46:08.491577 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="40912b04-ad2e-4915-adb0-99a2170171e3" containerName="collect-profiles" Jan 27 15:46:08 crc kubenswrapper[4698]: I0127 15:46:08.491596 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="40912b04-ad2e-4915-adb0-99a2170171e3" containerName="collect-profiles" Jan 27 15:46:08 crc kubenswrapper[4698]: I0127 15:46:08.491847 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="40912b04-ad2e-4915-adb0-99a2170171e3" containerName="collect-profiles" Jan 27 15:46:08 crc kubenswrapper[4698]: I0127 15:46:08.493346 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-czp7w" Jan 27 15:46:08 crc kubenswrapper[4698]: I0127 15:46:08.506003 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-czp7w"] Jan 27 15:46:08 crc kubenswrapper[4698]: I0127 15:46:08.587345 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/32a33d64-948b-411f-b76d-87d24b867ece-utilities\") pod \"certified-operators-czp7w\" (UID: \"32a33d64-948b-411f-b76d-87d24b867ece\") " pod="openshift-marketplace/certified-operators-czp7w" Jan 27 15:46:08 crc kubenswrapper[4698]: I0127 15:46:08.587408 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/32a33d64-948b-411f-b76d-87d24b867ece-catalog-content\") pod \"certified-operators-czp7w\" (UID: \"32a33d64-948b-411f-b76d-87d24b867ece\") " pod="openshift-marketplace/certified-operators-czp7w" Jan 27 15:46:08 crc kubenswrapper[4698]: I0127 15:46:08.587479 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-44t52\" (UniqueName: \"kubernetes.io/projected/32a33d64-948b-411f-b76d-87d24b867ece-kube-api-access-44t52\") pod \"certified-operators-czp7w\" (UID: \"32a33d64-948b-411f-b76d-87d24b867ece\") " pod="openshift-marketplace/certified-operators-czp7w" Jan 27 15:46:08 crc kubenswrapper[4698]: I0127 15:46:08.689243 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/32a33d64-948b-411f-b76d-87d24b867ece-utilities\") pod \"certified-operators-czp7w\" (UID: \"32a33d64-948b-411f-b76d-87d24b867ece\") " pod="openshift-marketplace/certified-operators-czp7w" Jan 27 15:46:08 crc kubenswrapper[4698]: I0127 15:46:08.689306 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/32a33d64-948b-411f-b76d-87d24b867ece-catalog-content\") pod \"certified-operators-czp7w\" (UID: \"32a33d64-948b-411f-b76d-87d24b867ece\") " pod="openshift-marketplace/certified-operators-czp7w" Jan 27 15:46:08 crc kubenswrapper[4698]: I0127 15:46:08.689386 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-44t52\" (UniqueName: \"kubernetes.io/projected/32a33d64-948b-411f-b76d-87d24b867ece-kube-api-access-44t52\") pod \"certified-operators-czp7w\" (UID: \"32a33d64-948b-411f-b76d-87d24b867ece\") " pod="openshift-marketplace/certified-operators-czp7w" Jan 27 15:46:08 crc kubenswrapper[4698]: I0127 15:46:08.690173 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/32a33d64-948b-411f-b76d-87d24b867ece-utilities\") pod \"certified-operators-czp7w\" (UID: \"32a33d64-948b-411f-b76d-87d24b867ece\") " pod="openshift-marketplace/certified-operators-czp7w" Jan 27 15:46:08 crc kubenswrapper[4698]: I0127 15:46:08.690220 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/32a33d64-948b-411f-b76d-87d24b867ece-catalog-content\") pod \"certified-operators-czp7w\" (UID: \"32a33d64-948b-411f-b76d-87d24b867ece\") " pod="openshift-marketplace/certified-operators-czp7w" Jan 27 15:46:08 crc kubenswrapper[4698]: I0127 15:46:08.710939 4698 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-44t52\" (UniqueName: \"kubernetes.io/projected/32a33d64-948b-411f-b76d-87d24b867ece-kube-api-access-44t52\") pod \"certified-operators-czp7w\" (UID: \"32a33d64-948b-411f-b76d-87d24b867ece\") " pod="openshift-marketplace/certified-operators-czp7w" Jan 27 15:46:08 crc kubenswrapper[4698]: I0127 15:46:08.821145 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-czp7w" Jan 27 15:46:09 crc kubenswrapper[4698]: I0127 15:46:09.313110 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-czp7w"] Jan 27 15:46:09 crc kubenswrapper[4698]: I0127 15:46:09.722117 4698 generic.go:334] "Generic (PLEG): container finished" podID="32a33d64-948b-411f-b76d-87d24b867ece" containerID="62711d4a480bdd000ab36e0a191054e80c39635a243e564ee5ed69a20ee532d7" exitCode=0 Jan 27 15:46:09 crc kubenswrapper[4698]: I0127 15:46:09.722227 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-czp7w" event={"ID":"32a33d64-948b-411f-b76d-87d24b867ece","Type":"ContainerDied","Data":"62711d4a480bdd000ab36e0a191054e80c39635a243e564ee5ed69a20ee532d7"} Jan 27 15:46:09 crc kubenswrapper[4698]: I0127 15:46:09.722755 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-czp7w" event={"ID":"32a33d64-948b-411f-b76d-87d24b867ece","Type":"ContainerStarted","Data":"52c5b21f07f6090b586044ea32205f62a3bd955515c805fe26977d6731fb7828"} Jan 27 15:46:10 crc kubenswrapper[4698]: I0127 15:46:10.736368 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-czp7w" event={"ID":"32a33d64-948b-411f-b76d-87d24b867ece","Type":"ContainerStarted","Data":"a19756f248f3e70c6e3ec819ebd5b6d2b3eb42752333df773f382d7f0c42427e"} Jan 27 15:46:11 crc kubenswrapper[4698]: I0127 15:46:11.746745 4698 generic.go:334] "Generic (PLEG): container finished" podID="32a33d64-948b-411f-b76d-87d24b867ece" containerID="a19756f248f3e70c6e3ec819ebd5b6d2b3eb42752333df773f382d7f0c42427e" exitCode=0 Jan 27 15:46:11 crc kubenswrapper[4698]: I0127 15:46:11.746821 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-czp7w" event={"ID":"32a33d64-948b-411f-b76d-87d24b867ece","Type":"ContainerDied","Data":"a19756f248f3e70c6e3ec819ebd5b6d2b3eb42752333df773f382d7f0c42427e"} Jan 27 15:46:12 crc kubenswrapper[4698]: I0127 15:46:12.757116 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-czp7w" event={"ID":"32a33d64-948b-411f-b76d-87d24b867ece","Type":"ContainerStarted","Data":"71c2c61a88fd6c6d330cd211bd3dd8ac953b82b5ab135ded48b576ba946321b9"} Jan 27 15:46:12 crc kubenswrapper[4698]: I0127 15:46:12.780161 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-czp7w" podStartSLOduration=2.228185219 podStartE2EDuration="4.780142708s" podCreationTimestamp="2026-01-27 15:46:08 +0000 UTC" firstStartedPulling="2026-01-27 15:46:09.723912322 +0000 UTC m=+4625.400689787" lastFinishedPulling="2026-01-27 15:46:12.275869811 +0000 UTC m=+4627.952647276" observedRunningTime="2026-01-27 15:46:12.774993501 +0000 UTC m=+4628.451770966" watchObservedRunningTime="2026-01-27 15:46:12.780142708 +0000 UTC m=+4628.456920163" Jan 27 15:46:18 crc kubenswrapper[4698]: I0127 15:46:18.821887 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="unhealthy" pod="openshift-marketplace/certified-operators-czp7w" Jan 27 15:46:18 crc kubenswrapper[4698]: I0127 15:46:18.822189 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-czp7w" Jan 27 15:46:18 crc kubenswrapper[4698]: I0127 15:46:18.876300 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-czp7w" Jan 27 15:46:19 crc kubenswrapper[4698]: I0127 15:46:19.875469 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-czp7w" Jan 27 15:46:19 crc kubenswrapper[4698]: I0127 15:46:19.922488 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-czp7w"] Jan 27 15:46:19 crc kubenswrapper[4698]: I0127 15:46:19.992766 4698 scope.go:117] "RemoveContainer" containerID="2b99e70ad64def6e0bae90fd33324e0259109fc31b002f7860f0218818ee6a4e" Jan 27 15:46:19 crc kubenswrapper[4698]: E0127 15:46:19.993128 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ndrd6_openshift-machine-config-operator(3e403fc5-7005-474c-8c75-b7906b481677)\"" pod="openshift-machine-config-operator/machine-config-daemon-ndrd6" podUID="3e403fc5-7005-474c-8c75-b7906b481677" Jan 27 15:46:21 crc kubenswrapper[4698]: I0127 15:46:21.839993 4698 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-czp7w" podUID="32a33d64-948b-411f-b76d-87d24b867ece" containerName="registry-server" containerID="cri-o://71c2c61a88fd6c6d330cd211bd3dd8ac953b82b5ab135ded48b576ba946321b9" gracePeriod=2 Jan 27 15:46:22 crc kubenswrapper[4698]: I0127 15:46:22.305759 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-czp7w" Jan 27 15:46:22 crc kubenswrapper[4698]: I0127 15:46:22.370149 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/32a33d64-948b-411f-b76d-87d24b867ece-catalog-content\") pod \"32a33d64-948b-411f-b76d-87d24b867ece\" (UID: \"32a33d64-948b-411f-b76d-87d24b867ece\") " Jan 27 15:46:22 crc kubenswrapper[4698]: I0127 15:46:22.370402 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-44t52\" (UniqueName: \"kubernetes.io/projected/32a33d64-948b-411f-b76d-87d24b867ece-kube-api-access-44t52\") pod \"32a33d64-948b-411f-b76d-87d24b867ece\" (UID: \"32a33d64-948b-411f-b76d-87d24b867ece\") " Jan 27 15:46:22 crc kubenswrapper[4698]: I0127 15:46:22.370440 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/32a33d64-948b-411f-b76d-87d24b867ece-utilities\") pod \"32a33d64-948b-411f-b76d-87d24b867ece\" (UID: \"32a33d64-948b-411f-b76d-87d24b867ece\") " Jan 27 15:46:22 crc kubenswrapper[4698]: I0127 15:46:22.371137 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/32a33d64-948b-411f-b76d-87d24b867ece-utilities" (OuterVolumeSpecName: "utilities") pod "32a33d64-948b-411f-b76d-87d24b867ece" (UID: "32a33d64-948b-411f-b76d-87d24b867ece"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 15:46:22 crc kubenswrapper[4698]: I0127 15:46:22.413245 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/32a33d64-948b-411f-b76d-87d24b867ece-kube-api-access-44t52" (OuterVolumeSpecName: "kube-api-access-44t52") pod "32a33d64-948b-411f-b76d-87d24b867ece" (UID: "32a33d64-948b-411f-b76d-87d24b867ece"). InnerVolumeSpecName "kube-api-access-44t52". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 15:46:22 crc kubenswrapper[4698]: I0127 15:46:22.413936 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/32a33d64-948b-411f-b76d-87d24b867ece-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "32a33d64-948b-411f-b76d-87d24b867ece" (UID: "32a33d64-948b-411f-b76d-87d24b867ece"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 15:46:22 crc kubenswrapper[4698]: I0127 15:46:22.473771 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-44t52\" (UniqueName: \"kubernetes.io/projected/32a33d64-948b-411f-b76d-87d24b867ece-kube-api-access-44t52\") on node \"crc\" DevicePath \"\"" Jan 27 15:46:22 crc kubenswrapper[4698]: I0127 15:46:22.473824 4698 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/32a33d64-948b-411f-b76d-87d24b867ece-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 15:46:22 crc kubenswrapper[4698]: I0127 15:46:22.473840 4698 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/32a33d64-948b-411f-b76d-87d24b867ece-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 15:46:22 crc kubenswrapper[4698]: I0127 15:46:22.850021 4698 generic.go:334] "Generic (PLEG): container finished" podID="32a33d64-948b-411f-b76d-87d24b867ece" containerID="71c2c61a88fd6c6d330cd211bd3dd8ac953b82b5ab135ded48b576ba946321b9" exitCode=0 Jan 27 15:46:22 crc kubenswrapper[4698]: I0127 15:46:22.850074 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-czp7w" event={"ID":"32a33d64-948b-411f-b76d-87d24b867ece","Type":"ContainerDied","Data":"71c2c61a88fd6c6d330cd211bd3dd8ac953b82b5ab135ded48b576ba946321b9"} Jan 27 15:46:22 crc kubenswrapper[4698]: I0127 15:46:22.850127 4698 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-czp7w" Jan 27 15:46:22 crc kubenswrapper[4698]: I0127 15:46:22.850159 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-czp7w" event={"ID":"32a33d64-948b-411f-b76d-87d24b867ece","Type":"ContainerDied","Data":"52c5b21f07f6090b586044ea32205f62a3bd955515c805fe26977d6731fb7828"} Jan 27 15:46:22 crc kubenswrapper[4698]: I0127 15:46:22.850197 4698 scope.go:117] "RemoveContainer" containerID="71c2c61a88fd6c6d330cd211bd3dd8ac953b82b5ab135ded48b576ba946321b9" Jan 27 15:46:22 crc kubenswrapper[4698]: I0127 15:46:22.871560 4698 scope.go:117] "RemoveContainer" containerID="a19756f248f3e70c6e3ec819ebd5b6d2b3eb42752333df773f382d7f0c42427e" Jan 27 15:46:22 crc kubenswrapper[4698]: I0127 15:46:22.885553 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-czp7w"] Jan 27 15:46:22 crc kubenswrapper[4698]: I0127 15:46:22.894243 4698 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-czp7w"] Jan 27 15:46:22 crc kubenswrapper[4698]: I0127 15:46:22.902245 4698 scope.go:117] "RemoveContainer" containerID="62711d4a480bdd000ab36e0a191054e80c39635a243e564ee5ed69a20ee532d7" Jan 27 15:46:22 crc kubenswrapper[4698]: I0127 15:46:22.942080 4698 scope.go:117] "RemoveContainer" containerID="71c2c61a88fd6c6d330cd211bd3dd8ac953b82b5ab135ded48b576ba946321b9" Jan 27 15:46:22 crc kubenswrapper[4698]: E0127 15:46:22.942432 4698 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"71c2c61a88fd6c6d330cd211bd3dd8ac953b82b5ab135ded48b576ba946321b9\": container with ID starting with 71c2c61a88fd6c6d330cd211bd3dd8ac953b82b5ab135ded48b576ba946321b9 not found: ID does not exist" containerID="71c2c61a88fd6c6d330cd211bd3dd8ac953b82b5ab135ded48b576ba946321b9" Jan 27 15:46:22 crc kubenswrapper[4698]: I0127 15:46:22.942464 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"71c2c61a88fd6c6d330cd211bd3dd8ac953b82b5ab135ded48b576ba946321b9"} err="failed to get container status \"71c2c61a88fd6c6d330cd211bd3dd8ac953b82b5ab135ded48b576ba946321b9\": rpc error: code = NotFound desc = could not find container \"71c2c61a88fd6c6d330cd211bd3dd8ac953b82b5ab135ded48b576ba946321b9\": container with ID starting with 71c2c61a88fd6c6d330cd211bd3dd8ac953b82b5ab135ded48b576ba946321b9 not found: ID does not exist" Jan 27 15:46:22 crc kubenswrapper[4698]: I0127 15:46:22.942489 4698 scope.go:117] "RemoveContainer" containerID="a19756f248f3e70c6e3ec819ebd5b6d2b3eb42752333df773f382d7f0c42427e" Jan 27 15:46:22 crc kubenswrapper[4698]: E0127 15:46:22.942876 4698 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a19756f248f3e70c6e3ec819ebd5b6d2b3eb42752333df773f382d7f0c42427e\": container with ID starting with a19756f248f3e70c6e3ec819ebd5b6d2b3eb42752333df773f382d7f0c42427e not found: ID does not exist" containerID="a19756f248f3e70c6e3ec819ebd5b6d2b3eb42752333df773f382d7f0c42427e" Jan 27 15:46:22 crc kubenswrapper[4698]: I0127 15:46:22.942906 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a19756f248f3e70c6e3ec819ebd5b6d2b3eb42752333df773f382d7f0c42427e"} err="failed to get container status \"a19756f248f3e70c6e3ec819ebd5b6d2b3eb42752333df773f382d7f0c42427e\": rpc error: code = NotFound desc = could not find 
container \"a19756f248f3e70c6e3ec819ebd5b6d2b3eb42752333df773f382d7f0c42427e\": container with ID starting with a19756f248f3e70c6e3ec819ebd5b6d2b3eb42752333df773f382d7f0c42427e not found: ID does not exist" Jan 27 15:46:22 crc kubenswrapper[4698]: I0127 15:46:22.942922 4698 scope.go:117] "RemoveContainer" containerID="62711d4a480bdd000ab36e0a191054e80c39635a243e564ee5ed69a20ee532d7" Jan 27 15:46:22 crc kubenswrapper[4698]: E0127 15:46:22.943183 4698 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"62711d4a480bdd000ab36e0a191054e80c39635a243e564ee5ed69a20ee532d7\": container with ID starting with 62711d4a480bdd000ab36e0a191054e80c39635a243e564ee5ed69a20ee532d7 not found: ID does not exist" containerID="62711d4a480bdd000ab36e0a191054e80c39635a243e564ee5ed69a20ee532d7" Jan 27 15:46:22 crc kubenswrapper[4698]: I0127 15:46:22.943206 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"62711d4a480bdd000ab36e0a191054e80c39635a243e564ee5ed69a20ee532d7"} err="failed to get container status \"62711d4a480bdd000ab36e0a191054e80c39635a243e564ee5ed69a20ee532d7\": rpc error: code = NotFound desc = could not find container \"62711d4a480bdd000ab36e0a191054e80c39635a243e564ee5ed69a20ee532d7\": container with ID starting with 62711d4a480bdd000ab36e0a191054e80c39635a243e564ee5ed69a20ee532d7 not found: ID does not exist" Jan 27 15:46:23 crc kubenswrapper[4698]: I0127 15:46:23.002820 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="32a33d64-948b-411f-b76d-87d24b867ece" path="/var/lib/kubelet/pods/32a33d64-948b-411f-b76d-87d24b867ece/volumes" Jan 27 15:46:31 crc kubenswrapper[4698]: I0127 15:46:31.992308 4698 scope.go:117] "RemoveContainer" containerID="2b99e70ad64def6e0bae90fd33324e0259109fc31b002f7860f0218818ee6a4e" Jan 27 15:46:31 crc kubenswrapper[4698]: E0127 15:46:31.993183 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ndrd6_openshift-machine-config-operator(3e403fc5-7005-474c-8c75-b7906b481677)\"" pod="openshift-machine-config-operator/machine-config-daemon-ndrd6" podUID="3e403fc5-7005-474c-8c75-b7906b481677" Jan 27 15:46:45 crc kubenswrapper[4698]: I0127 15:46:45.001582 4698 scope.go:117] "RemoveContainer" containerID="2b99e70ad64def6e0bae90fd33324e0259109fc31b002f7860f0218818ee6a4e" Jan 27 15:46:45 crc kubenswrapper[4698]: E0127 15:46:45.002411 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ndrd6_openshift-machine-config-operator(3e403fc5-7005-474c-8c75-b7906b481677)\"" pod="openshift-machine-config-operator/machine-config-daemon-ndrd6" podUID="3e403fc5-7005-474c-8c75-b7906b481677" Jan 27 15:46:56 crc kubenswrapper[4698]: I0127 15:46:56.993505 4698 scope.go:117] "RemoveContainer" containerID="2b99e70ad64def6e0bae90fd33324e0259109fc31b002f7860f0218818ee6a4e" Jan 27 15:46:56 crc kubenswrapper[4698]: E0127 15:46:56.994484 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
Jan 27 15:46:31 crc kubenswrapper[4698]: I0127 15:46:31.992308 4698 scope.go:117] "RemoveContainer" containerID="2b99e70ad64def6e0bae90fd33324e0259109fc31b002f7860f0218818ee6a4e"
Jan 27 15:46:31 crc kubenswrapper[4698]: E0127 15:46:31.993183 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ndrd6_openshift-machine-config-operator(3e403fc5-7005-474c-8c75-b7906b481677)\"" pod="openshift-machine-config-operator/machine-config-daemon-ndrd6" podUID="3e403fc5-7005-474c-8c75-b7906b481677"
Jan 27 15:46:45 crc kubenswrapper[4698]: I0127 15:46:45.001582 4698 scope.go:117] "RemoveContainer" containerID="2b99e70ad64def6e0bae90fd33324e0259109fc31b002f7860f0218818ee6a4e"
Jan 27 15:46:45 crc kubenswrapper[4698]: E0127 15:46:45.002411 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ndrd6_openshift-machine-config-operator(3e403fc5-7005-474c-8c75-b7906b481677)\"" pod="openshift-machine-config-operator/machine-config-daemon-ndrd6" podUID="3e403fc5-7005-474c-8c75-b7906b481677"
Jan 27 15:46:56 crc kubenswrapper[4698]: I0127 15:46:56.993505 4698 scope.go:117] "RemoveContainer" containerID="2b99e70ad64def6e0bae90fd33324e0259109fc31b002f7860f0218818ee6a4e"
Jan 27 15:46:56 crc kubenswrapper[4698]: E0127 15:46:56.994484 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ndrd6_openshift-machine-config-operator(3e403fc5-7005-474c-8c75-b7906b481677)\"" pod="openshift-machine-config-operator/machine-config-daemon-ndrd6" podUID="3e403fc5-7005-474c-8c75-b7906b481677"
Jan 27 15:47:10 crc kubenswrapper[4698]: I0127 15:47:10.992870 4698 scope.go:117] "RemoveContainer" containerID="2b99e70ad64def6e0bae90fd33324e0259109fc31b002f7860f0218818ee6a4e"
Jan 27 15:47:10 crc kubenswrapper[4698]: E0127 15:47:10.995051 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ndrd6_openshift-machine-config-operator(3e403fc5-7005-474c-8c75-b7906b481677)\"" pod="openshift-machine-config-operator/machine-config-daemon-ndrd6" podUID="3e403fc5-7005-474c-8c75-b7906b481677"
Jan 27 15:47:24 crc kubenswrapper[4698]: I0127 15:47:24.997785 4698 scope.go:117] "RemoveContainer" containerID="2b99e70ad64def6e0bae90fd33324e0259109fc31b002f7860f0218818ee6a4e"
Jan 27 15:47:24 crc kubenswrapper[4698]: E0127 15:47:24.998573 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ndrd6_openshift-machine-config-operator(3e403fc5-7005-474c-8c75-b7906b481677)\"" pod="openshift-machine-config-operator/machine-config-daemon-ndrd6" podUID="3e403fc5-7005-474c-8c75-b7906b481677"
Jan 27 15:47:36 crc kubenswrapper[4698]: I0127 15:47:36.993542 4698 scope.go:117] "RemoveContainer" containerID="2b99e70ad64def6e0bae90fd33324e0259109fc31b002f7860f0218818ee6a4e"
Jan 27 15:47:37 crc kubenswrapper[4698]: I0127 15:47:37.477934 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ndrd6" event={"ID":"3e403fc5-7005-474c-8c75-b7906b481677","Type":"ContainerStarted","Data":"156f30c1ec1a6beb6da54f6304a76d6701efde19274d84c3b1f081da77615216"}
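[Annotation] The run above is CrashLoopBackOff at its cap: each sync attempt is skipped with "back-off 5m0s restarting failed container" until the backoff window expires, after which the 15:47:37 ContainerStarted event shows the restart finally going through. A sketch of the doubling backoff implied by that message; the 10s base is the kubelet's usual default and is an assumption here, since the log only states the 5m cap:

```go
package main

import (
	"fmt"
	"time"
)

// backoff returns the restart delay after n consecutive crashes:
// a doubling backoff capped at the 5m0s quoted in the log.
func backoff(n int) time.Duration {
	d := 10 * time.Second // assumed base, not stated in the log
	for i := 0; i < n; i++ {
		d *= 2
		if d >= 5*time.Minute {
			return 5 * time.Minute
		}
	}
	return d
}

func main() {
	for n := 0; n <= 6; n++ {
		fmt.Printf("crash %d -> wait %s\n", n, backoff(n))
	}
	// After a handful of crashes the delay pins at 5m0s and the sync
	// loop logs "Error syncing pod, skipping" entries like those above.
}
```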
removing state" podUID="32a33d64-948b-411f-b76d-87d24b867ece" containerName="registry-server" Jan 27 15:48:27 crc kubenswrapper[4698]: I0127 15:48:27.112168 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-98tlh" Jan 27 15:48:27 crc kubenswrapper[4698]: I0127 15:48:27.125518 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-98tlh"] Jan 27 15:48:27 crc kubenswrapper[4698]: I0127 15:48:27.185359 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6942f97c-69fa-490e-9b2c-2e83eb1bc0f6-utilities\") pod \"redhat-marketplace-98tlh\" (UID: \"6942f97c-69fa-490e-9b2c-2e83eb1bc0f6\") " pod="openshift-marketplace/redhat-marketplace-98tlh" Jan 27 15:48:27 crc kubenswrapper[4698]: I0127 15:48:27.185561 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6942f97c-69fa-490e-9b2c-2e83eb1bc0f6-catalog-content\") pod \"redhat-marketplace-98tlh\" (UID: \"6942f97c-69fa-490e-9b2c-2e83eb1bc0f6\") " pod="openshift-marketplace/redhat-marketplace-98tlh" Jan 27 15:48:27 crc kubenswrapper[4698]: I0127 15:48:27.185656 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tc4f5\" (UniqueName: \"kubernetes.io/projected/6942f97c-69fa-490e-9b2c-2e83eb1bc0f6-kube-api-access-tc4f5\") pod \"redhat-marketplace-98tlh\" (UID: \"6942f97c-69fa-490e-9b2c-2e83eb1bc0f6\") " pod="openshift-marketplace/redhat-marketplace-98tlh" Jan 27 15:48:27 crc kubenswrapper[4698]: I0127 15:48:27.288634 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6942f97c-69fa-490e-9b2c-2e83eb1bc0f6-utilities\") pod \"redhat-marketplace-98tlh\" (UID: \"6942f97c-69fa-490e-9b2c-2e83eb1bc0f6\") " pod="openshift-marketplace/redhat-marketplace-98tlh" Jan 27 15:48:27 crc kubenswrapper[4698]: I0127 15:48:27.288783 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6942f97c-69fa-490e-9b2c-2e83eb1bc0f6-catalog-content\") pod \"redhat-marketplace-98tlh\" (UID: \"6942f97c-69fa-490e-9b2c-2e83eb1bc0f6\") " pod="openshift-marketplace/redhat-marketplace-98tlh" Jan 27 15:48:27 crc kubenswrapper[4698]: I0127 15:48:27.288815 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tc4f5\" (UniqueName: \"kubernetes.io/projected/6942f97c-69fa-490e-9b2c-2e83eb1bc0f6-kube-api-access-tc4f5\") pod \"redhat-marketplace-98tlh\" (UID: \"6942f97c-69fa-490e-9b2c-2e83eb1bc0f6\") " pod="openshift-marketplace/redhat-marketplace-98tlh" Jan 27 15:48:27 crc kubenswrapper[4698]: I0127 15:48:27.289561 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6942f97c-69fa-490e-9b2c-2e83eb1bc0f6-utilities\") pod \"redhat-marketplace-98tlh\" (UID: \"6942f97c-69fa-490e-9b2c-2e83eb1bc0f6\") " pod="openshift-marketplace/redhat-marketplace-98tlh" Jan 27 15:48:27 crc kubenswrapper[4698]: I0127 15:48:27.289630 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6942f97c-69fa-490e-9b2c-2e83eb1bc0f6-catalog-content\") pod \"redhat-marketplace-98tlh\" (UID: 
\"6942f97c-69fa-490e-9b2c-2e83eb1bc0f6\") " pod="openshift-marketplace/redhat-marketplace-98tlh" Jan 27 15:48:27 crc kubenswrapper[4698]: I0127 15:48:27.317405 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tc4f5\" (UniqueName: \"kubernetes.io/projected/6942f97c-69fa-490e-9b2c-2e83eb1bc0f6-kube-api-access-tc4f5\") pod \"redhat-marketplace-98tlh\" (UID: \"6942f97c-69fa-490e-9b2c-2e83eb1bc0f6\") " pod="openshift-marketplace/redhat-marketplace-98tlh" Jan 27 15:48:27 crc kubenswrapper[4698]: I0127 15:48:27.441093 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-98tlh" Jan 27 15:48:27 crc kubenswrapper[4698]: I0127 15:48:27.972924 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-98tlh"] Jan 27 15:48:27 crc kubenswrapper[4698]: W0127 15:48:27.978809 4698 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6942f97c_69fa_490e_9b2c_2e83eb1bc0f6.slice/crio-79f086ae1fbd3d14fb1539d740331e1c6442ed6746c0e560e95343465a384af2 WatchSource:0}: Error finding container 79f086ae1fbd3d14fb1539d740331e1c6442ed6746c0e560e95343465a384af2: Status 404 returned error can't find the container with id 79f086ae1fbd3d14fb1539d740331e1c6442ed6746c0e560e95343465a384af2 Jan 27 15:48:28 crc kubenswrapper[4698]: I0127 15:48:28.953210 4698 generic.go:334] "Generic (PLEG): container finished" podID="6942f97c-69fa-490e-9b2c-2e83eb1bc0f6" containerID="fcd40ad24edf3c288691586468afcbf66fadcff0b93628de623b23ad9048d106" exitCode=0 Jan 27 15:48:28 crc kubenswrapper[4698]: I0127 15:48:28.953333 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-98tlh" event={"ID":"6942f97c-69fa-490e-9b2c-2e83eb1bc0f6","Type":"ContainerDied","Data":"fcd40ad24edf3c288691586468afcbf66fadcff0b93628de623b23ad9048d106"} Jan 27 15:48:28 crc kubenswrapper[4698]: I0127 15:48:28.953533 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-98tlh" event={"ID":"6942f97c-69fa-490e-9b2c-2e83eb1bc0f6","Type":"ContainerStarted","Data":"79f086ae1fbd3d14fb1539d740331e1c6442ed6746c0e560e95343465a384af2"} Jan 27 15:48:28 crc kubenswrapper[4698]: I0127 15:48:28.955821 4698 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 27 15:48:30 crc kubenswrapper[4698]: I0127 15:48:30.974584 4698 generic.go:334] "Generic (PLEG): container finished" podID="6942f97c-69fa-490e-9b2c-2e83eb1bc0f6" containerID="2005ac7ec81c84b099b2d2ee0468ea761b2a3134ad9105ceb6769dbd977cecc6" exitCode=0 Jan 27 15:48:30 crc kubenswrapper[4698]: I0127 15:48:30.974681 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-98tlh" event={"ID":"6942f97c-69fa-490e-9b2c-2e83eb1bc0f6","Type":"ContainerDied","Data":"2005ac7ec81c84b099b2d2ee0468ea761b2a3134ad9105ceb6769dbd977cecc6"} Jan 27 15:48:31 crc kubenswrapper[4698]: I0127 15:48:31.987216 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-98tlh" event={"ID":"6942f97c-69fa-490e-9b2c-2e83eb1bc0f6","Type":"ContainerStarted","Data":"deb1c4a4b7eb0373f193420331c3f5c7b071a31dc860a47a8a301c89561fb13f"} Jan 27 15:48:32 crc kubenswrapper[4698]: I0127 15:48:32.011219 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-98tlh" 
Jan 27 15:48:37 crc kubenswrapper[4698]: I0127 15:48:37.441918 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-98tlh"
Jan 27 15:48:37 crc kubenswrapper[4698]: I0127 15:48:37.442481 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-98tlh"
Jan 27 15:48:37 crc kubenswrapper[4698]: I0127 15:48:37.501100 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-98tlh"
Jan 27 15:48:38 crc kubenswrapper[4698]: I0127 15:48:38.078286 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-98tlh"
Jan 27 15:48:38 crc kubenswrapper[4698]: I0127 15:48:38.124819 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-98tlh"]
Jan 27 15:48:40 crc kubenswrapper[4698]: I0127 15:48:40.053387 4698 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-98tlh" podUID="6942f97c-69fa-490e-9b2c-2e83eb1bc0f6" containerName="registry-server" containerID="cri-o://deb1c4a4b7eb0373f193420331c3f5c7b071a31dc860a47a8a301c89561fb13f" gracePeriod=2
Jan 27 15:48:40 crc kubenswrapper[4698]: I0127 15:48:40.505090 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-98tlh"
Jan 27 15:48:40 crc kubenswrapper[4698]: I0127 15:48:40.707200 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6942f97c-69fa-490e-9b2c-2e83eb1bc0f6-catalog-content\") pod \"6942f97c-69fa-490e-9b2c-2e83eb1bc0f6\" (UID: \"6942f97c-69fa-490e-9b2c-2e83eb1bc0f6\") "
Jan 27 15:48:40 crc kubenswrapper[4698]: I0127 15:48:40.707486 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tc4f5\" (UniqueName: \"kubernetes.io/projected/6942f97c-69fa-490e-9b2c-2e83eb1bc0f6-kube-api-access-tc4f5\") pod \"6942f97c-69fa-490e-9b2c-2e83eb1bc0f6\" (UID: \"6942f97c-69fa-490e-9b2c-2e83eb1bc0f6\") "
Jan 27 15:48:40 crc kubenswrapper[4698]: I0127 15:48:40.707657 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6942f97c-69fa-490e-9b2c-2e83eb1bc0f6-utilities\") pod \"6942f97c-69fa-490e-9b2c-2e83eb1bc0f6\" (UID: \"6942f97c-69fa-490e-9b2c-2e83eb1bc0f6\") "
Jan 27 15:48:40 crc kubenswrapper[4698]: I0127 15:48:40.708568 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6942f97c-69fa-490e-9b2c-2e83eb1bc0f6-utilities" (OuterVolumeSpecName: "utilities") pod "6942f97c-69fa-490e-9b2c-2e83eb1bc0f6" (UID: "6942f97c-69fa-490e-9b2c-2e83eb1bc0f6"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 27 15:48:40 crc kubenswrapper[4698]: I0127 15:48:40.714380 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6942f97c-69fa-490e-9b2c-2e83eb1bc0f6-kube-api-access-tc4f5" (OuterVolumeSpecName: "kube-api-access-tc4f5") pod "6942f97c-69fa-490e-9b2c-2e83eb1bc0f6" (UID: "6942f97c-69fa-490e-9b2c-2e83eb1bc0f6"). InnerVolumeSpecName "kube-api-access-tc4f5". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 15:48:40 crc kubenswrapper[4698]: I0127 15:48:40.800062 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6942f97c-69fa-490e-9b2c-2e83eb1bc0f6-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6942f97c-69fa-490e-9b2c-2e83eb1bc0f6" (UID: "6942f97c-69fa-490e-9b2c-2e83eb1bc0f6"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 27 15:48:40 crc kubenswrapper[4698]: I0127 15:48:40.810378 4698 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6942f97c-69fa-490e-9b2c-2e83eb1bc0f6-utilities\") on node \"crc\" DevicePath \"\""
Jan 27 15:48:40 crc kubenswrapper[4698]: I0127 15:48:40.810449 4698 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6942f97c-69fa-490e-9b2c-2e83eb1bc0f6-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 27 15:48:40 crc kubenswrapper[4698]: I0127 15:48:40.810466 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tc4f5\" (UniqueName: \"kubernetes.io/projected/6942f97c-69fa-490e-9b2c-2e83eb1bc0f6-kube-api-access-tc4f5\") on node \"crc\" DevicePath \"\""
Jan 27 15:48:41 crc kubenswrapper[4698]: I0127 15:48:41.067828 4698 generic.go:334] "Generic (PLEG): container finished" podID="6942f97c-69fa-490e-9b2c-2e83eb1bc0f6" containerID="deb1c4a4b7eb0373f193420331c3f5c7b071a31dc860a47a8a301c89561fb13f" exitCode=0
Jan 27 15:48:41 crc kubenswrapper[4698]: I0127 15:48:41.068345 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-98tlh" event={"ID":"6942f97c-69fa-490e-9b2c-2e83eb1bc0f6","Type":"ContainerDied","Data":"deb1c4a4b7eb0373f193420331c3f5c7b071a31dc860a47a8a301c89561fb13f"}
Jan 27 15:48:41 crc kubenswrapper[4698]: I0127 15:48:41.068424 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-98tlh" event={"ID":"6942f97c-69fa-490e-9b2c-2e83eb1bc0f6","Type":"ContainerDied","Data":"79f086ae1fbd3d14fb1539d740331e1c6442ed6746c0e560e95343465a384af2"}
Jan 27 15:48:41 crc kubenswrapper[4698]: I0127 15:48:41.068454 4698 scope.go:117] "RemoveContainer" containerID="deb1c4a4b7eb0373f193420331c3f5c7b071a31dc860a47a8a301c89561fb13f"
Jan 27 15:48:41 crc kubenswrapper[4698]: I0127 15:48:41.068467 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-98tlh"
Jan 27 15:48:41 crc kubenswrapper[4698]: I0127 15:48:41.704093 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-98tlh"]
Jan 27 15:48:41 crc kubenswrapper[4698]: I0127 15:48:41.707545 4698 scope.go:117] "RemoveContainer" containerID="2005ac7ec81c84b099b2d2ee0468ea761b2a3134ad9105ceb6769dbd977cecc6"
Jan 27 15:48:41 crc kubenswrapper[4698]: I0127 15:48:41.715056 4698 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-98tlh"]
Jan 27 15:48:41 crc kubenswrapper[4698]: I0127 15:48:41.729032 4698 scope.go:117] "RemoveContainer" containerID="fcd40ad24edf3c288691586468afcbf66fadcff0b93628de623b23ad9048d106"
Jan 27 15:48:41 crc kubenswrapper[4698]: I0127 15:48:41.773570 4698 scope.go:117] "RemoveContainer" containerID="deb1c4a4b7eb0373f193420331c3f5c7b071a31dc860a47a8a301c89561fb13f"
Jan 27 15:48:41 crc kubenswrapper[4698]: E0127 15:48:41.774459 4698 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"deb1c4a4b7eb0373f193420331c3f5c7b071a31dc860a47a8a301c89561fb13f\": container with ID starting with deb1c4a4b7eb0373f193420331c3f5c7b071a31dc860a47a8a301c89561fb13f not found: ID does not exist" containerID="deb1c4a4b7eb0373f193420331c3f5c7b071a31dc860a47a8a301c89561fb13f"
Jan 27 15:48:41 crc kubenswrapper[4698]: I0127 15:48:41.774515 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"deb1c4a4b7eb0373f193420331c3f5c7b071a31dc860a47a8a301c89561fb13f"} err="failed to get container status \"deb1c4a4b7eb0373f193420331c3f5c7b071a31dc860a47a8a301c89561fb13f\": rpc error: code = NotFound desc = could not find container \"deb1c4a4b7eb0373f193420331c3f5c7b071a31dc860a47a8a301c89561fb13f\": container with ID starting with deb1c4a4b7eb0373f193420331c3f5c7b071a31dc860a47a8a301c89561fb13f not found: ID does not exist"
Jan 27 15:48:41 crc kubenswrapper[4698]: I0127 15:48:41.774542 4698 scope.go:117] "RemoveContainer" containerID="2005ac7ec81c84b099b2d2ee0468ea761b2a3134ad9105ceb6769dbd977cecc6"
Jan 27 15:48:41 crc kubenswrapper[4698]: E0127 15:48:41.775181 4698 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2005ac7ec81c84b099b2d2ee0468ea761b2a3134ad9105ceb6769dbd977cecc6\": container with ID starting with 2005ac7ec81c84b099b2d2ee0468ea761b2a3134ad9105ceb6769dbd977cecc6 not found: ID does not exist" containerID="2005ac7ec81c84b099b2d2ee0468ea761b2a3134ad9105ceb6769dbd977cecc6"
Jan 27 15:48:41 crc kubenswrapper[4698]: I0127 15:48:41.775217 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2005ac7ec81c84b099b2d2ee0468ea761b2a3134ad9105ceb6769dbd977cecc6"} err="failed to get container status \"2005ac7ec81c84b099b2d2ee0468ea761b2a3134ad9105ceb6769dbd977cecc6\": rpc error: code = NotFound desc = could not find container \"2005ac7ec81c84b099b2d2ee0468ea761b2a3134ad9105ceb6769dbd977cecc6\": container with ID starting with 2005ac7ec81c84b099b2d2ee0468ea761b2a3134ad9105ceb6769dbd977cecc6 not found: ID does not exist"
Jan 27 15:48:41 crc kubenswrapper[4698]: I0127 15:48:41.775230 4698 scope.go:117] "RemoveContainer" containerID="fcd40ad24edf3c288691586468afcbf66fadcff0b93628de623b23ad9048d106"
Jan 27 15:48:41 crc kubenswrapper[4698]: E0127 15:48:41.775492 4698 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fcd40ad24edf3c288691586468afcbf66fadcff0b93628de623b23ad9048d106\": container with ID starting with fcd40ad24edf3c288691586468afcbf66fadcff0b93628de623b23ad9048d106 not found: ID does not exist" containerID="fcd40ad24edf3c288691586468afcbf66fadcff0b93628de623b23ad9048d106"
Jan 27 15:48:41 crc kubenswrapper[4698]: I0127 15:48:41.775515 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fcd40ad24edf3c288691586468afcbf66fadcff0b93628de623b23ad9048d106"} err="failed to get container status \"fcd40ad24edf3c288691586468afcbf66fadcff0b93628de623b23ad9048d106\": rpc error: code = NotFound desc = could not find container \"fcd40ad24edf3c288691586468afcbf66fadcff0b93628de623b23ad9048d106\": container with ID starting with fcd40ad24edf3c288691586468afcbf66fadcff0b93628de623b23ad9048d106 not found: ID does not exist"
Jan 27 15:48:43 crc kubenswrapper[4698]: I0127 15:48:43.003988 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6942f97c-69fa-490e-9b2c-2e83eb1bc0f6" path="/var/lib/kubelet/pods/6942f97c-69fa-490e-9b2c-2e83eb1bc0f6/volumes"
Jan 27 15:49:11 crc kubenswrapper[4698]: I0127 15:49:11.680276 4698 pod_container_manager_linux.go:210] "Failed to delete cgroup paths" cgroupName=["kubepods","burstable","pod6942f97c-69fa-490e-9b2c-2e83eb1bc0f6"] err="unable to destroy cgroup paths for cgroup [kubepods burstable pod6942f97c-69fa-490e-9b2c-2e83eb1bc0f6] : Timed out while waiting for systemd to remove kubepods-burstable-pod6942f97c_69fa_490e_9b2c_2e83eb1bc0f6.slice"
Jan 27 15:49:30 crc kubenswrapper[4698]: I0127 15:49:30.048888 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-wwdch/must-gather-22hgz"]
Jan 27 15:49:30 crc kubenswrapper[4698]: E0127 15:49:30.049828 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6942f97c-69fa-490e-9b2c-2e83eb1bc0f6" containerName="extract-utilities"
Jan 27 15:49:30 crc kubenswrapper[4698]: I0127 15:49:30.049841 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="6942f97c-69fa-490e-9b2c-2e83eb1bc0f6" containerName="extract-utilities"
Jan 27 15:49:30 crc kubenswrapper[4698]: E0127 15:49:30.049874 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6942f97c-69fa-490e-9b2c-2e83eb1bc0f6" containerName="extract-content"
Jan 27 15:49:30 crc kubenswrapper[4698]: I0127 15:49:30.049880 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="6942f97c-69fa-490e-9b2c-2e83eb1bc0f6" containerName="extract-content"
Jan 27 15:49:30 crc kubenswrapper[4698]: E0127 15:49:30.049892 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6942f97c-69fa-490e-9b2c-2e83eb1bc0f6" containerName="registry-server"
Jan 27 15:49:30 crc kubenswrapper[4698]: I0127 15:49:30.049898 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="6942f97c-69fa-490e-9b2c-2e83eb1bc0f6" containerName="registry-server"
Jan 27 15:49:30 crc kubenswrapper[4698]: I0127 15:49:30.050066 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="6942f97c-69fa-490e-9b2c-2e83eb1bc0f6" containerName="registry-server"
Jan 27 15:49:30 crc kubenswrapper[4698]: I0127 15:49:30.051077 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-wwdch/must-gather-22hgz"
Jan 27 15:49:30 crc kubenswrapper[4698]: I0127 15:49:30.053805 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-wwdch"/"openshift-service-ca.crt"
Jan 27 15:49:30 crc kubenswrapper[4698]: I0127 15:49:30.054460 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-wwdch"/"default-dockercfg-9mg7k"
Jan 27 15:49:30 crc kubenswrapper[4698]: I0127 15:49:30.054598 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-wwdch"/"kube-root-ca.crt"
Jan 27 15:49:30 crc kubenswrapper[4698]: I0127 15:49:30.057632 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-wwdch/must-gather-22hgz"]
Jan 27 15:49:30 crc kubenswrapper[4698]: I0127 15:49:30.130337 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qhbk7\" (UniqueName: \"kubernetes.io/projected/62d864f5-c05f-4005-b941-24bf347a9068-kube-api-access-qhbk7\") pod \"must-gather-22hgz\" (UID: \"62d864f5-c05f-4005-b941-24bf347a9068\") " pod="openshift-must-gather-wwdch/must-gather-22hgz"
Jan 27 15:49:30 crc kubenswrapper[4698]: I0127 15:49:30.130859 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/62d864f5-c05f-4005-b941-24bf347a9068-must-gather-output\") pod \"must-gather-22hgz\" (UID: \"62d864f5-c05f-4005-b941-24bf347a9068\") " pod="openshift-must-gather-wwdch/must-gather-22hgz"
Jan 27 15:49:30 crc kubenswrapper[4698]: I0127 15:49:30.233069 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/62d864f5-c05f-4005-b941-24bf347a9068-must-gather-output\") pod \"must-gather-22hgz\" (UID: \"62d864f5-c05f-4005-b941-24bf347a9068\") " pod="openshift-must-gather-wwdch/must-gather-22hgz"
Jan 27 15:49:30 crc kubenswrapper[4698]: I0127 15:49:30.233179 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qhbk7\" (UniqueName: \"kubernetes.io/projected/62d864f5-c05f-4005-b941-24bf347a9068-kube-api-access-qhbk7\") pod \"must-gather-22hgz\" (UID: \"62d864f5-c05f-4005-b941-24bf347a9068\") " pod="openshift-must-gather-wwdch/must-gather-22hgz"
Jan 27 15:49:30 crc kubenswrapper[4698]: I0127 15:49:30.233563 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/62d864f5-c05f-4005-b941-24bf347a9068-must-gather-output\") pod \"must-gather-22hgz\" (UID: \"62d864f5-c05f-4005-b941-24bf347a9068\") " pod="openshift-must-gather-wwdch/must-gather-22hgz"
Jan 27 15:49:30 crc kubenswrapper[4698]: I0127 15:49:30.263670 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qhbk7\" (UniqueName: \"kubernetes.io/projected/62d864f5-c05f-4005-b941-24bf347a9068-kube-api-access-qhbk7\") pod \"must-gather-22hgz\" (UID: \"62d864f5-c05f-4005-b941-24bf347a9068\") " pod="openshift-must-gather-wwdch/must-gather-22hgz"
Jan 27 15:49:30 crc kubenswrapper[4698]: I0127 15:49:30.370843 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-wwdch/must-gather-22hgz"
Jan 27 15:49:30 crc kubenswrapper[4698]: I0127 15:49:30.669478 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-wwdch/must-gather-22hgz"]
Jan 27 15:49:31 crc kubenswrapper[4698]: I0127 15:49:31.500925 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-wwdch/must-gather-22hgz" event={"ID":"62d864f5-c05f-4005-b941-24bf347a9068","Type":"ContainerStarted","Data":"1b508e992da138108a1e38ad05234245513ef9a983bc893c63b6987764fc7370"}
Jan 27 15:49:38 crc kubenswrapper[4698]: I0127 15:49:38.585504 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-wwdch/must-gather-22hgz" event={"ID":"62d864f5-c05f-4005-b941-24bf347a9068","Type":"ContainerStarted","Data":"15046951f8492415ea2c9e4f0b3a60deb49c2a9e5ddb27a7b4c404b44d5fad8b"}
Jan 27 15:49:38 crc kubenswrapper[4698]: I0127 15:49:38.586504 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-wwdch/must-gather-22hgz" event={"ID":"62d864f5-c05f-4005-b941-24bf347a9068","Type":"ContainerStarted","Data":"ff88b02718a53830f4561648e9c2b7cbb7c7acb1647dbd03723304e934131131"}
Jan 27 15:49:38 crc kubenswrapper[4698]: I0127 15:49:38.610844 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-wwdch/must-gather-22hgz" podStartSLOduration=1.350455047 podStartE2EDuration="8.61082545s" podCreationTimestamp="2026-01-27 15:49:30 +0000 UTC" firstStartedPulling="2026-01-27 15:49:30.674899846 +0000 UTC m=+4826.351677311" lastFinishedPulling="2026-01-27 15:49:37.935270249 +0000 UTC m=+4833.612047714" observedRunningTime="2026-01-27 15:49:38.605444097 +0000 UTC m=+4834.282221562" watchObservedRunningTime="2026-01-27 15:49:38.61082545 +0000 UTC m=+4834.287602915"
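[Annotation] Every kubenswrapper entry in this journal shares one shape: a journald prefix, then a klog header (severity letter plus MMDD, wall time, pid, source file:line), then the message. A regexp sketch for splitting entries into those fields; the group names are my own labels, not anything kubelet defines:

```go
package main

import (
	"fmt"
	"regexp"
)

// One kubenswrapper journal entry, as seen throughout this log.
var line = regexp.MustCompile(
	`^(?P<ts>\w{3} \d{1,2} \d{2}:\d{2}:\d{2}) (?P<host>\S+) (?P<unit>\w+)\[(?P<upid>\d+)\]: ` +
		`(?P<sev>[IWE])(?P<date>\d{4}) (?P<time>\d{2}:\d{2}:\d{2}\.\d{6}) (?P<pid>\d+) ` +
		`(?P<src>[\w./]+:\d+)\] (?P<msg>.*)$`)

func main() {
	e := `Jan 27 15:46:23 crc kubenswrapper[4698]: I0127 15:46:23.002820 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="32a33d64-948b-411f-b76d-87d24b867ece"`
	m := line.FindStringSubmatch(e)
	for i, name := range line.SubexpNames() {
		if i > 0 && name != "" {
			fmt.Printf("%-5s %s\n", name, m[i])
		}
	}
}
```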
Jan 27 15:49:43 crc kubenswrapper[4698]: I0127 15:49:43.792174 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-wwdch/crc-debug-zh9ht"]
Jan 27 15:49:43 crc kubenswrapper[4698]: I0127 15:49:43.794553 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-wwdch/crc-debug-zh9ht"
Jan 27 15:49:43 crc kubenswrapper[4698]: I0127 15:49:43.840577 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mf9bs\" (UniqueName: \"kubernetes.io/projected/a9ffe86c-cddb-45f4-953d-18ff7e15d932-kube-api-access-mf9bs\") pod \"crc-debug-zh9ht\" (UID: \"a9ffe86c-cddb-45f4-953d-18ff7e15d932\") " pod="openshift-must-gather-wwdch/crc-debug-zh9ht"
Jan 27 15:49:43 crc kubenswrapper[4698]: I0127 15:49:43.840630 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/a9ffe86c-cddb-45f4-953d-18ff7e15d932-host\") pod \"crc-debug-zh9ht\" (UID: \"a9ffe86c-cddb-45f4-953d-18ff7e15d932\") " pod="openshift-must-gather-wwdch/crc-debug-zh9ht"
Jan 27 15:49:43 crc kubenswrapper[4698]: I0127 15:49:43.942685 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mf9bs\" (UniqueName: \"kubernetes.io/projected/a9ffe86c-cddb-45f4-953d-18ff7e15d932-kube-api-access-mf9bs\") pod \"crc-debug-zh9ht\" (UID: \"a9ffe86c-cddb-45f4-953d-18ff7e15d932\") " pod="openshift-must-gather-wwdch/crc-debug-zh9ht"
Jan 27 15:49:43 crc kubenswrapper[4698]: I0127 15:49:43.942744 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/a9ffe86c-cddb-45f4-953d-18ff7e15d932-host\") pod \"crc-debug-zh9ht\" (UID: \"a9ffe86c-cddb-45f4-953d-18ff7e15d932\") " pod="openshift-must-gather-wwdch/crc-debug-zh9ht"
Jan 27 15:49:43 crc kubenswrapper[4698]: I0127 15:49:43.942928 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/a9ffe86c-cddb-45f4-953d-18ff7e15d932-host\") pod \"crc-debug-zh9ht\" (UID: \"a9ffe86c-cddb-45f4-953d-18ff7e15d932\") " pod="openshift-must-gather-wwdch/crc-debug-zh9ht"
Jan 27 15:49:43 crc kubenswrapper[4698]: I0127 15:49:43.963081 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mf9bs\" (UniqueName: \"kubernetes.io/projected/a9ffe86c-cddb-45f4-953d-18ff7e15d932-kube-api-access-mf9bs\") pod \"crc-debug-zh9ht\" (UID: \"a9ffe86c-cddb-45f4-953d-18ff7e15d932\") " pod="openshift-must-gather-wwdch/crc-debug-zh9ht"
Jan 27 15:49:44 crc kubenswrapper[4698]: I0127 15:49:44.118545 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-wwdch/crc-debug-zh9ht"
Jan 27 15:49:44 crc kubenswrapper[4698]: W0127 15:49:44.152570 4698 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda9ffe86c_cddb_45f4_953d_18ff7e15d932.slice/crio-e61e93fdcad3f676915e549dd13104f335dc724fc454d00dc3a018e4f7cf592e WatchSource:0}: Error finding container e61e93fdcad3f676915e549dd13104f335dc724fc454d00dc3a018e4f7cf592e: Status 404 returned error can't find the container with id e61e93fdcad3f676915e549dd13104f335dc724fc454d00dc3a018e4f7cf592e
Jan 27 15:49:44 crc kubenswrapper[4698]: I0127 15:49:44.635090 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-wwdch/crc-debug-zh9ht" event={"ID":"a9ffe86c-cddb-45f4-953d-18ff7e15d932","Type":"ContainerStarted","Data":"e61e93fdcad3f676915e549dd13104f335dc724fc454d00dc3a018e4f7cf592e"}
Jan 27 15:49:57 crc kubenswrapper[4698]: I0127 15:49:57.451831 4698 patch_prober.go:28] interesting pod/machine-config-daemon-ndrd6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 27 15:49:57 crc kubenswrapper[4698]: I0127 15:49:57.452393 4698 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ndrd6" podUID="3e403fc5-7005-474c-8c75-b7906b481677" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
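[Annotation] The liveness failures above are plain HTTP GETs against 127.0.0.1:8798/health that die with "connection refused". A probe of the same shape in Go; the 1s timeout is an assumption, since the real timeout comes from the container spec, which this excerpt does not show:

```go
package main

import (
	"fmt"
	"io"
	"net/http"
	"time"
)

// probe performs a liveness-style HTTP check. The kubelet treats any
// status in [200,400) as success for HTTP probes; a dial error surfaces
// exactly like the "connect: connection refused" output logged above.
func probe(url string) error {
	c := &http.Client{Timeout: 1 * time.Second}
	resp, err := c.Get(url)
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	io.Copy(io.Discard, resp.Body)
	if resp.StatusCode < 200 || resp.StatusCode >= 400 {
		return fmt.Errorf("unhealthy: status %d", resp.StatusCode)
	}
	return nil
}

func main() {
	fmt.Println(probe("http://127.0.0.1:8798/health"))
}
```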
Jan 27 15:49:58 crc kubenswrapper[4698]: I0127 15:49:58.758194 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-wwdch/crc-debug-zh9ht" event={"ID":"a9ffe86c-cddb-45f4-953d-18ff7e15d932","Type":"ContainerStarted","Data":"b13b938b3449b773770a246601fc179695411d06426d968ad3b2c060cabd53ff"}
Jan 27 15:49:58 crc kubenswrapper[4698]: I0127 15:49:58.787788 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-wwdch/crc-debug-zh9ht" podStartSLOduration=2.703768651 podStartE2EDuration="15.787762862s" podCreationTimestamp="2026-01-27 15:49:43 +0000 UTC" firstStartedPulling="2026-01-27 15:49:44.154690697 +0000 UTC m=+4839.831468162" lastFinishedPulling="2026-01-27 15:49:57.238684908 +0000 UTC m=+4852.915462373" observedRunningTime="2026-01-27 15:49:58.774522261 +0000 UTC m=+4854.451299756" watchObservedRunningTime="2026-01-27 15:49:58.787762862 +0000 UTC m=+4854.464540327"
Jan 27 15:50:20 crc kubenswrapper[4698]: I0127 15:50:20.007828 4698 generic.go:334] "Generic (PLEG): container finished" podID="a9ffe86c-cddb-45f4-953d-18ff7e15d932" containerID="b13b938b3449b773770a246601fc179695411d06426d968ad3b2c060cabd53ff" exitCode=0
Jan 27 15:50:20 crc kubenswrapper[4698]: I0127 15:50:20.007841 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-wwdch/crc-debug-zh9ht" event={"ID":"a9ffe86c-cddb-45f4-953d-18ff7e15d932","Type":"ContainerDied","Data":"b13b938b3449b773770a246601fc179695411d06426d968ad3b2c060cabd53ff"}
Jan 27 15:50:21 crc kubenswrapper[4698]: I0127 15:50:21.132677 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-wwdch/crc-debug-zh9ht"
Jan 27 15:50:21 crc kubenswrapper[4698]: I0127 15:50:21.166896 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-wwdch/crc-debug-zh9ht"]
Jan 27 15:50:21 crc kubenswrapper[4698]: I0127 15:50:21.174854 4698 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-wwdch/crc-debug-zh9ht"]
Jan 27 15:50:21 crc kubenswrapper[4698]: I0127 15:50:21.233886 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mf9bs\" (UniqueName: \"kubernetes.io/projected/a9ffe86c-cddb-45f4-953d-18ff7e15d932-kube-api-access-mf9bs\") pod \"a9ffe86c-cddb-45f4-953d-18ff7e15d932\" (UID: \"a9ffe86c-cddb-45f4-953d-18ff7e15d932\") "
Jan 27 15:50:21 crc kubenswrapper[4698]: I0127 15:50:21.234218 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/a9ffe86c-cddb-45f4-953d-18ff7e15d932-host\") pod \"a9ffe86c-cddb-45f4-953d-18ff7e15d932\" (UID: \"a9ffe86c-cddb-45f4-953d-18ff7e15d932\") "
Jan 27 15:50:21 crc kubenswrapper[4698]: I0127 15:50:21.234310 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a9ffe86c-cddb-45f4-953d-18ff7e15d932-host" (OuterVolumeSpecName: "host") pod "a9ffe86c-cddb-45f4-953d-18ff7e15d932" (UID: "a9ffe86c-cddb-45f4-953d-18ff7e15d932"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 27 15:50:21 crc kubenswrapper[4698]: I0127 15:50:21.234958 4698 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/a9ffe86c-cddb-45f4-953d-18ff7e15d932-host\") on node \"crc\" DevicePath \"\""
Jan 27 15:50:21 crc kubenswrapper[4698]: I0127 15:50:21.241285 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a9ffe86c-cddb-45f4-953d-18ff7e15d932-kube-api-access-mf9bs" (OuterVolumeSpecName: "kube-api-access-mf9bs") pod "a9ffe86c-cddb-45f4-953d-18ff7e15d932" (UID: "a9ffe86c-cddb-45f4-953d-18ff7e15d932"). InnerVolumeSpecName "kube-api-access-mf9bs". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 15:50:21 crc kubenswrapper[4698]: I0127 15:50:21.336838 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mf9bs\" (UniqueName: \"kubernetes.io/projected/a9ffe86c-cddb-45f4-953d-18ff7e15d932-kube-api-access-mf9bs\") on node \"crc\" DevicePath \"\""
Jan 27 15:50:22 crc kubenswrapper[4698]: I0127 15:50:22.027747 4698 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e61e93fdcad3f676915e549dd13104f335dc724fc454d00dc3a018e4f7cf592e"
Jan 27 15:50:22 crc kubenswrapper[4698]: I0127 15:50:22.027809 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-wwdch/crc-debug-zh9ht"
Jan 27 15:50:22 crc kubenswrapper[4698]: I0127 15:50:22.332521 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-wwdch/crc-debug-r6x4b"]
Jan 27 15:50:22 crc kubenswrapper[4698]: E0127 15:50:22.332908 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a9ffe86c-cddb-45f4-953d-18ff7e15d932" containerName="container-00"
Jan 27 15:50:22 crc kubenswrapper[4698]: I0127 15:50:22.332921 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="a9ffe86c-cddb-45f4-953d-18ff7e15d932" containerName="container-00"
Jan 27 15:50:22 crc kubenswrapper[4698]: I0127 15:50:22.333157 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="a9ffe86c-cddb-45f4-953d-18ff7e15d932" containerName="container-00"
Jan 27 15:50:22 crc kubenswrapper[4698]: I0127 15:50:22.333830 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-wwdch/crc-debug-r6x4b"
Jan 27 15:50:22 crc kubenswrapper[4698]: I0127 15:50:22.457084 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-65jr8\" (UniqueName: \"kubernetes.io/projected/d7811e77-f37b-44a5-ad0c-c7b318d44bc4-kube-api-access-65jr8\") pod \"crc-debug-r6x4b\" (UID: \"d7811e77-f37b-44a5-ad0c-c7b318d44bc4\") " pod="openshift-must-gather-wwdch/crc-debug-r6x4b"
Jan 27 15:50:22 crc kubenswrapper[4698]: I0127 15:50:22.457328 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/d7811e77-f37b-44a5-ad0c-c7b318d44bc4-host\") pod \"crc-debug-r6x4b\" (UID: \"d7811e77-f37b-44a5-ad0c-c7b318d44bc4\") " pod="openshift-must-gather-wwdch/crc-debug-r6x4b"
Jan 27 15:50:22 crc kubenswrapper[4698]: I0127 15:50:22.559245 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/d7811e77-f37b-44a5-ad0c-c7b318d44bc4-host\") pod \"crc-debug-r6x4b\" (UID: \"d7811e77-f37b-44a5-ad0c-c7b318d44bc4\") " pod="openshift-must-gather-wwdch/crc-debug-r6x4b"
Jan 27 15:50:22 crc kubenswrapper[4698]: I0127 15:50:22.559378 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-65jr8\" (UniqueName: \"kubernetes.io/projected/d7811e77-f37b-44a5-ad0c-c7b318d44bc4-kube-api-access-65jr8\") pod \"crc-debug-r6x4b\" (UID: \"d7811e77-f37b-44a5-ad0c-c7b318d44bc4\") " pod="openshift-must-gather-wwdch/crc-debug-r6x4b"
Jan 27 15:50:22 crc kubenswrapper[4698]: I0127 15:50:22.559434 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/d7811e77-f37b-44a5-ad0c-c7b318d44bc4-host\") pod \"crc-debug-r6x4b\" (UID: \"d7811e77-f37b-44a5-ad0c-c7b318d44bc4\") " pod="openshift-must-gather-wwdch/crc-debug-r6x4b"
Jan 27 15:50:22 crc kubenswrapper[4698]: I0127 15:50:22.815691 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-65jr8\" (UniqueName: \"kubernetes.io/projected/d7811e77-f37b-44a5-ad0c-c7b318d44bc4-kube-api-access-65jr8\") pod \"crc-debug-r6x4b\" (UID: \"d7811e77-f37b-44a5-ad0c-c7b318d44bc4\") " pod="openshift-must-gather-wwdch/crc-debug-r6x4b"
Jan 27 15:50:22 crc kubenswrapper[4698]: I0127 15:50:22.949976 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-wwdch/crc-debug-r6x4b"
Jan 27 15:50:23 crc kubenswrapper[4698]: I0127 15:50:23.011238 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a9ffe86c-cddb-45f4-953d-18ff7e15d932" path="/var/lib/kubelet/pods/a9ffe86c-cddb-45f4-953d-18ff7e15d932/volumes"
Jan 27 15:50:23 crc kubenswrapper[4698]: I0127 15:50:23.038718 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-wwdch/crc-debug-r6x4b" event={"ID":"d7811e77-f37b-44a5-ad0c-c7b318d44bc4","Type":"ContainerStarted","Data":"89c683f6efc027d9c88f450dfc4dfb4b95c871cdf1dc212d6636fa4d5aa36374"}
Jan 27 15:50:24 crc kubenswrapper[4698]: I0127 15:50:24.048796 4698 generic.go:334] "Generic (PLEG): container finished" podID="d7811e77-f37b-44a5-ad0c-c7b318d44bc4" containerID="27dbb7af0fdd1493bbe5c071a491c0ebe54ef349ea98933fbae4a7a8f3130d1e" exitCode=1
Jan 27 15:50:24 crc kubenswrapper[4698]: I0127 15:50:24.048824 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-wwdch/crc-debug-r6x4b" event={"ID":"d7811e77-f37b-44a5-ad0c-c7b318d44bc4","Type":"ContainerDied","Data":"27dbb7af0fdd1493bbe5c071a491c0ebe54ef349ea98933fbae4a7a8f3130d1e"}
Jan 27 15:50:24 crc kubenswrapper[4698]: I0127 15:50:24.089731 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-wwdch/crc-debug-r6x4b"]
Jan 27 15:50:24 crc kubenswrapper[4698]: I0127 15:50:24.100570 4698 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-wwdch/crc-debug-r6x4b"]
Jan 27 15:50:25 crc kubenswrapper[4698]: I0127 15:50:25.168513 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-wwdch/crc-debug-r6x4b"
Jan 27 15:50:25 crc kubenswrapper[4698]: I0127 15:50:25.214786 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-65jr8\" (UniqueName: \"kubernetes.io/projected/d7811e77-f37b-44a5-ad0c-c7b318d44bc4-kube-api-access-65jr8\") pod \"d7811e77-f37b-44a5-ad0c-c7b318d44bc4\" (UID: \"d7811e77-f37b-44a5-ad0c-c7b318d44bc4\") "
Jan 27 15:50:25 crc kubenswrapper[4698]: I0127 15:50:25.215010 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/d7811e77-f37b-44a5-ad0c-c7b318d44bc4-host\") pod \"d7811e77-f37b-44a5-ad0c-c7b318d44bc4\" (UID: \"d7811e77-f37b-44a5-ad0c-c7b318d44bc4\") "
Jan 27 15:50:25 crc kubenswrapper[4698]: I0127 15:50:25.215152 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d7811e77-f37b-44a5-ad0c-c7b318d44bc4-host" (OuterVolumeSpecName: "host") pod "d7811e77-f37b-44a5-ad0c-c7b318d44bc4" (UID: "d7811e77-f37b-44a5-ad0c-c7b318d44bc4"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 27 15:50:25 crc kubenswrapper[4698]: I0127 15:50:25.215569 4698 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/d7811e77-f37b-44a5-ad0c-c7b318d44bc4-host\") on node \"crc\" DevicePath \"\""
Jan 27 15:50:25 crc kubenswrapper[4698]: I0127 15:50:25.228877 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d7811e77-f37b-44a5-ad0c-c7b318d44bc4-kube-api-access-65jr8" (OuterVolumeSpecName: "kube-api-access-65jr8") pod "d7811e77-f37b-44a5-ad0c-c7b318d44bc4" (UID: "d7811e77-f37b-44a5-ad0c-c7b318d44bc4"). InnerVolumeSpecName "kube-api-access-65jr8". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 15:50:25 crc kubenswrapper[4698]: I0127 15:50:25.318027 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-65jr8\" (UniqueName: \"kubernetes.io/projected/d7811e77-f37b-44a5-ad0c-c7b318d44bc4-kube-api-access-65jr8\") on node \"crc\" DevicePath \"\""
Jan 27 15:50:26 crc kubenswrapper[4698]: I0127 15:50:26.068387 4698 scope.go:117] "RemoveContainer" containerID="27dbb7af0fdd1493bbe5c071a491c0ebe54ef349ea98933fbae4a7a8f3130d1e"
Jan 27 15:50:26 crc kubenswrapper[4698]: I0127 15:50:26.068471 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-wwdch/crc-debug-r6x4b"
Jan 27 15:50:27 crc kubenswrapper[4698]: I0127 15:50:27.003507 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d7811e77-f37b-44a5-ad0c-c7b318d44bc4" path="/var/lib/kubelet/pods/d7811e77-f37b-44a5-ad0c-c7b318d44bc4/volumes"
Jan 27 15:50:27 crc kubenswrapper[4698]: I0127 15:50:27.452376 4698 patch_prober.go:28] interesting pod/machine-config-daemon-ndrd6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 27 15:50:27 crc kubenswrapper[4698]: I0127 15:50:27.452468 4698 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ndrd6" podUID="3e403fc5-7005-474c-8c75-b7906b481677" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 27 15:50:57 crc kubenswrapper[4698]: I0127 15:50:57.451519 4698 patch_prober.go:28] interesting pod/machine-config-daemon-ndrd6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 27 15:50:57 crc kubenswrapper[4698]: I0127 15:50:57.452419 4698 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ndrd6" podUID="3e403fc5-7005-474c-8c75-b7906b481677" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 27 15:50:57 crc kubenswrapper[4698]: I0127 15:50:57.452485 4698 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-ndrd6"
Jan 27 15:50:57 crc kubenswrapper[4698]: I0127 15:50:57.454232 4698 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"156f30c1ec1a6beb6da54f6304a76d6701efde19274d84c3b1f081da77615216"} pod="openshift-machine-config-operator/machine-config-daemon-ndrd6" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 27 15:50:57 crc kubenswrapper[4698]: I0127 15:50:57.454295 4698 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-ndrd6" podUID="3e403fc5-7005-474c-8c75-b7906b481677" containerName="machine-config-daemon" containerID="cri-o://156f30c1ec1a6beb6da54f6304a76d6701efde19274d84c3b1f081da77615216" gracePeriod=600
Jan 27 15:50:58 crc kubenswrapper[4698]: I0127 15:50:58.360919 4698 generic.go:334] "Generic (PLEG): container finished" podID="3e403fc5-7005-474c-8c75-b7906b481677" containerID="156f30c1ec1a6beb6da54f6304a76d6701efde19274d84c3b1f081da77615216" exitCode=0
Jan 27 15:50:58 crc kubenswrapper[4698]: I0127 15:50:58.361860 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ndrd6" event={"ID":"3e403fc5-7005-474c-8c75-b7906b481677","Type":"ContainerDied","Data":"156f30c1ec1a6beb6da54f6304a76d6701efde19274d84c3b1f081da77615216"}
Jan 27 15:50:58 crc kubenswrapper[4698]: I0127 15:50:58.361909 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ndrd6" event={"ID":"3e403fc5-7005-474c-8c75-b7906b481677","Type":"ContainerStarted","Data":"29a10beaa8fe9f7dfe61227016053ea5ec1ff8dbf55e0d3deabed915be650a8d"}
Jan 27 15:50:58 crc kubenswrapper[4698]: I0127 15:50:58.361936 4698 scope.go:117] "RemoveContainer" containerID="2b99e70ad64def6e0bae90fd33324e0259109fc31b002f7860f0218818ee6a4e"
Jan 27 15:51:08 crc kubenswrapper[4698]: I0127 15:51:08.893407 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-jmx64"]
Jan 27 15:51:08 crc kubenswrapper[4698]: E0127 15:51:08.894445 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d7811e77-f37b-44a5-ad0c-c7b318d44bc4" containerName="container-00"
Jan 27 15:51:08 crc kubenswrapper[4698]: I0127 15:51:08.894471 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="d7811e77-f37b-44a5-ad0c-c7b318d44bc4" containerName="container-00"
Jan 27 15:51:08 crc kubenswrapper[4698]: I0127 15:51:08.894720 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="d7811e77-f37b-44a5-ad0c-c7b318d44bc4" containerName="container-00"
Jan 27 15:51:08 crc kubenswrapper[4698]: I0127 15:51:08.896135 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-jmx64"
Jan 27 15:51:08 crc kubenswrapper[4698]: I0127 15:51:08.912392 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-jmx64"]
Jan 27 15:51:08 crc kubenswrapper[4698]: I0127 15:51:08.947721 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e11802cf-f0e3-4c89-b113-2fd4be5a375e-catalog-content\") pod \"community-operators-jmx64\" (UID: \"e11802cf-f0e3-4c89-b113-2fd4be5a375e\") " pod="openshift-marketplace/community-operators-jmx64"
Jan 27 15:51:08 crc kubenswrapper[4698]: I0127 15:51:08.947851 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rg5qs\" (UniqueName: \"kubernetes.io/projected/e11802cf-f0e3-4c89-b113-2fd4be5a375e-kube-api-access-rg5qs\") pod \"community-operators-jmx64\" (UID: \"e11802cf-f0e3-4c89-b113-2fd4be5a375e\") " pod="openshift-marketplace/community-operators-jmx64"
Jan 27 15:51:08 crc kubenswrapper[4698]: I0127 15:51:08.947928 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e11802cf-f0e3-4c89-b113-2fd4be5a375e-utilities\") pod \"community-operators-jmx64\" (UID: \"e11802cf-f0e3-4c89-b113-2fd4be5a375e\") " pod="openshift-marketplace/community-operators-jmx64"
Jan 27 15:51:09 crc kubenswrapper[4698]: I0127 15:51:09.049970 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e11802cf-f0e3-4c89-b113-2fd4be5a375e-utilities\") pod \"community-operators-jmx64\" (UID: \"e11802cf-f0e3-4c89-b113-2fd4be5a375e\") " pod="openshift-marketplace/community-operators-jmx64"
Jan 27 15:51:09 crc kubenswrapper[4698]: I0127 15:51:09.050058 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e11802cf-f0e3-4c89-b113-2fd4be5a375e-catalog-content\") pod \"community-operators-jmx64\" (UID: \"e11802cf-f0e3-4c89-b113-2fd4be5a375e\") " pod="openshift-marketplace/community-operators-jmx64"
Jan 27 15:51:09 crc kubenswrapper[4698]: I0127 15:51:09.050156 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rg5qs\" (UniqueName: \"kubernetes.io/projected/e11802cf-f0e3-4c89-b113-2fd4be5a375e-kube-api-access-rg5qs\") pod \"community-operators-jmx64\" (UID: \"e11802cf-f0e3-4c89-b113-2fd4be5a375e\") " pod="openshift-marketplace/community-operators-jmx64"
Jan 27 15:51:09 crc kubenswrapper[4698]: I0127 15:51:09.050655 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e11802cf-f0e3-4c89-b113-2fd4be5a375e-utilities\") pod \"community-operators-jmx64\" (UID: \"e11802cf-f0e3-4c89-b113-2fd4be5a375e\") " pod="openshift-marketplace/community-operators-jmx64"
Jan 27 15:51:09 crc kubenswrapper[4698]: I0127 15:51:09.050679 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e11802cf-f0e3-4c89-b113-2fd4be5a375e-catalog-content\") pod \"community-operators-jmx64\" (UID: \"e11802cf-f0e3-4c89-b113-2fd4be5a375e\") " pod="openshift-marketplace/community-operators-jmx64"
Jan 27 15:51:09 crc kubenswrapper[4698]: I0127 15:51:09.078898 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rg5qs\" (UniqueName: \"kubernetes.io/projected/e11802cf-f0e3-4c89-b113-2fd4be5a375e-kube-api-access-rg5qs\") pod \"community-operators-jmx64\" (UID: \"e11802cf-f0e3-4c89-b113-2fd4be5a375e\") " pod="openshift-marketplace/community-operators-jmx64"
Jan 27 15:51:09 crc kubenswrapper[4698]: I0127 15:51:09.222159 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-jmx64"
Jan 27 15:51:09 crc kubenswrapper[4698]: I0127 15:51:09.830357 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-jmx64"]
Jan 27 15:51:10 crc kubenswrapper[4698]: I0127 15:51:10.257061 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-7bcc7f5b5b-nhf4c_92693e85-5559-4e51-8da7-b0ca1780cff8/barbican-api/0.log"
path="/var/log/pods/openstack_ceilometer-0_c5569e41-49e9-4044-b173-babb897afb4f/proxy-httpd/0.log" Jan 27 15:51:10 crc kubenswrapper[4698]: I0127 15:51:10.991526 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_c5569e41-49e9-4044-b173-babb897afb4f/sg-core/0.log" Jan 27 15:51:11 crc kubenswrapper[4698]: I0127 15:51:11.211664 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_2b66b3ef-b534-4fc7-ab88-7d6d6d971f26/cinder-api/0.log" Jan 27 15:51:11 crc kubenswrapper[4698]: I0127 15:51:11.406405 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_2b66b3ef-b534-4fc7-ab88-7d6d6d971f26/cinder-api-log/0.log" Jan 27 15:51:11 crc kubenswrapper[4698]: I0127 15:51:11.557242 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_b295ba4f-27e2-4785-82ae-f266f9346576/probe/0.log" Jan 27 15:51:11 crc kubenswrapper[4698]: I0127 15:51:11.605068 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_b295ba4f-27e2-4785-82ae-f266f9346576/cinder-scheduler/0.log" Jan 27 15:51:11 crc kubenswrapper[4698]: I0127 15:51:11.712389 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-84f9c77fd5-xrjct_ad4426b9-0d4e-4a48-8f7a-fdb0febd44da/init/0.log" Jan 27 15:51:11 crc kubenswrapper[4698]: I0127 15:51:11.919318 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-84f9c77fd5-xrjct_ad4426b9-0d4e-4a48-8f7a-fdb0febd44da/dnsmasq-dns/0.log" Jan 27 15:51:11 crc kubenswrapper[4698]: I0127 15:51:11.920882 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-84f9c77fd5-xrjct_ad4426b9-0d4e-4a48-8f7a-fdb0febd44da/init/0.log" Jan 27 15:51:11 crc kubenswrapper[4698]: I0127 15:51:11.949926 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_19f81344-f620-4556-a605-8b6d26805b77/glance-httpd/0.log" Jan 27 15:51:12 crc kubenswrapper[4698]: I0127 15:51:12.130414 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_19f81344-f620-4556-a605-8b6d26805b77/glance-log/0.log" Jan 27 15:51:12 crc kubenswrapper[4698]: I0127 15:51:12.159553 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_c04ef4e5-d2d2-490a-a9b4-4c4a2ad10eab/glance-httpd/0.log" Jan 27 15:51:12 crc kubenswrapper[4698]: I0127 15:51:12.197031 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_c04ef4e5-d2d2-490a-a9b4-4c4a2ad10eab/glance-log/0.log" Jan 27 15:51:12 crc kubenswrapper[4698]: I0127 15:51:12.528038 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-54778bbf88-5qkzn_f9249911-c670-4cc4-895b-8c3a15d90d6f/horizon/0.log" Jan 27 15:51:12 crc kubenswrapper[4698]: I0127 15:51:12.558321 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jmx64" event={"ID":"e11802cf-f0e3-4c89-b113-2fd4be5a375e","Type":"ContainerStarted","Data":"7111ff27051eef1c1794e0f4706ff92588e150083d87ef49e29b03d5c234b720"} Jan 27 15:51:12 crc kubenswrapper[4698]: I0127 15:51:12.799504 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-646ddbcdff-wcvmz_6b7d39fa-1512-4cbd-a3b0-1169b55e8e61/keystone-api/0.log" Jan 27 15:51:12 crc kubenswrapper[4698]: I0127 15:51:12.828522 4698 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_keystone-cron-29492101-f6x5f_0034b4e9-4bc5-48cb-8fcc-f98858f0fe15/keystone-cron/0.log" Jan 27 15:51:13 crc kubenswrapper[4698]: I0127 15:51:13.058008 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_kube-state-metrics-0_f7a222d8-9b89-4da5-919a-cbe5f3ecfd33/kube-state-metrics/0.log" Jan 27 15:51:13 crc kubenswrapper[4698]: I0127 15:51:13.084067 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-54778bbf88-5qkzn_f9249911-c670-4cc4-895b-8c3a15d90d6f/horizon-log/0.log" Jan 27 15:51:13 crc kubenswrapper[4698]: I0127 15:51:13.370500 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-57bdd5f-5p47q_c64faec6-26c1-4556-bcfb-707840ac0863/neutron-httpd/0.log" Jan 27 15:51:13 crc kubenswrapper[4698]: I0127 15:51:13.440412 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-57bdd5f-5p47q_c64faec6-26c1-4556-bcfb-707840ac0863/neutron-api/0.log" Jan 27 15:51:13 crc kubenswrapper[4698]: I0127 15:51:13.568304 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jmx64" event={"ID":"e11802cf-f0e3-4c89-b113-2fd4be5a375e","Type":"ContainerDied","Data":"7111ff27051eef1c1794e0f4706ff92588e150083d87ef49e29b03d5c234b720"} Jan 27 15:51:13 crc kubenswrapper[4698]: I0127 15:51:13.569185 4698 generic.go:334] "Generic (PLEG): container finished" podID="e11802cf-f0e3-4c89-b113-2fd4be5a375e" containerID="7111ff27051eef1c1794e0f4706ff92588e150083d87ef49e29b03d5c234b720" exitCode=0 Jan 27 15:51:13 crc kubenswrapper[4698]: I0127 15:51:13.900009 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_b758775a-939b-4630-9737-476f4ff9073d/nova-api-log/0.log" Jan 27 15:51:14 crc kubenswrapper[4698]: I0127 15:51:14.162542 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_b758775a-939b-4630-9737-476f4ff9073d/nova-api-api/0.log" Jan 27 15:51:14 crc kubenswrapper[4698]: I0127 15:51:14.343758 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell0-conductor-0_f89e0e10-15e5-43ca-95ff-2eb2e82dd4f7/nova-cell0-conductor-conductor/0.log" Jan 27 15:51:14 crc kubenswrapper[4698]: I0127 15:51:14.459610 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-conductor-0_9dd428b1-641b-4e2a-a0cc-72629e7e091b/nova-cell1-conductor-conductor/0.log" Jan 27 15:51:14 crc kubenswrapper[4698]: I0127 15:51:14.581217 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jmx64" event={"ID":"e11802cf-f0e3-4c89-b113-2fd4be5a375e","Type":"ContainerStarted","Data":"3e288b41d322b0922c143f567b3bd89106b255933991954e39388a34d03d25b9"} Jan 27 15:51:14 crc kubenswrapper[4698]: I0127 15:51:14.586107 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-novncproxy-0_5b474dc0-6d40-42e8-9821-da0aa930095e/nova-cell1-novncproxy-novncproxy/0.log" Jan 27 15:51:14 crc kubenswrapper[4698]: I0127 15:51:14.609833 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-jmx64" podStartSLOduration=3.137404875 podStartE2EDuration="6.609812491s" podCreationTimestamp="2026-01-27 15:51:08 +0000 UTC" firstStartedPulling="2026-01-27 15:51:10.534965154 +0000 UTC m=+4926.211742619" lastFinishedPulling="2026-01-27 15:51:14.00737277 +0000 UTC m=+4929.684150235" observedRunningTime="2026-01-27 15:51:14.599429446 +0000 UTC 
m=+4930.276206921" watchObservedRunningTime="2026-01-27 15:51:14.609812491 +0000 UTC m=+4930.286589966" Jan 27 15:51:14 crc kubenswrapper[4698]: I0127 15:51:14.798216 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_d4c24ac0-f402-431e-ba0f-677ca5b9f97a/nova-metadata-log/0.log" Jan 27 15:51:15 crc kubenswrapper[4698]: I0127 15:51:15.833110 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_a30e344d-b5c4-40f6-8bdb-7af9c1df7449/mysql-bootstrap/0.log" Jan 27 15:51:15 crc kubenswrapper[4698]: I0127 15:51:15.897373 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-scheduler-0_e1be438a-a626-403f-ac66-55b2a78f44fe/nova-scheduler-scheduler/0.log" Jan 27 15:51:16 crc kubenswrapper[4698]: I0127 15:51:16.066443 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_a30e344d-b5c4-40f6-8bdb-7af9c1df7449/mysql-bootstrap/0.log" Jan 27 15:51:16 crc kubenswrapper[4698]: I0127 15:51:16.126575 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_a30e344d-b5c4-40f6-8bdb-7af9c1df7449/galera/0.log" Jan 27 15:51:16 crc kubenswrapper[4698]: I0127 15:51:16.256771 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_9374a29e-348e-43ec-9321-b0a13aeb6c4b/mysql-bootstrap/0.log" Jan 27 15:51:16 crc kubenswrapper[4698]: I0127 15:51:16.380004 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_d4c24ac0-f402-431e-ba0f-677ca5b9f97a/nova-metadata-metadata/0.log" Jan 27 15:51:16 crc kubenswrapper[4698]: I0127 15:51:16.504012 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_9374a29e-348e-43ec-9321-b0a13aeb6c4b/mysql-bootstrap/0.log" Jan 27 15:51:16 crc kubenswrapper[4698]: I0127 15:51:16.540588 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_9374a29e-348e-43ec-9321-b0a13aeb6c4b/galera/0.log" Jan 27 15:51:16 crc kubenswrapper[4698]: I0127 15:51:16.604105 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstackclient_a11616dd-8398-4c71-829f-1a389df9495f/openstackclient/0.log" Jan 27 15:51:16 crc kubenswrapper[4698]: I0127 15:51:16.806486 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-7swbz_f9e1f6c1-39e5-4d7a-ac6a-da53c6fe143b/ovn-controller/0.log" Jan 27 15:51:16 crc kubenswrapper[4698]: I0127 15:51:16.895552 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-6zb85_487e1754-14d1-494d-97fd-495520f0c8e0/openstack-network-exporter/0.log" Jan 27 15:51:17 crc kubenswrapper[4698]: I0127 15:51:17.636678 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-cn5z6_ca935cab-9c0b-4b5c-9754-f5bafb3a0037/ovsdb-server-init/0.log" Jan 27 15:51:17 crc kubenswrapper[4698]: I0127 15:51:17.826230 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-cn5z6_ca935cab-9c0b-4b5c-9754-f5bafb3a0037/ovs-vswitchd/0.log" Jan 27 15:51:17 crc kubenswrapper[4698]: I0127 15:51:17.875285 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-cn5z6_ca935cab-9c0b-4b5c-9754-f5bafb3a0037/ovsdb-server-init/0.log" Jan 27 15:51:17 crc kubenswrapper[4698]: I0127 15:51:17.910689 4698 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_ovn-controller-ovs-cn5z6_ca935cab-9c0b-4b5c-9754-f5bafb3a0037/ovsdb-server/0.log" Jan 27 15:51:18 crc kubenswrapper[4698]: I0127 15:51:18.013689 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_1fbe86d1-6225-4de5-81a1-9222e08bcec5/openstack-network-exporter/0.log" Jan 27 15:51:18 crc kubenswrapper[4698]: I0127 15:51:18.150904 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_1fbe86d1-6225-4de5-81a1-9222e08bcec5/ovn-northd/0.log" Jan 27 15:51:18 crc kubenswrapper[4698]: I0127 15:51:18.152669 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_35c385ce-4a6e-4a66-b607-89f47e40b6fc/openstack-network-exporter/0.log" Jan 27 15:51:18 crc kubenswrapper[4698]: I0127 15:51:18.265717 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_35c385ce-4a6e-4a66-b607-89f47e40b6fc/ovsdbserver-nb/0.log" Jan 27 15:51:18 crc kubenswrapper[4698]: I0127 15:51:18.411944 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_74651d0e-02c7-4067-9fc1-eff4c90d33ac/ovsdbserver-sb/0.log" Jan 27 15:51:18 crc kubenswrapper[4698]: I0127 15:51:18.415388 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_74651d0e-02c7-4067-9fc1-eff4c90d33ac/openstack-network-exporter/0.log" Jan 27 15:51:18 crc kubenswrapper[4698]: I0127 15:51:18.663472 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-79c59487f6-d4xj7_cda38994-c355-459e-af24-3fb060e62625/placement-api/0.log" Jan 27 15:51:18 crc kubenswrapper[4698]: I0127 15:51:18.749250 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-79c59487f6-d4xj7_cda38994-c355-459e-af24-3fb060e62625/placement-log/0.log" Jan 27 15:51:18 crc kubenswrapper[4698]: I0127 15:51:18.852424 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_19218e14-04c7-40a9-b2e7-2873e9cdbe82/init-config-reloader/0.log" Jan 27 15:51:19 crc kubenswrapper[4698]: I0127 15:51:19.054963 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_19218e14-04c7-40a9-b2e7-2873e9cdbe82/prometheus/0.log" Jan 27 15:51:19 crc kubenswrapper[4698]: I0127 15:51:19.061376 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_19218e14-04c7-40a9-b2e7-2873e9cdbe82/init-config-reloader/0.log" Jan 27 15:51:19 crc kubenswrapper[4698]: I0127 15:51:19.067950 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_19218e14-04c7-40a9-b2e7-2873e9cdbe82/config-reloader/0.log" Jan 27 15:51:19 crc kubenswrapper[4698]: I0127 15:51:19.131204 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_19218e14-04c7-40a9-b2e7-2873e9cdbe82/thanos-sidecar/0.log" Jan 27 15:51:19 crc kubenswrapper[4698]: I0127 15:51:19.222319 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-jmx64" Jan 27 15:51:19 crc kubenswrapper[4698]: I0127 15:51:19.222444 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-jmx64" Jan 27 15:51:19 crc kubenswrapper[4698]: I0127 15:51:19.264075 4698 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_rabbitmq-cell1-server-0_764b6b7b-3664-40e6-a24b-dc0f9db827db/setup-container/0.log" Jan 27 15:51:19 crc kubenswrapper[4698]: I0127 15:51:19.607236 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_764b6b7b-3664-40e6-a24b-dc0f9db827db/setup-container/0.log" Jan 27 15:51:19 crc kubenswrapper[4698]: I0127 15:51:19.633912 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-notifications-server-0_5d6f607c-3a31-4135-9eb4-3193e722d112/setup-container/0.log" Jan 27 15:51:19 crc kubenswrapper[4698]: I0127 15:51:19.677298 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_764b6b7b-3664-40e6-a24b-dc0f9db827db/rabbitmq/0.log" Jan 27 15:51:19 crc kubenswrapper[4698]: I0127 15:51:19.835800 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-notifications-server-0_5d6f607c-3a31-4135-9eb4-3193e722d112/setup-container/0.log" Jan 27 15:51:19 crc kubenswrapper[4698]: I0127 15:51:19.914172 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-notifications-server-0_5d6f607c-3a31-4135-9eb4-3193e722d112/rabbitmq/0.log" Jan 27 15:51:19 crc kubenswrapper[4698]: I0127 15:51:19.985033 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_c686e168-f607-4b7f-a81d-f33ac8bdf513/setup-container/0.log" Jan 27 15:51:20 crc kubenswrapper[4698]: I0127 15:51:20.209789 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_c686e168-f607-4b7f-a81d-f33ac8bdf513/rabbitmq/0.log" Jan 27 15:51:20 crc kubenswrapper[4698]: I0127 15:51:20.241790 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_c686e168-f607-4b7f-a81d-f33ac8bdf513/setup-container/0.log" Jan 27 15:51:20 crc kubenswrapper[4698]: I0127 15:51:20.274305 4698 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-jmx64" podUID="e11802cf-f0e3-4c89-b113-2fd4be5a375e" containerName="registry-server" probeResult="failure" output=< Jan 27 15:51:20 crc kubenswrapper[4698]: timeout: failed to connect service ":50051" within 1s Jan 27 15:51:20 crc kubenswrapper[4698]: > Jan 27 15:51:20 crc kubenswrapper[4698]: I0127 15:51:20.386431 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-6f99cfdc45-gkb5v_296c42dd-3876-4f34-9d1c-f0b1cc1b3303/proxy-httpd/0.log" Jan 27 15:51:20 crc kubenswrapper[4698]: I0127 15:51:20.448830 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-6f99cfdc45-gkb5v_296c42dd-3876-4f34-9d1c-f0b1cc1b3303/proxy-server/0.log" Jan 27 15:51:20 crc kubenswrapper[4698]: I0127 15:51:20.485200 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-ring-rebalance-52ttj_fd96ec0a-9e80-4f7e-b009-b83aaa6e726e/swift-ring-rebalance/0.log" Jan 27 15:51:20 crc kubenswrapper[4698]: I0127 15:51:20.693416 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_f15487de-4580-4abf-a96c-3c5d364fe2d5/account-auditor/0.log" Jan 27 15:51:20 crc kubenswrapper[4698]: I0127 15:51:20.710979 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_f15487de-4580-4abf-a96c-3c5d364fe2d5/account-reaper/0.log" Jan 27 15:51:20 crc kubenswrapper[4698]: I0127 15:51:20.751215 4698 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_swift-storage-0_f15487de-4580-4abf-a96c-3c5d364fe2d5/account-replicator/0.log" Jan 27 15:51:20 crc kubenswrapper[4698]: I0127 15:51:20.825934 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_f15487de-4580-4abf-a96c-3c5d364fe2d5/account-server/0.log" Jan 27 15:51:20 crc kubenswrapper[4698]: I0127 15:51:20.894572 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_f15487de-4580-4abf-a96c-3c5d364fe2d5/container-auditor/0.log" Jan 27 15:51:20 crc kubenswrapper[4698]: I0127 15:51:20.989615 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_f15487de-4580-4abf-a96c-3c5d364fe2d5/container-server/0.log" Jan 27 15:51:21 crc kubenswrapper[4698]: I0127 15:51:21.046761 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_f15487de-4580-4abf-a96c-3c5d364fe2d5/container-replicator/0.log" Jan 27 15:51:21 crc kubenswrapper[4698]: I0127 15:51:21.124216 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_f15487de-4580-4abf-a96c-3c5d364fe2d5/container-updater/0.log" Jan 27 15:51:21 crc kubenswrapper[4698]: I0127 15:51:21.165156 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_f15487de-4580-4abf-a96c-3c5d364fe2d5/object-auditor/0.log" Jan 27 15:51:21 crc kubenswrapper[4698]: I0127 15:51:21.232737 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_f15487de-4580-4abf-a96c-3c5d364fe2d5/object-expirer/0.log" Jan 27 15:51:21 crc kubenswrapper[4698]: I0127 15:51:21.327335 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_f15487de-4580-4abf-a96c-3c5d364fe2d5/object-replicator/0.log" Jan 27 15:51:21 crc kubenswrapper[4698]: I0127 15:51:21.386722 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_f15487de-4580-4abf-a96c-3c5d364fe2d5/object-server/0.log" Jan 27 15:51:21 crc kubenswrapper[4698]: I0127 15:51:21.412160 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_f15487de-4580-4abf-a96c-3c5d364fe2d5/object-updater/0.log" Jan 27 15:51:21 crc kubenswrapper[4698]: I0127 15:51:21.495230 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_f15487de-4580-4abf-a96c-3c5d364fe2d5/rsync/0.log" Jan 27 15:51:21 crc kubenswrapper[4698]: I0127 15:51:21.546055 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_f15487de-4580-4abf-a96c-3c5d364fe2d5/swift-recon-cron/0.log" Jan 27 15:51:21 crc kubenswrapper[4698]: I0127 15:51:21.807780 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_watcher-api-0_ac75c7a7-7556-4c40-bace-beafefc7a3cd/watcher-api-log/0.log" Jan 27 15:51:22 crc kubenswrapper[4698]: I0127 15:51:22.111734 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_watcher-applier-0_d3bcf72a-4e77-4609-9796-a712514b59de/watcher-applier/0.log" Jan 27 15:51:22 crc kubenswrapper[4698]: I0127 15:51:22.393479 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_watcher-decision-engine-0_72e26469-ae9a-4fd9-b7ee-bfaaa48b4554/watcher-decision-engine/0.log" Jan 27 15:51:24 crc kubenswrapper[4698]: I0127 15:51:24.494380 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_watcher-api-0_ac75c7a7-7556-4c40-bace-beafefc7a3cd/watcher-api/0.log" Jan 27 15:51:29 crc 
kubenswrapper[4698]: I0127 15:51:29.283445 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-jmx64" Jan 27 15:51:29 crc kubenswrapper[4698]: I0127 15:51:29.349143 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-jmx64" Jan 27 15:51:29 crc kubenswrapper[4698]: I0127 15:51:29.529313 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-jmx64"] Jan 27 15:51:30 crc kubenswrapper[4698]: I0127 15:51:30.719130 4698 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-jmx64" podUID="e11802cf-f0e3-4c89-b113-2fd4be5a375e" containerName="registry-server" containerID="cri-o://3e288b41d322b0922c143f567b3bd89106b255933991954e39388a34d03d25b9" gracePeriod=2 Jan 27 15:51:31 crc kubenswrapper[4698]: I0127 15:51:31.269352 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_memcached-0_589d54fa-234d-41b2-b030-91101d03c978/memcached/0.log" Jan 27 15:51:31 crc kubenswrapper[4698]: I0127 15:51:31.731620 4698 generic.go:334] "Generic (PLEG): container finished" podID="e11802cf-f0e3-4c89-b113-2fd4be5a375e" containerID="3e288b41d322b0922c143f567b3bd89106b255933991954e39388a34d03d25b9" exitCode=0 Jan 27 15:51:31 crc kubenswrapper[4698]: I0127 15:51:31.731904 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jmx64" event={"ID":"e11802cf-f0e3-4c89-b113-2fd4be5a375e","Type":"ContainerDied","Data":"3e288b41d322b0922c143f567b3bd89106b255933991954e39388a34d03d25b9"} Jan 27 15:51:31 crc kubenswrapper[4698]: I0127 15:51:31.881817 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-jmx64" Jan 27 15:51:31 crc kubenswrapper[4698]: I0127 15:51:31.892955 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e11802cf-f0e3-4c89-b113-2fd4be5a375e-catalog-content\") pod \"e11802cf-f0e3-4c89-b113-2fd4be5a375e\" (UID: \"e11802cf-f0e3-4c89-b113-2fd4be5a375e\") " Jan 27 15:51:31 crc kubenswrapper[4698]: I0127 15:51:31.893087 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rg5qs\" (UniqueName: \"kubernetes.io/projected/e11802cf-f0e3-4c89-b113-2fd4be5a375e-kube-api-access-rg5qs\") pod \"e11802cf-f0e3-4c89-b113-2fd4be5a375e\" (UID: \"e11802cf-f0e3-4c89-b113-2fd4be5a375e\") " Jan 27 15:51:31 crc kubenswrapper[4698]: I0127 15:51:31.893107 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e11802cf-f0e3-4c89-b113-2fd4be5a375e-utilities\") pod \"e11802cf-f0e3-4c89-b113-2fd4be5a375e\" (UID: \"e11802cf-f0e3-4c89-b113-2fd4be5a375e\") " Jan 27 15:51:31 crc kubenswrapper[4698]: I0127 15:51:31.894431 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e11802cf-f0e3-4c89-b113-2fd4be5a375e-utilities" (OuterVolumeSpecName: "utilities") pod "e11802cf-f0e3-4c89-b113-2fd4be5a375e" (UID: "e11802cf-f0e3-4c89-b113-2fd4be5a375e"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 15:51:31 crc kubenswrapper[4698]: I0127 15:51:31.901041 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e11802cf-f0e3-4c89-b113-2fd4be5a375e-kube-api-access-rg5qs" (OuterVolumeSpecName: "kube-api-access-rg5qs") pod "e11802cf-f0e3-4c89-b113-2fd4be5a375e" (UID: "e11802cf-f0e3-4c89-b113-2fd4be5a375e"). InnerVolumeSpecName "kube-api-access-rg5qs". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 15:51:31 crc kubenswrapper[4698]: I0127 15:51:31.965258 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e11802cf-f0e3-4c89-b113-2fd4be5a375e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e11802cf-f0e3-4c89-b113-2fd4be5a375e" (UID: "e11802cf-f0e3-4c89-b113-2fd4be5a375e"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 15:51:31 crc kubenswrapper[4698]: I0127 15:51:31.994337 4698 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e11802cf-f0e3-4c89-b113-2fd4be5a375e-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 15:51:31 crc kubenswrapper[4698]: I0127 15:51:31.994369 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rg5qs\" (UniqueName: \"kubernetes.io/projected/e11802cf-f0e3-4c89-b113-2fd4be5a375e-kube-api-access-rg5qs\") on node \"crc\" DevicePath \"\"" Jan 27 15:51:31 crc kubenswrapper[4698]: I0127 15:51:31.994379 4698 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e11802cf-f0e3-4c89-b113-2fd4be5a375e-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 15:51:32 crc kubenswrapper[4698]: I0127 15:51:32.743890 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jmx64" event={"ID":"e11802cf-f0e3-4c89-b113-2fd4be5a375e","Type":"ContainerDied","Data":"9f2ae05e43e5f6bdf1dafd1b2ba0065d5febec3f04b23ad4216bd29f8149a28a"} Jan 27 15:51:32 crc kubenswrapper[4698]: I0127 15:51:32.743979 4698 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-jmx64" Jan 27 15:51:32 crc kubenswrapper[4698]: I0127 15:51:32.744259 4698 scope.go:117] "RemoveContainer" containerID="3e288b41d322b0922c143f567b3bd89106b255933991954e39388a34d03d25b9" Jan 27 15:51:32 crc kubenswrapper[4698]: I0127 15:51:32.765803 4698 scope.go:117] "RemoveContainer" containerID="7111ff27051eef1c1794e0f4706ff92588e150083d87ef49e29b03d5c234b720" Jan 27 15:51:32 crc kubenswrapper[4698]: I0127 15:51:32.796749 4698 scope.go:117] "RemoveContainer" containerID="f182ac8c57b913fd9dd9d9537a85a6e35f6bae466db5e6375d78d50b0019ce0a" Jan 27 15:51:32 crc kubenswrapper[4698]: I0127 15:51:32.804477 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-jmx64"] Jan 27 15:51:32 crc kubenswrapper[4698]: I0127 15:51:32.813173 4698 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-jmx64"] Jan 27 15:51:33 crc kubenswrapper[4698]: I0127 15:51:33.003755 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e11802cf-f0e3-4c89-b113-2fd4be5a375e" path="/var/lib/kubelet/pods/e11802cf-f0e3-4c89-b113-2fd4be5a375e/volumes" Jan 27 15:51:53 crc kubenswrapper[4698]: I0127 15:51:53.035755 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-7f86f8796f-zjw77_00d7ded4-a39f-4261-8f42-5762a7d28314/manager/0.log" Jan 27 15:51:53 crc kubenswrapper[4698]: I0127 15:51:53.387341 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cdb0962048ad66ba27a643a7f3792a291ef406947954a495c9d877e856xlnxt_8b62a63b-7862-462d-a67e-864848915728/util/0.log" Jan 27 15:51:53 crc kubenswrapper[4698]: I0127 15:51:53.618030 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cdb0962048ad66ba27a643a7f3792a291ef406947954a495c9d877e856xlnxt_8b62a63b-7862-462d-a67e-864848915728/pull/0.log" Jan 27 15:51:53 crc kubenswrapper[4698]: I0127 15:51:53.620663 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cdb0962048ad66ba27a643a7f3792a291ef406947954a495c9d877e856xlnxt_8b62a63b-7862-462d-a67e-864848915728/util/0.log" Jan 27 15:51:53 crc kubenswrapper[4698]: I0127 15:51:53.669757 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cdb0962048ad66ba27a643a7f3792a291ef406947954a495c9d877e856xlnxt_8b62a63b-7862-462d-a67e-864848915728/pull/0.log" Jan 27 15:51:53 crc kubenswrapper[4698]: I0127 15:51:53.841013 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cdb0962048ad66ba27a643a7f3792a291ef406947954a495c9d877e856xlnxt_8b62a63b-7862-462d-a67e-864848915728/pull/0.log" Jan 27 15:51:53 crc kubenswrapper[4698]: I0127 15:51:53.845783 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cdb0962048ad66ba27a643a7f3792a291ef406947954a495c9d877e856xlnxt_8b62a63b-7862-462d-a67e-864848915728/extract/0.log" Jan 27 15:51:53 crc kubenswrapper[4698]: I0127 15:51:53.877366 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cdb0962048ad66ba27a643a7f3792a291ef406947954a495c9d877e856xlnxt_8b62a63b-7862-462d-a67e-864848915728/util/0.log" Jan 27 15:51:54 crc kubenswrapper[4698]: I0127 15:51:54.115021 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-7478f7dbf9-zz49w_fdf128da-b514-46a4-ba2a-488ed77088c0/manager/0.log" Jan 27 15:51:54 
crc kubenswrapper[4698]: I0127 15:51:54.129093 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-b45d7bf98-rwj7x_84e6b7df-451a-421d-9128-a73ee95124ca/manager/0.log" Jan 27 15:51:54 crc kubenswrapper[4698]: I0127 15:51:54.377144 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-78fdd796fd-l7sxf_cd843e79-28e5-483b-8368-b344b5fc42ed/manager/0.log" Jan 27 15:51:54 crc kubenswrapper[4698]: I0127 15:51:54.425091 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-594c8c9d5d-bfxhx_d77b4eac-bd81-41be-8a8c-6cb9c61bd242/manager/0.log" Jan 27 15:51:54 crc kubenswrapper[4698]: I0127 15:51:54.686588 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-77d5c5b54f-7b68z_a037e7f8-75bb-4a3a-a60e-e378b79e7a2c/manager/0.log" Jan 27 15:51:54 crc kubenswrapper[4698]: I0127 15:51:54.912182 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-598f7747c9-ppdfb_3f9d43a0-d759-4627-9ac2-d48d281e6daf/manager/0.log" Jan 27 15:51:54 crc kubenswrapper[4698]: I0127 15:51:54.978428 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-694cf4f878-t9jb8_095ba028-5504-4533-b759-edaa313a8e80/manager/0.log" Jan 27 15:51:55 crc kubenswrapper[4698]: I0127 15:51:55.119817 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-b8b6d4659-tv9vl_55c2e67b-e60f-4e4f-8322-35cc46986b8c/manager/0.log" Jan 27 15:51:55 crc kubenswrapper[4698]: I0127 15:51:55.195875 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-78c6999f6f-plttm_68bcfa84-c19a-4686-b103-3164e0733af1/manager/0.log" Jan 27 15:51:55 crc kubenswrapper[4698]: I0127 15:51:55.381667 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-6b9fb5fdcb-fq777_0b7db176-d2e8-4e0d-b769-e4cc9f1ef32b/manager/0.log" Jan 27 15:51:55 crc kubenswrapper[4698]: I0127 15:51:55.558479 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-78d58447c5-g9n9r_f9f70b91-3596-4b3a-92b7-38db144afae1/manager/0.log" Jan 27 15:51:55 crc kubenswrapper[4698]: I0127 15:51:55.674537 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-7bdb645866-6mfh4_7bcb0020-f358-4d29-8fb1-78c62d473485/manager/0.log" Jan 27 15:51:56 crc kubenswrapper[4698]: I0127 15:51:56.174893 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-5f4cd88d46-b5zrs_de5834ee-7dcd-4642-a6f6-4c5d04f1f1c3/manager/0.log" Jan 27 15:51:56 crc kubenswrapper[4698]: I0127 15:51:56.187359 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-6b68b8b854498hc_6bc80c1e-debd-4c6a-b45d-595c733af1ac/manager/0.log" Jan 27 15:51:56 crc kubenswrapper[4698]: I0127 15:51:56.566202 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-init-7cd9855986-ns8gf_fb90ab87-ea48-4a22-a991-2380fff4d554/operator/0.log" Jan 27 15:51:56 crc kubenswrapper[4698]: 
I0127 15:51:56.878957 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-cj9s9_83931357-c0eb-4337-91da-cf623496c4ef/registry-server/0.log" Jan 27 15:51:57 crc kubenswrapper[4698]: I0127 15:51:57.158662 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-6f75f45d54-786cc_21f4e075-c740-4e05-a70c-d5e8a14acd45/manager/0.log" Jan 27 15:51:57 crc kubenswrapper[4698]: I0127 15:51:57.342445 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-79d5ccc684-992bb_2b7b5c45-dace-452f-bb89-08c886ecfe35/manager/0.log" Jan 27 15:51:57 crc kubenswrapper[4698]: I0127 15:51:57.537885 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-7bfbd85685-ckqkx_86a6bede-0b85-4f92-8e96-5c7c04e5e8dd/manager/0.log" Jan 27 15:51:57 crc kubenswrapper[4698]: I0127 15:51:57.951035 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-547cbdb99f-bm8c6_ee3e8394-b329-49c2-bee1-eb0ba9d4f023/manager/0.log" Jan 27 15:51:57 crc kubenswrapper[4698]: I0127 15:51:57.954075 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-zxcbl_5a115396-53db-4c99-80f1-abb7aad7fde5/operator/0.log" Jan 27 15:51:58 crc kubenswrapper[4698]: I0127 15:51:58.173254 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-69797bbcbd-5v2tj_fdc4f026-5fd3-4519-8d47-aeede547de6d/manager/0.log" Jan 27 15:51:58 crc kubenswrapper[4698]: I0127 15:51:58.289279 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-65d56bd854-4kv98_5dbd886b-472c-41c0-b779-652e4f3121fd/manager/0.log" Jan 27 15:51:58 crc kubenswrapper[4698]: I0127 15:51:58.411554 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-85cd9769bb-4chkq_47971c2b-520a-4088-a172-cc689e975fb9/manager/0.log" Jan 27 15:52:20 crc kubenswrapper[4698]: I0127 15:52:20.456063 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-g2trc"] Jan 27 15:52:20 crc kubenswrapper[4698]: E0127 15:52:20.459729 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e11802cf-f0e3-4c89-b113-2fd4be5a375e" containerName="extract-content" Jan 27 15:52:20 crc kubenswrapper[4698]: I0127 15:52:20.459843 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="e11802cf-f0e3-4c89-b113-2fd4be5a375e" containerName="extract-content" Jan 27 15:52:20 crc kubenswrapper[4698]: E0127 15:52:20.459935 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e11802cf-f0e3-4c89-b113-2fd4be5a375e" containerName="extract-utilities" Jan 27 15:52:20 crc kubenswrapper[4698]: I0127 15:52:20.460013 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="e11802cf-f0e3-4c89-b113-2fd4be5a375e" containerName="extract-utilities" Jan 27 15:52:20 crc kubenswrapper[4698]: E0127 15:52:20.460098 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e11802cf-f0e3-4c89-b113-2fd4be5a375e" containerName="registry-server" Jan 27 15:52:20 crc kubenswrapper[4698]: I0127 15:52:20.460286 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="e11802cf-f0e3-4c89-b113-2fd4be5a375e" 
containerName="registry-server" Jan 27 15:52:20 crc kubenswrapper[4698]: I0127 15:52:20.460617 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="e11802cf-f0e3-4c89-b113-2fd4be5a375e" containerName="registry-server" Jan 27 15:52:20 crc kubenswrapper[4698]: I0127 15:52:20.462521 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-g2trc" Jan 27 15:52:20 crc kubenswrapper[4698]: I0127 15:52:20.465435 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-g2trc"] Jan 27 15:52:20 crc kubenswrapper[4698]: I0127 15:52:20.654169 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f4975468-de80-48ef-be59-ba4cab2e70bc-catalog-content\") pod \"redhat-operators-g2trc\" (UID: \"f4975468-de80-48ef-be59-ba4cab2e70bc\") " pod="openshift-marketplace/redhat-operators-g2trc" Jan 27 15:52:20 crc kubenswrapper[4698]: I0127 15:52:20.654303 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l9ngc\" (UniqueName: \"kubernetes.io/projected/f4975468-de80-48ef-be59-ba4cab2e70bc-kube-api-access-l9ngc\") pod \"redhat-operators-g2trc\" (UID: \"f4975468-de80-48ef-be59-ba4cab2e70bc\") " pod="openshift-marketplace/redhat-operators-g2trc" Jan 27 15:52:20 crc kubenswrapper[4698]: I0127 15:52:20.654354 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f4975468-de80-48ef-be59-ba4cab2e70bc-utilities\") pod \"redhat-operators-g2trc\" (UID: \"f4975468-de80-48ef-be59-ba4cab2e70bc\") " pod="openshift-marketplace/redhat-operators-g2trc" Jan 27 15:52:20 crc kubenswrapper[4698]: I0127 15:52:20.756468 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l9ngc\" (UniqueName: \"kubernetes.io/projected/f4975468-de80-48ef-be59-ba4cab2e70bc-kube-api-access-l9ngc\") pod \"redhat-operators-g2trc\" (UID: \"f4975468-de80-48ef-be59-ba4cab2e70bc\") " pod="openshift-marketplace/redhat-operators-g2trc" Jan 27 15:52:20 crc kubenswrapper[4698]: I0127 15:52:20.756825 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f4975468-de80-48ef-be59-ba4cab2e70bc-utilities\") pod \"redhat-operators-g2trc\" (UID: \"f4975468-de80-48ef-be59-ba4cab2e70bc\") " pod="openshift-marketplace/redhat-operators-g2trc" Jan 27 15:52:20 crc kubenswrapper[4698]: I0127 15:52:20.756910 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f4975468-de80-48ef-be59-ba4cab2e70bc-catalog-content\") pod \"redhat-operators-g2trc\" (UID: \"f4975468-de80-48ef-be59-ba4cab2e70bc\") " pod="openshift-marketplace/redhat-operators-g2trc" Jan 27 15:52:20 crc kubenswrapper[4698]: I0127 15:52:20.757315 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f4975468-de80-48ef-be59-ba4cab2e70bc-utilities\") pod \"redhat-operators-g2trc\" (UID: \"f4975468-de80-48ef-be59-ba4cab2e70bc\") " pod="openshift-marketplace/redhat-operators-g2trc" Jan 27 15:52:20 crc kubenswrapper[4698]: I0127 15:52:20.757393 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/f4975468-de80-48ef-be59-ba4cab2e70bc-catalog-content\") pod \"redhat-operators-g2trc\" (UID: \"f4975468-de80-48ef-be59-ba4cab2e70bc\") " pod="openshift-marketplace/redhat-operators-g2trc" Jan 27 15:52:20 crc kubenswrapper[4698]: I0127 15:52:20.778747 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l9ngc\" (UniqueName: \"kubernetes.io/projected/f4975468-de80-48ef-be59-ba4cab2e70bc-kube-api-access-l9ngc\") pod \"redhat-operators-g2trc\" (UID: \"f4975468-de80-48ef-be59-ba4cab2e70bc\") " pod="openshift-marketplace/redhat-operators-g2trc" Jan 27 15:52:20 crc kubenswrapper[4698]: I0127 15:52:20.787047 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-g2trc" Jan 27 15:52:21 crc kubenswrapper[4698]: I0127 15:52:21.400278 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-g2trc"] Jan 27 15:52:21 crc kubenswrapper[4698]: I0127 15:52:21.463387 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-mswzh_019c0321-025d-4bf5-a48c-fd0e707b797c/control-plane-machine-set-operator/0.log" Jan 27 15:52:21 crc kubenswrapper[4698]: I0127 15:52:21.955241 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-qpzns_77a18531-ffc7-42d9-bba7-78d72b032c39/machine-api-operator/0.log" Jan 27 15:52:21 crc kubenswrapper[4698]: I0127 15:52:21.983803 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-qpzns_77a18531-ffc7-42d9-bba7-78d72b032c39/kube-rbac-proxy/0.log" Jan 27 15:52:22 crc kubenswrapper[4698]: I0127 15:52:22.179961 4698 generic.go:334] "Generic (PLEG): container finished" podID="f4975468-de80-48ef-be59-ba4cab2e70bc" containerID="ffa2e48190f053dfe21e6a2013b88b554a29efa67546435192e1c9d24c5b00cc" exitCode=0 Jan 27 15:52:22 crc kubenswrapper[4698]: I0127 15:52:22.180015 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-g2trc" event={"ID":"f4975468-de80-48ef-be59-ba4cab2e70bc","Type":"ContainerDied","Data":"ffa2e48190f053dfe21e6a2013b88b554a29efa67546435192e1c9d24c5b00cc"} Jan 27 15:52:22 crc kubenswrapper[4698]: I0127 15:52:22.180046 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-g2trc" event={"ID":"f4975468-de80-48ef-be59-ba4cab2e70bc","Type":"ContainerStarted","Data":"7410ed7c6015751861012370d1e60ac066ff4912c9ae284c0537f7a333124192"} Jan 27 15:52:24 crc kubenswrapper[4698]: I0127 15:52:24.198381 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-g2trc" event={"ID":"f4975468-de80-48ef-be59-ba4cab2e70bc","Type":"ContainerStarted","Data":"1f1d2d8fe5fe9faaf93770253341a835a7068a49c17bcb3a7040aa3c2ad00dfa"} Jan 27 15:52:25 crc kubenswrapper[4698]: I0127 15:52:25.207750 4698 generic.go:334] "Generic (PLEG): container finished" podID="f4975468-de80-48ef-be59-ba4cab2e70bc" containerID="1f1d2d8fe5fe9faaf93770253341a835a7068a49c17bcb3a7040aa3c2ad00dfa" exitCode=0 Jan 27 15:52:25 crc kubenswrapper[4698]: I0127 15:52:25.207929 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-g2trc" event={"ID":"f4975468-de80-48ef-be59-ba4cab2e70bc","Type":"ContainerDied","Data":"1f1d2d8fe5fe9faaf93770253341a835a7068a49c17bcb3a7040aa3c2ad00dfa"} Jan 27 15:52:27 
crc kubenswrapper[4698]: I0127 15:52:27.226821 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-g2trc" event={"ID":"f4975468-de80-48ef-be59-ba4cab2e70bc","Type":"ContainerStarted","Data":"73da87ddcabec6f66b4c9871da6fbae08f158ad64c3649799ec097409f3672b5"} Jan 27 15:52:27 crc kubenswrapper[4698]: I0127 15:52:27.254915 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-g2trc" podStartSLOduration=2.620286427 podStartE2EDuration="7.254893193s" podCreationTimestamp="2026-01-27 15:52:20 +0000 UTC" firstStartedPulling="2026-01-27 15:52:22.182072212 +0000 UTC m=+4997.858849677" lastFinishedPulling="2026-01-27 15:52:26.816678988 +0000 UTC m=+5002.493456443" observedRunningTime="2026-01-27 15:52:27.25175565 +0000 UTC m=+5002.928533115" watchObservedRunningTime="2026-01-27 15:52:27.254893193 +0000 UTC m=+5002.931670668" Jan 27 15:52:30 crc kubenswrapper[4698]: I0127 15:52:30.788512 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-g2trc" Jan 27 15:52:30 crc kubenswrapper[4698]: I0127 15:52:30.789145 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-g2trc" Jan 27 15:52:31 crc kubenswrapper[4698]: I0127 15:52:31.838670 4698 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-g2trc" podUID="f4975468-de80-48ef-be59-ba4cab2e70bc" containerName="registry-server" probeResult="failure" output=< Jan 27 15:52:31 crc kubenswrapper[4698]: timeout: failed to connect service ":50051" within 1s Jan 27 15:52:31 crc kubenswrapper[4698]: > Jan 27 15:52:37 crc kubenswrapper[4698]: I0127 15:52:37.340578 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-858654f9db-xw65b_4a6256b9-f95d-4bee-970e-f903645456ba/cert-manager-controller/0.log" Jan 27 15:52:37 crc kubenswrapper[4698]: I0127 15:52:37.539176 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-cf98fcc89-h58jw_1c7fda0e-3d43-4d0d-a649-71f9117493c1/cert-manager-cainjector/0.log" Jan 27 15:52:37 crc kubenswrapper[4698]: I0127 15:52:37.677583 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-687f57d79b-25xhd_7c7b72cb-94ea-407b-b24c-b9b2d7b33be1/cert-manager-webhook/0.log" Jan 27 15:52:40 crc kubenswrapper[4698]: I0127 15:52:40.834602 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-g2trc" Jan 27 15:52:40 crc kubenswrapper[4698]: I0127 15:52:40.888109 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-g2trc" Jan 27 15:52:41 crc kubenswrapper[4698]: I0127 15:52:41.080352 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-g2trc"] Jan 27 15:52:42 crc kubenswrapper[4698]: I0127 15:52:42.366514 4698 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-g2trc" podUID="f4975468-de80-48ef-be59-ba4cab2e70bc" containerName="registry-server" containerID="cri-o://73da87ddcabec6f66b4c9871da6fbae08f158ad64c3649799ec097409f3672b5" gracePeriod=2 Jan 27 15:52:42 crc kubenswrapper[4698]: I0127 15:52:42.866929 4698 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-g2trc" Jan 27 15:52:42 crc kubenswrapper[4698]: I0127 15:52:42.984108 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f4975468-de80-48ef-be59-ba4cab2e70bc-catalog-content\") pod \"f4975468-de80-48ef-be59-ba4cab2e70bc\" (UID: \"f4975468-de80-48ef-be59-ba4cab2e70bc\") " Jan 27 15:52:42 crc kubenswrapper[4698]: I0127 15:52:42.984175 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f4975468-de80-48ef-be59-ba4cab2e70bc-utilities\") pod \"f4975468-de80-48ef-be59-ba4cab2e70bc\" (UID: \"f4975468-de80-48ef-be59-ba4cab2e70bc\") " Jan 27 15:52:42 crc kubenswrapper[4698]: I0127 15:52:42.984248 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l9ngc\" (UniqueName: \"kubernetes.io/projected/f4975468-de80-48ef-be59-ba4cab2e70bc-kube-api-access-l9ngc\") pod \"f4975468-de80-48ef-be59-ba4cab2e70bc\" (UID: \"f4975468-de80-48ef-be59-ba4cab2e70bc\") " Jan 27 15:52:42 crc kubenswrapper[4698]: I0127 15:52:42.985089 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f4975468-de80-48ef-be59-ba4cab2e70bc-utilities" (OuterVolumeSpecName: "utilities") pod "f4975468-de80-48ef-be59-ba4cab2e70bc" (UID: "f4975468-de80-48ef-be59-ba4cab2e70bc"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 15:52:42 crc kubenswrapper[4698]: I0127 15:52:42.985786 4698 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f4975468-de80-48ef-be59-ba4cab2e70bc-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 15:52:42 crc kubenswrapper[4698]: I0127 15:52:42.995550 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f4975468-de80-48ef-be59-ba4cab2e70bc-kube-api-access-l9ngc" (OuterVolumeSpecName: "kube-api-access-l9ngc") pod "f4975468-de80-48ef-be59-ba4cab2e70bc" (UID: "f4975468-de80-48ef-be59-ba4cab2e70bc"). InnerVolumeSpecName "kube-api-access-l9ngc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 15:52:43 crc kubenswrapper[4698]: I0127 15:52:43.087728 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l9ngc\" (UniqueName: \"kubernetes.io/projected/f4975468-de80-48ef-be59-ba4cab2e70bc-kube-api-access-l9ngc\") on node \"crc\" DevicePath \"\"" Jan 27 15:52:43 crc kubenswrapper[4698]: I0127 15:52:43.129987 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f4975468-de80-48ef-be59-ba4cab2e70bc-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f4975468-de80-48ef-be59-ba4cab2e70bc" (UID: "f4975468-de80-48ef-be59-ba4cab2e70bc"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 15:52:43 crc kubenswrapper[4698]: I0127 15:52:43.189806 4698 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f4975468-de80-48ef-be59-ba4cab2e70bc-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 15:52:43 crc kubenswrapper[4698]: I0127 15:52:43.379942 4698 generic.go:334] "Generic (PLEG): container finished" podID="f4975468-de80-48ef-be59-ba4cab2e70bc" containerID="73da87ddcabec6f66b4c9871da6fbae08f158ad64c3649799ec097409f3672b5" exitCode=0 Jan 27 15:52:43 crc kubenswrapper[4698]: I0127 15:52:43.380005 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-g2trc" event={"ID":"f4975468-de80-48ef-be59-ba4cab2e70bc","Type":"ContainerDied","Data":"73da87ddcabec6f66b4c9871da6fbae08f158ad64c3649799ec097409f3672b5"} Jan 27 15:52:43 crc kubenswrapper[4698]: I0127 15:52:43.380048 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-g2trc" event={"ID":"f4975468-de80-48ef-be59-ba4cab2e70bc","Type":"ContainerDied","Data":"7410ed7c6015751861012370d1e60ac066ff4912c9ae284c0537f7a333124192"} Jan 27 15:52:43 crc kubenswrapper[4698]: I0127 15:52:43.380071 4698 scope.go:117] "RemoveContainer" containerID="73da87ddcabec6f66b4c9871da6fbae08f158ad64c3649799ec097409f3672b5" Jan 27 15:52:43 crc kubenswrapper[4698]: I0127 15:52:43.380263 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-g2trc" Jan 27 15:52:43 crc kubenswrapper[4698]: I0127 15:52:43.412183 4698 scope.go:117] "RemoveContainer" containerID="1f1d2d8fe5fe9faaf93770253341a835a7068a49c17bcb3a7040aa3c2ad00dfa" Jan 27 15:52:43 crc kubenswrapper[4698]: I0127 15:52:43.415464 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-g2trc"] Jan 27 15:52:43 crc kubenswrapper[4698]: I0127 15:52:43.427327 4698 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-g2trc"] Jan 27 15:52:43 crc kubenswrapper[4698]: I0127 15:52:43.441658 4698 scope.go:117] "RemoveContainer" containerID="ffa2e48190f053dfe21e6a2013b88b554a29efa67546435192e1c9d24c5b00cc" Jan 27 15:52:43 crc kubenswrapper[4698]: I0127 15:52:43.482823 4698 scope.go:117] "RemoveContainer" containerID="73da87ddcabec6f66b4c9871da6fbae08f158ad64c3649799ec097409f3672b5" Jan 27 15:52:43 crc kubenswrapper[4698]: E0127 15:52:43.483529 4698 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"73da87ddcabec6f66b4c9871da6fbae08f158ad64c3649799ec097409f3672b5\": container with ID starting with 73da87ddcabec6f66b4c9871da6fbae08f158ad64c3649799ec097409f3672b5 not found: ID does not exist" containerID="73da87ddcabec6f66b4c9871da6fbae08f158ad64c3649799ec097409f3672b5" Jan 27 15:52:43 crc kubenswrapper[4698]: I0127 15:52:43.483560 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"73da87ddcabec6f66b4c9871da6fbae08f158ad64c3649799ec097409f3672b5"} err="failed to get container status \"73da87ddcabec6f66b4c9871da6fbae08f158ad64c3649799ec097409f3672b5\": rpc error: code = NotFound desc = could not find container \"73da87ddcabec6f66b4c9871da6fbae08f158ad64c3649799ec097409f3672b5\": container with ID starting with 73da87ddcabec6f66b4c9871da6fbae08f158ad64c3649799ec097409f3672b5 not found: ID does not exist" Jan 27 15:52:43 crc 
kubenswrapper[4698]: I0127 15:52:43.483580 4698 scope.go:117] "RemoveContainer" containerID="1f1d2d8fe5fe9faaf93770253341a835a7068a49c17bcb3a7040aa3c2ad00dfa" Jan 27 15:52:43 crc kubenswrapper[4698]: E0127 15:52:43.483900 4698 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1f1d2d8fe5fe9faaf93770253341a835a7068a49c17bcb3a7040aa3c2ad00dfa\": container with ID starting with 1f1d2d8fe5fe9faaf93770253341a835a7068a49c17bcb3a7040aa3c2ad00dfa not found: ID does not exist" containerID="1f1d2d8fe5fe9faaf93770253341a835a7068a49c17bcb3a7040aa3c2ad00dfa" Jan 27 15:52:43 crc kubenswrapper[4698]: I0127 15:52:43.483920 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1f1d2d8fe5fe9faaf93770253341a835a7068a49c17bcb3a7040aa3c2ad00dfa"} err="failed to get container status \"1f1d2d8fe5fe9faaf93770253341a835a7068a49c17bcb3a7040aa3c2ad00dfa\": rpc error: code = NotFound desc = could not find container \"1f1d2d8fe5fe9faaf93770253341a835a7068a49c17bcb3a7040aa3c2ad00dfa\": container with ID starting with 1f1d2d8fe5fe9faaf93770253341a835a7068a49c17bcb3a7040aa3c2ad00dfa not found: ID does not exist" Jan 27 15:52:43 crc kubenswrapper[4698]: I0127 15:52:43.483934 4698 scope.go:117] "RemoveContainer" containerID="ffa2e48190f053dfe21e6a2013b88b554a29efa67546435192e1c9d24c5b00cc" Jan 27 15:52:43 crc kubenswrapper[4698]: E0127 15:52:43.484256 4698 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ffa2e48190f053dfe21e6a2013b88b554a29efa67546435192e1c9d24c5b00cc\": container with ID starting with ffa2e48190f053dfe21e6a2013b88b554a29efa67546435192e1c9d24c5b00cc not found: ID does not exist" containerID="ffa2e48190f053dfe21e6a2013b88b554a29efa67546435192e1c9d24c5b00cc" Jan 27 15:52:43 crc kubenswrapper[4698]: I0127 15:52:43.484322 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ffa2e48190f053dfe21e6a2013b88b554a29efa67546435192e1c9d24c5b00cc"} err="failed to get container status \"ffa2e48190f053dfe21e6a2013b88b554a29efa67546435192e1c9d24c5b00cc\": rpc error: code = NotFound desc = could not find container \"ffa2e48190f053dfe21e6a2013b88b554a29efa67546435192e1c9d24c5b00cc\": container with ID starting with ffa2e48190f053dfe21e6a2013b88b554a29efa67546435192e1c9d24c5b00cc not found: ID does not exist" Jan 27 15:52:45 crc kubenswrapper[4698]: I0127 15:52:45.005691 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4975468-de80-48ef-be59-ba4cab2e70bc" path="/var/lib/kubelet/pods/f4975468-de80-48ef-be59-ba4cab2e70bc/volumes" Jan 27 15:52:51 crc kubenswrapper[4698]: I0127 15:52:51.643226 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-7754f76f8b-wpr5c_1527e700-f26f-4281-a493-416b4e0ca5f9/nmstate-console-plugin/0.log" Jan 27 15:52:51 crc kubenswrapper[4698]: I0127 15:52:51.787305 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-xcrf4_3964a93e-63fc-403a-875a-17ca1f14436e/nmstate-handler/0.log" Jan 27 15:52:51 crc kubenswrapper[4698]: I0127 15:52:51.914993 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-54757c584b-dwhm6_0a0f5be8-0f28-4f33-8a36-7b0712476000/kube-rbac-proxy/0.log" Jan 27 15:52:51 crc kubenswrapper[4698]: I0127 15:52:51.941888 4698 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-nmstate_nmstate-metrics-54757c584b-dwhm6_0a0f5be8-0f28-4f33-8a36-7b0712476000/nmstate-metrics/0.log" Jan 27 15:52:52 crc kubenswrapper[4698]: I0127 15:52:52.079578 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-646758c888-72zp5_88f47a43-00be-44e2-81e2-519428d78390/nmstate-operator/0.log" Jan 27 15:52:52 crc kubenswrapper[4698]: I0127 15:52:52.135611 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-8474b5b9d8-pbvrj_413d3c2f-6ec7-4518-acbd-f811d0d54675/nmstate-webhook/0.log" Jan 27 15:52:57 crc kubenswrapper[4698]: I0127 15:52:57.452836 4698 patch_prober.go:28] interesting pod/machine-config-daemon-ndrd6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 15:52:57 crc kubenswrapper[4698]: I0127 15:52:57.453399 4698 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ndrd6" podUID="3e403fc5-7005-474c-8c75-b7906b481677" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 15:53:05 crc kubenswrapper[4698]: I0127 15:53:05.512782 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-68bc856cb9-7xz26_49495d53-72e5-4381-bb5f-efb39b15a87a/prometheus-operator/0.log" Jan 27 15:53:05 crc kubenswrapper[4698]: I0127 15:53:05.693133 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-698f689c98-d5twk_44a8167e-8dc5-4360-a7b4-623198852230/prometheus-operator-admission-webhook/0.log" Jan 27 15:53:05 crc kubenswrapper[4698]: I0127 15:53:05.778128 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-698f689c98-qvbmr_4e0708a8-b876-48dd-8a58-35ba86739ddf/prometheus-operator-admission-webhook/0.log" Jan 27 15:53:05 crc kubenswrapper[4698]: I0127 15:53:05.903369 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-operator-59bdc8b94-sxnvw_7b638465-7c66-495b-9dfd-1854fea80351/operator/0.log" Jan 27 15:53:06 crc kubenswrapper[4698]: I0127 15:53:06.012279 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_perses-operator-5bf474d74f-cl5tl_40335850-9929-4547-8b67-232394389f88/perses-operator/0.log" Jan 27 15:53:20 crc kubenswrapper[4698]: I0127 15:53:20.282284 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-p9fxw_ce91ea9b-8af1-41dc-9104-7fb695211734/kube-rbac-proxy/0.log" Jan 27 15:53:20 crc kubenswrapper[4698]: I0127 15:53:20.434408 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-p9fxw_ce91ea9b-8af1-41dc-9104-7fb695211734/controller/0.log" Jan 27 15:53:20 crc kubenswrapper[4698]: I0127 15:53:20.514605 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-n5rs2_d9a1c6fd-85f4-40b3-aab1-7ac14d1f2f02/cp-frr-files/0.log" Jan 27 15:53:20 crc kubenswrapper[4698]: I0127 15:53:20.725495 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-n5rs2_d9a1c6fd-85f4-40b3-aab1-7ac14d1f2f02/cp-reloader/0.log" Jan 27 
Jan 27 15:53:20 crc kubenswrapper[4698]: I0127 15:53:20.725944 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-n5rs2_d9a1c6fd-85f4-40b3-aab1-7ac14d1f2f02/cp-metrics/0.log"
Jan 27 15:53:20 crc kubenswrapper[4698]: I0127 15:53:20.748895 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-n5rs2_d9a1c6fd-85f4-40b3-aab1-7ac14d1f2f02/cp-reloader/0.log"
Jan 27 15:53:21 crc kubenswrapper[4698]: I0127 15:53:21.211520 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-n5rs2_d9a1c6fd-85f4-40b3-aab1-7ac14d1f2f02/cp-frr-files/0.log"
Jan 27 15:53:21 crc kubenswrapper[4698]: I0127 15:53:21.239755 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-n5rs2_d9a1c6fd-85f4-40b3-aab1-7ac14d1f2f02/cp-reloader/0.log"
Jan 27 15:53:21 crc kubenswrapper[4698]: I0127 15:53:21.240193 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-n5rs2_d9a1c6fd-85f4-40b3-aab1-7ac14d1f2f02/cp-metrics/0.log"
Jan 27 15:53:21 crc kubenswrapper[4698]: I0127 15:53:21.351182 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-n5rs2_d9a1c6fd-85f4-40b3-aab1-7ac14d1f2f02/cp-metrics/0.log"
Jan 27 15:53:21 crc kubenswrapper[4698]: I0127 15:53:21.501203 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-n5rs2_d9a1c6fd-85f4-40b3-aab1-7ac14d1f2f02/cp-frr-files/0.log"
Jan 27 15:53:21 crc kubenswrapper[4698]: I0127 15:53:21.535489 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-n5rs2_d9a1c6fd-85f4-40b3-aab1-7ac14d1f2f02/cp-reloader/0.log"
Jan 27 15:53:21 crc kubenswrapper[4698]: I0127 15:53:21.547910 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-n5rs2_d9a1c6fd-85f4-40b3-aab1-7ac14d1f2f02/cp-metrics/0.log"
Jan 27 15:53:21 crc kubenswrapper[4698]: I0127 15:53:21.566914 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-n5rs2_d9a1c6fd-85f4-40b3-aab1-7ac14d1f2f02/controller/0.log"
Jan 27 15:53:21 crc kubenswrapper[4698]: I0127 15:53:21.754260 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-n5rs2_d9a1c6fd-85f4-40b3-aab1-7ac14d1f2f02/kube-rbac-proxy/0.log"
Jan 27 15:53:21 crc kubenswrapper[4698]: I0127 15:53:21.756268 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-n5rs2_d9a1c6fd-85f4-40b3-aab1-7ac14d1f2f02/frr-metrics/0.log"
Jan 27 15:53:21 crc kubenswrapper[4698]: I0127 15:53:21.762411 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-n5rs2_d9a1c6fd-85f4-40b3-aab1-7ac14d1f2f02/kube-rbac-proxy-frr/0.log"
Jan 27 15:53:22 crc kubenswrapper[4698]: I0127 15:53:22.012061 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-n5rs2_d9a1c6fd-85f4-40b3-aab1-7ac14d1f2f02/reloader/0.log"
Jan 27 15:53:22 crc kubenswrapper[4698]: I0127 15:53:22.042508 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-7df86c4f6c-jtlnl_c32a50d1-66c4-4627-924a-21d72a27b3d0/frr-k8s-webhook-server/0.log"
Jan 27 15:53:22 crc kubenswrapper[4698]: I0127 15:53:22.312376 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-54b596d688-wqmn2_ce6de35a-b78b-4e2e-87e0-30608c3ee8a6/manager/0.log"
path="/var/log/pods/metallb-system_metallb-operator-controller-manager-54b596d688-wqmn2_ce6de35a-b78b-4e2e-87e0-30608c3ee8a6/manager/0.log" Jan 27 15:53:22 crc kubenswrapper[4698]: I0127 15:53:22.370489 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-54f6556b57-vxgv8_7ae0807b-48fe-425b-b7c7-72692c491175/webhook-server/0.log" Jan 27 15:53:23 crc kubenswrapper[4698]: I0127 15:53:23.109384 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-wzjsq_09f854d2-d03b-492d-9e84-b6494a6f956a/kube-rbac-proxy/0.log" Jan 27 15:53:23 crc kubenswrapper[4698]: I0127 15:53:23.116125 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-n5rs2_d9a1c6fd-85f4-40b3-aab1-7ac14d1f2f02/frr/0.log" Jan 27 15:53:23 crc kubenswrapper[4698]: I0127 15:53:23.373758 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-wzjsq_09f854d2-d03b-492d-9e84-b6494a6f956a/speaker/0.log" Jan 27 15:53:27 crc kubenswrapper[4698]: I0127 15:53:27.452172 4698 patch_prober.go:28] interesting pod/machine-config-daemon-ndrd6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 15:53:27 crc kubenswrapper[4698]: I0127 15:53:27.452822 4698 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ndrd6" podUID="3e403fc5-7005-474c-8c75-b7906b481677" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 15:53:36 crc kubenswrapper[4698]: I0127 15:53:36.350709 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dccjtc6_53da7be9-df28-4c12-ba5f-1c7db24893d3/util/0.log" Jan 27 15:53:36 crc kubenswrapper[4698]: I0127 15:53:36.575035 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dccjtc6_53da7be9-df28-4c12-ba5f-1c7db24893d3/pull/0.log" Jan 27 15:53:36 crc kubenswrapper[4698]: I0127 15:53:36.586324 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dccjtc6_53da7be9-df28-4c12-ba5f-1c7db24893d3/pull/0.log" Jan 27 15:53:36 crc kubenswrapper[4698]: I0127 15:53:36.590397 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dccjtc6_53da7be9-df28-4c12-ba5f-1c7db24893d3/util/0.log" Jan 27 15:53:36 crc kubenswrapper[4698]: I0127 15:53:36.784688 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dccjtc6_53da7be9-df28-4c12-ba5f-1c7db24893d3/util/0.log" Jan 27 15:53:36 crc kubenswrapper[4698]: I0127 15:53:36.940831 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dccjtc6_53da7be9-df28-4c12-ba5f-1c7db24893d3/extract/0.log" Jan 27 15:53:36 crc kubenswrapper[4698]: I0127 15:53:36.962082 4698 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dccjtc6_53da7be9-df28-4c12-ba5f-1c7db24893d3/pull/0.log" Jan 27 15:53:37 crc kubenswrapper[4698]: I0127 15:53:37.116459 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713xmt98_3cbf35ee-4485-4f1e-b68e-aaae5db51c59/util/0.log" Jan 27 15:53:37 crc kubenswrapper[4698]: I0127 15:53:37.272700 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713xmt98_3cbf35ee-4485-4f1e-b68e-aaae5db51c59/util/0.log" Jan 27 15:53:37 crc kubenswrapper[4698]: I0127 15:53:37.304620 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713xmt98_3cbf35ee-4485-4f1e-b68e-aaae5db51c59/pull/0.log" Jan 27 15:53:37 crc kubenswrapper[4698]: I0127 15:53:37.311451 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713xmt98_3cbf35ee-4485-4f1e-b68e-aaae5db51c59/pull/0.log" Jan 27 15:53:37 crc kubenswrapper[4698]: I0127 15:53:37.447233 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713xmt98_3cbf35ee-4485-4f1e-b68e-aaae5db51c59/util/0.log" Jan 27 15:53:37 crc kubenswrapper[4698]: I0127 15:53:37.459400 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713xmt98_3cbf35ee-4485-4f1e-b68e-aaae5db51c59/pull/0.log" Jan 27 15:53:37 crc kubenswrapper[4698]: I0127 15:53:37.464815 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713xmt98_3cbf35ee-4485-4f1e-b68e-aaae5db51c59/extract/0.log" Jan 27 15:53:37 crc kubenswrapper[4698]: I0127 15:53:37.653840 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08rjdhc_a902db54-8ee1-4cf9-a027-52e406f6c05b/util/0.log" Jan 27 15:53:37 crc kubenswrapper[4698]: I0127 15:53:37.824466 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08rjdhc_a902db54-8ee1-4cf9-a027-52e406f6c05b/pull/0.log" Jan 27 15:53:37 crc kubenswrapper[4698]: I0127 15:53:37.839180 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08rjdhc_a902db54-8ee1-4cf9-a027-52e406f6c05b/pull/0.log" Jan 27 15:53:37 crc kubenswrapper[4698]: I0127 15:53:37.844777 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08rjdhc_a902db54-8ee1-4cf9-a027-52e406f6c05b/util/0.log" Jan 27 15:53:38 crc kubenswrapper[4698]: I0127 15:53:38.031200 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08rjdhc_a902db54-8ee1-4cf9-a027-52e406f6c05b/util/0.log" Jan 27 15:53:38 crc kubenswrapper[4698]: I0127 15:53:38.054303 4698 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08rjdhc_a902db54-8ee1-4cf9-a027-52e406f6c05b/pull/0.log" Jan 27 15:53:38 crc kubenswrapper[4698]: I0127 15:53:38.060734 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08rjdhc_a902db54-8ee1-4cf9-a027-52e406f6c05b/extract/0.log" Jan 27 15:53:38 crc kubenswrapper[4698]: I0127 15:53:38.210207 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-v7n4t_e5abf3ba-ee72-4598-be31-5ab117f9b58b/extract-utilities/0.log" Jan 27 15:53:38 crc kubenswrapper[4698]: I0127 15:53:38.353465 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-v7n4t_e5abf3ba-ee72-4598-be31-5ab117f9b58b/extract-utilities/0.log" Jan 27 15:53:38 crc kubenswrapper[4698]: I0127 15:53:38.353479 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-v7n4t_e5abf3ba-ee72-4598-be31-5ab117f9b58b/extract-content/0.log" Jan 27 15:53:38 crc kubenswrapper[4698]: I0127 15:53:38.385830 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-v7n4t_e5abf3ba-ee72-4598-be31-5ab117f9b58b/extract-content/0.log" Jan 27 15:53:38 crc kubenswrapper[4698]: I0127 15:53:38.525905 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-v7n4t_e5abf3ba-ee72-4598-be31-5ab117f9b58b/extract-content/0.log" Jan 27 15:53:38 crc kubenswrapper[4698]: I0127 15:53:38.525916 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-v7n4t_e5abf3ba-ee72-4598-be31-5ab117f9b58b/extract-utilities/0.log" Jan 27 15:53:38 crc kubenswrapper[4698]: I0127 15:53:38.770827 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-4k8p2_585690e7-9038-411e-9d0f-7d74d57e72cd/extract-utilities/0.log" Jan 27 15:53:38 crc kubenswrapper[4698]: I0127 15:53:38.979330 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-4k8p2_585690e7-9038-411e-9d0f-7d74d57e72cd/extract-content/0.log" Jan 27 15:53:39 crc kubenswrapper[4698]: I0127 15:53:39.030036 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-4k8p2_585690e7-9038-411e-9d0f-7d74d57e72cd/extract-utilities/0.log" Jan 27 15:53:39 crc kubenswrapper[4698]: I0127 15:53:39.137289 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-4k8p2_585690e7-9038-411e-9d0f-7d74d57e72cd/extract-content/0.log" Jan 27 15:53:39 crc kubenswrapper[4698]: I0127 15:53:39.186620 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-v7n4t_e5abf3ba-ee72-4598-be31-5ab117f9b58b/registry-server/0.log" Jan 27 15:53:39 crc kubenswrapper[4698]: I0127 15:53:39.265418 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-4k8p2_585690e7-9038-411e-9d0f-7d74d57e72cd/extract-content/0.log" Jan 27 15:53:39 crc kubenswrapper[4698]: I0127 15:53:39.289382 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-4k8p2_585690e7-9038-411e-9d0f-7d74d57e72cd/extract-utilities/0.log" Jan 27 15:53:40 crc kubenswrapper[4698]: I0127 15:53:40.013346 4698 
log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-8zkn8_287c4642-565c-4085-a7e0-31be12d876fe/marketplace-operator/0.log" Jan 27 15:53:40 crc kubenswrapper[4698]: I0127 15:53:40.335305 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-npzqg_bb36f5f4-2a47-4b47-873c-5029fcffc7f5/extract-utilities/0.log" Jan 27 15:53:40 crc kubenswrapper[4698]: I0127 15:53:40.496126 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-npzqg_bb36f5f4-2a47-4b47-873c-5029fcffc7f5/extract-utilities/0.log" Jan 27 15:53:40 crc kubenswrapper[4698]: I0127 15:53:40.553491 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-npzqg_bb36f5f4-2a47-4b47-873c-5029fcffc7f5/extract-content/0.log" Jan 27 15:53:40 crc kubenswrapper[4698]: I0127 15:53:40.592248 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-npzqg_bb36f5f4-2a47-4b47-873c-5029fcffc7f5/extract-content/0.log" Jan 27 15:53:40 crc kubenswrapper[4698]: I0127 15:53:40.666110 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-4k8p2_585690e7-9038-411e-9d0f-7d74d57e72cd/registry-server/0.log" Jan 27 15:53:40 crc kubenswrapper[4698]: I0127 15:53:40.744639 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-npzqg_bb36f5f4-2a47-4b47-873c-5029fcffc7f5/extract-utilities/0.log" Jan 27 15:53:40 crc kubenswrapper[4698]: I0127 15:53:40.791041 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-npzqg_bb36f5f4-2a47-4b47-873c-5029fcffc7f5/extract-content/0.log" Jan 27 15:53:40 crc kubenswrapper[4698]: I0127 15:53:40.875162 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-ccjnf_3521f3f3-5cfa-4614-9345-7a78f03ed2ce/extract-utilities/0.log" Jan 27 15:53:41 crc kubenswrapper[4698]: I0127 15:53:41.006561 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-npzqg_bb36f5f4-2a47-4b47-873c-5029fcffc7f5/registry-server/0.log" Jan 27 15:53:41 crc kubenswrapper[4698]: I0127 15:53:41.070763 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-ccjnf_3521f3f3-5cfa-4614-9345-7a78f03ed2ce/extract-utilities/0.log" Jan 27 15:53:41 crc kubenswrapper[4698]: I0127 15:53:41.096966 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-ccjnf_3521f3f3-5cfa-4614-9345-7a78f03ed2ce/extract-content/0.log" Jan 27 15:53:41 crc kubenswrapper[4698]: I0127 15:53:41.139947 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-ccjnf_3521f3f3-5cfa-4614-9345-7a78f03ed2ce/extract-content/0.log" Jan 27 15:53:41 crc kubenswrapper[4698]: I0127 15:53:41.684303 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-ccjnf_3521f3f3-5cfa-4614-9345-7a78f03ed2ce/extract-utilities/0.log" Jan 27 15:53:41 crc kubenswrapper[4698]: I0127 15:53:41.691190 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-ccjnf_3521f3f3-5cfa-4614-9345-7a78f03ed2ce/extract-content/0.log" Jan 27 15:53:42 crc kubenswrapper[4698]: I0127 15:53:42.452229 4698 log.go:25] "Finished 
parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-ccjnf_3521f3f3-5cfa-4614-9345-7a78f03ed2ce/registry-server/0.log" Jan 27 15:53:53 crc kubenswrapper[4698]: I0127 15:53:53.826525 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-698f689c98-qvbmr_4e0708a8-b876-48dd-8a58-35ba86739ddf/prometheus-operator-admission-webhook/0.log" Jan 27 15:53:53 crc kubenswrapper[4698]: I0127 15:53:53.844503 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-68bc856cb9-7xz26_49495d53-72e5-4381-bb5f-efb39b15a87a/prometheus-operator/0.log" Jan 27 15:53:53 crc kubenswrapper[4698]: I0127 15:53:53.869355 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-698f689c98-d5twk_44a8167e-8dc5-4360-a7b4-623198852230/prometheus-operator-admission-webhook/0.log" Jan 27 15:53:54 crc kubenswrapper[4698]: I0127 15:53:54.005695 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-operator-59bdc8b94-sxnvw_7b638465-7c66-495b-9dfd-1854fea80351/operator/0.log" Jan 27 15:53:54 crc kubenswrapper[4698]: I0127 15:53:54.046741 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_perses-operator-5bf474d74f-cl5tl_40335850-9929-4547-8b67-232394389f88/perses-operator/0.log" Jan 27 15:53:57 crc kubenswrapper[4698]: I0127 15:53:57.452223 4698 patch_prober.go:28] interesting pod/machine-config-daemon-ndrd6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 15:53:57 crc kubenswrapper[4698]: I0127 15:53:57.452797 4698 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ndrd6" podUID="3e403fc5-7005-474c-8c75-b7906b481677" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 15:53:57 crc kubenswrapper[4698]: I0127 15:53:57.452857 4698 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-ndrd6" Jan 27 15:53:57 crc kubenswrapper[4698]: I0127 15:53:57.453602 4698 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"29a10beaa8fe9f7dfe61227016053ea5ec1ff8dbf55e0d3deabed915be650a8d"} pod="openshift-machine-config-operator/machine-config-daemon-ndrd6" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 27 15:53:57 crc kubenswrapper[4698]: I0127 15:53:57.453670 4698 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-ndrd6" podUID="3e403fc5-7005-474c-8c75-b7906b481677" containerName="machine-config-daemon" containerID="cri-o://29a10beaa8fe9f7dfe61227016053ea5ec1ff8dbf55e0d3deabed915be650a8d" gracePeriod=600 Jan 27 15:53:57 crc kubenswrapper[4698]: E0127 15:53:57.678569 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
Jan 27 15:53:58 crc kubenswrapper[4698]: I0127 15:53:58.216265 4698 generic.go:334] "Generic (PLEG): container finished" podID="3e403fc5-7005-474c-8c75-b7906b481677" containerID="29a10beaa8fe9f7dfe61227016053ea5ec1ff8dbf55e0d3deabed915be650a8d" exitCode=0
Jan 27 15:53:58 crc kubenswrapper[4698]: I0127 15:53:58.216332 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ndrd6" event={"ID":"3e403fc5-7005-474c-8c75-b7906b481677","Type":"ContainerDied","Data":"29a10beaa8fe9f7dfe61227016053ea5ec1ff8dbf55e0d3deabed915be650a8d"}
Jan 27 15:53:58 crc kubenswrapper[4698]: I0127 15:53:58.216615 4698 scope.go:117] "RemoveContainer" containerID="156f30c1ec1a6beb6da54f6304a76d6701efde19274d84c3b1f081da77615216"
Jan 27 15:53:58 crc kubenswrapper[4698]: I0127 15:53:58.217379 4698 scope.go:117] "RemoveContainer" containerID="29a10beaa8fe9f7dfe61227016053ea5ec1ff8dbf55e0d3deabed915be650a8d"
Jan 27 15:53:58 crc kubenswrapper[4698]: E0127 15:53:58.217676 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ndrd6_openshift-machine-config-operator(3e403fc5-7005-474c-8c75-b7906b481677)\"" pod="openshift-machine-config-operator/machine-config-daemon-ndrd6" podUID="3e403fc5-7005-474c-8c75-b7906b481677"
Jan 27 15:54:12 crc kubenswrapper[4698]: I0127 15:54:12.992711 4698 scope.go:117] "RemoveContainer" containerID="29a10beaa8fe9f7dfe61227016053ea5ec1ff8dbf55e0d3deabed915be650a8d"
Jan 27 15:54:12 crc kubenswrapper[4698]: E0127 15:54:12.993565 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ndrd6_openshift-machine-config-operator(3e403fc5-7005-474c-8c75-b7906b481677)\"" pod="openshift-machine-config-operator/machine-config-daemon-ndrd6" podUID="3e403fc5-7005-474c-8c75-b7906b481677"
Jan 27 15:54:23 crc kubenswrapper[4698]: I0127 15:54:23.991991 4698 scope.go:117] "RemoveContainer" containerID="29a10beaa8fe9f7dfe61227016053ea5ec1ff8dbf55e0d3deabed915be650a8d"
Jan 27 15:54:23 crc kubenswrapper[4698]: E0127 15:54:23.992758 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ndrd6_openshift-machine-config-operator(3e403fc5-7005-474c-8c75-b7906b481677)\"" pod="openshift-machine-config-operator/machine-config-daemon-ndrd6" podUID="3e403fc5-7005-474c-8c75-b7906b481677"
Jan 27 15:54:38 crc kubenswrapper[4698]: I0127 15:54:38.992145 4698 scope.go:117] "RemoveContainer" containerID="29a10beaa8fe9f7dfe61227016053ea5ec1ff8dbf55e0d3deabed915be650a8d"
Jan 27 15:54:38 crc kubenswrapper[4698]: E0127 15:54:38.993000 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ndrd6_openshift-machine-config-operator(3e403fc5-7005-474c-8c75-b7906b481677)\"" pod="openshift-machine-config-operator/machine-config-daemon-ndrd6" podUID="3e403fc5-7005-474c-8c75-b7906b481677"
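[Editor's note] The repeating RemoveContainer / "Error syncing pod" pairs from here on are the CrashLoopBackOff loop: each periodic sync attempt is refused while the restart delay has not elapsed, and the quoted "back-off 5m0s" is that delay already at its cap. A sketch of the schedule (assuming the classic kubelet defaults of a 10s base doubling to a 5m cap; the real implementation also adds jitter):

    package main

    import (
    	"fmt"
    	"time"
    )

    func main() {
    	delay, maxDelay := 10*time.Second, 5*time.Minute
    	for i := 0; i < 8; i++ {
    		fmt.Println(delay) // 10s 20s 40s 1m20s 2m40s 5m0s 5m0s 5m0s
    		if delay *= 2; delay > maxDelay {
    			delay = maxDelay
    		}
    	}
    }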
Jan 27 15:54:53 crc kubenswrapper[4698]: I0127 15:54:53.992697 4698 scope.go:117] "RemoveContainer" containerID="29a10beaa8fe9f7dfe61227016053ea5ec1ff8dbf55e0d3deabed915be650a8d"
Jan 27 15:54:53 crc kubenswrapper[4698]: E0127 15:54:53.993384 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ndrd6_openshift-machine-config-operator(3e403fc5-7005-474c-8c75-b7906b481677)\"" pod="openshift-machine-config-operator/machine-config-daemon-ndrd6" podUID="3e403fc5-7005-474c-8c75-b7906b481677"
Jan 27 15:55:08 crc kubenswrapper[4698]: I0127 15:55:08.992811 4698 scope.go:117] "RemoveContainer" containerID="29a10beaa8fe9f7dfe61227016053ea5ec1ff8dbf55e0d3deabed915be650a8d"
Jan 27 15:55:08 crc kubenswrapper[4698]: E0127 15:55:08.993713 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ndrd6_openshift-machine-config-operator(3e403fc5-7005-474c-8c75-b7906b481677)\"" pod="openshift-machine-config-operator/machine-config-daemon-ndrd6" podUID="3e403fc5-7005-474c-8c75-b7906b481677"
Jan 27 15:55:20 crc kubenswrapper[4698]: I0127 15:55:20.993164 4698 scope.go:117] "RemoveContainer" containerID="29a10beaa8fe9f7dfe61227016053ea5ec1ff8dbf55e0d3deabed915be650a8d"
Jan 27 15:55:20 crc kubenswrapper[4698]: E0127 15:55:20.995055 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ndrd6_openshift-machine-config-operator(3e403fc5-7005-474c-8c75-b7906b481677)\"" pod="openshift-machine-config-operator/machine-config-daemon-ndrd6" podUID="3e403fc5-7005-474c-8c75-b7906b481677"
Jan 27 15:55:31 crc kubenswrapper[4698]: I0127 15:55:31.346513 4698 generic.go:334] "Generic (PLEG): container finished" podID="62d864f5-c05f-4005-b941-24bf347a9068" containerID="ff88b02718a53830f4561648e9c2b7cbb7c7acb1647dbd03723304e934131131" exitCode=0
Jan 27 15:55:31 crc kubenswrapper[4698]: I0127 15:55:31.346604 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-wwdch/must-gather-22hgz" event={"ID":"62d864f5-c05f-4005-b941-24bf347a9068","Type":"ContainerDied","Data":"ff88b02718a53830f4561648e9c2b7cbb7c7acb1647dbd03723304e934131131"}
Jan 27 15:55:31 crc kubenswrapper[4698]: I0127 15:55:31.347728 4698 scope.go:117] "RemoveContainer" containerID="ff88b02718a53830f4561648e9c2b7cbb7c7acb1647dbd03723304e934131131"
Jan 27 15:55:32 crc kubenswrapper[4698]: I0127 15:55:32.000350 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-wwdch_must-gather-22hgz_62d864f5-c05f-4005-b941-24bf347a9068/gather/0.log"
Jan 27 15:55:33 crc kubenswrapper[4698]: I0127 15:55:33.991961 4698 scope.go:117] "RemoveContainer" containerID="29a10beaa8fe9f7dfe61227016053ea5ec1ff8dbf55e0d3deabed915be650a8d"
Jan 27 15:55:33 crc kubenswrapper[4698]: E0127 15:55:33.992509 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ndrd6_openshift-machine-config-operator(3e403fc5-7005-474c-8c75-b7906b481677)\"" pod="openshift-machine-config-operator/machine-config-daemon-ndrd6" podUID="3e403fc5-7005-474c-8c75-b7906b481677"
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ndrd6_openshift-machine-config-operator(3e403fc5-7005-474c-8c75-b7906b481677)\"" pod="openshift-machine-config-operator/machine-config-daemon-ndrd6" podUID="3e403fc5-7005-474c-8c75-b7906b481677" Jan 27 15:55:39 crc kubenswrapper[4698]: I0127 15:55:39.934239 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-wwdch/must-gather-22hgz"] Jan 27 15:55:39 crc kubenswrapper[4698]: I0127 15:55:39.935011 4698 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-wwdch/must-gather-22hgz" podUID="62d864f5-c05f-4005-b941-24bf347a9068" containerName="copy" containerID="cri-o://15046951f8492415ea2c9e4f0b3a60deb49c2a9e5ddb27a7b4c404b44d5fad8b" gracePeriod=2 Jan 27 15:55:39 crc kubenswrapper[4698]: I0127 15:55:39.949277 4698 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-wwdch/must-gather-22hgz"] Jan 27 15:55:40 crc kubenswrapper[4698]: I0127 15:55:40.437790 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-wwdch_must-gather-22hgz_62d864f5-c05f-4005-b941-24bf347a9068/copy/0.log" Jan 27 15:55:40 crc kubenswrapper[4698]: I0127 15:55:40.438126 4698 generic.go:334] "Generic (PLEG): container finished" podID="62d864f5-c05f-4005-b941-24bf347a9068" containerID="15046951f8492415ea2c9e4f0b3a60deb49c2a9e5ddb27a7b4c404b44d5fad8b" exitCode=143 Jan 27 15:55:40 crc kubenswrapper[4698]: I0127 15:55:40.438176 4698 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1b508e992da138108a1e38ad05234245513ef9a983bc893c63b6987764fc7370" Jan 27 15:55:40 crc kubenswrapper[4698]: I0127 15:55:40.493416 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-wwdch_must-gather-22hgz_62d864f5-c05f-4005-b941-24bf347a9068/copy/0.log" Jan 27 15:55:40 crc kubenswrapper[4698]: I0127 15:55:40.494564 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-wwdch/must-gather-22hgz" Jan 27 15:55:40 crc kubenswrapper[4698]: I0127 15:55:40.633064 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qhbk7\" (UniqueName: \"kubernetes.io/projected/62d864f5-c05f-4005-b941-24bf347a9068-kube-api-access-qhbk7\") pod \"62d864f5-c05f-4005-b941-24bf347a9068\" (UID: \"62d864f5-c05f-4005-b941-24bf347a9068\") " Jan 27 15:55:40 crc kubenswrapper[4698]: I0127 15:55:40.633138 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/62d864f5-c05f-4005-b941-24bf347a9068-must-gather-output\") pod \"62d864f5-c05f-4005-b941-24bf347a9068\" (UID: \"62d864f5-c05f-4005-b941-24bf347a9068\") " Jan 27 15:55:40 crc kubenswrapper[4698]: I0127 15:55:40.641568 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/62d864f5-c05f-4005-b941-24bf347a9068-kube-api-access-qhbk7" (OuterVolumeSpecName: "kube-api-access-qhbk7") pod "62d864f5-c05f-4005-b941-24bf347a9068" (UID: "62d864f5-c05f-4005-b941-24bf347a9068"). InnerVolumeSpecName "kube-api-access-qhbk7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 15:55:40 crc kubenswrapper[4698]: I0127 15:55:40.735534 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qhbk7\" (UniqueName: \"kubernetes.io/projected/62d864f5-c05f-4005-b941-24bf347a9068-kube-api-access-qhbk7\") on node \"crc\" DevicePath \"\"" Jan 27 15:55:40 crc kubenswrapper[4698]: I0127 15:55:40.805447 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/62d864f5-c05f-4005-b941-24bf347a9068-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "62d864f5-c05f-4005-b941-24bf347a9068" (UID: "62d864f5-c05f-4005-b941-24bf347a9068"). InnerVolumeSpecName "must-gather-output". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 15:55:40 crc kubenswrapper[4698]: I0127 15:55:40.837140 4698 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/62d864f5-c05f-4005-b941-24bf347a9068-must-gather-output\") on node \"crc\" DevicePath \"\"" Jan 27 15:55:41 crc kubenswrapper[4698]: I0127 15:55:41.006593 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="62d864f5-c05f-4005-b941-24bf347a9068" path="/var/lib/kubelet/pods/62d864f5-c05f-4005-b941-24bf347a9068/volumes" Jan 27 15:55:41 crc kubenswrapper[4698]: I0127 15:55:41.452545 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-wwdch/must-gather-22hgz" Jan 27 15:55:47 crc kubenswrapper[4698]: I0127 15:55:47.992665 4698 scope.go:117] "RemoveContainer" containerID="29a10beaa8fe9f7dfe61227016053ea5ec1ff8dbf55e0d3deabed915be650a8d" Jan 27 15:55:47 crc kubenswrapper[4698]: E0127 15:55:47.993820 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ndrd6_openshift-machine-config-operator(3e403fc5-7005-474c-8c75-b7906b481677)\"" pod="openshift-machine-config-operator/machine-config-daemon-ndrd6" podUID="3e403fc5-7005-474c-8c75-b7906b481677" Jan 27 15:56:02 crc kubenswrapper[4698]: I0127 15:56:02.992383 4698 scope.go:117] "RemoveContainer" containerID="29a10beaa8fe9f7dfe61227016053ea5ec1ff8dbf55e0d3deabed915be650a8d" Jan 27 15:56:02 crc kubenswrapper[4698]: E0127 15:56:02.993257 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ndrd6_openshift-machine-config-operator(3e403fc5-7005-474c-8c75-b7906b481677)\"" pod="openshift-machine-config-operator/machine-config-daemon-ndrd6" podUID="3e403fc5-7005-474c-8c75-b7906b481677" Jan 27 15:56:15 crc kubenswrapper[4698]: I0127 15:56:15.992450 4698 scope.go:117] "RemoveContainer" containerID="29a10beaa8fe9f7dfe61227016053ea5ec1ff8dbf55e0d3deabed915be650a8d" Jan 27 15:56:15 crc kubenswrapper[4698]: E0127 15:56:15.993112 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ndrd6_openshift-machine-config-operator(3e403fc5-7005-474c-8c75-b7906b481677)\"" pod="openshift-machine-config-operator/machine-config-daemon-ndrd6" podUID="3e403fc5-7005-474c-8c75-b7906b481677" Jan 27 15:56:20 crc 
Jan 27 15:56:20 crc kubenswrapper[4698]: I0127 15:56:20.146342 4698 scope.go:117] "RemoveContainer" containerID="ff88b02718a53830f4561648e9c2b7cbb7c7acb1647dbd03723304e934131131"
Jan 27 15:56:20 crc kubenswrapper[4698]: I0127 15:56:20.172184 4698 scope.go:117] "RemoveContainer" containerID="b13b938b3449b773770a246601fc179695411d06426d968ad3b2c060cabd53ff"
Jan 27 15:56:26 crc kubenswrapper[4698]: I0127 15:56:26.993128 4698 scope.go:117] "RemoveContainer" containerID="29a10beaa8fe9f7dfe61227016053ea5ec1ff8dbf55e0d3deabed915be650a8d"
Jan 27 15:56:26 crc kubenswrapper[4698]: E0127 15:56:26.994038 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ndrd6_openshift-machine-config-operator(3e403fc5-7005-474c-8c75-b7906b481677)\"" pod="openshift-machine-config-operator/machine-config-daemon-ndrd6" podUID="3e403fc5-7005-474c-8c75-b7906b481677"
Jan 27 15:56:39 crc kubenswrapper[4698]: I0127 15:56:39.992626 4698 scope.go:117] "RemoveContainer" containerID="29a10beaa8fe9f7dfe61227016053ea5ec1ff8dbf55e0d3deabed915be650a8d"
Jan 27 15:56:39 crc kubenswrapper[4698]: E0127 15:56:39.993422 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ndrd6_openshift-machine-config-operator(3e403fc5-7005-474c-8c75-b7906b481677)\"" pod="openshift-machine-config-operator/machine-config-daemon-ndrd6" podUID="3e403fc5-7005-474c-8c75-b7906b481677"
Jan 27 15:56:54 crc kubenswrapper[4698]: I0127 15:56:54.998891 4698 scope.go:117] "RemoveContainer" containerID="29a10beaa8fe9f7dfe61227016053ea5ec1ff8dbf55e0d3deabed915be650a8d"
Jan 27 15:56:55 crc kubenswrapper[4698]: E0127 15:56:54.999720 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ndrd6_openshift-machine-config-operator(3e403fc5-7005-474c-8c75-b7906b481677)\"" pod="openshift-machine-config-operator/machine-config-daemon-ndrd6" podUID="3e403fc5-7005-474c-8c75-b7906b481677"
Jan 27 15:57:08 crc kubenswrapper[4698]: I0127 15:57:08.993774 4698 scope.go:117] "RemoveContainer" containerID="29a10beaa8fe9f7dfe61227016053ea5ec1ff8dbf55e0d3deabed915be650a8d"
Jan 27 15:57:08 crc kubenswrapper[4698]: E0127 15:57:08.994516 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ndrd6_openshift-machine-config-operator(3e403fc5-7005-474c-8c75-b7906b481677)\"" pod="openshift-machine-config-operator/machine-config-daemon-ndrd6" podUID="3e403fc5-7005-474c-8c75-b7906b481677"
Jan 27 15:57:22 crc kubenswrapper[4698]: I0127 15:57:22.992761 4698 scope.go:117] "RemoveContainer" containerID="29a10beaa8fe9f7dfe61227016053ea5ec1ff8dbf55e0d3deabed915be650a8d"
Jan 27 15:57:22 crc kubenswrapper[4698]: E0127 15:57:22.993456 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ndrd6_openshift-machine-config-operator(3e403fc5-7005-474c-8c75-b7906b481677)\"" pod="openshift-machine-config-operator/machine-config-daemon-ndrd6" podUID="3e403fc5-7005-474c-8c75-b7906b481677"
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ndrd6_openshift-machine-config-operator(3e403fc5-7005-474c-8c75-b7906b481677)\"" pod="openshift-machine-config-operator/machine-config-daemon-ndrd6" podUID="3e403fc5-7005-474c-8c75-b7906b481677" Jan 27 15:57:29 crc kubenswrapper[4698]: I0127 15:57:29.880866 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-8m5v9"] Jan 27 15:57:29 crc kubenswrapper[4698]: E0127 15:57:29.882025 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4975468-de80-48ef-be59-ba4cab2e70bc" containerName="registry-server" Jan 27 15:57:29 crc kubenswrapper[4698]: I0127 15:57:29.882043 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4975468-de80-48ef-be59-ba4cab2e70bc" containerName="registry-server" Jan 27 15:57:29 crc kubenswrapper[4698]: E0127 15:57:29.882081 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="62d864f5-c05f-4005-b941-24bf347a9068" containerName="gather" Jan 27 15:57:29 crc kubenswrapper[4698]: I0127 15:57:29.882087 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="62d864f5-c05f-4005-b941-24bf347a9068" containerName="gather" Jan 27 15:57:29 crc kubenswrapper[4698]: E0127 15:57:29.882093 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="62d864f5-c05f-4005-b941-24bf347a9068" containerName="copy" Jan 27 15:57:29 crc kubenswrapper[4698]: I0127 15:57:29.882099 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="62d864f5-c05f-4005-b941-24bf347a9068" containerName="copy" Jan 27 15:57:29 crc kubenswrapper[4698]: E0127 15:57:29.882117 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4975468-de80-48ef-be59-ba4cab2e70bc" containerName="extract-utilities" Jan 27 15:57:29 crc kubenswrapper[4698]: I0127 15:57:29.882124 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4975468-de80-48ef-be59-ba4cab2e70bc" containerName="extract-utilities" Jan 27 15:57:29 crc kubenswrapper[4698]: E0127 15:57:29.882137 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4975468-de80-48ef-be59-ba4cab2e70bc" containerName="extract-content" Jan 27 15:57:29 crc kubenswrapper[4698]: I0127 15:57:29.882143 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4975468-de80-48ef-be59-ba4cab2e70bc" containerName="extract-content" Jan 27 15:57:29 crc kubenswrapper[4698]: I0127 15:57:29.882320 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="62d864f5-c05f-4005-b941-24bf347a9068" containerName="gather" Jan 27 15:57:29 crc kubenswrapper[4698]: I0127 15:57:29.882347 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4975468-de80-48ef-be59-ba4cab2e70bc" containerName="registry-server" Jan 27 15:57:29 crc kubenswrapper[4698]: I0127 15:57:29.882356 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="62d864f5-c05f-4005-b941-24bf347a9068" containerName="copy" Jan 27 15:57:29 crc kubenswrapper[4698]: I0127 15:57:29.883844 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-8m5v9" Jan 27 15:57:29 crc kubenswrapper[4698]: I0127 15:57:29.906608 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-8m5v9"] Jan 27 15:57:30 crc kubenswrapper[4698]: I0127 15:57:30.015269 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7eedd5a9-a2b4-4c95-8953-b52f23517c32-utilities\") pod \"certified-operators-8m5v9\" (UID: \"7eedd5a9-a2b4-4c95-8953-b52f23517c32\") " pod="openshift-marketplace/certified-operators-8m5v9" Jan 27 15:57:30 crc kubenswrapper[4698]: I0127 15:57:30.015344 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tnjld\" (UniqueName: \"kubernetes.io/projected/7eedd5a9-a2b4-4c95-8953-b52f23517c32-kube-api-access-tnjld\") pod \"certified-operators-8m5v9\" (UID: \"7eedd5a9-a2b4-4c95-8953-b52f23517c32\") " pod="openshift-marketplace/certified-operators-8m5v9" Jan 27 15:57:30 crc kubenswrapper[4698]: I0127 15:57:30.015482 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7eedd5a9-a2b4-4c95-8953-b52f23517c32-catalog-content\") pod \"certified-operators-8m5v9\" (UID: \"7eedd5a9-a2b4-4c95-8953-b52f23517c32\") " pod="openshift-marketplace/certified-operators-8m5v9" Jan 27 15:57:30 crc kubenswrapper[4698]: I0127 15:57:30.118069 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7eedd5a9-a2b4-4c95-8953-b52f23517c32-catalog-content\") pod \"certified-operators-8m5v9\" (UID: \"7eedd5a9-a2b4-4c95-8953-b52f23517c32\") " pod="openshift-marketplace/certified-operators-8m5v9" Jan 27 15:57:30 crc kubenswrapper[4698]: I0127 15:57:30.118460 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7eedd5a9-a2b4-4c95-8953-b52f23517c32-utilities\") pod \"certified-operators-8m5v9\" (UID: \"7eedd5a9-a2b4-4c95-8953-b52f23517c32\") " pod="openshift-marketplace/certified-operators-8m5v9" Jan 27 15:57:30 crc kubenswrapper[4698]: I0127 15:57:30.118571 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7eedd5a9-a2b4-4c95-8953-b52f23517c32-catalog-content\") pod \"certified-operators-8m5v9\" (UID: \"7eedd5a9-a2b4-4c95-8953-b52f23517c32\") " pod="openshift-marketplace/certified-operators-8m5v9" Jan 27 15:57:30 crc kubenswrapper[4698]: I0127 15:57:30.118709 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tnjld\" (UniqueName: \"kubernetes.io/projected/7eedd5a9-a2b4-4c95-8953-b52f23517c32-kube-api-access-tnjld\") pod \"certified-operators-8m5v9\" (UID: \"7eedd5a9-a2b4-4c95-8953-b52f23517c32\") " pod="openshift-marketplace/certified-operators-8m5v9" Jan 27 15:57:30 crc kubenswrapper[4698]: I0127 15:57:30.118946 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7eedd5a9-a2b4-4c95-8953-b52f23517c32-utilities\") pod \"certified-operators-8m5v9\" (UID: \"7eedd5a9-a2b4-4c95-8953-b52f23517c32\") " pod="openshift-marketplace/certified-operators-8m5v9" Jan 27 15:57:30 crc kubenswrapper[4698]: I0127 15:57:30.142530 4698 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-tnjld\" (UniqueName: \"kubernetes.io/projected/7eedd5a9-a2b4-4c95-8953-b52f23517c32-kube-api-access-tnjld\") pod \"certified-operators-8m5v9\" (UID: \"7eedd5a9-a2b4-4c95-8953-b52f23517c32\") " pod="openshift-marketplace/certified-operators-8m5v9" Jan 27 15:57:30 crc kubenswrapper[4698]: I0127 15:57:30.223932 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-8m5v9" Jan 27 15:57:30 crc kubenswrapper[4698]: I0127 15:57:30.792251 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-8m5v9"] Jan 27 15:57:31 crc kubenswrapper[4698]: I0127 15:57:31.476312 4698 generic.go:334] "Generic (PLEG): container finished" podID="7eedd5a9-a2b4-4c95-8953-b52f23517c32" containerID="f56260e710d01b772e701d98e3026071faeb8494683619dce077f9d00d1050ac" exitCode=0 Jan 27 15:57:31 crc kubenswrapper[4698]: I0127 15:57:31.476415 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8m5v9" event={"ID":"7eedd5a9-a2b4-4c95-8953-b52f23517c32","Type":"ContainerDied","Data":"f56260e710d01b772e701d98e3026071faeb8494683619dce077f9d00d1050ac"} Jan 27 15:57:31 crc kubenswrapper[4698]: I0127 15:57:31.476577 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8m5v9" event={"ID":"7eedd5a9-a2b4-4c95-8953-b52f23517c32","Type":"ContainerStarted","Data":"746c893b26031b5558cb789a2d0ef27c8e7d599f22adc96199b6c9cad2a2fb6a"} Jan 27 15:57:31 crc kubenswrapper[4698]: I0127 15:57:31.478882 4698 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 27 15:57:33 crc kubenswrapper[4698]: I0127 15:57:33.494806 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8m5v9" event={"ID":"7eedd5a9-a2b4-4c95-8953-b52f23517c32","Type":"ContainerStarted","Data":"67bd22856f23cb76881a618b1bdcd9ac03c6585da1a1a93d36b0fd8b9d5d91e3"} Jan 27 15:57:34 crc kubenswrapper[4698]: I0127 15:57:34.504108 4698 generic.go:334] "Generic (PLEG): container finished" podID="7eedd5a9-a2b4-4c95-8953-b52f23517c32" containerID="67bd22856f23cb76881a618b1bdcd9ac03c6585da1a1a93d36b0fd8b9d5d91e3" exitCode=0 Jan 27 15:57:34 crc kubenswrapper[4698]: I0127 15:57:34.504167 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8m5v9" event={"ID":"7eedd5a9-a2b4-4c95-8953-b52f23517c32","Type":"ContainerDied","Data":"67bd22856f23cb76881a618b1bdcd9ac03c6585da1a1a93d36b0fd8b9d5d91e3"} Jan 27 15:57:34 crc kubenswrapper[4698]: I0127 15:57:34.993254 4698 scope.go:117] "RemoveContainer" containerID="29a10beaa8fe9f7dfe61227016053ea5ec1ff8dbf55e0d3deabed915be650a8d" Jan 27 15:57:34 crc kubenswrapper[4698]: E0127 15:57:34.993853 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ndrd6_openshift-machine-config-operator(3e403fc5-7005-474c-8c75-b7906b481677)\"" pod="openshift-machine-config-operator/machine-config-daemon-ndrd6" podUID="3e403fc5-7005-474c-8c75-b7906b481677" Jan 27 15:57:35 crc kubenswrapper[4698]: I0127 15:57:35.514860 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8m5v9" 
event={"ID":"7eedd5a9-a2b4-4c95-8953-b52f23517c32","Type":"ContainerStarted","Data":"5c0bb30d44bb447d63f5c8a1f7cdeeb7c63abdffb963cba446fb38c26d23853f"} Jan 27 15:57:35 crc kubenswrapper[4698]: I0127 15:57:35.534491 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-8m5v9" podStartSLOduration=2.929657389 podStartE2EDuration="6.534469945s" podCreationTimestamp="2026-01-27 15:57:29 +0000 UTC" firstStartedPulling="2026-01-27 15:57:31.478452494 +0000 UTC m=+5307.155229959" lastFinishedPulling="2026-01-27 15:57:35.08326505 +0000 UTC m=+5310.760042515" observedRunningTime="2026-01-27 15:57:35.532286267 +0000 UTC m=+5311.209063772" watchObservedRunningTime="2026-01-27 15:57:35.534469945 +0000 UTC m=+5311.211247410" Jan 27 15:57:40 crc kubenswrapper[4698]: I0127 15:57:40.224656 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-8m5v9" Jan 27 15:57:40 crc kubenswrapper[4698]: I0127 15:57:40.225061 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-8m5v9" Jan 27 15:57:40 crc kubenswrapper[4698]: I0127 15:57:40.281005 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-8m5v9" Jan 27 15:57:40 crc kubenswrapper[4698]: I0127 15:57:40.607937 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-8m5v9" Jan 27 15:57:40 crc kubenswrapper[4698]: I0127 15:57:40.658631 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-8m5v9"] Jan 27 15:57:42 crc kubenswrapper[4698]: I0127 15:57:42.575548 4698 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-8m5v9" podUID="7eedd5a9-a2b4-4c95-8953-b52f23517c32" containerName="registry-server" containerID="cri-o://5c0bb30d44bb447d63f5c8a1f7cdeeb7c63abdffb963cba446fb38c26d23853f" gracePeriod=2 Jan 27 15:57:43 crc kubenswrapper[4698]: I0127 15:57:43.047681 4698 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-8m5v9" Jan 27 15:57:43 crc kubenswrapper[4698]: I0127 15:57:43.242345 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7eedd5a9-a2b4-4c95-8953-b52f23517c32-utilities\") pod \"7eedd5a9-a2b4-4c95-8953-b52f23517c32\" (UID: \"7eedd5a9-a2b4-4c95-8953-b52f23517c32\") " Jan 27 15:57:43 crc kubenswrapper[4698]: I0127 15:57:43.242429 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7eedd5a9-a2b4-4c95-8953-b52f23517c32-catalog-content\") pod \"7eedd5a9-a2b4-4c95-8953-b52f23517c32\" (UID: \"7eedd5a9-a2b4-4c95-8953-b52f23517c32\") " Jan 27 15:57:43 crc kubenswrapper[4698]: I0127 15:57:43.242568 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tnjld\" (UniqueName: \"kubernetes.io/projected/7eedd5a9-a2b4-4c95-8953-b52f23517c32-kube-api-access-tnjld\") pod \"7eedd5a9-a2b4-4c95-8953-b52f23517c32\" (UID: \"7eedd5a9-a2b4-4c95-8953-b52f23517c32\") " Jan 27 15:57:43 crc kubenswrapper[4698]: I0127 15:57:43.243435 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7eedd5a9-a2b4-4c95-8953-b52f23517c32-utilities" (OuterVolumeSpecName: "utilities") pod "7eedd5a9-a2b4-4c95-8953-b52f23517c32" (UID: "7eedd5a9-a2b4-4c95-8953-b52f23517c32"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 15:57:43 crc kubenswrapper[4698]: I0127 15:57:43.255114 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7eedd5a9-a2b4-4c95-8953-b52f23517c32-kube-api-access-tnjld" (OuterVolumeSpecName: "kube-api-access-tnjld") pod "7eedd5a9-a2b4-4c95-8953-b52f23517c32" (UID: "7eedd5a9-a2b4-4c95-8953-b52f23517c32"). InnerVolumeSpecName "kube-api-access-tnjld". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 15:57:43 crc kubenswrapper[4698]: I0127 15:57:43.303471 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7eedd5a9-a2b4-4c95-8953-b52f23517c32-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7eedd5a9-a2b4-4c95-8953-b52f23517c32" (UID: "7eedd5a9-a2b4-4c95-8953-b52f23517c32"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 15:57:43 crc kubenswrapper[4698]: I0127 15:57:43.344803 4698 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7eedd5a9-a2b4-4c95-8953-b52f23517c32-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 15:57:43 crc kubenswrapper[4698]: I0127 15:57:43.344838 4698 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7eedd5a9-a2b4-4c95-8953-b52f23517c32-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 15:57:43 crc kubenswrapper[4698]: I0127 15:57:43.344856 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tnjld\" (UniqueName: \"kubernetes.io/projected/7eedd5a9-a2b4-4c95-8953-b52f23517c32-kube-api-access-tnjld\") on node \"crc\" DevicePath \"\"" Jan 27 15:57:43 crc kubenswrapper[4698]: I0127 15:57:43.588099 4698 generic.go:334] "Generic (PLEG): container finished" podID="7eedd5a9-a2b4-4c95-8953-b52f23517c32" containerID="5c0bb30d44bb447d63f5c8a1f7cdeeb7c63abdffb963cba446fb38c26d23853f" exitCode=0 Jan 27 15:57:43 crc kubenswrapper[4698]: I0127 15:57:43.588149 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8m5v9" event={"ID":"7eedd5a9-a2b4-4c95-8953-b52f23517c32","Type":"ContainerDied","Data":"5c0bb30d44bb447d63f5c8a1f7cdeeb7c63abdffb963cba446fb38c26d23853f"} Jan 27 15:57:43 crc kubenswrapper[4698]: I0127 15:57:43.589505 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8m5v9" event={"ID":"7eedd5a9-a2b4-4c95-8953-b52f23517c32","Type":"ContainerDied","Data":"746c893b26031b5558cb789a2d0ef27c8e7d599f22adc96199b6c9cad2a2fb6a"} Jan 27 15:57:43 crc kubenswrapper[4698]: I0127 15:57:43.588267 4698 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-8m5v9" Jan 27 15:57:43 crc kubenswrapper[4698]: I0127 15:57:43.589628 4698 scope.go:117] "RemoveContainer" containerID="5c0bb30d44bb447d63f5c8a1f7cdeeb7c63abdffb963cba446fb38c26d23853f" Jan 27 15:57:43 crc kubenswrapper[4698]: I0127 15:57:43.634894 4698 scope.go:117] "RemoveContainer" containerID="67bd22856f23cb76881a618b1bdcd9ac03c6585da1a1a93d36b0fd8b9d5d91e3" Jan 27 15:57:43 crc kubenswrapper[4698]: I0127 15:57:43.673484 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-8m5v9"] Jan 27 15:57:43 crc kubenswrapper[4698]: I0127 15:57:43.705894 4698 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-8m5v9"] Jan 27 15:57:43 crc kubenswrapper[4698]: I0127 15:57:43.739669 4698 scope.go:117] "RemoveContainer" containerID="f56260e710d01b772e701d98e3026071faeb8494683619dce077f9d00d1050ac" Jan 27 15:57:43 crc kubenswrapper[4698]: I0127 15:57:43.857338 4698 scope.go:117] "RemoveContainer" containerID="5c0bb30d44bb447d63f5c8a1f7cdeeb7c63abdffb963cba446fb38c26d23853f" Jan 27 15:57:43 crc kubenswrapper[4698]: E0127 15:57:43.857713 4698 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5c0bb30d44bb447d63f5c8a1f7cdeeb7c63abdffb963cba446fb38c26d23853f\": container with ID starting with 5c0bb30d44bb447d63f5c8a1f7cdeeb7c63abdffb963cba446fb38c26d23853f not found: ID does not exist" containerID="5c0bb30d44bb447d63f5c8a1f7cdeeb7c63abdffb963cba446fb38c26d23853f" Jan 27 15:57:43 crc kubenswrapper[4698]: I0127 15:57:43.857769 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5c0bb30d44bb447d63f5c8a1f7cdeeb7c63abdffb963cba446fb38c26d23853f"} err="failed to get container status \"5c0bb30d44bb447d63f5c8a1f7cdeeb7c63abdffb963cba446fb38c26d23853f\": rpc error: code = NotFound desc = could not find container \"5c0bb30d44bb447d63f5c8a1f7cdeeb7c63abdffb963cba446fb38c26d23853f\": container with ID starting with 5c0bb30d44bb447d63f5c8a1f7cdeeb7c63abdffb963cba446fb38c26d23853f not found: ID does not exist" Jan 27 15:57:43 crc kubenswrapper[4698]: I0127 15:57:43.857800 4698 scope.go:117] "RemoveContainer" containerID="67bd22856f23cb76881a618b1bdcd9ac03c6585da1a1a93d36b0fd8b9d5d91e3" Jan 27 15:57:43 crc kubenswrapper[4698]: E0127 15:57:43.858126 4698 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"67bd22856f23cb76881a618b1bdcd9ac03c6585da1a1a93d36b0fd8b9d5d91e3\": container with ID starting with 67bd22856f23cb76881a618b1bdcd9ac03c6585da1a1a93d36b0fd8b9d5d91e3 not found: ID does not exist" containerID="67bd22856f23cb76881a618b1bdcd9ac03c6585da1a1a93d36b0fd8b9d5d91e3" Jan 27 15:57:43 crc kubenswrapper[4698]: I0127 15:57:43.858156 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"67bd22856f23cb76881a618b1bdcd9ac03c6585da1a1a93d36b0fd8b9d5d91e3"} err="failed to get container status \"67bd22856f23cb76881a618b1bdcd9ac03c6585da1a1a93d36b0fd8b9d5d91e3\": rpc error: code = NotFound desc = could not find container \"67bd22856f23cb76881a618b1bdcd9ac03c6585da1a1a93d36b0fd8b9d5d91e3\": container with ID starting with 67bd22856f23cb76881a618b1bdcd9ac03c6585da1a1a93d36b0fd8b9d5d91e3 not found: ID does not exist" Jan 27 15:57:43 crc kubenswrapper[4698]: I0127 15:57:43.858176 4698 scope.go:117] "RemoveContainer" 
containerID="f56260e710d01b772e701d98e3026071faeb8494683619dce077f9d00d1050ac" Jan 27 15:57:43 crc kubenswrapper[4698]: E0127 15:57:43.858507 4698 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f56260e710d01b772e701d98e3026071faeb8494683619dce077f9d00d1050ac\": container with ID starting with f56260e710d01b772e701d98e3026071faeb8494683619dce077f9d00d1050ac not found: ID does not exist" containerID="f56260e710d01b772e701d98e3026071faeb8494683619dce077f9d00d1050ac" Jan 27 15:57:43 crc kubenswrapper[4698]: I0127 15:57:43.858529 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f56260e710d01b772e701d98e3026071faeb8494683619dce077f9d00d1050ac"} err="failed to get container status \"f56260e710d01b772e701d98e3026071faeb8494683619dce077f9d00d1050ac\": rpc error: code = NotFound desc = could not find container \"f56260e710d01b772e701d98e3026071faeb8494683619dce077f9d00d1050ac\": container with ID starting with f56260e710d01b772e701d98e3026071faeb8494683619dce077f9d00d1050ac not found: ID does not exist" Jan 27 15:57:45 crc kubenswrapper[4698]: I0127 15:57:45.004439 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7eedd5a9-a2b4-4c95-8953-b52f23517c32" path="/var/lib/kubelet/pods/7eedd5a9-a2b4-4c95-8953-b52f23517c32/volumes" Jan 27 15:57:46 crc kubenswrapper[4698]: I0127 15:57:46.993133 4698 scope.go:117] "RemoveContainer" containerID="29a10beaa8fe9f7dfe61227016053ea5ec1ff8dbf55e0d3deabed915be650a8d" Jan 27 15:57:46 crc kubenswrapper[4698]: E0127 15:57:46.993811 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ndrd6_openshift-machine-config-operator(3e403fc5-7005-474c-8c75-b7906b481677)\"" pod="openshift-machine-config-operator/machine-config-daemon-ndrd6" podUID="3e403fc5-7005-474c-8c75-b7906b481677" Jan 27 15:57:58 crc kubenswrapper[4698]: I0127 15:57:58.992390 4698 scope.go:117] "RemoveContainer" containerID="29a10beaa8fe9f7dfe61227016053ea5ec1ff8dbf55e0d3deabed915be650a8d" Jan 27 15:57:58 crc kubenswrapper[4698]: E0127 15:57:58.994051 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ndrd6_openshift-machine-config-operator(3e403fc5-7005-474c-8c75-b7906b481677)\"" pod="openshift-machine-config-operator/machine-config-daemon-ndrd6" podUID="3e403fc5-7005-474c-8c75-b7906b481677" Jan 27 15:58:09 crc kubenswrapper[4698]: I0127 15:58:09.992567 4698 scope.go:117] "RemoveContainer" containerID="29a10beaa8fe9f7dfe61227016053ea5ec1ff8dbf55e0d3deabed915be650a8d" Jan 27 15:58:09 crc kubenswrapper[4698]: E0127 15:58:09.993322 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ndrd6_openshift-machine-config-operator(3e403fc5-7005-474c-8c75-b7906b481677)\"" pod="openshift-machine-config-operator/machine-config-daemon-ndrd6" podUID="3e403fc5-7005-474c-8c75-b7906b481677" Jan 27 15:58:22 crc kubenswrapper[4698]: I0127 15:58:22.992843 4698 scope.go:117] "RemoveContainer" 
containerID="29a10beaa8fe9f7dfe61227016053ea5ec1ff8dbf55e0d3deabed915be650a8d" Jan 27 15:58:22 crc kubenswrapper[4698]: E0127 15:58:22.993630 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ndrd6_openshift-machine-config-operator(3e403fc5-7005-474c-8c75-b7906b481677)\"" pod="openshift-machine-config-operator/machine-config-daemon-ndrd6" podUID="3e403fc5-7005-474c-8c75-b7906b481677" Jan 27 15:58:36 crc kubenswrapper[4698]: I0127 15:58:36.993388 4698 scope.go:117] "RemoveContainer" containerID="29a10beaa8fe9f7dfe61227016053ea5ec1ff8dbf55e0d3deabed915be650a8d" Jan 27 15:58:36 crc kubenswrapper[4698]: E0127 15:58:36.995419 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ndrd6_openshift-machine-config-operator(3e403fc5-7005-474c-8c75-b7906b481677)\"" pod="openshift-machine-config-operator/machine-config-daemon-ndrd6" podUID="3e403fc5-7005-474c-8c75-b7906b481677" Jan 27 15:58:47 crc kubenswrapper[4698]: I0127 15:58:47.991817 4698 scope.go:117] "RemoveContainer" containerID="29a10beaa8fe9f7dfe61227016053ea5ec1ff8dbf55e0d3deabed915be650a8d" Jan 27 15:58:47 crc kubenswrapper[4698]: E0127 15:58:47.993272 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ndrd6_openshift-machine-config-operator(3e403fc5-7005-474c-8c75-b7906b481677)\"" pod="openshift-machine-config-operator/machine-config-daemon-ndrd6" podUID="3e403fc5-7005-474c-8c75-b7906b481677" Jan 27 15:59:01 crc kubenswrapper[4698]: I0127 15:59:01.991956 4698 scope.go:117] "RemoveContainer" containerID="29a10beaa8fe9f7dfe61227016053ea5ec1ff8dbf55e0d3deabed915be650a8d" Jan 27 15:59:02 crc kubenswrapper[4698]: I0127 15:59:02.376766 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ndrd6" event={"ID":"3e403fc5-7005-474c-8c75-b7906b481677","Type":"ContainerStarted","Data":"3cf66930799d2d5179708ce9d277488ba01479c2877304f3340e1947e70e4738"} Jan 27 16:00:00 crc kubenswrapper[4698]: I0127 16:00:00.154805 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29492160-xkkpd"] Jan 27 16:00:00 crc kubenswrapper[4698]: E0127 16:00:00.155852 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7eedd5a9-a2b4-4c95-8953-b52f23517c32" containerName="extract-utilities" Jan 27 16:00:00 crc kubenswrapper[4698]: I0127 16:00:00.155868 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="7eedd5a9-a2b4-4c95-8953-b52f23517c32" containerName="extract-utilities" Jan 27 16:00:00 crc kubenswrapper[4698]: E0127 16:00:00.155883 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7eedd5a9-a2b4-4c95-8953-b52f23517c32" containerName="extract-content" Jan 27 16:00:00 crc kubenswrapper[4698]: I0127 16:00:00.155889 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="7eedd5a9-a2b4-4c95-8953-b52f23517c32" containerName="extract-content" Jan 27 16:00:00 crc kubenswrapper[4698]: E0127 16:00:00.155904 4698 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="7eedd5a9-a2b4-4c95-8953-b52f23517c32" containerName="registry-server" Jan 27 16:00:00 crc kubenswrapper[4698]: I0127 16:00:00.155912 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="7eedd5a9-a2b4-4c95-8953-b52f23517c32" containerName="registry-server" Jan 27 16:00:00 crc kubenswrapper[4698]: I0127 16:00:00.156105 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="7eedd5a9-a2b4-4c95-8953-b52f23517c32" containerName="registry-server" Jan 27 16:00:00 crc kubenswrapper[4698]: I0127 16:00:00.156909 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29492160-xkkpd" Jan 27 16:00:00 crc kubenswrapper[4698]: I0127 16:00:00.159242 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 27 16:00:00 crc kubenswrapper[4698]: I0127 16:00:00.159506 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 27 16:00:00 crc kubenswrapper[4698]: I0127 16:00:00.164380 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29492160-xkkpd"] Jan 27 16:00:00 crc kubenswrapper[4698]: I0127 16:00:00.211049 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4nv4k\" (UniqueName: \"kubernetes.io/projected/2df34d16-b471-482a-9b21-22c7bd4eb8e5-kube-api-access-4nv4k\") pod \"collect-profiles-29492160-xkkpd\" (UID: \"2df34d16-b471-482a-9b21-22c7bd4eb8e5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492160-xkkpd" Jan 27 16:00:00 crc kubenswrapper[4698]: I0127 16:00:00.211123 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2df34d16-b471-482a-9b21-22c7bd4eb8e5-secret-volume\") pod \"collect-profiles-29492160-xkkpd\" (UID: \"2df34d16-b471-482a-9b21-22c7bd4eb8e5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492160-xkkpd" Jan 27 16:00:00 crc kubenswrapper[4698]: I0127 16:00:00.211278 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2df34d16-b471-482a-9b21-22c7bd4eb8e5-config-volume\") pod \"collect-profiles-29492160-xkkpd\" (UID: \"2df34d16-b471-482a-9b21-22c7bd4eb8e5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492160-xkkpd" Jan 27 16:00:00 crc kubenswrapper[4698]: I0127 16:00:00.312977 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4nv4k\" (UniqueName: \"kubernetes.io/projected/2df34d16-b471-482a-9b21-22c7bd4eb8e5-kube-api-access-4nv4k\") pod \"collect-profiles-29492160-xkkpd\" (UID: \"2df34d16-b471-482a-9b21-22c7bd4eb8e5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492160-xkkpd" Jan 27 16:00:00 crc kubenswrapper[4698]: I0127 16:00:00.313041 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2df34d16-b471-482a-9b21-22c7bd4eb8e5-secret-volume\") pod \"collect-profiles-29492160-xkkpd\" (UID: \"2df34d16-b471-482a-9b21-22c7bd4eb8e5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492160-xkkpd" Jan 27 16:00:00 crc kubenswrapper[4698]: I0127 16:00:00.313136 4698 
Jan 27 16:00:00 crc kubenswrapper[4698]: I0127 16:00:00.314080 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2df34d16-b471-482a-9b21-22c7bd4eb8e5-config-volume\") pod \"collect-profiles-29492160-xkkpd\" (UID: \"2df34d16-b471-482a-9b21-22c7bd4eb8e5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492160-xkkpd"
Jan 27 16:00:00 crc kubenswrapper[4698]: I0127 16:00:00.321581 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2df34d16-b471-482a-9b21-22c7bd4eb8e5-secret-volume\") pod \"collect-profiles-29492160-xkkpd\" (UID: \"2df34d16-b471-482a-9b21-22c7bd4eb8e5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492160-xkkpd"
Jan 27 16:00:00 crc kubenswrapper[4698]: I0127 16:00:00.334613 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4nv4k\" (UniqueName: \"kubernetes.io/projected/2df34d16-b471-482a-9b21-22c7bd4eb8e5-kube-api-access-4nv4k\") pod \"collect-profiles-29492160-xkkpd\" (UID: \"2df34d16-b471-482a-9b21-22c7bd4eb8e5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492160-xkkpd"
Jan 27 16:00:00 crc kubenswrapper[4698]: I0127 16:00:00.496152 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29492160-xkkpd"
Jan 27 16:00:01 crc kubenswrapper[4698]: I0127 16:00:01.027791 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29492160-xkkpd"]
Jan 27 16:00:02 crc kubenswrapper[4698]: I0127 16:00:02.003787 4698 generic.go:334] "Generic (PLEG): container finished" podID="2df34d16-b471-482a-9b21-22c7bd4eb8e5" containerID="f0eca37fb67e644efb0e8ad12d322f2b5296d78ccbdde4c80136350b0962bebd" exitCode=0
Jan 27 16:00:02 crc kubenswrapper[4698]: I0127 16:00:02.003855 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29492160-xkkpd" event={"ID":"2df34d16-b471-482a-9b21-22c7bd4eb8e5","Type":"ContainerDied","Data":"f0eca37fb67e644efb0e8ad12d322f2b5296d78ccbdde4c80136350b0962bebd"}
Jan 27 16:00:02 crc kubenswrapper[4698]: I0127 16:00:02.004102 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29492160-xkkpd" event={"ID":"2df34d16-b471-482a-9b21-22c7bd4eb8e5","Type":"ContainerStarted","Data":"986b1ad785537b7d235a88f9013014879102e041ea15529481c7bdfe19b472fb"}
Jan 27 16:00:03 crc kubenswrapper[4698]: I0127 16:00:03.355976 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29492160-xkkpd"
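Volume setup for the new collect-profiles pod mirrors the teardown seen earlier, in reverse: for each of the three volumes (projected service-account token, Secret, ConfigMap) the reconciler logs "VerifyControllerAttachedVolume started", then "MountVolume started", then "MountVolume.SetUp succeeded". A sketch that summarizes how far each of one pod's volumes got, against the same assumed kubelet.log file (helper name is hypothetical):

    import re
    from collections import defaultdict

    STAGES = ("VerifyControllerAttachedVolume started",
              "MountVolume started",
              "MountVolume.SetUp succeeded")

    # Sketch: map each volume of one pod UID to the mount stages it reached.
    def mount_progress(path, pod_uid):
        reached = defaultdict(list)
        name = re.compile(r'for volume \\"([^"\\]+)\\"')
        for line in open(path, encoding="utf-8", errors="replace"):
            if pod_uid not in line:
                continue
            for stage in STAGES:
                if stage in line and (m := name.search(line)):
                    reached[m.group(1)].append(stage)
        return dict(reached)

    print(mount_progress("kubelet.log", "2df34d16-b471-482a-9b21-22c7bd4eb8e5"))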
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29492160-xkkpd" Jan 27 16:00:03 crc kubenswrapper[4698]: I0127 16:00:03.489573 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2df34d16-b471-482a-9b21-22c7bd4eb8e5-secret-volume\") pod \"2df34d16-b471-482a-9b21-22c7bd4eb8e5\" (UID: \"2df34d16-b471-482a-9b21-22c7bd4eb8e5\") " Jan 27 16:00:03 crc kubenswrapper[4698]: I0127 16:00:03.490040 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4nv4k\" (UniqueName: \"kubernetes.io/projected/2df34d16-b471-482a-9b21-22c7bd4eb8e5-kube-api-access-4nv4k\") pod \"2df34d16-b471-482a-9b21-22c7bd4eb8e5\" (UID: \"2df34d16-b471-482a-9b21-22c7bd4eb8e5\") " Jan 27 16:00:03 crc kubenswrapper[4698]: I0127 16:00:03.490157 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2df34d16-b471-482a-9b21-22c7bd4eb8e5-config-volume\") pod \"2df34d16-b471-482a-9b21-22c7bd4eb8e5\" (UID: \"2df34d16-b471-482a-9b21-22c7bd4eb8e5\") " Jan 27 16:00:03 crc kubenswrapper[4698]: I0127 16:00:03.490791 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2df34d16-b471-482a-9b21-22c7bd4eb8e5-config-volume" (OuterVolumeSpecName: "config-volume") pod "2df34d16-b471-482a-9b21-22c7bd4eb8e5" (UID: "2df34d16-b471-482a-9b21-22c7bd4eb8e5"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 16:00:03 crc kubenswrapper[4698]: I0127 16:00:03.495468 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2df34d16-b471-482a-9b21-22c7bd4eb8e5-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "2df34d16-b471-482a-9b21-22c7bd4eb8e5" (UID: "2df34d16-b471-482a-9b21-22c7bd4eb8e5"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 16:00:03 crc kubenswrapper[4698]: I0127 16:00:03.497672 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2df34d16-b471-482a-9b21-22c7bd4eb8e5-kube-api-access-4nv4k" (OuterVolumeSpecName: "kube-api-access-4nv4k") pod "2df34d16-b471-482a-9b21-22c7bd4eb8e5" (UID: "2df34d16-b471-482a-9b21-22c7bd4eb8e5"). InnerVolumeSpecName "kube-api-access-4nv4k". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 16:00:03 crc kubenswrapper[4698]: I0127 16:00:03.593023 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4nv4k\" (UniqueName: \"kubernetes.io/projected/2df34d16-b471-482a-9b21-22c7bd4eb8e5-kube-api-access-4nv4k\") on node \"crc\" DevicePath \"\"" Jan 27 16:00:03 crc kubenswrapper[4698]: I0127 16:00:03.593067 4698 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2df34d16-b471-482a-9b21-22c7bd4eb8e5-config-volume\") on node \"crc\" DevicePath \"\"" Jan 27 16:00:03 crc kubenswrapper[4698]: I0127 16:00:03.593076 4698 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2df34d16-b471-482a-9b21-22c7bd4eb8e5-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 27 16:00:04 crc kubenswrapper[4698]: I0127 16:00:04.022137 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29492160-xkkpd" event={"ID":"2df34d16-b471-482a-9b21-22c7bd4eb8e5","Type":"ContainerDied","Data":"986b1ad785537b7d235a88f9013014879102e041ea15529481c7bdfe19b472fb"} Jan 27 16:00:04 crc kubenswrapper[4698]: I0127 16:00:04.022181 4698 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="986b1ad785537b7d235a88f9013014879102e041ea15529481c7bdfe19b472fb" Jan 27 16:00:04 crc kubenswrapper[4698]: I0127 16:00:04.022209 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29492160-xkkpd" Jan 27 16:00:04 crc kubenswrapper[4698]: I0127 16:00:04.590072 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29492115-nrql9"] Jan 27 16:00:04 crc kubenswrapper[4698]: I0127 16:00:04.599709 4698 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29492115-nrql9"] Jan 27 16:00:05 crc kubenswrapper[4698]: I0127 16:00:05.003618 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20e3c825-fe5e-4a75-b0ce-7134bc91a87e" path="/var/lib/kubelet/pods/20e3c825-fe5e-4a75-b0ce-7134bc91a87e/volumes" Jan 27 16:00:20 crc kubenswrapper[4698]: I0127 16:00:20.340170 4698 scope.go:117] "RemoveContainer" containerID="05292ef455532463d7d59b039187ad65596e628fe4c88ea1c803f406580c3c70"